Questions? Looking for parts? Parts for sale? or just for a chat, The WD Motorcycle forum I had to source two new bolts that secure the damper tubes at the bottom of the sliders. They are slightly longer than the ones that were fitted to the forks. Are they ok to use, or should I shorten them to match the length of the ones I took out? I'm not sure what is the correct length. Long ones are 18mm on the shank, 25mm total length. Short ones are 12mm on the shank, 19mm total length. Cheers Al Hello Alun Re. damper bolts: I just measured some used original bolts; the thread is 5/16 Cycle Engineers thread, 26 threads per inch, with a thread length of 0.460 in (11.7 mm). Old part number 40-G12M-FF94, new part number 01 0699. This bolt has a fibre washer under the bolt head of 5/16 I/D; old part No. 40-G12M-FF80, new part No. 01 0706. The hexagonal head is 1/4 BSW size. You will need to use a 1/4 BSW box/tube spanner if you can find one; I personally use a 13 mm 1/4 drive socket (the only size that I can get in the hole in the bottom of the slider). Possibly the longer bolts are used on later AMC heavyweight forks - I am not sure. Hope this helps you out. Cheers Chris Moore Chris I decided to cut them and they look just like the originals. They were identical except for length.
http://pub37.bravenet.com/forum/static/show.php?usernum=3155626639&frmid=16&msgid=1456223&cmd=show
Ward Six Fire Protection District No. One was audited this week for its five-year fire department rating. To prepare for the audit, Ward Six firefighters set up several scenarios over several days, using three different trucks with only three people working the scenario each time. They had to complete each scenario in less than five minutes from “go” until water was flowing at 250 gpm from the deck gun on top of the trucks. At least one scenario was completed in 2:57. The audit results will determine every home and business owner’s fire insurance rates in our fire district for the next five years.
Process Explained
The National Fire Protection Association estimates there were approximately 1,160,450 firefighters in the United States in 2015. Of the total number of firefighters, 345,600 (30%) were career firefighters and 814,850 (70%) were volunteer firefighters. To read the rest of this story, please see page four of the print edition of The DeQuincy News.
https://www.dequincynews.com/2019/09/29/ward-six-fire-dept-audit-taking-place/
Section IV of Part I of the Physics question paper of IIT-JEE 2007 contained 3 Matrix-Match Type questions. Each question contained statements given in two columns. Statements A, B, C, D in Column I had to be matched with statements p, q, r, s in Column II. The answers to the questions had to be appropriately bubbled as shown at the end of this post. Characteristic X-rays and the hydrogen spectrum are produced by electron transitions between two energy levels in an atom. So, (A) is to be matched with (p) and (r). In the photoelectric effect and β-decay, electrons are emitted from a material. So, (B) is to be matched with (q) and (s). Moseley’s law [√f ∝ Z] relates the frequency ‘f’ of a particular characteristic X-ray (e.g., Kα) to the atomic number Z of the target in the X-ray tube. So, (C) is to be matched with (p). In the photoelectric effect, the energy of the incident photon is used in dislodging electrons from a photosensitive surface. So, (D) is to be matched with (q). The appropriate bubbles darkened are shown in the figure.

The NAND gate will give a low output when all inputs are high. The output will be high in all other cases. The inputs A and B are high during the interval from 6 seconds to 8 seconds only. So, the output must be low during this interval only. This is indicated in option (b).

The collector current is given by Ic = βIB with the usual notations. But the base current IB = Vi/Ri, where Vi and Ri are the base voltage (input voltage) and input resistance respectively. Therefore, IB = 0.01/500 = 2×10⁻⁵ A, so that Ic = βIB = 62×2×10⁻⁵ = 1.24×10⁻³ A. The output voltage is the voltage drop across the collector resistance (output resistance) Ro. So, output voltage = IcRo = 1.24×10⁻³×5000 = 6.2 V. [Voltage gain Av = βRo/Ri = 62×5000/500 = 620. Therefore output voltage = 620×0.01 V = 6.2 V].

The expression for the current gain in common emitter mode is β = α/(1 – α), where α is the current gain in common base mode. Therefore, β = 0.995/(1 – 0.995) = 199.

Forward resistance, R = (change in forward biasing voltage)/(change in forward current) = (0.1 V)/(10 mA) = 0.1/(10×10⁻³) Ω = 10 Ω.

If the energy gap between the conduction band and the valence band is less than 3 eV, the substance is a semiconductor. So, the correct option is (c).

You should remember that the values of molar specific heat at constant volume (Cv) for monoatomic and diatomic gases are respectively (3/2)R and (5/2)R, where R is the universal gas constant. The values of molar specific heat at constant pressure (Cp) are therefore (5/2)R and (7/2)R respectively, in accordance with Mayer’s relation [Cp – Cv = R]. Cp of the mixture = Cv + R = (19/6)R. Ratio of specific heats of the mixture, γ = Cp/Cv = 19/13. [Generally, if n1 moles of a gas having ratio of specific heats γ1 is mixed with n2 moles of a gas having ratio of specific heats γ2, the ratio of specific heats of the mixture is given by the relation (n1 + n2)/(γ – 1) = n1/(γ1 – 1) + n2/(γ2 – 1). You can easily arrive at this result]. If one mole of an ideal monoatomic gas is mixed with one mole of an ideal diatomic gas, the ratio of specific heats of the mixture is 1.5. As an exercise, check this.

The following sets of experimental values of Cv and Cp of a given sample of gas were reported by five groups of students. The unit used is calorie mole⁻¹ K⁻¹. Which set gives the most reliable values? Since the minimum value of Cv is (3/2)R, which is the value for a monoatomic gas, when you express it in calorie mole⁻¹ K⁻¹ the minimum value is approximately 3. [R = 8.3 J mole⁻¹ K⁻¹ = 2 calorie mole⁻¹ K⁻¹, approximately].
Options (b) and (d) are therefore not acceptable. Out of the remaining three options, (c) is the most reliable since Cp – Cv = R, which should be very nearly 2 calorie mole⁻¹ K⁻¹.

When S is open, the P.D.s across the 3 μF and 6 μF capacitors are 6 V and 3 V respectively. [The charges on them are equal and the P.D.s across them are inversely proportional to their capacities]. When S is closed, the P.D.s across them become 3 V and 6 V respectively, since the P.D.s across the 3 Ω and 6 Ω resistors are 3 V and 6 V respectively. Positive charges have to flow from Y to X to achieve this condition. Since the potential at point X is to be made 6 V for this, the charge flowing to the 3 μF capacitor is 3 μF×3 V = 9 μC. The charge flowing to the 6 μF capacitor is 6 μF×3 V = 18 μC. The total charge = 9 μC + 18 μC = 27 μC. [You may have certain doubts regarding this solution. Once you note that the potentials of the lower potential plate of the 3 μF capacitor and the higher potential plate of the 6 μF capacitor are raised by 3 volts, by the charges flowing from Y to X, your doubts will be cleared].

If the balancing length is measured initially on the side of the unknown resistance X, it will shift from 60 cm to 40 cm. [Remember that the balancing length is measured from the same side before and after interchanging. You might have noted that the balance point in a meter bridge shifts symmetrically with respect to the mid point of the bridge wire]. Therefore, we have X/2 = 60/40, from which X = 3 Ω. [If the balancing length is measured on the side of the known resistance, it will change from 40 cm to 60 cm. In this case, 2/X = 40/60, from which X = 3 Ω].

The following three questions which appeared in Karnataka CET 2005 are aimed at testing your understanding of basic principles in electrostatics. The Gaussian surface B is just for distracting you. Remember, a Gaussian surface is an imagined surface and it has no action on the charge configuration. When you consider the Gaussian surface A, the net charge enclosed by the surface is –14 nC + 78.85 nC + (–56 nC) = 8.85 nC. The electric flux over the closed surface A, according to Gauss’s theorem, is q/ε0, where q is the net charge enclosed by the surface. Therefore, electric flux = 8.85×10⁻⁹/8.85×10⁻¹² = 10³ N m² C⁻¹ [Option (a)]. [You should note that the electric flux is the product of electric field and area, so that its unit is (N/C)×m² = N m² C⁻¹].

The work done is zero since there is no potential difference between the initial and final positions of the charge q. The correct option therefore is (d). Note that this is the case for any closed path of any shape.

The capacitance of an air-cored capacitor is ε0A/d, where A is the area of the plates and ‘d’ is the separation between the plates. When the separation is doubled, the capacitance is halved and becomes 1 pF. When the inter space is filled with a dielectric of dielectric constant ‘K’, the capacitance is Kε0A/d, so that it is increased to K times the value with air as the dielectric. Since the increase is from 1 pF to 6 pF, K = 6.

The following two questions are similar in that both require the calculation of angular momentum in central field motion under inverse square law forces. The orbital angular momentum of a satellite is mvr, where ‘v’ is the orbital speed. [Angular momentum = Iω = mr²ω = mr²v/r = mvr, where ‘I’ is the moment of inertia and ‘ω’ is the angular velocity of the satellite].
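As a worked sketch of the satellite case just mentioned (assuming the usual notation, with M the mass of the Earth and G the gravitational constant, neither of which is stated explicitly in this excerpt), the centripetal force is supplied by gravity, which fixes the orbital speed and hence the angular momentum:

```latex
\frac{mv^{2}}{r}=\frac{GMm}{r^{2}}
\quad\Rightarrow\quad
v=\sqrt{\frac{GM}{r}}
\quad\Rightarrow\quad
L=mvr=m\sqrt{GMr}
```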
The steps for finding the orbital angular momentum of the electron are similar to those in question No. 1, with the difference that the centripetal force is supplied in this case by the electrostatic attractive force between the proton and the electron.

If both inputs A and B are zero, the diodes will not conduct and the output point will be at ground potential, so that the output Y = 0. If at least one input is at logic 1 level (+5 volts), the diode connected to that input will conduct. The diode connected to the output also will conduct, making the output high (+5 V). Thus the output Y = 1. The same thing happens if both inputs are high. So, the circuit is an OR gate.

The first gate is a NAND gate. The second gate also is a NAND gate whose inputs are shorted. But when the inputs of a NAND gate are shorted, it becomes an inverter (NOT gate). So, the circuit is a NAND followed by an inverter, which is altogether an AND gate [Option (e)].

The linear momentum of an isolated system remains constant. This is a very simple question meant for checking your understanding of basic principles. But if you are not careful, you are liable to pick out a wrong answer! Generally, if there are external torques or forces or both, the velocity of the centre of mass will change. So, the condition of no external torque alone is not sufficient to ensure the constancy of the velocity of the centre of mass. Statement-1 is therefore false. [You should also note that an external torque about the centre of mass will not change the velocity of the centre of mass]. The correct option therefore is (d).

From this, I = MR²/2, which is the value for a disc [Option (d)].

Two discs A and B are mounted coaxially on a vertical axle. The discs have moments of inertia I and 2I respectively about the common axis. Disc A is imparted an initial angular velocity 2ω using the entire potential energy of a spring compressed by a distance x1. Disc B is imparted an angular velocity ω by a spring having the same spring constant and compressed by a distance x2. Both the discs rotate in the clockwise direction. ½kx1² = ½×I×(2ω)² and ½kx2² = ½×2I×ω² for the two cases. Here ‘k’ is the spring constant. I×2ω + 2I×ω = (I + 2I)×ω′, where ω′ is the common angular velocity of the discs. [We have added the angular momenta since they are in the same direction]. Disc A will have an angular retardation of magnitude ‘α1’ during the time ‘t’, whereas disc B will have an angular acceleration of different magnitude ‘α2’ during the time ‘t’. Considering disc A, we have ω′ = 2ω – α1t, from which α1 = (2ω – ω′)/t = 2ω/3t, since ω′ = (4/3)ω. [An equal and opposite torque will be exerted on B by A. Check by finding α2 and hence 2Iα2].

Don’t be scared by the relatively large number of forks. This is a very simple question. n31 = 2n1, since the frequency of the last fork is double that of the first. Further, n31 = n1 + 30×5, since there are 30 increments in frequency (each of 5 Hz) from the first fork to the 31st fork. Thus we have 2n1 = n1 + 150, from which n1 = 150 Hz. The frequency of the 3rd fork, n3 = n1 + 2×5 = 150 + 10 = 160 Hz.

When the first tuning fork is excited, the vibrations of the air molecules are simple harmonic with angular frequency ω = 512π, as is evident from the form of the equation x = A cos(512πt). The linear frequency of vibration of the first fork is n = ω/2π = 512π/2π = 256 Hz. The frequency of the 2nd tuning fork before loading with wax was therefore 256 Hz. After loading with wax, its frequency is lowered.
Since the beat frequency is 4 Hz, its frequency (after loading) is 256 – 4 = 252 Hz.

1/λ = R(1/n1² – 1/n2²), where R is Rydberg’s constant and n1 and n2 are integers. Ultraviolet radiations are obtained in the Lyman series of the hydrogen spectrum when electron transitions take place from higher orbits (of quantum number n > 1) to the innermost orbit (of quantum number n = 1). So, for the Lyman series, n1 = 1 and n2 = 2, 3, 4, …etc. The largest wave length in the Lyman series is obtained when the transition is from the 2nd orbit (n2 = 2) to the first orbit (n1 = 1). The smallest wave length in the infrared region is obtained when electron transition occurs from the outermost orbit (n2 = ∞) to the third orbit (n1 = 3), and this spectral line is the shortest wave length line in the Paschen series (for which n1 = 3 and n2 = 4, 5, 6, …etc.). [The two equations are 1/122 = R(1 – 1/4) = 3R/4 and 1/λ′ = R(1/9), with the wavelengths in nm]. Dividing the first equation by the second, λ′/122 = 3×9/4, from which λ′ = 823 nm.

In the Rydberg relation 1/λ = R(1/n1² – 1/n2²), n1 = 1 and n2 = 2, 3, 4, …etc., for the Lyman series. For the Balmer series (which is in the visible region), n1 = 2 and n2 = 3, 4, 5, …etc. The longest wave length in the Lyman series is obtained for n2 = 2 and the highest frequency (shortest wavelength) in the Balmer series is obtained for n2 = ∞. λ′/1240 = 3, from which λ′ = 3720 Å. ν = c/λ′ = (3×10⁸)/(3720×10⁻¹⁰) ≈ 8×10¹⁴ Hz.

This question is a simple one and is intended to check your understanding of techniques used in experimental physics in addition to your theoretical knowledge. Since the image appears to the right of the object when the student shifts his eye towards the left, the image is nearer to the student and hence the image distance ‘v’ is greater than the object distance ‘u’. This is possible only if the object is placed between f and 2f. So the correct option is (b). [The image is real since it is inverted, as mentioned in the question. The problem can be worked out even if this fact is not mentioned in the problem].

If the angle of incidence is not equal to or greater than the critical angle, there will be partial transmission and partial reflection. As is evident from the figure, the angle between the reflected ray and the refracted ray (angle SQR) is less than 180° – 2θ. The correct option is (c).

Laws of reflection are strictly valid for plane surfaces, but not for large spherical surfaces. This is an assertion-reason type MCQ. You might have noted that the formula connecting u, v and f was derived considering rays close to the principal axis, so that only a small portion of the mirror surrounding the pole is involved. Statement-1 is therefore true. Statement-2 is false since the laws of reflection are applied to a ray which is incident at a given point. The size of the mirror and its curvature are not involved here. You can find all the posts in Optics on this site by clicking on the label 'optics' below this post.

This is a very simple question and the correct option is (e). Imagine the wire to be made of a large number of horizontal and vertical elements as shown. There are as many vertical elements carrying current upwards as there are those carrying currents downwards. The magnetic forces on them will be leftwards and rightwards and they will get canceled. But the magnetic forces on the horizontal elements will be in the same direction (either upwards or downwards, depending on the directions of the magnetic field and the current) and they will get added to produce a net force IλB. The magnetic force everywhere on the loop is radially outwards, as given by Fleming’s left hand rule.
So, the loop has a tendency to expand [Option (b)]. In the above question, suppose that in place of the option ‘expand’, you had the option ‘move towards the positive Z-direction’. In that case also, the correct option would be (b) because the loop will act as a magnetic dipole whose south pole is the nearer face. The loop will therefore move towards the reader.

V = (πPr⁴t)/(8Lη), where ‘η’ is the coefficient of viscosity of the liquid. [You should note that this formula holds good only if the flow is slow and steady (stream-lined)]. πPr⁴/(8Lη) = πP′(r/2)⁴/[8(L/2)η], where P′ is the pressure difference between the ends of the second tube. From this, P′ = 16P/2 = 8P.

1/Qnet = 1/Q1 + 1/Q2 + 1/Q3 + …etc., where Q1, Q2, Q3, …etc. are the individual rates of flow when the tubes are connected separately to the same pressure head. Qnet = Q1Q2/(Q1 + Q2). Here Q1 = 4 cm³. Since the rate of flow through a tube is given by Q = (πPr⁴)/(8Lη), we have Q ∝ r⁴/L. Q2 = Q1×(2/16) = Q1/8 = 4/8 cm³ = 0.5 cm³. The net rate of flow is therefore given by Qnet = (4×0.5)/(4 + 0.5) = 4/9 cm³ [Option (e)].

As the temperature is constant, we have P1V1 + P2V2 = PV, where P1 and P2 are the pressures inside the separate bubbles, V1 and V2 are their volumes, P is the pressure inside the combined bubble and V is its volume. Since the bubbles are located in vacuum, the pressure inside each bubble is equal to the excess of pressure 4T/r, where T is the surface tension and ‘r’ is the radius, so that we have (4T/r1)×[(4/3)πr1³] + (4T/r2)×[(4/3)πr2³] = (4T/R)×[(4/3)πR³], where R is the radius of the combined bubble. This yields R = √(r1² + r2²).

If r1 and r2 are the radii of the bubbles and T is the surface tension of the soap solution, we have (4T/r1)/(4T/r2) = 4. Therefore, r1/r2 = ¼. Since the volume is directly proportional to the cube of the radius, the ratio of volumes V1/V2 = (r1/r2)³ = (¼)³ = 1/64 [Option (d)].

The excess of pressure inside the smaller bubble is greater than that inside the bigger bubble. (Remember, P = 4T/r and hence the excess of pressure ‘P’ is inversely proportional to the radius ‘r’). Therefore, air will flow from the smaller bubble to the bigger bubble [Option (c)].
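The series-combination rule 1/Qnet = 1/Q1 + 1/Q2 used a little earlier can be justified with a short sketch from Poiseuille's equation, assuming the full pressure head P drives the series combination and that Q1, Q2 denote the flow each tube would pass alone under that same head:

```latex
P = P_{1}+P_{2}
  = \frac{8L_{1}\eta}{\pi r_{1}^{4}}\,Q+\frac{8L_{2}\eta}{\pi r_{2}^{4}}\,Q
  = \left(\frac{P}{Q_{1}}+\frac{P}{Q_{2}}\right)Q
\quad\Rightarrow\quad
\frac{1}{Q_{\text{net}}}=\frac{1}{Q_{1}}+\frac{1}{Q_{2}}
```

Here Q = Qnet is the common flow rate through the two tubes in series and P1, P2 are the pressure drops across the individual tubes.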
http://www.physicsplus.in/2007/04/
Specifies that nodal body force loads are to be accumulated.

APDL Command: BFCUM

Parameters
- lab : Valid body load label. If ALL, use all appropriate labels.
- oper : Accumulation key. REPL - Subsequent values replace the previous values (default). ADD - Subsequent values are added to the previous values. IGNO - Subsequent values are ignored.
- fact : Scale factor for the nodal body load values. Zero (or blank) defaults to 1.0. Use a small number for a zero scale factor. The scale factor is not applied to body load phase angles.
- tbase : Used (only with Lab = TEMP) to calculate the temperature used in the add or replace operation (see Oper).

Notes
Allows repeated nodal body force loads to be replaced, added, or ignored. Nodal body loads are applied with the BF command. Issue the BFLIST command to list the nodal body loads. The operations occur when the next body loads are defined. For example, issuing the BF command with a temperature of 250 after a previous BF command with a temperature of 200 causes the new value of the temperature to be 450 with the add operation, 250 with the replace operation, or 200 with the ignore operation. A scale factor is also available to multiply the next value before the add or replace operation. A scale factor of 2.0 with the previous “add” example results in a temperature of 700. The scale factor is applied even if no previous values exist. Issue BFCUM,STAT to show the current label, operation, and scale factors. Solid model boundary conditions are not affected by this command, but boundary conditions on the FE model are affected. Note: FE boundary conditions may still be overwritten by existing solid model boundary conditions if a subsequent boundary condition transfer occurs. BFCUM does not work for tabular boundary conditions. This command is also valid in PREP7.
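A minimal PyMAPDL sketch of the accumulation behaviour described in the notes above (the node number and load values are illustrative only; it assumes a local MAPDL instance reachable through ansys-mapdl-core):

```python
from ansys.mapdl.core import launch_mapdl

mapdl = launch_mapdl()   # start (or connect to) a local MAPDL session
mapdl.prep7()            # BFCUM is also valid in PREP7

# Apply an initial nodal body-force load: temperature of 200 at node 1
mapdl.bf(1, "TEMP", 200)

# Ask for subsequent TEMP body loads to be added, scaled by 2.0
mapdl.bfcum("TEMP", "ADD", 2.0)

# Per the "add" example in the notes, a new BF temperature of 250 now
# accumulates to 200 + 2.0 * 250 = 700 at node 1
mapdl.bf(1, "TEMP", 250)

mapdl.bfcum("STAT")      # show the current label, operation and scale factor
mapdl.bflist()           # list the accumulated nodal body loads
mapdl.exit()
```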
https://mapdldocs.pyansys.com/mapdl_commands/solution/_autosummary/ansys.mapdl.core.Mapdl.bfcum.html
How to Write a Claim – Step by Step Explanation ... There may be many sub-arguments or claims in your essay, but your position can be strongly proved with the ... Claims, Claims, Claims A claim is the main argument of an essay. It is probably the single most ... A claim must be argumentative. When you make a claim, you are arguing for a certain. A thesis statement, on the other hand, is a claim, fact or argument that you intend to prove or disprove in your essay. Writing a good thesis statement all boils down to thoroughly understanding the type of 'claim' that you're trying to assert to your readers. Write an essay of at least three pages in which you make a claim about the state of the health care industry, from either the patient's or the provider's perspective. Support your claim with references to case studies written by members of the class and available on the class web page. What Does It Mean to Make a Claim During an Argument? An academic claim—a claim you make in an argument—is considered debatable or up for inquiry. James Jasinski explains in "Argument: Sourcebook on Rhetoric" that a claim "expresses a specific position on some doubtful or controversial issue that the arguer wants the audience to accept." What Is a Claim in an Essay? - writingbee.com Is it possible to make a claim more precise and specific? Ensure that your claim conveys exactly what you intend to argue and that the evidence that you have presented is directly linked to the claim. To summarize, the goal of this article is to improve students' understanding of claims in an essay and how they can be formulated. Argumentative Claims - mesacc.edu How to Write a Rhetorical Essay Two Types of Rhetorical Essays There are basically two different types of rhetorical essays. One is an expression of your opinion on a text you read, such as a book or article. This is sometimes called a rhetorical analysis essay.
https://coursezqlnh.netlify.app/aldonza11408he/how-to-make-a-claim-in-an-essay-gob.html
[The structure of the nourishment of preschoolers during the weekend (short report)]. To study the feeding of 190 children aged 3-7 years attending 4 preschool educational institutions in the city of Mezhdurechensk, Kemerovo region, parents were surveyed by questionnaire about meals at home on weekends. The survey showed that 57% of children ate 5 times a day and 43% ate 4 times a day, including snacks; 97% of children had a main dish, drinks and sandwiches at breakfast, and only 3% of children had a breakfast that did not meet the recommended structure and was insufficient to cover energy needs, because it consisted only of a sandwich and a drink. Only 7% of preschoolers' lunches met the recommendations on structure and included a first course, main dish, salad and drink; the lunch of 93% of children included only one hot dish (either a first or a second course). Compared with the recommended standards, dinner was excessive in volume, on average, for 57% of preschoolers, since it consisted of sandwiches, salad, a meat dish with a side, and a drink. Children's snacking during the day included the following products: sandwiches (with sausage, cheese or butter), tea with sweets (candies, chocolate), fermented milk drinks, and fruit. Examination of the frequency of meals on weekends showed a tendency to repeat the same dishes (what children ate for lunch, they ate at dinner on the same day, and then for breakfast on Sunday). The study of the structure of evening meals of preschool children at home during the week found that in most cases (67.8%) their dinner was irrational, with a structure identical to that of the adults of the family (fried dishes: fried pies, fried potatoes, sodas). The irrationally organized meals for children at the weekend reflect parents' inadequate knowledge of a healthy diet for their child at home.
BACKGROUND OF THE INVENTION The present invention relates to bread crumbs. In particular, the invention relates to a process and apparatus for the manufacture of bread crumbs and their coating onto a substrate food such as fish or poultry. The food manufacturing industry produces bread crumbs for coating many food items, for example fish portions, veal and chicken. The conventional bread crumb is small and gritty. The conventional system for manufacturing this type of crumb uses the steps listed below. In the following discussion and description, all percentages are on a by weight basis. 1. Wheat is made into flour. Typically, the extraction rate is of the order of 76-78%. 2. Flour is baked into bread. 3. The bread is made into crumbs, which contain approximately 30% water. 4. The crumbs are dried to reduce the moisture content to approximately 8-12%. 5. The crumbs are bagged. 6. The crumbs are stored. 7. The crumbs are transported. 8. The crumbs are stored. 9. The crumbs are transferred to the hopper of an enrobing machine and coated onto a food product. Stages 1 to 6 are carried out by a bread crumb manufacturer. Stages 8 and 9 are carried out by the manufacturer of the final product. For the more sophisticated adult market, there is a need for bigger crumbs that are fresher and have improved qualities, such as texture and taste. To date, it has proved difficult to satisfy this demand, as the use of a fresher bread for creating the crumbs leads to processing problems, for example balling-up of the bread during the crumb producing steps. Furthermore, during transportation and storage, crumbs with a higher moisture content would tend to go mouldy. SUMMARY OF THE INVENTION In order to overcome these problems, the present applicants provide a new system for the manufacture of bread crumbs and their use for coating food. Thus, in one aspect the present invention provides a new process and apparatus in which the bread crumbs are made and then substantially immediately coated onto a substrate food. Thus the apparatus provided is able to carry out all the process steps relating to crumb manufacture from loaves of bread and to coating the crumbs onto a substrate food. In a second aspect the invention provides a process and apparatus for producing bread crumbs which are relatively moist, e.g. having a water content of 25-40% by weight. In another aspect the invention concerns a process according to the first or second aspects which includes preliminary steps of providing bread to be reduced to crumbs. Such a process may comprise the steps of: (a) baking bread from flour which has been aged; (b) staling the bread; and (c) making crumbs from the bread and, substantially immediately, coating the crumbs onto a substrate food. The flour may be manufactured from wheat at an extraction rate of 59-67%. The flour may be aged for not substantially less than 14 days and the aging may be carried out at ambient humidity at a temperature of 15°-22° C. The bread may be allowed to stale for at least one day. Staling may be at ambient humidity and at a temperature of 15°-22° C. Preferably, the bread should not be allowed to stale for more than 5 days. The resulting crumbs coated on the substrate preferably contain approximately 25-40% of water. In order that the present invention is more readily understood, an embodiment will now be described in more detail for the purposes of illustration only. BRIEF DESCRIPTION OF THE DRAWINGS FIG.
1 shows a flow chart of the steps involved in a process for the manufacture of bread crumbs according to the present invention; FIG. 2 is a diagrammatic end-on view of an apparatus for the manufacture and coating of bread crumbs and the coating of food therewith; FIG. 3 is a diagrammatic view of a face of the wheel in the apparatus shown in FIG. 2 which functions to recirculate bread crumbs not coated onto a substrate food; and FIG. 4 is a diagrammatic side view of the apparatus shown in FIG. 2. DESCRIPTION OF THE PREFERRED EMBODIMENT A process embodying the present invention for the manufacture and use of bread crumbs uses the steps 1 to 5 shown in FIG. 1 and described in more detail below. 1. Bread is baked from flour. Typically the flour is made from wheat at an extraction rate of 59-67% and then aged for at least 14 days, at a temperature of 15°-22° C. and at ambient humidity. 2. The bread is allowed to stale. Typically the bread is staled for a minimum of 1 day at a temperature of 15°-22° C. and at ambient humidity. Preferably, the bread should not stale for longer than 5 days. 3. The staled bread is fed into the crumb-producing part of the apparatus described in more detail below. 4. The apparatus is operated to convert the bread into crumbs and to coat the crumbs as they are produced onto a substrate food. 5. The coated food is then transported away from the apparatus to e.g. a packaging station. The crumbs produced and applied by the operation of such a process contain approximately 25-40% water. They are large, with a low bulk density and improved eating characteristics as compared to the bread crumb which results from operation of the conventional system. Since crumb production and coating are carried out almost simultaneously, the food manufacturer no longer needs to obtain ready prepared crumbs from a bread crumb manufacturer. For example, the food manufacturer may simply obtain ready baked bread (in which case the food manufacturer does not carry out step 1 and optionally step 2) or, alternatively, he may manufacture his own bread (in which case the manufacturer carries out steps 1 to 5). Thus, the system is advantageous as it is economic to operate in terms of time and cost. The apparatus for carrying out step 4 of the process is shown in FIGS. 2, 3 and 4. The apparatus 10 has a food conveyor system 14, in this example comprising a conveyor belt 15, for conveying food through a crumb producing and coating station. This includes a crumb producing apparatus 12 cantilevered over the belt 15. The conveyor system 14 transports pieces of substrate food 16 beneath the crumb producing apparatus 12 to be coated in freshly produced bread crumbs. The crumb producing apparatus 12 is cantilevered over a portion of the conveyor belt 15 by at least one arm 18 which is swivellably mounted upon an upright supporting post 20 located to one side of the conveyor belt 15. The swivellable mounting of the arm 18 upon the post 20 is such that the crumb producing apparatus 12 can be partially rotated around the swivellable mounting in either a vertical or horizontal plane. Either type of rotation will allow an operator to alter the width of belt overlain by the crumb producing apparatus 12. Thus it can be adapted to different belt widths. The crumb producing apparatus 12 has a casing 22 which houses the mechanism 24. Loaves of bread 26 are fed into an inlet 25 of the casing 22, via a chute 28. The chute 28 is longer than a human arm in order to protect an operator's hand from the mechanism 24.
The mechanism 24 is in the form of a drum 30 which is rotatable about a central axis 32 by a motor 33. A plurality of teeth 34 project outwardly from the cylindrical surface of the drum 30. In the embodiment being described, the teeth 34 are arranged in a spiral around the drum 30. They may be provided by a toothed strip (as used for bandsaws) which has been secured to the drum surface, e.g. by welding. Other forms of teeth may alternatively be used. For example, the teeth 34 could be replaced with spikes. The loaves of bread 26 being converted into crumbs are held a short distance away from the drum 30 by a plurality of spaced apart rods 36 which extend across the inlet 25 of the casing to provide a platform just above the drum 30. The teeth 34 can project between adjacent rods in order to bite off small crumbs from the loaves of bread 26. Thus crumbs are generated by a "pecking" action, which can operate perfectly well even with quite moist bread, which could not be comminuted by a conventional grinder. Loaves of bread being converted into crumbs are kept in engagement with the teeth 34 of the drum 30 by virtue of their own weight and the weight of other loaves 26 of bread in the chute 28. A mesh 38 is located within the casing 22 beneath the drum 30. It is arcuate and spaced slightly from the teeth 34. The mesh 38 functions to screen for a desired crumb size, i.e. it only allows crumbs of a given size or less to pass through and coat the substrate food 16. The mesh 38 is removable, so that it can be replaced with a different mesh. This enables an operator to select and control the size of crumb for coating a particular substrate food. As they are produced, the crumbs fall into a trough-like depression 40 defined in the conveyor belt 15 by pairs of nip rolls 39, 41. (The upper nip rolls 39 engage only edge regions of the belt 15 so as not to interfere with the transport of food items). The trough-like depression 40 is located immediately beneath the crumb producing apparatus 12 and provides a coating location. Pieces of substrate food 16 moving along the conveyor belt 15 fall onto the crumbs in the trough-like depression 40. This coats a lower surface of the substrate food 16 with crumbs. Whilst in the trough-like depression 40, exposed surfaces of the substrate food 16 are coated by crumbs falling directly upon it from the crumb producing apparatus. The conveyor belt 15 can be made of a chain or mesh, e.g. a cross-link belt. Excess crumbs not used for coating drop through the belt 15 onto a sloping tray 56 and are re-circulated for coating. The crumbs are re-circulated by a wheel 42 shown in FIGS. 2 and 4 and in FIG. 3 in more detail. The wheel 42 is substantially like a water-wheel. It has the form of a shallow drum with a horizontal axis. It has a circular back-wall 44 and a cylindrical side-wall 46. A plurality of short radial vanes 48 or cups are spaced around the rim 50 of the wheel 42, connected to the back 44 and circumferential 46 walls. The wheel 42 is located to one side of the conveyor system 14 and substantially opposite the post 20 supporting the bread grinding station 12. The vanes 48 of the wheel 42 point towards the conveyor system 14. The tray 56, which is located beneath the conveyor belt 15 to capture crumbs falling through, slopes downwardly away from the conveyor belt 15 towards the wheel 42. A lowermost edge 58 of the tray 56 ends slightly spaced from the back-wall 44 and beneath the hub 52 and above the lowermost vanes 48 of the wheel 42.
Crumbs fall from the edge 58 of the tray 56 into the rotating wheel 42. As the wheel rotates, the vanes are loaded with crumbs from the tray, are raised and gradually inverted. The crumbs fall out and onto a slide 60 which is located above and inclines towards the conveyor belt 15. The start of the slide 60 is slightly spaced from the backwall 44 and above the hub 52 and below the uppermost vanes 48 of the wheel 42. The spacing from the backwall 44, prevents the slide from impeding the rotation of the wheel. The crumbs then pass down the slide 60 and fall off its end 62 located above the trough-like depression 40 of the conveyor belt 15. Hence, the crumbs fall back onto the conveyor belt 15 at the coating location. While the invention has been described above with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes can be made without departing from the spirit and scope of the invention and it is intended to cover all such changes and modifications by the accompanying claims.
DRIFT CAR PRACTICE: Held every THURSDAY NIGHT FROM 4PM TILL 8PM. Cost is $65 per driver, $40 for a second driver and $5 for spectators. Rides are available from $20 per lap.
Safety Requirements:
- Muffler fitted - no straight-through (95 dB limit)
- Small fire extinguisher fitted in car
- All wheel nuts fitted
- All panels fitted
- Driver/passenger must wear a long sleeved shirt and long pants with fully enclosed shoes
- AS approved helmet
Driver is responsible for the mechanical safety and reliability of their car.
DRIFT CAR EVENTS:
http://archymotorsport.com.au/DRIFTING
Because Kazakstan is so far from the oceans, the climate is sharply continental and very dry. Precipitation in the mountains of the east averages as much as 600 millimeters per year, mostly in the form of snow, but most of the republic receives only 100 to 200 millimeters per year. Precipitation totals less than 100 millimeters in the south-central regions around Qyzylorda. A lack of precipitation makes Kazakstan a sunny republic; the north averages 120 clear days a year, and the south averages 260. The lack of moderating bodies of water also means that temperatures can vary widely. Average winter temperatures are -3°C in the north and 18°C in the south; summer temperatures average 19°C in the north and 28°-30°C in the south. Within locations, differences are extreme, and temperature can change very suddenly. The winter air temperature can fall to -50°C, and in summer the ground temperature can reach as high as 70°C. NOTE: The information regarding Kazakhstan on this page is re-published from The Library of Congress Country Studies and the CIA World Factbook. No claims are made regarding the accuracy of Kazakhstan Climate information contained here. All suggestions for corrections of any errors about Kazakhstan Climate should be addressed to the Library of Congress and the CIA.
https://www.photius.com/countries/kazakhstan/climate/kazakhstan_climate_climate.html
WASHINGTON — The Environmental Working Group (EWG) published a searchable database on Oct. 27 that rates food and beverage products sold at retail on such criteria as nutrition, ingredients of concern and degree of processing. The Food Scores system is designed to “guide people to greener, healthier and cleaner food choices,” according to the EWG. Product profiles in the database include information on how products compare in terms of nutritional content and whether they contain what the group calls “questionable additives.” Ingredients listed as questionable include nitrites and potassium bromate, among others. The database also lists meat and dairy products that are likely produced with antibiotics and hormones, and it lists fruits and vegetables likely to be contaminated with pesticide residues. “When you think about healthy food, you have to think beyond the Nutrition Facts Panel,” said Renee Sharp, the EWG’s director of research. “It doesn’t always tell the whole story. EWG’s Food Scores shows that certain foods that we think are good for us may actually be much less so because they contain questionable food additives or toxic contaminants.” The Grocery Manufacturers Association (GMA) called the Food Scores database “severely flawed.” “The methodology employed by EWG to develop their new food ratings is void of the scientific rigor and objectivity that should be devoted to any effort to provide consumers with reliable nutrition and food safety information,” the GMA said in a statement. “Their ratings are based almost entirely on assumptions they made about the amount, value and safety of ingredients in the products they rate. Adding insult to injury, EWG conducted no tests to confirm the validity of any of their assumptions.”
https://www.meatpoultry.com/articles/10179-food-scores-database-makes-its-debut
On which continent is Venezuela located? Is it Hawaii? 18 Answers
Favorite Answer: Venezuela is a country on the northern coast of South America. Comprising a continental mainland and numerous islands in the Caribbean Sea, Venezuela borders Guyana to the east, Brazil to the south, and Colombia to the west. Trinidad and Tobago, Aruba, and the Leeward Antilles lie just north of the Venezuelan coast. A former Spanish colony, Venezuela is a federal republic. Historically, Venezuela has had territorial disputes with Guyana, largely concerning the Essequibo area, and with Colombia concerning the Gulf of Venezuela. Today, Venezuela is known widely for its petroleum industry, the environmental diversity of its territory, and its sheer natural beauty. Source: http://en.wikipedia.org/wiki/Venezuela
Other answers:
- South America (given by several respondents).
- Venezuela is located in the northern part of South America. Hawaii would be considered part of North America. Venezuela borders the Atlantic Ocean. Hawaii is a series of islands located to the west of California in the Pacific Ocean. I hope this was helpful!
- Venezuela is located in the American continent, the south part of it, usually called South America.
- (One off-topic answer listed mountain ranges instead: the Appalachians in the US and Canada, the Himalayas in Asia, and the Pyrenees and Alps in Europe.)
https://answers.yahoo.com/question/index?qid=20061116204003AAIl8ao
BACKGROUND OF THE INVENTION
Conventional managed information environments typically include a plurality of interconnected manageable entities, or nodes. In such an environment having a storage area network (SAN), the manageable entities may include storage arrays, connectivity devices and host entities, collectively operable to provide information storage and retrieval services to users. In a large storage area network, the number of nodes may be substantial. In such a storage area network, software entities known as agents are responsive to a management application for managing the nodes in the SAN. The agents typically execute on a host computer and are in communication with manageable entities in the SAN responsive to the agent for providing configuration and status information, and for receiving instructions from the management application. In a typical conventional SAN, the agents manage and monitor a variety of manageable entities having different functions, and often emanating from different vendors. Further, connectivity, equipment changes and maintenance may affect the presence and status of the various manageable entities. Therefore, the configuration of the SAN may be complex and dynamic. Accordingly, the SAN may adapt agents to a particular configuration in the SAN. The agents may be responsible for general tasks of a large number of nodes or manageable entities, or may have a more specialized role in handling a smaller number of specialized or vendor specific manageable entities. Nonetheless, the agent is communicative with the manageable entities for which it is responsible. Therefore, conventional SAN agents typically employ an application programming interface (API) conversant with a particular manageable entity and operable to manage and monitor the manageable entity. Often, it is desirable to simulate the SAN for testing and development purposes. Simulation may avoid the need to duplicate a possibly large configuration of physical nodes. Simulation agents may be developed or modified to emulate the behavior of a storage array to the invoking SAN server. The conventional simulation agents may be configured with information similar to that which is available via the actual API employed for accessing the actual counterpart storage array. Therefore, a conventional server may be communicative with a set of simulation agents, each operable to receive requests and send responses emulative of an agent serving an actual storage array. In a storage area network, agents typically manage and monitor a plurality of manageable entities, or resources, by employing an application programming interface (API) known to both the agent and the manageable entity. As a typical conventional agent may manage multiple manageable entities, or manageable resources, an agent may employ a particular API specific to each manageable resource or type of manageable resource which it manages. Often, such conventional APIs are specific to a particular vendor of the type of manageable resource concerned. For example, a conventional SAN often employs a plurality of connectivity devices, or switches, between hosts and storage arrays. The switches manage the ports which physically interconnect the hosts and storage arrays for delivering data from the storage arrays to users. A typical agent may therefore be a switch agent operable to manage a plurality of switches, each from a different vendor.
Therefore, the agent employs a particular device specific API corresponding to the switch of each vendor. An agent, therefore, may employ an API having a set of interface modules, or so-called “plug-ins,” each adapted to the switch of each particular vendor. Accordingly, each plug-in is operable between the agent and the particular switch. The plug-ins are operable to share a common set of commands or instructions (i.e. parameters) with the agent side of the API, and communicate in vendor specific parameters on the switch side. Each API typically employs multiple plug-ins to cover the switches which it manages. For each call to a switch, therefore, the API selectively employs the corresponding plug-in specific to the switch, such as a vendor specific or storage array type specific API. In a SAN simulation environment, simulation agents emulate the operation of agents coupled to actual storage arrays, without requiring the actual use of the counterpart storage array. One particular configuration of SAN simulation is described in copending U.S. patent application Ser. No. 10/954,015 entitled “SYSTEM AND METHODS FOR STORAGE AREA NETWORK SIMULATION,” filed Sep. 29, 2004, now U.S. Pat. No. 7,315,807, and assigned to the assignee of the present invention, incorporated herein by reference. In such a simulation environment, the simulation agent receives calls, or requests, made by a server, and responds with responses to simulate actual results emanating from the counterpart storage array. However, conventional simulators respond in a predetermined manner to expected inputs, or calls. Such predetermined simulated responses typically do not need to actually perform the corresponding processing by a storage array which the responses purport to represent. Accordingly, conventional simulators may respond with a programmed response more quickly than the counterpart actual response. In the SAN, however, requests or calls typically arrive on a demand basis from the server based on client requests. The order and frequency of such requests may be a variable subject to operator driven requirements. Such concurrent requests are often processed simultaneously by the recipient, such as the agent serving the storage array of which the request is made. A multitude of requests results in a load on the agent, and typically the agent prorates processing resources among the requests in a scheduled or context switched manner. In such a scheduled system under load, the response time of individual requests typically varies based on the total load, typically imposing a latency delay on the response proportional to the total pending call load. Accordingly, in a simulation scenario, it is beneficial to simulate a load of calls to the storage array. Configurations of the invention are based, in part, on the observation that conventional simulations provide predetermined responses according to automated or preprogrammed logic. Such automated responses may occur much more quickly than their actual counterparts, particularly in a loaded system. Therefore, a barrage of simultaneous calls to a simulator may return more quickly than the same barrage of calls in the actual target system, possibly providing an inaccurate indication of the ability of the system to perform under load. The present invention substantially overcomes the shortcomings presented by such conventional simulations by identifying a processing work burden and corresponding completion time associated with each of the simulated calls (agent requests). 
The simulation agent aggregates the work burden of concurrent requests and computes a latency factor associated with each call based on the load presented by all pending calls. Accordingly, the simulation agent sends the corresponding responses from the simulation calls following the computed latency period. In this manner, the latency computation and associated latency extension of the response cycle provides an accurate view of the agent under load from a plurality of multiple competing requests contending for processing. In the exemplary configuration discussed herein, the SAN simulator is operable to emulate a SAN agent receiving commands from a console and issuing calls to a simulated SAN represented by the emulated resource. Contention is an effect of multiple clients accessing the same target (proxy, Service Processor or switch) at the same time. Clients may use different interfaces for communication with the target, for example SNMP, SMI-S, or a native API. When multiple clients send requests to the same target via the same interface, the requests are usually serialized at the target port, meaning the target processes the first request, and starts processing the next request only after completion of the previous one. Even if multiple clients use different interfaces, which may not necessarily become serialized, the requests have an effect on each other, since they compete on shared resources such as CPU. In either case, execution of the requests on the target gets delayed when multiple clients send requests at the same time. Therefore, configurations of the invention substantially overcome the above described shortcomings with respect to observing configuration symptoms of large installations by providing a storage area network simulator operable to simulate an exchange of calls emanating from a SAN management application to a plurality of manageable entities, or resources, such as switches. The simulated call load provides simultaneous simulated calls for observing and analyzing SAN management application responses to various loading scenarios characterized by multiple pending calls to the simulated connectivity device. By adjusting the latency time of each pending call, and adjusting the latency to represent the additional processing burden imposed by successive pending calls, the simulated responses provide an accurate set of responses to the calls, rather than merely returning an unencumbered rapid response of expected content which may elusively represent the actual timeliness of responses in a loaded system. In the exemplary SAN simulation environment discussed herein, one particular aspect of the contention simulation is to simulate the effect of contention on the performance of the clients accessing the target. Specifically, it is to estimate the expected time of a task return in the contention environment. Some tasks may be broken into multiple smaller requests or queries sent to a target. For example, discovery of a switch port list via the SNMP interface is broken into multiple SNMP queries sent to a target one after another, while discovery of zoning via another interface may utilize a single large query to the target (i.e. the host responsive to the call). To a first approximation, the tasks can typically be divided into two groups: non-breakable tasks and breakable tasks. Non-breakable tasks are the ones that are executed utilizing one big query to the target. Breakable tasks are the ones that are executed utilizing multiple, usually light queries to the target.
Another assumption is that all the low level requests or queries via the same interface processed by the target are serialized on the target. Requests via different interfaces to the same target are not serialized, but they compete on the shared resources, and the corresponding behavior becomes somewhat similar to breakable tasks. To demonstrate the behavior of contention with breakable and non-breakable tasks, consider the following examples. Two clients are performing almost simultaneous non-breakable tasks against the same target, while each task takes 10 seconds when executed separately. In this case, the first task's query and the task itself will complete after 10 seconds, while the other task's query and the task itself will complete only after 20 seconds, since the second task will commence execution only after completion of the first task. Similarly to the previous example, let us assume there are two clients performing almost simultaneous tasks against the same target, while each task takes 10 seconds when executed separately. However, this time the tasks are breakable, which means the clients send many small queries one after another to the target. In this case, the clients will send the first query almost simultaneously; the target will process the query from the first client and reply to it, then it will start processing the query from the other client. Meantime, the first client receives the response, and sends the second query to the target. The target starts execution of the second request from the first client after completion of the request from the second client, and so on. This way, queries from the two clients get interleaved on the target, which means that the tasks for both clients will complete after 20 seconds. The overall amount of work performed on the target is the same in both cases, which is 20 seconds in our examples. The last client always completes after the period required to perform the work for all clients together. The difference is in the completion of the first task (or other pending tasks in case of more than two clients). In the case of breakable tasks, all tasks complete at the same time, assuming the clients perform equal tasks. In the case of non-breakable tasks, the first client completes its task as if it ran alone (after 10 seconds in our example), followed by completion of the other clients' tasks up to the last client's task completion at the end (after 20 seconds in the above example). This exemplary configuration is also applicable to multiple threads running on the same client and performing tasks against the same target. The significant differences between breakable and non-breakable tasks therefore lie in how the individual completion times are distributed, as the sketch below illustrates. In further detail, the method of simulating responses to request calls includes receiving a call to perform a task by a simulation agent, and identifying currently pending requests already being processed as tasks and operable for concurrent processing by the simulation agent. The latency simulator employs scheduling logic to compute, based on the collective load burden of the identified currently pending requests, a completion time for the processing of the pending request, and recalculates, based on the received call, the completion time of the currently pending requests, therefore modifying the previously computed completion time of the pending requests to reflect new tasks. The latency simulator then transmits the response to the received task call at the recalculated completion time.
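The two 10-second examples above can be reproduced with a short, self-contained sketch (illustrative only; the function names are not from the patent, and the model simply assumes the target is shared equally among whatever tasks are pending):

```python
def nonbreakable_completion_times(durations):
    """Non-breakable tasks are serialized whole: each task completes only
    after all of the tasks ahead of it have finished."""
    times, elapsed = [], 0.0
    for d in durations:
        elapsed += d
        times.append(elapsed)
    return times


def breakable_completion_times(durations):
    """Breakable tasks are split into many small queries that interleave,
    so each pending task advances at 1/N of the target's speed while N
    tasks are pending (a contention section ends when one task finishes)."""
    remaining = list(durations)
    times = [None] * len(durations)
    now = 0.0
    while any(r > 0 for r in remaining):
        active = [i for i, r in enumerate(remaining) if r > 0]
        # the current contention section lasts until the task with the
        # least remaining work completes, given an N-way shared target
        section = min(remaining[i] for i in active) * len(active)
        now += section
        for i in active:
            remaining[i] -= section / len(active)
            if remaining[i] <= 1e-9:
                remaining[i] = 0.0
                times[i] = now
    return times


# Two clients, each with a 10-second task, as in the examples above:
print(nonbreakable_completion_times([10, 10]))  # [10.0, 20.0]
print(breakable_completion_times([10, 10]))     # [20.0, 20.0]
```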
Further, the latency simulator may receive successive task calls requesting an associated response, recalculate the completion time of each of the currently pending requests responsive to the burden imposed by each of the successive tasks, and transmit the associated response for each of the currently pending requests at the corresponding computed completion time. Computing further comprises computing a contention period corresponding to a period of simultaneous processing by a set of tasks. Therefore, computing the completion time of the received task includes determining a contention period for each interval during which a fixed number of currently executing tasks are executing concurrently with the received task, and computing, for each contention period, the work burden completed for the received task by dividing the duration of the interval by the number of concurrent tasks and the received task contending for processing. The latency simulator, therefore, recalculates the completion time by identifying, for each contention period, the number of concurrent tasks, and extends the completion time for each of the currently executing tasks by dividing the processing for each contention period by the number of concurrent tasks. In the exemplary configuration, the scheduling logic computes the contention period by identifying, from among the currently executing tasks, an end time of the earliest completing task and identifying the start time for the received task. The scheduling logic delimits the contention period by computing the interval from the start time of the received task to the end time of the earliest completing task, thus identifying the point at which the number of concurrent tasks changes. Identifying the end time of the earliest completing task further includes estimating an end time of the received task based on the number of currently executing tasks and the processing burden of the received task, and recalculating the end time of the currently executing tasks based on the additional processing corresponding to the received task. The scheduling logic then compares each of the end times of the received task and the currently executing tasks, and denotes the earliest of the compared end times as the end time of the earliest completing task. In alternate configurations, the tasks further include non-breakable tasks, in which each of the non-breakable tasks is indicative of a set of sequential atomic operations, and the contention section corresponds to the processing of a current task of the currently executing tasks. Further, the processing may include allocating processing among the work burden of the currently pending tasks in equal time slices, or other suitable scheduling. In the exemplary configuration, recalculating the completion time includes determining the number of executing tasks including the addition of the received task, and computing, for each of the currently executing tasks, an amount of work performed in the contention section, or period. The latency simulator extends the completion time for each of the currently executing tasks based on the computed work performed during the contention section augmented by the new task. 
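A minimal sketch of the recalculation just described, under the simplifying assumption that every pending task is breakable and the target is shared equally among them (the class and method names here are illustrative, not taken from the patent):

```python
class LatencySimulator:
    """Recomputes simulated completion times each time a new call arrives.

    Assumes every task is breakable, so N concurrent tasks each receive
    1/N of the target's processing during a contention section.
    """

    def __init__(self):
        self.now = 0.0
        self.remaining = {}   # task id -> remaining work burden (seconds)

    def submit(self, task_id, work_burden, now):
        """Register a new call and return {task_id: completion_time} for
        every pending task, recalculated to reflect the added load."""
        self._advance(now)
        self.remaining[task_id] = work_burden
        return self._completion_times()

    def _advance(self, now):
        """Consume the wall-clock time elapsed since the last event,
        sharing it equally among the tasks that were pending."""
        elapsed = now - self.now
        while elapsed > 0 and self.remaining:
            active = list(self.remaining)
            # a contention section ends at the earliest task completion,
            # or when the elapsed interval is used up, whichever is first
            section = min(min(self.remaining[i] for i in active) * len(active),
                          elapsed)
            for i in active:
                self.remaining[i] -= section / len(active)
                if self.remaining[i] <= 1e-9:
                    del self.remaining[i]
            elapsed -= section
        self.now = now

    def _completion_times(self):
        """Project completion times for all pending tasks from self.now by
        walking successive contention sections."""
        remaining = dict(self.remaining)
        completion, t = {}, self.now
        while remaining:
            active = list(remaining)
            section = min(remaining.values()) * len(active)
            t += section
            for i in active:
                remaining[i] -= section / len(active)
                if remaining[i] <= 1e-9:
                    completion[i] = t
                    del remaining[i]
        return completion


# Two 10-second calls arriving together complete together at t = 20,
# matching the breakable-task example discussed earlier.
sim = LatencySimulator()
print(sim.submit("T1", 10.0, now=0.0))   # {'T1': 10.0}
print(sim.submit("T2", 10.0, now=0.0))   # {'T1': 20.0, 'T2': 20.0}
```

Note how submitting the second call pushes the first call's projected completion from 10 to 20 seconds, which is the extension of pending completion times that the scheduling logic above is meant to capture.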
Since the newly received task and currently executing tasks are concurrently contending tasks, therefore, the scheduling logic determines the contention section indicative of a fixed number of concurrently contending tasks, and computes, for the determined contention section, an amount of work burden completed for each of the concurrently contending tasks. The scheduling logic then identifies the end of the contention section by the earliest completion of one of the contending tasks, and iterates the determining, computing, and identifying for successive contention sections until the remaining work burden for the received task is zero, i.e. the newly added task is completed. The invention as disclosed above is described as implemented on a computer having a processor, memory, and interface operable for performing the steps and methods as disclosed herein. Other embodiments of the invention include a computerized device such as a computer system, central processing unit, microprocessor, controller, electronic circuit, application-specific integrated circuit, or other hardware device configured to process all of the method operations disclosed herein as embodiments of the invention. In such embodiments, the computerized device includes an interface (e.g., for receiving data or more segments of code of a program), a memory (e.g., any type of computer readable medium), a processor and an interconnection mechanism connecting the interface, the processor and the memory. In such embodiments, the memory system is encoded with an application having components that, when performed on the processor, produces a process or processes that causes the computerized device to perform any and/or all of the method embodiments, steps and operations explained herein as embodiments of the invention to allow execution of instructions in a computer program such as a Java, HTML, XML, C, or C++ application. In other words, a computer, processor or other electronic device that is programmed to operate embodiments of the invention as explained herein is itself considered an embodiment of the invention. In a large storage area network at a customer site, customer reported symptoms of undesirable managed object operation may be difficult to recreate in a diagnostic setting, such as in a SAN management application test and development facility. A storage area network simulation operable to simulate a given configuration, such as a particular customer installation, allows observation and analysis of a particular SAN configuration under load, and also allows a maintenance engineer to analyze problems and develop and test remedial procedures, without physically reproducing such a large physical interconnection. A storage area network simulator, disclosed further below, is operable to simulate an exchange of calls emanating from a SAN management application to a plurality of manageable entities, such as switches, for observing and analyzing SAN management application response to a particular configuration. A capture or gathering tool discovers manageable entities interconnected in a particular target SAN, such as the SAN experiencing undesirable operation. The gatherer delivers a range of exemplary calls to an agent, and gathers responses. The exemplary calls enumerate expected responses from the various manageable entities responsive to the agent. The gathered responses take the form of a normalized file, such as an XML markup script. An emulator plug-in is operative as a simulator interface module (e.g. 
plug-in) for a test agent in a test environment, such as the management application test facility. The test agent is adapted to employ the emulator plug-in as the API plug-in for calls emanating from the test agent. The emulator plug-in receives the normalized responses, and selectively transmits the corresponding response from the normalized responses in response to the corresponding agent call received via the API. Further, the emulator plug-in receives additional configuration data from the capture tool, such as latency timing information such that the emulator responses occur at substantially similar intervals as their counterparts in the symptomatic configuration. FIG. 1 FIG. 1 100 110 114 130 102 130 130 130 130 is a context diagram of an exemplary managed information environment including a storage area network and suitable for use with configurations of the call load latency simulator in a storage area network. Referring to , the managed information environment includes a server responsive to a user console and coupled to agents , or manageable resources, interconnected in the simulated storage area network . In the exemplary arrangement discussed herein, the agent is operable as a simulation agent, however the simulation agent corresponds to a functional SAN host executing the agent as a simulation agent . Techniques for operating the agent as a simulation agent are performed by employing an emulator plug in as described in copending U.S. patent application entitled: “SYSTEM AND METHODS FOR STORAGE AREA NETWORK SIMULATION,” filed concurrently, incorporated herein by reference. 110 116 112 116 130 112 130 102 102 140 130 The server receives commands from the console , and effects the commands via the simulation agent responsively to a SAN management application . The agents are typically software entities executing on hosts in the SAN for maintaining and monitoring managed entities, or managed resources in the SAN . In a simulation environment, the emulated resource simulates a managed resource, such as storage arrays, connectivity devices (i.e. switches), and databases. An agent may manage a plurality of managed resources in the SAN, in which agents are typically assigned to manage resources having a common denominator such as a common vendor and/or type of manageable resource, described further in the above referenced copending patent application. 140 142 146 144 110 130 140 102 130 150 1 2 140 142 146 160 150 1 2 150 144 146 1 2 160 In the exemplary configuration, the emulated resource emulates a manageable entity such as a switch, and includes a latency simulator having scheduling logic and a processing queue . From the perspective of the server and simulation agent , the emulated resource appears as one or more manageable entities in the simulated SAN . The simulation agent sends a plurality of calls , indicative of simulated tasks T and T for processing by the emulated resource . The latency simulator employs the scheduling logic for computing a completion time for sending the responses corresponding to each call , discussed further below. Responsive to the tasks T and T, the emulated resource receives the tasks in the processing queue , computes a completion time according to the scheduling logic , and sends responses R and R ( generally) at the computed completion time of each. 
However, this model achieves efficiency and simulation realism by apportioning the processing burden across the available processing resources according to the linear scheduling described in the exemplary configuration. Alternate scheduling mechanisms may be performed in alternate configurations. FIG. 2 FIGS. 1 and 2 150 112 102 130 140 200 142 150 144 160 150 201 is a flowchart of an agent employing the call load latency simulator. Referring to , the method for processing storage area network resource access requests according to configurations of the invention include receiving a plurality of resource access requests, such as calls , from the storage area network management application that operates in conjunction with the simulated storage area network environment via the agent operating as the emulated resource , as depicted at step . The latency simulator queues each of the received resource access requests (i.e. calls) in the processing queue for preparation and forwarding of the resource access response at calculated resource access response times respectively calculated for each resource access request , as depicted at step . 142 146 150 202 1 2 The latency simulator employs scheduling logic for calculating a resource access response time that simulates contention for access to the simulated resource by other resource requests , as shown at step . As each additional request arrives, the processing resources are divided among the currently executing tasks T, T, thus extending the completion time for the currently pending tasks, discussed further below. The calculation of the resource access response time includes, in the exemplary configuration, computing the resource access response time based upon i) a number of concurrently co-pending resource access requests (i.e. tasks), and ii) an amount of work to be performed by each of the concurrently co-pending resource access requests, expressed as a work burden (W) of time to complete the task in an unencumbered system. 142 160 150 203 204 160 Upon expiration of the calculated completion time, the latency simulator prepares a resource access response to the storage area network resource access request , as depicted at step , and forwards the resource access response at the calculated resource access response time, as shown at step . The resource access response , as indicated above, simulates a response from a SAN resource, and may be predetermined according to the copending U.S. patent application cited above. 150 Contention simulation of the equal (i.e. similar work burden W) tasks from multiple clients started at the same time is straightforward. In case of the non-breakable tasks, the first one completes after time t, where t is the time required for a task to complete in a non-contention case, the second task completes after 2*t, and so on with the last (n'th) task completing after n*t seconds. In case of the breakable tasks, all tasks complete after n*t seconds. However, complexity increases for a case of different tasks starting at different times. The scheduling mechanism for non-breakable tasks is relatively straightforward as well. If a task starts at time Tstart, which normally takes t seconds to complete, and the target does not do anything, then it completes at time Tstart+t. 
If the same task starts at the same time, but the target is processing other non-breakable tasks, which we call pending tasks, and the last pending task on the target is to complete at time Ttarget-complete, then the task will complete at time Ttarget-complete+t. Note that once a task completion time is determined, it never changes as more tasks come in during execution of the pending tasks. The scheduling logic for breakable tasks becomes more complex. One of the complexities is caused by the fact that the completion time of the pending tasks (currently executing on the target or awaiting execution) is affected by other tasks coming in during the middle of the execution. We review what happens with the same task start times and durations as in the previous example, but here the tasks are breakable. The scheduling logic 146 estimates the completion time of the pending tasks 150 upon arrival and updates the time as other tasks come in during execution of the pending tasks (FIGS. 3-4). When a task 150 comes in, it interleaves with the other pending tasks 150 executed by the target 140. For example, at time 1, when a second task comes in, we know that it will be executed together with task one for some time until one of the two ends. At time 3, when a third task arrives, there are already 2 tasks pending on the target. Now, three tasks get interleaved until one of them completes, after which the remaining two tasks remain interleaved until one of them completes, and then the remaining task completes its execution alone. Interleaving of the tasks extends the execution of all of them. In a particular example, until a second task started, the estimated time for completion of the first task was 4. When the second task started, the execution time of both tasks for the period of interleaving is twice as long, since the target processes requests from both tasks. As a result, the execution time of both tasks gets extended by a factor of 2 for the duration of the contention section. Similarly, when the third task joins, all three tasks get extended because of the execution of three interleaved tasks for some period, and then the execution of the remaining two tasks is extended as well. Interleaving of tasks affects the execution time as follows: if we have n tasks executing in parallel when a new one starts, then the execution time for the period of interleaving of all n+1 tasks gets extended by a factor of (n+1)/n for the pending tasks and by n+1 for the new task. Particular aspects of this rule are discussed in further detail in the examples below. FIGS. 3 and 4 are an exemplary timing chart of a particular configuration of the call load latency simulator showing a contention section. Referring to FIGS. 1 and 3, exemplary calls 150 arrive as tasks T1 and T2. The arriving tasks 150 impose a processing burden expressed in time units for completion in an unencumbered system. The latency simulator 142 schedules task T1 at time t=0, requiring two processing units of time, and T2 at t=1, also requiring two units of processing time, in the processing queue 144′. At t=1, as T2 arrives, the latency simulator 142 identifies T1 and T2 as currently pending. Accordingly, the latency simulator identifies the contention section 160 as the time during which the same set of concurrent tasks T1, T2 is executing.
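As a brief aside before the contention section of FIGS. 3 and 4 is resolved, the simpler non-breakable rule stated at the start of this passage (complete at Tstart+t, or at Ttarget-complete+t when pending tasks exist) can be sketched as follows; this is an illustrative reading with invented names, not code from the disclosure.

```python
def non_breakable_completion(t_start, duration, pending_end_times):
    """Non-breakable task: it begins only after the last pending task on the
    target finishes, and its completion time never changes once computed."""
    begin = max([t_start] + list(pending_end_times))
    return begin + duration

print(non_breakable_completion(0, 10, []))      # idle target -> completes at 10
print(non_breakable_completion(0, 10, [10]))    # pending task ends at 10 -> 20
```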
The latency simulator 142 invokes the scheduling logic 146 to resolve the current contention section 160 and compute the revised completion times of T1 and T2 in view of the arrival of T2. For the duration of the contention section 160, the scheduling logic computes a concurrent section 162, effectively "stretching" the time of each task to represent the sharing of processing resources. Since T1 and T2 compete for processing resources, each task receives 1/(number of tasks), or ½, of the processing. Accordingly, for the duration of the concurrent section 162 derived from the contention section 160, the scheduling logic 146 doubles the processing time of each task T1 and T2, shown by shaded areas 164 and 166. Accordingly, the scheduling logic 146 computes the new completion time of T1 as t=3, and the new completion time of T2 as t=4, effectively lengthening each duration by one time unit to correspond to the concurrent section 162 of two time intervals during which each task received half of the processing attention. The scheduling logic 146 is discussed in further detail below as a general mechanism for computing the completion time for an arbitrary number of concurrent tasks. Therefore, in other words, the scheduling logic 146 apportions the available computing resources among the contending, concurrent tasks 150 in an equal time-slicing manner. The simulated tasks represent breakable tasks, meaning that they need not be completed in a single continuous interval of processing time. Further, the scheduler performing the exemplary scheduling logic 146 does not assume a wait time such as may be incurred by a resource waiting, for example, for an external entity such as a disk drive or network port. Often, scheduling algorithms favor breaking, or swapping, tasks at a wait point to mitigate idle time by the processor. A scheduler capable of performing optimal context switching at a wait point, rather than equal interval time slicing, may provide faster results. Accordingly, the testing results provided by the latency simulator 142 are likely to provide a test encumbrance at least as burdensome as the actual calls they represent. The exemplary contention section 160 is applicable more generally to an arbitrary number of concurrent tasks, rather than the two shown above. In order for the emulated resource executing in conjunction with the simulation agent 130 to adjust task completion times for tasks performed by different clients contacting the same target, the latency simulator operates as a simulator daemon, which accepts tasks from all the clients (e.g. simulation agents 130), and returns results to the clients after the estimated task execution time elapses. The latency simulator maintains a list of all the pending tasks per target, and the estimated completion time for each pending task. The daemon monitors the completion times of the pending tasks, and returns to the caller once the current time exceeds the estimated task completion time, at which point the task gets removed from the list of pending tasks. The estimated completion time of a pending task is first calculated when the task starts (when the task request arrives at the daemon), and then it is updated if necessary every time a new task request comes in, until the task returns to the caller.
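The simulator-daemon bookkeeping just described (accept a task, track pending tasks and their estimated completion times per target, and reply once the estimated time has passed) might be organized roughly as in the sketch below; the class and method names are invented for illustration, and the polling loop is only one possible way to wait.

```python
import time

class LatencyDaemonSketch:
    """Illustrative per-target pending-task bookkeeping, not the disclosed implementation."""

    def __init__(self):
        self.pending = {}                           # target -> {task_id: completion_time}

    def submit(self, target, task_id, completion_time):
        # The completion time comes from the scheduling logic and may be revised later.
        self.pending.setdefault(target, {})[task_id] = completion_time

    def extend(self, target, task_id, extra):
        # Invoked when a newly arrived task stretches an already pending task.
        self.pending[target][task_id] += extra

    def wait_and_reply(self, target, task_id, response):
        # Block until the estimated completion time has elapsed, then remove the
        # task from the pending list and hand the canned response back to the caller.
        while time.time() < self.pending[target][task_id]:
            time.sleep(0.01)
        del self.pending[target][task_id]
        return response
```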
The scheduling logic presented below applies to the calculation of the estimated completion time of the new task (NT) and the recalculation of the estimated completion time of the pending tasks (CT1, CT2) at the time the new task request comes in. The following labels apply:

W = amount of the new task's work, the time required for completion of the new task if there were no contention (this was t in the above sections of the document)

Contending tasks = pending tasks to be executed by the same target in parallel

N = number of currently contending tasks, the number of pending contending tasks on the same target excluding the new request

Contention section = period of time while the same set of tasks is executed by the target in parallel

T = time duration of the contention section

Tnew = updated duration of the contention section due to the addition of the new task

Tstart = start time of the contention section, which is equal to the start time of the new task for the first section

Tend = end time of the contention section

In the exemplary configuration, the scheduling logic 146 operates as follows. The start time (Tstart) will be the new task start time. If there are no contending pending tasks, the expected completion time of the new task is simply Tstart+W. If there are other contending pending tasks, the following calculation applies, and is repeated while W (of the new task NT) is greater than 0:

1. Find the contention section starting at the current time (the period of N+1 tasks executed in parallel). The end of the section (Tend) will be the earliest end time of one or more of the contending tasks, including the new request. To make the new task's estimated end time comparable to the currently contending tasks, its end time is estimated here as Tstart+W*(N+1). The current section duration is T=Tend−Tstart.

2. Update the completion time for the contending pending tasks based on the extension of the contention section, which will now take Tnew instead of the original T. The duration of this section (T) has to be adjusted since now N+1 tasks will be executed in parallel instead of the original N tasks: Tnew=T*(N+1)/N. The completion time of all the contending pending tasks (excluding the new task) gets extended by Tnew−T.

3. Update the remaining work to be done for the new task (W). The Tnew/(N+1) amount of work will be done by the target for the new request during the contention section, since the section length is Tnew and N+1 tasks are being executed during this time, and the new task will get only a 1/(N+1) slice of that time Tnew. The remaining work for the new task will be W=W−Tnew/(N+1).

4. Continue iterations on the contention section, with the next one starting at Tstart=Tstart+Tnew.

FIGS. 5-8 are a flowchart of the latency simulator 142 of FIG. 1, illustrating the contention section discussed in the general method above. Referring to FIGS. 1 and 5-8, the latency simulator 142 simulates responses to requests by first receiving a call 150 to perform a task from a simulation agent 130, as depicted at step 300, and identifying any other currently pending requests operable for concurrent processing by the simulation agent 130 executing the emulated resource 140, as shown at step 301. As indicated above, the exemplary configuration executes the latency simulator 142 as a plug-in of device specific interfaces in an agent designated as a simulation agent 130; however, alternate configurations may be employed.
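The iterative procedure above translates fairly directly into code. The sketch below is one possible literal reading of the described scheduling logic (all names are illustrative); applied to the FIGS. 3 and 4 example (a 2-unit task at t=0 joined by a second 2-unit task at t=1), it reproduces the completion times t=3 and t=4 derived earlier.

```python
def schedule_new_task(t_start, work, pending):
    """Estimate the completion time of a new breakable task and stretch the
    completion times of the pending tasks it contends with (updated in place).

    pending maps task name -> currently estimated completion time.
    A literal reading of the described formulas, not the patented implementation.
    """
    w = float(work)
    while w > 0:
        contenders = {k: end for k, end in pending.items() if end > t_start}
        n = len(contenders)
        if n == 0:                        # no contention: finish uninterrupted
            return t_start + w
        est_new_end = t_start + w * (n + 1)        # new task end, made comparable
        t_end = min(min(contenders.values()), est_new_end)
        t = t_end - t_start                        # contention section duration T
        t_new = t * (n + 1) / n                    # Tnew: stretched by the extra task
        for k in contenders:                       # pending tasks extended by Tnew - T
            pending[k] += t_new - t
        w -= t_new / (n + 1)                       # work done for the new task
        t_start += t_new                           # next section starts here
    return t_start

# FIGS. 3 and 4: T1 (2 units of work) starts at t=0, T2 (2 units) arrives at t=1.
pending = {"T1": schedule_new_task(0, 2, {})}      # T1 alone -> ends at 2
t2_end = schedule_new_task(1, 2, pending)
print(pending["T1"], t2_end)                       # -> 3.0 4.0
```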
The latency simulator maintains pending tasks in the processing queue until the computed response time, or completion time, of the simulated task, at which time it responds with an emulated response . 142 146 160 302 160 160 303 142 160 304 160 305 The latency simulator computes, based on the collective load burden of the identified currently pending requests, a completion time for the processing of the received task call by identifying the received task and currently executing tasks as concurrently contending tasks. The scheduling logic then determines a contention section indicative of the fixed number of concurrently contending tasks, as depicted at step . The contention section is defined as the period of time (interval) during which the same tasks are executing, i.e. the newly arrived task and the currently pending N tasks yielding N+1 concurrent tasks, until the first of the N+1 tasks completes. Therefore, computing the completion time of the received task call further includes determining a contention period for each interval during which a fixed number of currently executing tasks are executing concurrently with the received task, as disclosed at step . The latency simulator computes the duration of the contention section (period), in which the contention period corresponds to a period of simultaneous processing by a set of tasks, as depicted at step . Computing the contention section further comprises identifying, from among the currently executing tasks, an end time of the earliest completing task, as depicted at step . 160 162 160 306 307 308 The contention section runs through the end of the first completed task, as modified by the arrival of the new task to define the concurrency section . A new contention period then commences, until the completion time of the new task is computed. Other tasks will have had their completion times extend by each of the intervening contention periods, and may or may not complete before the new task, depending on the work burden remaining upon the arrival of the new task. Computation of the contention section , therefore, involves identifying the start time for the received task (the new task), as depicted at step , and delimiting the contention period by computing the interval from the start time of the received task to the end time of the earliest completing task, as shown at step . Identifying the end time of the earliest completing task first includes estimating an end time of the received new task based on the number of currently executing tasks and the processing burden of the received task, as shown at step . This estimation does not actually limit or set the completion time, it is simply used to compute the contention section duration which applies to compute the modification to the completion times, expressed as W*N (Work Burden of the new task * number of concurrently executing tasks). Typically, this estimation is a “worst case” analysis as it assumes that the new task succeeds all pending tasks in duration. 142 309 310 146 160 160 162 146 312 T T N+ N The latency simulator then recalculates the end time of each of the currently executing tasks based on the additional processing corresponding to the received task, as depicted at step . Accordingly, recalculating further comprises determining the number of executing tasks including the addition of the received task, as shown at step . The scheduling logic computes, for each of the currently executing tasks, an amount of work performed in the contention section . 
The contention section 160 is actually lengthened by the duration imposed by the new task to define an adjusted contention section, shown as the concurrency section 162. The scheduling logic 146 extends the completion time for each of the currently executing tasks based on the computed work performed, as shown at step 312, and given by:

Tnew = T*(N+1)/N

To determine the earliest completing task, the scheduling logic 146 compares each of the end times of the received task and the currently executing tasks, as disclosed at step 313, and denotes the earliest of the compared end times as the end time of the earliest completing task, as shown at step 314. After computing the concurrency section 162 denoting the contention section augmented by the work burden W of the new task, the scheduling logic 146 computes the work completed on behalf of the new task NT to determine the remaining work. Accordingly, the scheduling logic 146 computes, for each contention period, the work burden completed for the received task by dividing the duration of the interval by the number of concurrent tasks and the received task contending for processing, as depicted at step 315. Therefore:

W = W − Tnew/(N+1)

The scheduling logic then computes, for the determined concurrency section 162, an amount of work burden completed for each of the concurrently contending tasks, as shown at step 316. This computation for recalculating the completion time includes identifying, for each contention period 160, the number of concurrent tasks, as disclosed at step 317, and allocating processing among the work burden of the currently pending tasks in equal time slices, as depicted at step 318. The scheduling logic 146 identifies the end of the contention section by completion of one of the contending tasks (i.e. the earliest to complete), as shown at step 319. Based on this completion, the scheduling logic recalculates, based on the received call, the completion time of the currently pending requests, as depicted at step 320, and extends the completion time for each of the currently executing tasks by dividing the processing for each contention period by the number of concurrent tasks, as depicted at step 321.
At a successive time, the latency simulator 142 may receive successive task calls requesting an associated response, as shown at step 326. Recall that as the tasks are simulated, the latency computation occurs relatively quickly to reschedule the simulated response; it does not persist for the duration of the newly rescheduled responses. Accordingly, shortly after rescheduling to accommodate a particular new task (NT), another new task may arrive. In such a scenario, the latency simulator recalculates the completion time of each of the currently pending task requests 150, as depicted at step 327, and transmits the associated response for each of the currently pending requests at the corresponding computed completion time, as disclosed at step 328. In alternate configurations, as described above, the tasks further comprise non-breakable tasks, each of the non-breakable tasks indicative of a set of sequential atomic operations, wherein the contention period corresponds to the processing of a current task of the currently executing tasks. Such sequential processing ensures that once execution has begun on a particular task, the processing is not prorated, or context switched, among the concurrent processes; rather, copending tasks remain pending without completing any work burden W. FIGS. 9-12 are an example of a call latency scenario of multiple pending calls using the method disclosed in the flowchart above. Referring to FIG. 9, a set of current tasks CT1 and CT2 have an expected duration (W) of 2 and 4 time units remaining, shown as scheduling bars 401 and 402, respectively. Note that as CT1 and CT2 are already pending, their burden may have in fact changed since inception into the processing queue by previous iterations of the mechanism described herein. The mechanism described herein recomputes the completion time, and hence the remaining time burden relative to the other pending tasks, upon each arrival of a task. Accordingly, at the exemplary time t=0, new task (NT) arrives with a work duration (W) of 3, in contention with CT1 and CT2, as shown by scheduling bars 401, 402 and 403. Accordingly, t=0 becomes the new start time for this iteration. The arrival of the new task NT presents a contention section 180, during which 3 tasks CT1, CT2 and NT compete for processing. Referring to FIG. 10, this contention section results in a concurrency section 182 for the duration of the simultaneous execution of these three tasks. The end of the section, Tend, is defined by the earliest end time of the three tasks. The duration of the new task NT is estimated as its work burden W times the number of concurrent tasks (including the new request) N+1, or:

Tend_newtask = Tstart + W * (N + 1) = 0 + 3 * 3 = 9

which gives section termination times of 2, 4, and 9, as shown by scheduling bars 411, 412 and 413, respectively. The current contention section takes the earliest ending value, or 2, as Tend, as shown by shaded bars 451, 452 and 453. The current contention section duration is:

T = Tend − Tstart = 2 − 0 = 2

Accordingly, the duration of the current contention section 180 is adjusted to determine the concurrency section 182 with the addition of the third task:

Tnew = T * (N + 1)/N = 2 * (2 + 1)/2 = 2 * 3/2

Thus, the new contention section duration Tnew, or concurrency section 182, is 2*3/2, or 3 time units in duration.
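The first contention section of the FIGS. 9-12 example can be checked directly against the formulas above; the following short arithmetic sketch uses illustrative variable names.

```python
# FIGS. 9-12, first contention section: CT1 is expected to end at 2, CT2 at 4,
# and NT arrives at t=0 with W=3, so N=2 pending tasks contend with it.
t_start, w, n = 0, 3, 2
pending_ends = {"CT1": 2, "CT2": 4}

est_new_end = t_start + w * (n + 1)                      # 0 + 3*3 = 9
t_end = min(min(pending_ends.values()), est_new_end)     # earliest end -> 2
t = t_end - t_start                                      # section duration T -> 2
t_new = t * (n + 1) / n                                  # concurrency section -> 3.0
extension = t_new - t                                    # pending tasks pushed back by 1.0
work_done_for_nt = t_new / (n + 1)                       # 1.0 unit credited to NT
remaining = w - work_done_for_nt                         # 2.0 units left for NT

print(est_new_end, t_end, t_new, extension, remaining)   # 9 2 3.0 1.0 2.0
```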
The latency simulator 142 updates the completion times based on the new concurrency section 182 by Tnew−T, or 1, as shown by hatched areas 411 and 412 (the extension of the NT was already estimated above). Next, the latency simulator computes the remaining work burden W for the new task NT by determining the work done during the concurrency section 182:

W = W − Tnew/(N + 1) = 3 − 3/3 = 2

Thus, during the contention section 182, during which NT received ⅓ of the processing, 1 work burden unit was completed, leaving 2 remaining. The next iteration begins at the end of the concurrency section 182 (i.e. the previous contention section), at t=3; thus task CT1 has completed and only CT2 and NT compete. Referring to FIG. 11, each task completed 1 work burden unit in concurrency section 182, thus CT2 has 1 more unit to complete and NT has 2 work burden units W remaining, shown by scheduling bars 422 and 423. From t=3, the remaining estimated work for NT is:

W*(N+1) = 2*2 = 4

as shown by hatched portion 460. Accordingly, the least work burden remaining is determined by CT2, having 1 unit, and denoting contention section 184. Computation of the concurrency section 186 results from 2 tasks sharing the contention section, or:

Tnew = T * (N + 1)/N = 1 * (2)/1 = 2

so the concurrency section 186 extends the completion time by Tnew, or 2, as shown by shaded portions 462 and 463. Therefore, at the completion of CT2 at t=5, the work performed on NT during the contention section is:

Tnew/(N+1) = 2/(1+1)

or 1 work unit, leaving:

W = W − Tnew/(N + 1) = 2 − 1 = 1

or 1 work unit to be performed on task NT. Since no other tasks have arrived, the remaining work unit is performed uncontested from t=5, thus resulting in a computed termination time of task NT at t=6, shown by scheduling bar 433. As discussed above, tasks CT1 and CT2 completed at the end of the respective concurrency sections 182 and 186, at t=3 and t=5, as shown by scheduling bars 431 and 432. In alternate configurations, contention may add less than the expected 100% extension to the tasks' execution time. For example, if we have two breakable tasks with an original duration (amount of task work) of 10 seconds each, we would expect both tasks to complete after 20 seconds based on the algorithm presented above. In some cases, these two tasks complete in reality after 18 seconds. This could be explained by other overheads not directly affected by the contention, like networking overhead for example. We introduce a contention factor, which reflects the effect of contention on the execution time of contending tasks, and it is equal to:

Factor = (Real execution time − Task work)/(Expected execution time − Task work), or Factor = (18−10)/(20−10) = 0.8 (80%)

In most cases, clients have a timeout for task responses from the target. Since contention may cause extension of a task's execution time, the task can take much longer than usually expected without contention, and clients would start getting timeout errors. We simulate this effect of contention by maintaining a timeout per interface, and returning a timeout error to the client if the expected task execution time becomes longer than the interface timeout value for non-breakable tasks. The task still remains in the list of pending tasks since the target continues its execution, not realizing that the client no longer waits for the result. In the case of breakable tasks (an SNMP task as an example), clients have a timeout per low level request, not per whole task.
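Before the timeout discussion continues, note that the contention factor just introduced is a simple ratio; a quick sketch with illustrative names:

```python
def contention_factor(real_time, expected_time, task_work):
    """Fraction of the predicted contention-induced delay that is actually observed."""
    return (real_time - task_work) / (expected_time - task_work)

# Two 10-second breakable tasks expected to finish after 20 s actually finish after 18 s.
print(contention_factor(18, 20, 10))   # -> 0.8, i.e. an 80% contention factor
```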
Since we don't model these low level requests, we can not formally simulate these timeouts too. Based on the empirical data we found that the SNMP targets can successfully handle many small low level requests, but start getting timeout with more than 2 clients sending heavier low level requests in parallel (SNMP zoning queries for example). We simulate this by generating timeout error for one of the clients when there are more than two clients executing breakable tasks with heavy low level requests (zoning tasks in our case). The call load latency simulator mechanism disclosed herein may encompass a variety of alternate deployment environments. In a particular configuration, as indicated above, the exemplary SAN management application discussed may be the EMC Control Center (ECC) application, marketed commercially by EMC corporation of Hopkinton, Mass., assignee of the present application. Those skilled in the art should readily appreciate that the programs and methods for call load latency simulator as defined herein are deliverable to a processing device in many forms, including but not limited to a) information permanently stored on non-writeable storage media such as ROM devices, b) information alterably stored on writeable storage media such as floppy disks, magnetic tapes, CDs, RAM devices, and other magnetic and optical media, or c) information conveyed to a computer as executable instructions. The operations and methods may be implemented in a software executable object or as a set of instructions. Alternatively, the operations and methods disclosed herein may be embodied in whole or in part using hardware components, such as Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), state machines, controllers or other hardware components or devices, or a combination of hardware, software, and firmware components. While the system and method for storage area network simulation has been particularly shown and described with references to embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims. Accordingly, the present invention is not intended to be limited except by the following claims. BRIEF DESCRIPTION OF THE DRAWINGS The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, with emphasis instead being placed upon illustrating the embodiments, principles and concepts of the invention. FIG. 1 is a context diagram of an exemplary managed information environment including a storage area network and suitable for use with configurations of the call load latency simulator in a storage area network; FIG. 2 is a flowchart of an agent employing the call load latency simulator; FIG. 3 is an exemplary timing chart of a particular configuration of the call load latency simulator showing a contention section; FIGS. 4-8 FIG. 1 are a flowchart of the latency simulator of ; and FIGS. 9-12 are an example of a call latency scenario of multiple pending calls.
The odds of life existing on another planet grow ever longer, argues this piece in the WSJ Here’s the story: The same year (1966) Time featured the now-famous headline (“Is God Dead?”), the astronomer Carl Sagan announced that there were two important criteria for a planet to support life: The right kind of star, and a planet the right distance from that star. Given the roughly octillion—1 followed by 24 zeros—planets in the universe, there should have been about septillion—1 followed by 21 zeros—planets capable of supporting life. As our knowledge of the universe increased, it became clear that there were far more factors necessary for life than Sagan supposed. His two parameters grew to 10 and then 20 and then 50, and so the number of potentially life-supporting planets decreased accordingly. The number dropped to a few thousand planets and kept on plummeting. Even SETI proponents acknowledged the problem. Peter Schenkel wrote in a 2006 piece for Skeptical Inquirer magazine: “In light of new findings and insights, it seems appropriate to put excessive euphoria to rest . . . . We should quietly admit that the early estimates . . . may no longer be tenable.” As factors continued to be discovered, the number of possible planets hit zero, and kept going. In other words, the odds turned against any planet in the universe supporting life, including this one. Probability said that even we shouldn’t be here. Today there are more than 200 known parameters necessary for a planet to support life—every single one of which must be perfectly met, or the whole thing falls apart. But wait, there’s more: The fine-tuning necessary for life to exist on a planet is nothing compared with the fine-tuning required for the universe to exist at all. For example, astrophysicists now know that the values of the four fundamental forces—gravity, the electromagnetic force, and the “strong” and “weak” nuclear forces—were determined less than one millionth of a second after the big bang. Alter any one value and the universe could not exist. For instance, if the ratio between the nuclear strong force and the electromagnetic force had been off by the tiniest fraction of the tiniest fraction—by even one part in 100,000,000,000,000,000—then no stars could have ever formed at all. Feel free to gulp. Multiply that single parameter by all the other necessary conditions, and the odds against the universe existing are so heart-stoppingly astronomical that the notion that it all “just happened” defies common sense. It would be like tossing a coin and having it come up heads 10 quintillion times in a row. Really? Fred Hoyle, the astronomer who coined the term “big bang,” said that his atheism was “greatly shaken” at these developments. He later wrote that “a common-sense interpretation of the facts suggests that a super-intellect has monkeyed with the physics, as well as with chemistry and biology . . . . The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question.” I think Monty Python nailed it back in ’83: So remember, when you’re feeling very small and insecure How amazingly unlikely is your birth And pray that there’s intelligent life somewhere up in space ‘Cause there’s bugger none down here on Earth
A smoking article (1, 101, 201, 301) comprises: a combustible heat source (10) having opposed front (12) and rear (14) faces; one or more airflow channels (16) extending from the front face (12) to the rear face (14) of the combustible heat source (10); an aerosol-forming substrate (30) downstream of the rear face (14) of the combustible heat source (10); and a thermostatic bimetal valve (20, 120, 220, 320) located between the rear face (14) of the combustible heat source (10) and the aerosol-forming substrate (30). The thermostatic bimetal valve (20, 120, 220, 320) is arranged to deform from a first position, in which the valve (20, 120, 220, 320) substantially prevents or inhibits fluid communication between the one or more airflow channels (16) and the aerosol-forming substrate (30), to a second position, in which the one or more airflow channels (16) and the aerosol-forming substrate (30) are in fluid communication, when the thermostatic bimetal valve (20, 120, 220, 320) is heated to above a threshold temperature.
There are many ways to count, but when it comes to computing there is only one: binary. This means associating a 0 or a 1 with each bit. This is what we could call one-bit computing, so we have two possible values. With two bits we would have four possible values, with three we would have eight (two to the third power, that is, two cubed), and so on. Doing the exponential calculation, a 32-bit processor gives us 4,294,967,296 possible values, while a 64-bit processor offers us a range of 18,446,744,073,709,551,616 values, an enormously larger range to work with. However, the differences go beyond the numbers. How are 32-bit and 64-bit systems different? For starters, the most obvious: a 64-bit chip (whose architecture we sometimes refer to as x64) can do much more than a 32-bit one. Today it is more than likely that you run a 64-bit system on a processor of the same type, something that has even reached the mobile world. The first smartphone with this architecture was the iPhone 5s. However, not all modern operating systems correspond to this architecture. Windows 7, 8, 8.1 and 10, in addition to some versions of GNU/Linux (such as Ubuntu and its flavors), are also offered in 32-bit editions. How can you tell which one you have? Identifying a 64-bit operating system Let's say you are running Windows on a computer that is less than ten years old. Your chip is more than likely 64-bit, but you may have a 32-bit system installed. It is very easy to check. Open File Explorer and right-click on "This PC". If you then select "Properties" you will see a summary of your system, where you can check what you are running. You can also type "About your PC" in the search box of the Start menu, which will give you the same result. Another way to quickly identify it is to use the 64-bit Checker program. That being said, why would anyone install a 32-bit system on a computer? The main reason is that the computer has a 32-bit processor, which cannot run a 64-bit system. That this happens today, however, is highly unlikely. Intel started manufacturing 32-bit processors with the 80386 family in 1985 and was already selling 64-bit processors by 2001. If you have bought a computer since the release of the Pentium D processors in 2005, it is unlikely that its architecture is 32-bit. Now, if you have one of the netbooks that sold in huge numbers a few years ago, such as any computer from Acer's Aspire One family (machines that, in the end, did not really catch on with users), the story may be different. What happens if I install a 32-bit OS on a 64-bit processor? The answer is: nothing breaks, but it is not optimal. A 32-bit operating system has more limitations, the main one being that it only supports 4 GB of RAM. Increasing the amount of random access memory beyond this figure doesn't have much of an impact on performance, but on a 64-bit system it is quite noticeable. To be clear on the memory issue, Windows 10 may support up to 512 GB of RAM in its Pro version (versus 128 GB in the Home version). The theoretical limit for RAM on a 64-bit system is 16 exabytes, an absolutely enormous amount. There is no hardware that can support that much yet, although there is still a long way to go. 64-bit computing offers other improvements, albeit in ways that cannot be seen with the naked eye.
These are characteristics that the great majority of users do not even think about, but that help computer engineers get the most out of computing. Why can I download programs in 32-bit and 64-bit versions? Firefox is a great example of this. If you go to the page where all the browser downloads are listed, you will see up to five different installer versions: Windows, 64-bit Windows, OS X (which is 64-bit only), Linux, and 64-bit Linux. Why do this? Because there are still 32-bit operating systems. They need software written for their architecture to function, and as we have already said they cannot run 64-bit programs. Now, as was already clear when we talked about operating systems, the reverse is perfectly possible. The case of Windows is particularly striking: an emulation subsystem called WoW64 takes care of running 32-bit programs on the 64-bit system. To check it, just go to the C: drive, where there are two Program Files folders, one of which has the suffix "(x86)"; that is where all 32-bit programs go. There are still many programs written for this architecture. Mac users will not find as many 32-bit programs for their platform. From the Snow Leopard release in 2009, Apple has only produced 64-bit operating systems. Most of the applications available for Mac are already built with only this architecture in mind.
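For readers who prefer a programmatic check to the File Explorer route described above, the short snippet below (hypothetical example code, runnable with a stock Python install on Windows, macOS or Linux) prints the value ranges discussed earlier and whether the running Python build and machine are 64-bit. Note that a 32-bit build can run on a 64-bit OS, so the two checks can legitimately disagree.

```python
import platform
import struct
import sys

# Number of distinct values a 32-bit word and a 64-bit word can represent.
print(2 ** 32)    # 4294967296
print(2 ** 64)    # 18446744073709551616

# Pointer size of the running Python build: 32 or 64 bits.
print(struct.calcsize("P") * 8, "bit Python build")
print(sys.maxsize > 2 ** 32)   # True only on a 64-bit build

# Architecture reported by the machine, e.g. 'AMD64', 'x86_64' or 'arm64'.
print(platform.machine())
```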
https://www.chordscenter.net/sin-categoria/32-bit-and-64-bit-operating-systems-whats-the-difference/
Austin Crecelius' girlfriend once told him that life was "like a roller coaster" -- and he took those words to heart. In the viral video above, watch as Crecelius pops the question to his girlfriend, Allison Boyle, during a roller coaster ride at the Holiday World theme park in Santa Claus, Indiana. As the coaster began its steep ascent, Crecelius launched into a heartfelt speech. "At one point in time, you had mentioned to me that life is like a roller coaster. And it's got its ups and downs, it's got its twists and turns, and it even throws you for a loop sometimes," he told Boyle. The couple’s friends captured the emotional moment on camera. Watch it above. Crecelius -- who, like Boyle, is a senior at Indiana’s Anderson University -- told NBC News that he was very nervous prior to the adrenaline-filled proposal. The roller coaster had broken down before the couple got on it, meaning he was forced to wait while it was fixed. "I waited in line for 15 excruciating minutes nervously waiting to pop the question," Crecelius said. "There was even one point in time where she asked me if I felt really hot because my face was red and I was sweating." According to NBC News, Crecelius gave Boyle a "fake" ring on the roller coaster. After they got off the ride, he knelt down and proposed to her again with the "real ring." Also on HuffPost:
https://www.huffpost.com/entry/roller-coaster-proposal_n_56122b5ee4b07681270274d7
A pair of pliers, as the name implies, consists of two levers working together to give the user a tighter grip than would be available with bare hands. Understanding how those levers function and interact can make for a child's science project, or simply help a workman understand the tools of his trade. Simple Machines The lever is one of the six simple machines from which all other machines are constructed. The other five simple machine are the wheel and axle, pulley, inclined plane, wedge, and screw. Simple machines are combined to form complex machines. For example, if you attach a lever to a wheel and axle, then put a bin on top, you've built a wheelbarrow. Parts of a Lever A lever consists of four parts: the load, the fulcrum, the effort and the lever itself. The load is the point that exerts force on the object being manipulated by the lever, such as the bin of a wheelbarrow. The fulcrum is the point at which the lever pivots, such as the center of a see-saw. The effort is the point at which a person or machine exerts force on the lever, such as the handle of a crowbar. The lever is the structure that connects the other three parts. Classes of Lever There are three different classes of lever. A first-class lever has the fulcrum between the load and the effort. A second-class lever has the load between the effort and the fulcrum. A third-class lever has the effort between the load and the fulcrum. A see-saw is an example of a first-class lever. A wheelbarrow is an example of a second-class lever. A fishing rod is an example of a third-class lever. The Levers in Pliers A pair of pliers consists of two levers working in opposite directions. The load of these levers is the point where the pliers grip. The effort is at the handles, the point where the user grips the pliers. The fulcrum is at the nut where the pliers rotate. Because the fulcrum is between the load and the effort, both levers are first-class levers. How the Levers Work When you exert force on the effort of one of the levers in a pair of pliers, that force is multiplied several times before it is exerted on the load. The fulcrum changes the direction of the force as it pivots. When you squeeze the pliers, both levers exert force at once, in opposite directions and towards each other. Anything in between the load points on those levers will be squeezed with several times the pressure you apply to the handles of the pliers. References - Enchanted Learning: Levers - "The New Way Things Work"; David Macaulay, et. al; 1998 About the Author Jason Brick has written professionally since 1994. His work has appeared in numerous venues including "Hand Held Crime" and "Black Belt Magazine." He has completed hundreds of technical and business articles, and came to full-time writing after a long career teaching martial arts. Brick received a Bachelor of Arts in psychology from the University of Oregon.
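To put a number on the claim above that the force you apply is multiplied several times before it reaches the jaws, here is a small illustrative calculation; the arm lengths are made-up values rather than measurements of real pliers, and friction at the pivot is ignored.

```python
def jaw_force(hand_force, effort_arm, load_arm):
    """Ideal first-class lever: jaw force = hand force * (effort arm / load arm).

    effort_arm: distance from the pivot (the nut) to where the hand squeezes.
    load_arm:   distance from the pivot to where the jaws grip.
    """
    mechanical_advantage = effort_arm / load_arm
    return hand_force * mechanical_advantage

# Squeezing with 50 N, handles 12 cm from the pivot, jaws 3 cm from the pivot:
print(jaw_force(50, 0.12, 0.03))   # -> 200.0 N at the jaws
```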
https://sciencing.com/how-do-pliers-work-as-a-lever-13401141.html
51.To ensure objectivity of scoring in MAPEH, which is necessary? A. Scoring rubric B. Scope of the test C. Table of specification D. Right minus wrong policy 52. Which is the practical test that we used to do in Physical Education classes? A. Performance B. Drill test C. Product assessment D. exhibit 53. You want to know if your students in music can now read notes? Which should you use? A. Performance test B. Indirect test C. Product D. Traditional test 54. You want test your Health student’s mastery of the Basic Food Groups. Which should you use? A. Authentic test B. Summative test C. Formative test D. Written test More from Teach Pinas: ● RPMS-PPST Downloadable Materials for SY 2022-2023 (New Normal) ● Transfer of Teachers (DepEd Guidelines) ● RPMS Tools and Forms in the time of COVID-19 S.Y. 2021-2022 ● 5M students have enrolled during the first week of enrollment 55. Which are the responsibilities of a coach? I. Developing participants’ physical and psychological fitness of athletes II. Providing the best possible practical conditions in order to maximize athletes performance III. Ensure that his/her preferred team perform best A. I and II B. II only C. III only D. I only 56. Which is/are preventive forms of officiating? Takes two forms. I. One is helping players to avoid violations. II. Notifying a player not to continue a foul. III. Resolving a fight between the teams. A. I and III B. I only C. III only D. II only 57. What skills are include in sport management? I. Planning II. Organizing III. Directing IV. Controlling A. I, II, III and IV B. I and II C. II and III D. I, II and III 58. Which type of tournament format is the easiest and fastest way to declare a winner? A. Single Elimination B. Double Elimination C. Round Robin D. Double Round Robin 59. If the practices is inversely proportional to number of mistakes committed in choral rendition, this means that the practice was ____ . A. neutral impact B. quite long C. ineffective D. effective 60. You want to study how much importance schools give to MAPEH as a subject. How will you gather data? I. Study documents that bear time allotment of subjects II. Interview teachers and schools heads on their perceptions on the importance of the MAPEH subject III. Gather data on how many MAPEH related activities are organized A. III only B. I, II and III C. I and II D. II only 61. In teaching contemporary art you draw connection between art and other subject and important issues I society. Which approach in teaching art is applied? A. Collaborative B. Interdisciplinary C. Contemporary D. Reflective 62. If in you Art class you are occupied with philosophical discussion such as creating art that questions life and reality, discussion on religion, discussion of good and evil, creating religious art with which of the multiple intelligences are you occupied? A. Spatial B. Interpersonal C. Existential D. Intrapersonal 63. In your ART class you encourage children to tell you how they feel before the end of the period by sketching smiley or lonely face or whatever is appropriate. You practice caters to which of the multiple intelligence? A. Existential B. Intrapersonal C. Spatial D. Interpersonal 64. When do the PE students need their bodily-kinesthetic intelligence? When they A. run kick, throw and catch B. anticipate trajectories of flying balls C. discuss game strategy D. keep score and calculate 65. Which of the multiple intelligence I needed when the PE student orients himself to the playing field? A. Interpersonal Intelligence B. 
Spatial Intelligence C. Intrapersonal Intelligence D. Existential intelligence 66. A PE student uses his linguistic intelligence when A. discusses game strategy, read rule B. calculates angles of release for throwing and kicking C. uses rhythm when running D. work with teammates 67. Which intelligence of the MAPEH student is challenged when he works with other varying skill level and abilities A. Intrapersonal intelligence B. Linguistic intelligence C. Intrapersonal intelligence D. Interpersonal intelligence 68. In which sense is the K to 12 Curriculum Framework for Health constructivist I. It is learner-centered II. It is development appropriate. III. It connects new learning to prior experience. A. I, II and III B. I and III C. I only D. II and III only 69. When does the MAPEH teacher employ differentiated instruction? A. When he makes use of different teaching activities B. When he allows students to do their own thing C. When he proposes for homogenous sectioning D. When he gives attention to the most dominant MI group 70.Health teachers make use of art and music to impart health care messages, Which approach do they? A. Inquiry-based B. Integrated C. Collaborative D. Problem-based 71. Which practices goes with authentic assessment? A. Make student play volleyball to determine if they learned how play volleyball B. Make student recite the do’s and don’t’s of playing volleyball C. Make student check their own answer after a quiz D. Make student come up with a flow chart to demonstrate 72. Here is a sample item: Record your food intake over a period of three days ( at least one of the days needs to be a Saturday or Sunday ). Record what you eat, how much, when, and where. Evaluate your food intake by comparing it to the Dietary Guidelines for Filipinos and Food Guide Pyramid. Determine what nutritional goals you should set based on your evaluate of your food intake and make a plan for reaching those goals. Which type of assessment task is this? A. Constructed response B. Perform task C. Product D. Traditional 73. On which should you base your assessment? A. Learning outcomes B. Lesson Procedure C. Content D. Course syllabus 74. MAPEH is a skill-dominant subject. You expect assessment to be more of the _______ type. A. authentic B. traditional C. practice D. portfolio 75. You want to assess the attention of this particular learning outcome : draw at least 6 types of lines in art. Which assessment item is aligned to your outcome? A. Describe the 6 types of lines art work B. Draw at least 6 types of lines in art work C. Make an album that show the 6 types of lines in art work D. Point objects class that show the 6 types of lines 76. Which assessment is referred to as alternative assessment? A. Indirect B. Authentic C. Traditional D. Performance 77. Here is a learning outcome in the subject Research in MAPEH: apply theories and principle in conducting MAPEH research. Which assessment task is aligned to this learning outcome? A. The student do Powerpoint presentation on theories and principles conducting research. B. The student answer written test on theories and principles. C. The students conduct research and present research report. D. The student take a test on research. 78. MAPEH as a subject is focused on skills. You will expect assessment to have ______ . A. more performance test and product B. less performance test and more product C. more product and performance test D. no written test 79. Is the written test part of assessment in MAPEH? A. Yes B. No C. 
Yes, if the MAPEH teacher is highly traditional. D. It depends on how the MAPEH teacher teaches. 80. An effective assessment practice is multidimensional. How is this applied in MAPEH? A. Assessment sticks to the authentic. B. The written test combines objective and essay tests C. Assessment is done with the use of varied tools and tasks. D. Assessment excludes the traditional 81. As students in Art advance, they should be able to cite evidence from specific works of art, such as specific features in a painting, the details of a score or script. As students develop, they should be able to gather and share more evidence to support their understanding; they should notice more in each work, and be able to draw on it. Which principle in teaching Art is explained in the above paragraph? A. Allowing students to learn with their learning styles B. Providing Multiple Intelligence-based learning experiences C. Providing an explicit learning progression in the arts that is developmentally appropriate D. Providing for more interaction between students and art work 82. Inspire all students without exception to reach their fullest creative potential. On which principle of art education is this practice anchored? A. Inclusive education B. Constructive teaching C. Integrative education D. Collaborative teaching The arts belong to all of us, whether we are old or young, rich or poor. They enrich the lives of people of all races and ethnicities. They communicate to people who speak different languages and they bring joy and personal growth into the lives of people of varying cognitive and physical abilities. If our students are to comprehend the human story, then they must have opportunities to learn about how men, women and children all over the world and throughout the ages have expressed their ideas, feelings and beliefs through the arts. 83. According to the paragraph, which is true of art? Art transcends _____ . I. time II. culture III. age IV. nations' boundaries A. I, II, III and IV B. III and IV C. I and II D. I, II and III 84. Based on the paragraph, what is TRUE of art? A. Students express themselves through art B. The arts are essential for non-formal education C. The arts are essential to the education of all D. Art is the most relevant to the picture-smart 85. For art education to be developmentally appropriate in the lower grades, art education in the early grades should be ____ . I. Sensory-based II. Exploratory III. Playful IV. Specialized A. I, II and III B. IV C. I and II D. III and IV 86. Which illustrates a "spiraling" approach to the Arts curriculum? I. Comprehensive and sequential arts experiences that begin in preschool and continue throughout high school to provide the foundation for lifelong learning in the arts. II. Performing, creating and responding to the arts are taught in Senior High School where students are more ready. III. In order to build a knowledge base in the arts, students need repeated exposure to processes, content, concepts and questions, and the opportunity to solve increasingly challenging problems as their skills grow. A. II B. I and III C. I, II and III D. II and III "What I did here," explains Maria, a high school senior, as she points to a complex pattern on the computer screen, "is scan in a weaving that my mother made. The cloth is very old, and the people in our village have been weaving these patterns for centuries. Each design has a meaning. "And now," she moves the cursor across the screen, "here are photos of my relatives. 
Here’s my mother when she was my age, and her mother, and that’s me in front. I’m trying to compose a picture of all us, three generations, with the designs in the cloth as a unifying element. … What I like about using the computer is being able to play with the sizes, shape and colors of thing. For instance, I can make myself look transparent and ghostly here and my grandmother look solid and real.” 87. What is the role of the artist ini society as shown by Maria? I. Innovator II. Preserve of tradition III. Critic of art work of earlier generation A. I and II B. I, II and III C. I only D. II only 88. What does the above paragraph say about art work? I. Performing, creating, and responding to the arts work? II. Art work offers students the satisfaction of sharing their ideas talents III. An artist is both an innovator and a preserves of tradition. A. I and III B. II only C. I, II and III D. I and III 89. There are four parts to teacher a new skills such as shooting iin basketball. If you teacher the skill, in which order should the four steps come? I. Instruction II. Applying III. Demonstrating IV. Confirming A. I, II, II and IV B. III, I, IV, II C. I, II, III and IV D. III, I, II and IV 90. Of these in teacher a new skill, in which step is the student made to practice the skill in a planned situation to help him/her transfer the learning? A. Instructing B. Demonstrating C. Applying D. Confirming 91. In which step do you, the PE teacher, give feedback on how successful the student has been? A. Confirming B. Demonstrating C. Instructing D. Confirming 92. In which step do you, the PE teacher, give feedback on how successful model on how to perform a skill? A. Confirming B. Demonstrating C. Instructing D. Confirming 93. Which is continuous form of practices which is the best for simple skills? An example would be a rally in badminton where the learner must repeated perform drop shots. A. Distributed practice B. Variable practicing C. Massed practicing D. Fixed practice 94.Which type of practice is best with discrete, closed skills? A. Massed practice B. Variable practice C. Fixed practice D. Distributed practice 95. Support you want shooting practicing in football, where you may set up drills and alter the starting position and involvement of defenders. Which type of practice is BEST? A. Fixed practice B. Massed practiced C. Fixed practice D. Variable practice 96. Which type of practice is the best for difficult, dangerous or fatiguing skills? A. Massed practice B. Distributed practice C. Fixed practice D. Variable practice 97. If you teach trial jump this is what you intend to do: The student practices the hop and learn it before he practices the skip and learn it, too; then you link the two. Finally the students will learns the jump individually then tag on the end of the skip Which method of practice did you apply? A. Progressive part method B. Part method C. Whole method D. Whole-part-whole method 98. The skill is first demonstrate and then practiced from start to finish. It helps the learner to get a feel for the skill, timings and end product. Which practices method is is described? A. Progressive part method B. Part method C. Whole method D. Whole-part-whole method 99. Which is best used for fast skills which cannot easily be separated into sub-parts, such as a javelin throw? A. Whole method B. Part Method C. Progressive part method D. Whole-part-whole method 100. 
The parts of the skill are practiced in isolation, which is useful for complicated and serial skills and is good for maintaining motivation and focusing on specific elements of the skill. Which practice method is described? A. Whole-part-whole method B. Whole method C. Progressive part method D. Part method
https://www.teachpinas.com/download/let-reviewer-mapeh-part-2/
A chemist needs to prepare a buffer solution of pH 8.80. What molarity of NH3 (pKb = 4.75) is required to produce the buffer solution if the (NH4)2SO4 in the solution is 1.8 M? - Chemistry In an experiment, ammonia gas, NH3(g), was bubbled through distilled water. Some of the dissolved ammonia gas, NH3, reacted with the water to form the aqueous ammonium ion, NH4+. When red litmus paper was placed in contact with the - AP Chemistry A buffered solution was created by mixing solutions of NH4Cl and NH3 at 25 C. Which species, NH4+ or NH3, has the highest concentration in this buffered solution? Justify your answer. - Chemistry The question is: what is the ratio [NH3]/[NH4+] in an ammonia/ammonium chloride buffer solution with pH = 10.00? (pKa for ammonia = 9.25) When working the problem, I tried to solve it a bit backwards, in that I plugged in each of the - AP CHEMISTRY An experiment was carried out to determine the value of the equilibrium constant Kc for the reaction. Total moles of Ag+ present = 3.6 x 10^-3 moles Total moles of NH3 present = 6.9 x 10^-3 moles Measured concentration of Ag(NH3)2+ - Chemistry A sample of ammonia gas was allowed to come to equilibrium at 400 K. 2NH3(g) --> N2(g) + 3H2(g) At equilibrium, it was found that the concentration of H2 was 0.0584 M, the concentration of N2 was 0.0195 M, and the concentration of - Chemistry What volume of hydrogen is necessary to react with five liters of nitrogen to produce ammonia? (assume constant temperature and pressure) Balanced equation: N2 + 3H2 = 2NH3 After finding this answer, what volume of ammonia is - Chemistry My question is basically to find out what these reactants yield: CuCl2(aq) + NH3(aq) --> ? My best guess here is... CuCl2(aq) + 6NH3(aq) --> [Cu(NH3)4(H2O)2]2+ + N2(g) + 2HCl I'm pretty sure about the [Cu(NH3)4(H2O)2]2+ part but not - Chemistry Ammonia (NH3) ionizes according to the following reaction: NH3(aq) + H2O(l) ⇌ NH4+(aq) + OH–(aq) The base dissociation constant for ammonia (NH3) is Kb = 1.8 × 10^-5. Ammonia (NH3) also has a chloride salt, ammonium chloride
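The buffer question above reduces to the base form of the Henderson–Hasselbalch equation once the ammonium concentration is fixed: 1.8 M (NH4)2SO4 supplies 2 × 1.8 = 3.6 M NH4+, and pOH = 14.00 − 8.80 = 5.20 at 25 °C. A minimal sketch of that arithmetic in Python (the variable names are only illustrative; the relation used is pOH = pKb + log10([NH4+]/[NH3])):

```python
# Buffer of pH 8.80 from NH3 / NH4+ (pKb = 4.75), with 1.8 M (NH4)2SO4
pH, pKb, c_salt = 8.80, 4.75, 1.8

c_NH4 = 2 * c_salt            # each (NH4)2SO4 gives two NH4+ ions -> 3.6 M
pOH = 14.00 - pH              # 5.20, assuming 25 degC (Kw = 1e-14)
ratio = 10 ** (pOH - pKb)     # [NH4+]/[NH3] = 10**0.45, about 2.82
c_NH3 = c_NH4 / ratio         # about 1.3 M ammonia required

print(f"required [NH3] = {c_NH3:.2f} M")
```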
https://www.jiskha.com/questions/530233/what-concentration-of-ammonia-nh3-should-be-present-in-a-solution-with-nh4-0-734m
After 10 days on mattresses at Te Puea Marae, the first thing two little boys wanted to do in their new home was run up and down the stairs. The two boys, aged 2 and 4, moved today into a brand new four-bedroom Housing NZ townhouse in Mt Roskill with their dad, two other teenage siblings and their big sister "B", 16, who is having chemotherapy at Starship Hospital for non-Hodgkin lymphoma. "My little brothers are excited," B said. "They just want to run up and down the stairs." The family are the first occupants in a block of four new two-storey units, which all come with tiny gardens with lemon trees already planted. B's father, 46, who came to New Zealand from Samoa in 2000 and has been a solo dad since the children's mother returned to Samoa, said he would share one bedroom with the two young boys so that the three teenagers could each have their own rooms. It will be a complete change from their cousin's house in Mangere, where they stayed for three months after one of their cousins drowned at Hunua Falls. "There were 15 people living at aunty's," B wrote on Te Puea Marae's Facebook page last week. "I slept on a mattress in a room I shared with my little brothers. It wasn't really good because the hospital said I had to wear a mask because I could easily get infected." In the new house she has chosen "the biggest bedroom" - a corner room with windows on two sides. "It's cool, it's awesome," she said. Her father worked as a painter when the family lived in a Hamilton private rental before moving to Mangere in March, and he has received several painting job offers in Auckland since B's Facebook post. "After I sort out everything I will look for a job," he said. He said Work and Income had provided beds, a fridge and a washing machine, but he expected to pay off the beds at $15 a week on top of rent which has been fixed initially at $104 a week, based on a quarter of the family's benefit income. The rent will rise when he starts working again. Volunteers from Te Puea Marae formed a human chain to load in the family's other belongings in plastic rubbish bags, as well as food, books and other items donated by the public. Earlier, about 30 volunteers attended farewell prayers and speeches for the family at the marae, taking turns to hug B. "I'll miss them," B said. Marae spokeswoman Moko Templeton said a marae social worker would keep supporting the family in their new home for two weeks, and the family was welcome to return for meals at the marae at any time.
https://www.nzherald.co.nz/nz/relief-as-homeless-teen-cancer-patient-and-family-move-into-new-house/2JT32PQXJQ4I3RDG54KO5UFW5E/
Kilometers to Dekameters (km to dam) - How many dekameters in a kilometer? There are 100 dekameters in a kilometer. 1 km = 100 dam. 1 dam = 0.01 km. The kilometers to dekameters (km to dam) conversion factor is 100. To calculate how many dekameters there are in a given number of kilometers, multiply by the conversion factor or use the converter. For example, to calculate how many dekameters there are in 20 kilometers, multiply the km value by the conversion factor: 20 km * 100 = 2000 dekameters in 20 km. A kilometer is a metric unit and equals 1000 meters. It is used to measure the distance between two geographical locations. The abbreviation is "km". A dekameter is a metric unit and equals 10 meters. It is a very rarely used unit of length, mostly used in meteorology. The abbreviation is "dam".
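Since the conversion is a single multiplication by 100, it is trivial to script; a minimal sketch (the function name is only illustrative):

```python
def km_to_dam(kilometers: float) -> float:
    """Convert kilometers to dekameters: 1 km = 100 dam."""
    return kilometers * 100.0

# Example from the text: 20 km -> 2000 dam
print(km_to_dam(20))  # 2000.0
```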
https://www.asknumbers.com/km-to-dam.aspx
21 Cal.App.2d 225 (1937) JACKIE COOGAN PRODUCTIONS, INC. (a Corporation), et al., Petitioners, v. INDUSTRIAL ACCIDENT COMMISSION OF THE STATE OF CALIFORNIA, NORA JONES et al., Respondents. Civ. No. 1857. California Court of Appeals. Fourth Appellate District. May 28, 1937. Charles J. Katz, Alfred Gitelson and J. L. Kearney for Petitioners. Everett A. Corten and Arthur I. Townsend for Respondent. Barnard, P. J. Petitioners seek to have annulled awards of the Industrial Accident Commission in favor of the wife, mother and son of one Charles Jones, who was killed in an automobile accident on May 4, 1935. The petitioning insurance company was the insurance carrier for Jackie Coogan Productions, Inc., a corporation, which will be herein referred to as the petitioner. Charles Jones was employed by the petitioner on a ranch it owned in San Diego County near the Mexican border. On the day in question he accompanied John L. (Jackie) Coogan, John H. Coogan, his father, and two friends of the Coogans on a hunting trip to a point in Mexico about 25 miles from the ranch. The trip was made in an automobile *227 owned by Jackie Coogan, personally, and driven by John H. Coogan. While they were returning to the ranch an accident occurred which caused the death of four occupants of the car, including Jones. A proceeding before the respondent commission followed, with an award against the insurance carrier in the sum of $3,960.20 and an additional award against the petitioner based upon a finding that the injury was caused by serious and wilful misconduct on the part of the employer. [1] It is here contended that the evidence is insufficient to support the finding that the injuries suffered by Charles Jones arose out of and in the course of his employment, and the further finding that these injuries were caused by the serious and wilful misconduct of the employer. With respect to the first point raised it appears that John H. Coogan was the president of the petitioner; that he and his wife owned all of the stock in that corporation with the exception of two qualifying shares; that the corporation had operated this ranch for a number of years and had never made a profit therefrom; that there was a guest house thereon which was referred to as a "lodge"; that John H. Coogan frequently brought friends to the ranch and entertained them there, particularly over week-ends; that on this particular occasion friends of Jackie Coogan were being entertained over the week-end; and that Jones, as foreman, was employed by the month and was supposed to work or to be subject to call twenty-four hours a day and seven days a week. The widow of Charles Jones testified that on previous occasions she had heard John H. Coogan order her husband to go on hunting trips with him and that her husband always went pursuant to these instructions; that one of her husband's main duties was to help entertain guests brought to the ranch and to assist in providing comforts for them; that he saddled horses for the guests, cleaned the game secured by them and assisted in cleaning and taking care of their guns before and after these hunting expeditions; that she had heard Jackie Coogan and his father discuss between themselves and with guests the fact that they would take her husband with them on hunting trips and that he would help clean the fowls and look after the guns; that when guests came to go on a trip or to spend a week-end on the ranch John H. 
Coogan would call her husband from whatever work *228 he was doing and have him assist in the matter of making the guests comfortable; that several times her husband had told her that John H. Coogan wanted him to go on hunting trips into Mexico; that John H. Coogan usually told her when he was going to take her husband with him and on May 3, 1935, he told her that he was going to take her husband with him the next day; that on this occasion her husband told her that he did not want to go on this trip, as he wanted to finish building a fence around some grain so that the cattle would not get in; and that just before her husband left on the morning of May 4th, he was working on this fence. The mother of Charles Jones testified that her son looked after the company that came to the ranch, took them milk and got firewood, and that John H. Coogan would just tell him they would go hunting on a certain day and that "they was always hunting a big part of the time". Jackie Coogan testified, after saying that he had asked Jones to go on this trip about two weeks before May 4th, "Well, it was my father and I, we asked him. We were both standing there together and my dad told him he had lined up a dove hunt at Oscar Denton's ranch, and so I asked Charlie, naturally, to go with us, and I brought some shells for him from Los Angeles." He further testified that Jones had frequently gone on hunting trips with him and with his father; that he would not think of going out hunting without asking Jones if he was around where he could get hold of him, "because I liked his company and because he was a good shot"; that there were a lot of days when he could not get him as Jones was busy; and that "it is a different set-up. It wasn't like a strict employer and employee." A Mr. Bernstein testified that he was general manager of the petitioner corporation; that John H. Coogan employed Jones in the first place; that John H. Coogan was the only one who would know what Jones' duties were; that the witness gave no orders to Jones except with reference to furnishing information for inventories and the like; that he did not know whether the trip in question was a pleasure trip or whether it was one in the course of Jones' employment; that during the shooting season they sent people down to the ranch; and that he knew that Jones had gone out on shooting expeditions "but I was never there at any time when they ever went off the ranch on a hunting trip". When asked if it was not Jones' duty to comply with any request *229 made by Mr. Coogan he replied that Jones was foreman and would naturally comply with suggestions made by Mr. Coogan, and that "I would do anything my boss asked me to." It is petitioner's contention that the trip on which Jones met his death had no connection with his duties as ranch foreman, that on this occasion he stepped aside from his employment and engaged in a trip for his own pleasure, and that in so doing he was accepting an invitation and not obeying instructions. The evidence justifies the inference that this ranch was maintained, in part at least, for entertainment purposes and that a large part of the duties imposed upon Jones related to this use of the ranch. 
It may fairly be inferred from the evidence that Jones, as a part of his employment, was expected to go on hunting expeditions of this nature, that the situation was thoroughly understood by both sides, and that the duty thus resting upon him was no less real because the direction to go on this particular occasion may have taken the form of a suggestion rather than that of a direct order. In our opinion, the finding that the injury suffered by Jones arose out of and in the course of his employment is fully supported by the evidence. (Shafter Estate Co. v. Industrial Acc. Com., 175 Cal. 522 [166 P. 24].) [2] With reference to the finding of serious and wilful misconduct on the part of the employer the evidence is as follows: The accident occurred at a point about two miles from the ranch, as the party was returning from this hunting trip. Jackie Coogan testified that for at least ten miles before the point where the accident occurred the road over which they traveled is a regular mountain highway; that the accident happened on the last of three curves; that as they approached this last curve in the series of three, a car coming in the opposite direction, forced them off the road; that they had gone around two of the curves and were negotiating the third curve when the crash came; that the third curve is a little sharper than the others and that as you start into the third curve you can see a distance of 100 feet ahead; that they were traveling on their right-hand side of the road; that just as they were coming to the crux of the third curve, at a speed of between 40 and 50 miles an hour, a car coming from the other direction shot around the point; that this other car was astraddle of the white line and was in the middle of the highway; and that the car driven by his father *230 turned to the right, hit a soft shoulder, started to skid, skidded quite a ways before it left the road, and then went off the embankment and crashed into the rocks below. A Mr. Pollack testified that a few minutes before the accident he was riding along this road in another car and that some three or four miles before the point of the accident the Coogan car passed them going in the same direction; that at that time the Coogan car was traveling at from 70 to 75 miles an hour; that there were many turns and grades in the road; that the siren or horn on the Coogan car was sounded continuously; that "it sounded like a police car or an ambulance"; that that car was being driven astraddle the center of the highway; and that he could see that car for 150 yards when it passed out of his sight around a turn in the road. Two other witnesses testified that the Coogan car passed where they were standing at a point about two and one-half miles from the scene of the accident; that their attention was first attracted to the car by the sound of its "siren" or "whistle", which was sounded continuously; that at that time the car was traveling at from 70 to 75 miles an hour; that a few minutes alter they went to the scene of the accident; that there were seven white posts alongside of the highway at the end of the last curve before the scene of the accident; that two or three of these white posts were broken off; that the tracks of the Coogan car were visible on the shoulder of the road for a distance of 100 to 125 feet; that these tracks led to where the posts were broken off and down to the wreck; and that the marks showed that the tires had skidded for all of this distance of 100 or 125 feet. 
An occupant of the car which passed the Coogan car on the turn immediately before that car left the highway testified that she first saw the Coogan car about 20 or 30 feet ahead of her coming toward the car in which she was riding; that the Coogan car was traveling at a speed of 75 or 80 miles an hour; that the car in which she was riding stopped within 10 or 20 yards after the Coogan car passed when another occupant of her car called out that the Coogan car had gone over; and that she then looked back and saw a cloud of dust. Another occupant of that car testified that that car was on the right-hand side of the road as it passed the Coogan car and that the Coogan car was traveling about 70 miles an hour. A third occupant of that car testified that that car was traveling on its right-hand side of the *231 road; that the Coogan car was traveling at about 70 miles an hour; that she looked back and saw the Coogan car leave the highway in a cloud of dust but did not see it light; that she spoke to the driver of the car in which she was riding; and that that car was then turned around. The definition of the term "wilful misconduct" and the general rules applicable in the determination of the question of whether such misconduct appears in a particular instance are well established. (Meek v. Fowler, 3 Cal.2d 420 [45 PaCal.2d 194].) In E. Clemens Horst Co. v. Industrial Acc. Com., 184 Cal. 180 [193 P. 105, 16 A.L.R. 611], it was stated that serious misconduct on the part of an employer must be taken to mean "conduct which the employer either knew, or ought to have known, if he had turned his mind to the matter, to be conduct likely to jeopardize the safety of his employees". In applying these rules to a particular set of circumstances close questions are often presented. Generally speaking, we think it may be said that driving a motor car at an excessive speed is in itself not sufficient to establish wilful misconduct. (McLeod v. Dutton, 13 Cal.App.2d 545 [57 PaCal.2d 189]; McCann v. Hoffman, (Cal.App.) [62 PaCal.2d 401].) It has, however, been held that excessive speed, taken in connection with other circumstances, was sufficient to show that the driver of a car knew or should have known that injury to others would probably result from his actions. In Walker v. Bacon, 132 Cal.App. 625 [23 PaCal.2d 520], driving an automobile at a speed of 66 miles an hour over a high crowned road that was not smooth, when the driver knew the condition of the road and knew that his automobile had a badly worn steering knuckle, was held to justify the conclusion that the driver was guilty of wilful misconduct. In Norton v. Puter, 138 Cal.App. 253 [32 PaCal.2d 172], it was held that the defendant was guilty of wilful misconduct in driving his automobile at a speed of 55 miles an hour on a wet pavement with his vision obscured by falling rain, when his windshield wiper was not operating, when he knew that he was approaching a dangerous curve in the highway where he had had previous close calls, and when he had been warned to diminish the speed of his *232 car. In Candini v. Hiatt, 9 Cal.App.2d 679 [50 PaCal.2d 843 ], a driver who had barely escaped an accident while driving around the first curve in a highway at the rate of 45 miles an hour, after being warned that he was driving too fast, and who, in disregard of a warning that he was approaching another dangerous curve, increased his speed to 50 miles an hour, with resulting injury to his passengers, was held to have been guilty of wilful misconduct. In Medberry v. 
Olcovich, 15 Cal.App.2d 263 [59 PaCal.2d 551, 60 PaCal.2d 281], the court said: "The question of whether the minor defendant was or was not guilty of wilful misconduct is essentially one of fact for determination by the fact-finder. Appellants extensively review many cases involving wilful misconduct, calling our attention in their review that the facts in the instant case are similar to facts in some cases where judgment went for plaintiff, and that they are dissimilar to facts in other cases where defendant prevailed. It must be borne in mind, of course, that the interpretation to be given actions and conduct must turn on the circumstances of the individual case, and that decision passing upon facts constituting, or failing to constitute, wilful misconduct, can be of little assistance, other than to announce the definition of that term. It is our duty on appeal to indulge in all reasonable inferences to support the findings and judgment." While the question before us is rather close we think it is primarily one of fact and that the evidence in its entirety, with the reasonable inferences therefrom, is sufficient to sustain the finding of the commission. The point where the accident occurred was within two miles of this ranch. It appears from the evidence that John H. Coogan had taken a number of trips which necessarily took him over this route and he must have been familiar with the fact that it was a mountain road with many turns. It appears, without conflict, that he was driving at an excessive rate of speed at two points a few miles away from the scene of the accident and there is ample evidence that just before the accident happened he rounded a turn in the road where vision was limited to 100 feet at an excessive rate of speed. The evidence justifies the inference that his speed while rounding that turn was inherently and extremely dangerous under the conditions there existing and that he knew or should have *233 known that injury to others was probable. Assuming that the evidence might have justified a different conclusion we think it may not be held, as a matter of law, that this factfinding body exceeded its jurisdiction in viewing the evidence, with its reasonable inferences, as disclosing serious and wilful misconduct on the part of the driver of the car. For the reasons given both awards are affirmed. Marks, J., and Jennings, J., concurred.
Is 999983 a prime number? It is possible to find out using mathematical methods whether a given integer is a prime number or not. Yes, 999 983 is a prime number. Indeed, the definition of a prime number is to have exactly two distinct positive divisors, 1 and itself. A number is a divisor of another number when the remainder of Euclid’s division of the second one by the first one is zero. Concerning the number 999 983, the only two divisors are 1 and 999 983. Therefore 999 983 is a prime number. As a consequence, 999 983 is only a multiple of 1 and 999 983. Find out more: Since 999983 is a prime number, 999983 is also a deficient number, that is to say 999983 is a natural integer that is strictly larger than the sum of its proper divisors, i.e., the divisors of 999983 without 999983 itself (that is 1, by definition!).
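One way to check this with the mathematical methods the page alludes to is simple trial division up to the square root of the number; a minimal sketch:

```python
def is_prime(n: int) -> bool:
    """Trial division: n is prime iff no d with 1 < d <= sqrt(n) divides it."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

print(is_prime(999_983))  # True
```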
https://www.numbers.education/999983.html
1 Amy put down the paintbrush and reached out to stretch her arms. She stared at the picture she was working on with a critical eye. The heavy paper on the easel in front of her was bright with color. She thought the picture looked a lot like her family's flower garden. On the ground beside her was the metal case that held her watercolor paints and extra brushes. She nodded to herself, pleased with the result so far. 2 "It's not bad," she murmured to herself. 3 "What's not bad?" asked a voice behind her. 4 Amy jumped at the sound of her little brother's voice. Where had he come from? 5 "It's none of your business," she snapped. She just wanted him to go away and leave her alone.
https://www.edhelper.com/ReadingComprehension_Grade3_6_1.html
It is increasingly stated that renewable power generation costs are now lower than costs of traditional fossil fuel generation; furthermore, that the continued renewable generation cost decline will result in ‘cheap’ electricity for all. Solar power is becoming increasingly competitive with coal-fired power generation; however, our analysis suggests that financial assistance for solar projects is still required for low solar power price agreements to be viable. The very serious negative externalities associated with coal-fired generation—climate change and air pollution—mean that such measures are absolutely warranted in order to encourage investment in renewables. In the absence of a comprehensive global policy agreement that accounts for these external costs, our analysis suggests that fossil fuel generation remains the cheaper form of electricity generation. Historical cost comparisons CRU has collected information on 34 commercial scale solar power plants that have been installed since 2010 in order to estimate the levelised-cost of electricity (LCOE). LCOE can include a variety of assumptions, however, in order to compare the true, underlying cost of power generation (i.e. before government or policy intervention), the methodology we have adopted does not account for any form of financial penalty (e.g. in the case of a carbon charge) or assistance from subsidy or favourable financing. We believe this approach provides a clearly defined framework on which the impact of policy decisions can be tested and a start point for any investment decision. The sample contains a total of 7.8 GW of capacity and includes plants in the US (7), India (6), Europe (6), the Middle East and Africa (5), Australia (4), South America (3), North East Asia (2) and South East Asia (1). Importantly, the sample does not include any Chinese solar plants. In the coming months, we will be working on Chinese power generation costs as a whole and will provide this information to our subscribers when completed. The critical pieces of information collected for each plant and used to estimate the levelised-cost of electricity (LCOE) are commissioning year, capital cost of installation, capacity, actual power generation (i.e. the efficiency of installed capacity), lifespan of the plant, degradation of the cells (i.e. the rate at which capacity declines over time) and operating and maintenance costs. For a number of plants, actual power output has been reported and, in these instances, we have used this information in our calculation. The tax rate selected is the typical, corporate tax rate in the relevant country. Other variables relevant to our estimates are not typically reported, therefore, we have used the following set of standard assumptions where necessary: - 30% efficiency (n.b. this is the top-end of the range of recent achieved efficiency) - 25-year operating life - 15-year depreciation schedule - 0.5%/y degradation of cell capacity - $30 /kW of installed capacity operating and maintenance costs - 5% discount rate We have also carried out similar research for coal-fired power plants that have been built in recent years. The key assumptions that underpin our calculations of LCOE for these plants are as follows: - 85% available capacity (i.e. 15% downtime for maintenance and stoppages) - 40-year operating life - 15-year depreciation schedule - 39% efficiency of converting energy in coal to power (n.b unless stated otherwise) - $59 /t coal price (n.b. 
the average price in 2016) - Additional operating costs set to 15% of fuel costs - 5% discount rate To re-iterate, we have not included financial assistance that reflects the fact that solar power is a merit good and, similarly, we have not included any form of additional costs in our calculations for coal plants to account for negative externalities, such as carbon taxes or the cost of collecting and sequestering carbon dioxide (CO2). To illustrate, a coal plant with 39% efficiency emits ~0.9 tCO2/MWh, and a CO2 price of $30 /t, for example, would add $27 /MWh to the levelised cost of coal generation. This would approximately double the calculated LCOE; however, we have similarly not included battery storage costs for solar farms, which will be critical in order for large-scale plants to be accommodated in the grid and these could add 30-50% to installation costs of solar plants. In this analysis, we solely aim to compare the underlying costs of both types of power supply in the absence of government or financial intervention. The charts below plot the installation costs of solar and coal power plants and our estimates of the LCOE for each plant. The charts above demonstrate a handful of important and interesting trends. Firstly, it is abundantly clear that the cost reductions achieved in the solar industry since ~2010 are extremely strong; installation and levelised costs for plants built in the last year or so are ~70% lower than plants built in 2010. However, costs of plants installed between 2015-2017 H1 have remained broadly stable and, so far, we have been unable to find a solar plant that has been built with a lower LCOE than a standard coal-fired plant under the aforementioned assumptions. We are certain this finding will be questioned; however, the data for plants built in the last three years or so suggests that costs have not been falling so rapidly and none have been cheaper than coal plants. That is, solar costs for plants already up-and-running appear to have plateaued and remain approximately twice as expensive as coal-fired plants under our assumptions. Costs plummeting in the next wave of solar plants Indeed, the common consensus is that solar power costs are continuing to fall very sharply and that solar is already more competitive than coal. This view is driven by the very low power price agreements (PPAs) being signed by solar power developers in the last 18 months or so for plants that are, or will soon be, under construction. For example, Avaada Power and Phelan Energy have signed PPAs at INR2.62 /kWh ($40 /MWh) for plants they are building in India and Solarpack Corporation and Jinko Solar/Marubeni have agreed to supply power at $29 /MWh from their plants in Chile and the UAE respectively. These recent PPAs strongly suggest that the costs of the plants being constructed today have dropped massively and we are witnessing a step-change in costs similar to that from ~2010-2015. For example, PPAs of $29-50 /MWh for upcoming plants in India, Chile, the US or the UAE, under the standard assumptions previously listed (n.b. without subsidy, tax benefits, favourable financing, etc.) require installation costs of ~$0.50-0.80 /W (n.b. depending on capacity); a ~50-70% fall compared with the cost of plants installed in the last two years. Needless to say, this is a phenomenally fast decline and is hard to envisage given the fact that solar module prices – while also much lower than in recent years – are currently $0.33 /W. 
In recent years, solar module prices have accounted for ~25-35% of total installation costs and ‘other’ costs of installation have typically amounted to >$1.00 /W. In the next wave of power plants, the implication is ‘other’ costs will fall to ~$0.17-0.47 /W. Publicly available cost estimates for these upcoming plants are difficult to find. One exception is for Jinko Solar and Marubeni’s plant in Abu Dhabi. It has been reported that the joint venture has raised $870 M in finance for the 1.17 GW plant. This will be the largest solar plant outside China (n.b. and possibly the second largest plant in the world). The implied installation cost of $0.74 /W, while significantly lower than the cost of plants built in the last few years, is still not low enough to warrant the reported PPA of $29 /MWh under un-subsidised conditions. In the absence of some form of subsidy or financial assistance – or a technological jump in efficiency or lifespan of cells – we estimate the LCOE of this particular plant will be $41 /MWh. Similarly, plants are being constructed in countries such as Mexico, Philippines and Vietnam with capital expenditure budgets that imply the LCOE will be $45-55 /MWh. It is possible that some of these budgets will not be kept to and/or plants encounter unanticipated problems in the years ahead; this is a common occurrence in a wide range of capital-intensive industries. Having said that, the $45-55 /MWh cost level is competitive with – but not necessarily lower than – coal-fired power plants. Under legislative environments where the negative externalities associated with a coal-fired power plant are not accounted for (n.b. we are not suggesting this would be a positive situation), a coal-fired power plant remains a far less risky investment than a solar power plant. For example, while the efficiency of solar plants is undoubtedly improving, there is less certainty regarding actual output compared with a coal plant where the technology and process is established and well understood. Furthermore, we have assumed that future solar plants will operate at 30% utilisation; this level is at the top end of the range of utilisations of recent plants in ‘sun-belt’ countries, and lower actual results would result in higher LCOEs and lower returns on investment. As such, in those countries that are seeking to electrify and bring benefits to their population, coal-fired generation will likely remain the dominant form of power generation chosen to underpin expansion. Nonetheless, the reality is the negative externalities associated with burning coal are starting to be passed on to power producers and the prospect of carbon charges of, potentially, ~$20-30 /MWh, or higher even, in the future makes an investment in a coal plant less attractive. Meanwhile, solar power developers have gained, and in some countries continue to do so, from a combination of tax benefits, cheap financing and subsidy and this, as well as the strong cost reductions being achieved, is driving strong growth in the sector. These benefits – while justified and largely supported by the public in many countries – do come at a financial cost to governments and, ultimately, consumers. However, our research suggests that solar power costs are falling towards a level where the industry can compete with coal plants on a like-for-like basis in some circumstances and, for this reason, subsidies in certain countries are being reduced. 
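As a rough cross-check of the relationship between installation cost and LCOE discussed above (for example, the $41 /MWh estimate at an implied $0.74 /W), the calculation can be sketched in a few lines. The snippet below uses only the headline solar assumptions listed earlier (30% utilisation, 25-year life, 5% discount rate, $30 /kW-year operating costs) and ignores tax, depreciation schedules and cell degradation, so it comes in somewhat below CRU's fuller estimates; it illustrates the method rather than reproducing their model:

```python
def solar_lcoe(capex_per_w, capacity_factor=0.30, life_yrs=25,
               discount=0.05, om_per_kw_yr=30.0):
    """Simplified levelised cost of electricity, in $/MWh, per kW of capacity."""
    capex_per_kw = capex_per_w * 1000.0
    # Capital recovery factor spreads the upfront cost over the plant's life
    crf = discount / (1 - (1 + discount) ** -life_yrs)
    annual_cost = crf * capex_per_kw + om_per_kw_yr      # $ per kW-year
    annual_mwh = 8760 * capacity_factor / 1000.0         # MWh per kW-year
    return annual_cost / annual_mwh

# Installation costs in the range implied by the recent PPAs discussed above
for capex in (0.50, 0.74, 0.80):
    print(f"${capex:.2f}/W -> ~${solar_lcoe(capex):.0f}/MWh")
```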
Conclusion PPAs of <$30 /MWh have spurred numerous comments that solar power is now cheaper than fossil fuel generation. The cost reductions and technological progress required for these plants to provide power at these tariffs – under normal financing terms and tax payments etc. – appear optimistic. Nonetheless, we believe that levelised costs of solar generation will fall to $45-55 /MWh in the coming years, without subsidy, in countries with abundant sunshine and land and this level is one that is competitive with coal-fired power generation; this is a massive achievement considering that costs were approximately four times higher just two years ago. However, this does not imply a step change in the costs of total power generation and we do not believe that the world is at the dawn of an era of low-priced electricity. In addition, it should be noted that in regions with less sunlight (e.g. Northern Europe and Northeast Asia), we estimate the cost of solar power remains, for now, twice as high as coal-fired generation in the absence of all forms of policy intervention.
https://www.crugroup.com/knowledge-and-insights/spotlights/is-solar-power-cheaper-than-coal/
--- abstract: 'We give several applications of a lemma on completeness used by Osserman to show the meromorphicity of Weierstrass data for complete minimal surfaces with finite total curvature. Completeness and weak completeness are defined for several classes of surfaces which admit singular points. The completeness lemma is a useful machinery for the study of completeness in these classes of surfaces. In particular, we show that a constant mean curvature one (i.e. CMC-1) surface in de Sitter $3$-space is complete if and only if it is weakly complete, the singular set is compact and all the ends are conformally equivalent to a punctured disk.' address: - | Department of Mathematics, Graduate School of Science,\ Osaka University, Toyonaka, Osaka 560-0043, Japan - ' Department of Mathematics, Tokyo Institute of Technology, O-okayama, Meguro, Tokyo 152-8551, Japan' author: - 'M. Umehara' - 'K. Yamada' date: 'August 02, 2010' title: | Applications of a completeness lemma\ in minimal surface theory\ to various classes of surfaces --- Introduction {#introduction .unnumbered} ============ Firstly, we recall the following lemma given in MacLane [@McL] and Osserman [@O1] to show properties of complete minimal surfaces. We set $$\Delta:=\{z\in {\boldsymbol{C}}\,;\,|z|<1\},\qquad\text{and}\qquad \Delta^*:=\Delta\setminus \{0\}.$$ Let $\omega(z)$ be a holomorphic $1$-form on $\Delta^*$. Suppose that the integral $ \int_{\gamma}|\omega| $ diverges to $\infty$ for all paths $\gamma:[0,1)\to \Delta^*$ accumulating at the origin $z=0$. Then $\omega(z)$ has at most a pole at the origin. This lemma plays a crucial role in showing the meromorphicity of Weierstrass data for complete minimal surfaces with finite total curvature. We show in the first section that it is also useful for showing several types of completeness of surfaces which may admit singularities. In Section \[sec:cor\], we give a further application: Let $S^3_1(\subset {\boldsymbol{R}}^4_1)$ be the de Sitter $3$-space with metric induced from the Lorentz-Minkowski $4$-space ${\boldsymbol{R}}^4_1$, which is a simply-connected $3$-dimensional complete Lorentzian manifold with constant sectional curvature $1$. We consider the projection $$p_L:{{{\operatorname{ SL}}}}(2,{\boldsymbol{C}})\longrightarrow S^3_1={{{\operatorname{ SL}}}}(2,{\boldsymbol{C}})/{{{\operatorname{ SU}}}}(1,1).$$ A holomorphic map $F\colon{}\Sigma^2\to{{{\operatorname{ SL}}}}(2,{\boldsymbol{C}})$ defined on a Riemann surface $\Sigma^2$ is called [*null*]{} if $\det (dF/dz)$ vanishes identically on $\Sigma^2$, where $z$ is an arbitrary complex coordinate on $\Sigma^2$. For a null holomorphic immersion $F:\Sigma^2\to {{{\operatorname{ SL}}}}(2,{\boldsymbol{C}})$, $$f:=p_L\circ F:\Sigma^2\longrightarrow S^3_1$$ gives a spacelike [CMC-$1$]{} (i.e. constant mean curvature one) surface with singularities, called a [*[CMC-$1$]{} face*]{}. Conversely, a [CMC-$1$]{} face $f\colon{}\Sigma^2\to{{{\operatorname{ SL}}}}(2,{\boldsymbol{C}})$ is obtained as $f=p_L\circ F$, where $F\colon{}\widetilde\Sigma^2\to{{{\operatorname{ SL}}}}(2,{\boldsymbol{C}})$ is a null holomorphic immersion defined on the universal cover $\widetilde\Sigma^2$ of $\Sigma^2$ (see [@F] and [@FRUYY] for details). The pull-back metric $ds^2_\#$ of the canonical Hermitian metric of ${{{\operatorname{ SL}}}}(2,{\boldsymbol{C}})$ by the map $F^{-1}$ of taking inverse matrices gives a single-valued positive definite metric on $\Sigma^2$ (see [@FRUYY Remark 1.11]). 
That is, $$\label{eq:ds-sharp} ds^2_{\#}:= {{{\operatorname{ trace}}}}(\alpha_{\#}){\vphantom{\overline{(\alpha_{\#})}}^t \overline{(\alpha_{\#})}} = \bigl(1+|G|^2\bigr)^2 \left|\frac{Q}{dG}\right|^2\qquad \biggl( \alpha_{\#}:=(F^{-1})^{-1}d(F^{-1}) \biggr),$$ where $G$ and $Q$ are the hyperbolic Gauss map and the Hopf differential, respectively (cf. [@FRUYY (1.17)]), and ${\vphantom{(~)}^t (~)}$ denotes the transposition. A [CMC-$1$]{} face $f:\Sigma^2\longrightarrow S^3_1$ is called [*weakly complete*]{} if $ds^2_\#$ is complete. On the other hand, we define another completeness as follows: Let $(N^3,g)$ be a Riemannian or a Lorentzian manifold in general. \[def:complete\] We say a $C^\infty$-map $f\colon \Sigma^2 \to N^3$ is [*complete*]{} if there exists a symmetric covariant tensor $T$ on $\Sigma^2$ with compact support such that $ds^2+T$ gives a complete Riemannian metric on $\Sigma^2$, where $ds^2$ is the pull-back of the metric $g$ on $N^3$, called the [*first fundamental form*]{}. If $f$ is complete and the singular set is non-empty, then the singular set must be compact, by definition. If $f$ is a [CMC-$1$]{} face, completeness implies weak completeness (cf. [@FRUYY Proposition 1.1]). This new definition of completeness is a generalization of the classical one, namely, if $f$ has no singular points, then completeness coincides with the classical notion of completeness in Riemannian geometry. For complete [CMC-$1$]{} faces, we have an analogue of the Osserman inequality for minimal surfaces in ${\boldsymbol{R}}^3$ (cf. [@FRUYY Theorem 0.2]). The goal of this paper is to prove the following assertion: A [CMC-$1$]{} face in $S^3_1$ is complete if and only if it is weakly complete, the singular set is compact and every end is conformally equivalent to a punctured disk. The ‘only if’ part has been proved in [@FRUYY]. The proof of the ‘if’ part in this theorem is given in Section \[sec:cor\]. In this paper, we shall discuss the relationships between completeness and weak completeness on various classes of surfaces with singularities as applications of the completeness lemma. The above theorem is the deepest result amongst them. Applications of the completeness lemma {#sec:complete} ====================================== In this section, we shall give several new applications of the completeness lemma, which show the importance of this kind of assertion (see Question in Remark \[rmk:Q\]). Let $(N^3,g)$ be a Riemannian $3$-manifold. It is well-known that the unit tangent bundle $T_1N^3$ of $N^3$ has a canonical contact structure. A map $L:\Sigma^2\to T_1N^3$ defined on a $2$-manifold $\Sigma^2$ is called a [*Legendrian immersion*]{} if and only if $L$ is an isotropic immersion. Then a map $f:\Sigma^2\to N^3$ is called a [*wave front*]{} or a [*front*]{} if there exists a Legendrian immersion $L_f:\Sigma^2\to T_1N^3$ that is a lift of $f$. The pull-back metric $$\label{eq:tau} d\tau^2:=L_f^*\tilde g$$ of the canonical metric $\tilde g$ of $T_1N^3$ coincides with the sum of the first fundamental form and the third fundamental form of $f$ if $(N^3,g)$ is a space form. Then $f$ is called [*weakly complete*]{} if $L_f^*\tilde g$ is a complete Riemannian metric on $\Sigma^2$. It should be remarked that the completeness (in the sense of Melko and Sterling [@MS]) of surfaces of constant Gaussian curvature $-1$ in ${\boldsymbol{R}}^3$ coincides with our notion of weak completeness of wave fronts. The differential geometry of wave fronts is discussed in [@SUY]. 
This weak completeness is different from that for [CMC-$1$]{} faces as in the introduction. Moreover, weak completeness for improper affine spheres given in Remark \[rem:impas\] of this section is also somewhat different from that for fronts and [CMC-$1$]{} faces. The authors do not yet know of a unified treatment of weak completeness. In fact, there might be several possibilities for completeness of a given class of surfaces with singularities. Definition \[def:complete\] defines completeness for wave fronts. By definition, completeness implies weak completeness. These two notions of completeness were defined for flat fronts (i.e. wave fronts whose first fundamental form has vanishing Gaussian curvature on their regular sets) in the hyperbolic $3$-space $H^3$, the Euclidean $3$-space ${\boldsymbol{R}}^3$, and the $3$-sphere $S^3$, respectively (cf. [@KRUY], [@MU] and [@KitU]). In particular, fundamental properties of flat surfaces in $H^3$ were given in Gálvez, Martínez and Milán [@GMM1]. Later, further properties for such surfaces as wave fronts were given in [@KUY] and [@KRUY]. Let $\Sigma^2$ be a $2$-manifold and $f:\Sigma^2\to H^3$ a flat front. In this case, there is another lifting of $f$ (different from $L_f$) as follows: the map $f$ induces a canonical complex structure on $\Sigma^2$, and there exists a holomorphic immersion $F:\widetilde \Sigma^2\to {{{\operatorname{ SL}}}}(2,{\boldsymbol{C}})$ such that $f=\pi\circ F$ holds and $F^{-1}dF$ is off-diagonal, where $$\pi:{{{\operatorname{ SL}}}}(2,{\boldsymbol{C}})\longrightarrow H^3={{{\operatorname{ SL}}}}(2,{\boldsymbol{C}})/{{{\operatorname{ SU}}}}(2)$$ is the canonical projection. Remarkably, the weak completeness of $f$ is equivalent to the completeness of the pull-back metric of the canonical Hermitian metric of ${{{\operatorname{ SL}}}}(2,{\boldsymbol{C}})$ by $F$ (see [@KUY]). For complete flat fronts in $H^3$, we have an analogue of the Osserman inequality for complete minimal surfaces (see [@KUY] for details). The caustic (i.e. focal surface) of a complete flat front $f:\Sigma^2\to H^3$ is weakly complete (by taking a double cover of $\Sigma^2$ if necessary). The asymptotic behavior of weakly complete (but not complete) flat fronts in $H^3$ was analyzed in [@KRUY2]. A flat front in $H^3$ is complete if and only if it is weakly complete, and the singular set is compact and each end of the front is conformally equivalent to a punctured disk. It should be remarked that another characterization of completeness of $f$ using the total curvature of $d\tau^2$ is given in [@KRUY Theorem 3.3]. The ‘only if’ part has been proved in [@KRUY Theorem 3.3], since completeness at each end implies that the end is conformally equivalent to a punctured disk. So we shall prove the converse. We use the same notations as in [@KRUY]. Let $f:\Delta^*\to H^3$ be a flat immersion which is weakly complete at $z=0$. Then, the metric $d\tau^2$ given in is written as $d\tau^2=|\omega|^2+|\theta|^2$, where $\omega$ and $\theta$ are the canonical forms as in [@KRUY (2.4) and (2.7)]. Since $f$ is weakly complete at $z=0$, the length $L(\gamma)$ with respect to the metric $d\tau^2=|\omega|^2+|\theta|^2$ diverges to $\infty$ for all paths $\gamma:[0,1)\to \Delta^*$ accumulating at the origin. Though $\omega$ is defined only on the universal cover $\widetilde\Delta^*$ of $\Delta^*$, $|\omega|$ is well-defined on $\Delta^*$. 
Hence there exists $\mu\in[0,1)$ such that $$\label{eq:omega-exp} \omega(z) = z^\mu \hat \omega(z)\,dz,$$ where $\hat \omega(z)$ is a holomorphic function on $\Delta^*$. (We have not yet excluded the possibility that $\hat \omega(z)$ has an essential singularity at $z=0$.) We shall now prove that $\hat \omega(z)$ has at most a pole there. For this purpose, we set $\rho:=\theta/\omega$. Then the singular set is characterized by $\{|\rho|=1\}$ (cf. [@KRUY (2.6)]). We may assume $|\rho|<1$ on $\Delta^*$ without loss of generality by exchanging roles of $\omega$ and $\theta$ (see [@KRUY page 270]), if necessary, because $f$ is an immersion. Then we have that for any path $\gamma$ accumulating at $0$, $$\infty=L(\gamma)= \int_{\gamma} \sqrt{|\omega|^2+|\theta|^2} \le \int_{\gamma} \sqrt{(1+|\rho|^2)|\omega|^2} \leq \sqrt{2}\int_{\gamma}|\omega| \leq \sqrt{2} \int_{\gamma}|\hat \omega(z)|\,|dz|.$$ Then by the completeness lemma, $\hat \omega(z)$ has at most a pole. On the other hand, $|\rho|$ is well-defined on $\Delta^*$ and $|\rho|<1$ holds on $\Delta^*$. Thus there exists a real number $\nu\in [0,1)$ such that $\rho(z)=z^\nu \hat \rho(z)$, where $\hat \rho(z)$ is a holomorphic function on $\Delta^*$ (that is, $\hat \rho(z)$ may have essential singularity at the origin). We have on $\Delta^*$ that $$1>|z^{1-\nu}|>|z^{1-\nu}\rho(z)| =|z \hat \rho(z)|.$$ Then by the great Picard theorem, $z \hat \rho(z)$ is meromorphic at the origin. Summing up, $\omega$ and $\theta=\rho\omega$ are meromorphic $1$-forms on a neighborhood of the origin. Then [@KRUY (3.2)] yields that $d\tau^2$ has finite total curvature, and completeness follows from [@KRUY Theorem 3.3]. The assumption ‘conformal equivalency of the end to the unit punctured disc’ in Proposition cannot be dropped. However, in other situations, this condition might not be required. In fact, a flat front in ${\boldsymbol{R}}^3$ is complete if and only if it is weakly complete and the singular set is compact (see [@MU Corollary 4.7]). On the other hand, a weakly complete immersion may not be complete in general. In fact, we set $$ds^2=du^2+2\cos \omega(u,v)\,du\,dv+dv^2$$ on $({\boldsymbol{R}}^2;u,v)$, where $\omega:{\boldsymbol{R}}^2\to (0,\pi)$ is a $C^\infty$-function satisfying $ {\partial^2 \omega(u,v)}/{\partial u \partial v}=0. $ Then it is known that (cf [@Kit] or [@KitU]) there is a flat immersion $f:{\boldsymbol{R}}^2\to S^3$ into the unit sphere $S^3$ such that $ds^2$ is the first fundamental form and $2\sin\omega(u,v)\,du\, dv$ is the second fundamental form. So if we set $$\omega(u,v) =\arcsin \left(\frac{e^{-u^2}}2 \right) +\arcsin \left(\frac{e^{-v^2}}2 \right),$$ there is an associated flat immersion of ${\boldsymbol{R}}^2$ into $S^3$. In this case, the length of the curve $[0,\infty)\ni t\mapsto (t,-t)\in {\boldsymbol{R}}^2$ with respect to $ds^2$ is finite, that is $$\int_0^\infty \sqrt{2\bigl(1-\cos\omega(t,-t)\bigr)} =\int_0^\infty 2\sin \frac{\omega(t,-t)}2 dt =\int_0^\infty\exp\left(-t^2\right)dt<\infty.$$ Since the sum of the first and the third fundamental forms is given by $du^2+dv^2$, the immersion $f$ is weakly complete, but not complete. \[rem:impas\] Improper affine spheres in ${\boldsymbol{R}}^3$ are closely related to flat surfaces in $H^3$ (see [@Mz2] and also [@IM]). An improper affine sphere with admissible singular points in the affine space ${\boldsymbol{R}}^3$ is called an [*improper affine map*]{} (see [@Mz]). 
Martínez [@Mz] defined completeness for improper affine maps and proved an analogue of the Osserman inequality. Definition \[def:complete\] is a generalization of this completeness and completeness for wave fronts. An improper affine map is called [*weakly complete*]{} if the metric $d\tau^2$ as in [@Mz (9)] is complete. Let $\Sigma^2$ be a Riemann surface and $(F,G)$ a pair of holomorphic functions on $\Sigma^2$ such that 1. \[item:ia-map-1\] ${{{\operatorname{ Re}}}}(F\,dG)$ is exact, 2. \[item:ia-map-2\] $|dF|^2+|dG|^2$ is positive definite. Then the induced mapping $f:\Sigma^2\to {\boldsymbol{R}}^3={\boldsymbol{C}}\times {\boldsymbol{R}}$ given by $$f:=\left( G+\overline F, \frac{|G|^2-|F|^2}{2}+{{{\operatorname{ Re}}}}\left(GF-2\int FdG\right) \right)$$ is an improper affine map with $d\tau^2=2(|dF|^2+|dG|^2)$. Conversely, any improper affine map is given in this way. The following assertion holds: [ *An improper affine map in ${\boldsymbol{R}}^3$ is a wave front. Moreover, it is complete if and only if it is weakly complete, the singular set is compact and all ends are conformally equivalent to a punctured disk.*]{} In fact, $\nu:=\left( \bar F-G, 1\right)$ gives a Euclidean normal vector field along $f$ (cf. [@Mz (8)]), and one can easily verify that $f$ is a wave front, that is, $(f,[\nu]):\Sigma^2 \to {\boldsymbol{R}}^3\times P^2({\boldsymbol{R}})$ gives an immersion if and only if (ii) holds. (Nakajo [@N Proposition 4.3] gave an alternative proof of this fact.) To prove the second assertion, we use the same notations as in [@Mz]. Let $f:\Sigma^2\to {\boldsymbol{R}}^3$ be a complete improper affine map. By [@Mz (9)], we have that $$\begin{aligned} ds^2&=|dF+dG|^2\le (|dF|+|dG|)^2\\ &=|dF|^2+|dG|^2+2|dF||dG| \le 2(|dF|^2+|dG|^2)=d\tau^2, \end{aligned}$$ which implies the completeness of $d\tau^2$, that is, $f$ is weakly complete. On the other hand, by [@Mz Proposition 1], all ends of $f$ are conformally equivalent to a punctured disk. Next, we show the converse. Let $f:\Delta^*\to {\boldsymbol{R}}^3$ be an improper affine immersion which is weakly complete at $z=0$, and for which the end $z=0$ is of punctured type. Then the data $(F,G)$ corresponding to $f$ is a pair of holomorphic functions on $\Delta^*$, and the length $L(\gamma)$ with respect to the metric $d\tau^2=2(|dF|^2+|dG|^2)$ diverges to $\infty$ for all paths $\gamma:[0,1)\to \Delta^*$ accumulating at the origin. Since $f$ is an immersion, $|dF|\ne |dG|$ holds. So we may assume $|dF|<|dG|$ holds on $\Delta^*$ without loss of generality. Then we have that $$\infty=L(\gamma)= \sqrt{2}\int_{\gamma} \sqrt{|dF|^2+|dG|^2} \le 2\int_{\gamma}|dG|.$$ Thus, by the completeness lemma, $dG(z)$ has at most a pole at the origin. Since $|dF/dG|<1$, the great Picard theorem yields that $dF(z)/dG(z)$ has at most a pole at the origin. In particular, both $dF$ and $dG$ have at most a pole at the origin. Thus $dF(0)/dG(0)$ is well-defined, and satisfies $|dF(0)/G(0)|\le 1$. Since the singular set $\{z\in \Delta^*\,;\, |dF(z)/dG(z)|=1\}$ is empty, we have that $|dF(0)/dG(0)|< 1$. Then there exists $\delta\in (0,1)$ such that $|dF(z)/dG(z)|<\delta$ holds near $z=0$, and $$\begin{aligned} ds^2&=|dG+dF|^2=|dG|^2\left|1+\frac{dF}{dG}\right|^2 \ge |dG|^2 \left(1-\left|\frac{dF}{dG}\right|\right)^2 \\ &\ge (1-\delta)^2 |dG|^2 \ge \frac{(1-\delta)^2}2\left(|dF|^2+|dG|^2\right), \end{aligned}$$ which proves the completeness of $f$. Proof of Theorem {#sec:cor} ================ Now we give a proof of the theorem in the introduction. 
It is sufficient to show the converse of [@FRUYY Proposition 1.1]. We use the same notations as in [@FRUYY]. Let $f\colon{}\Delta^*\to S^3_1$ be a [CMC-$1$]{} immersion which is weakly complete at the origin. Since $f$ is an immersion, the metric $d\sigma^2$ given in [@FRUYY (1.15)] is a metric of constant curvature $-1$ on $\Delta^*$. By [@FRUYY Theorem 2.1 and Definition 3.2], $f$ is $g$-regular. Also, by [@FRUYY Corollary 3.1], $f$ must be elliptic or parabolic. Elliptic case {#elliptic-case .unnumbered} ------------- In this case, $f$ is a $g$-regular elliptic end. Since the Schwarzian derivative $S(g)$ of the secondary Gauss map $g$ is a projective connection of elliptic type (see [@FRUYY Section 2]), $g$ is written in the form (see [@FRUYY Proposition 2.2]) $$g = z^{\mu} h(z),$$ where $\mu$ is a real number and $h$ is a holomorphic function with $h(0)\neq 0$. If $\mu\ne 0$, then we can replace $g$ by $1/g$ (see [@FRUYY Remark 1.9]), and so we may assume that $\mu>0$. In particular, $g(0)=0$ holds. On the other hand, if $\mu=0$, then $|g(0)|\ne 1$, since the singular set $\{|g|=1\}$ does not accumulate at the origin. Then, replacing $g$ by $1/g$ if necessary (cf. [@FRUYY Remark 1.9]), we may assume $|g|^2<1-{\varepsilon}$ on $\Delta^*$ for sufficiently small ${\varepsilon}>0$. When $\mu>0$, the inequality $|g|^2<1-{\varepsilon}$ holds trivially. Thus we have that $$\label{eq:hat_ds} d\hat s^2:= (1+|g|^2)^2|\omega|^2 \leq 4 |\omega|^2 \leq \frac{4}{{\varepsilon}^2} (1-|g|^2)^2|\omega|^2 =\frac{4}{{\varepsilon}^2}ds^2,$$ where $(g,\omega)$ is the Weierstrass data as in [@FRUYY (1.6)]. The weak completeness means the completeness of the metric $ds^2_\#$ as in . (The metric $ds^2_\#$ coincides with the metric given in [@FRUYY (1.17)].) On the other hand, the completeness of $ds^2_\#$ is equivalent to that of $d\hat s^2$ (see the last part of the proof of [@FRUYY Proposition 1.1]). By , $ds^2$ is complete at the origin. Parabolic case {#parabolic-case .unnumbered} -------------- In this case, by [@FRUYY Lemma P], $f$ is a $g$-regular parabolic end of the first kind. In the proof of Theorem 3.2 in [@FRUYY], completeness is not used, but what is applied is the fact that the ends are immersed, $g$-regular and of punctured type. So we can refer to all of equations in that proof. One can choose the secondary Gauss map $g$ as (see [@FRUYY (3.2)]) $$\frac{1}{i}\frac{g(z)+1}{g(z)-1}=\hat g(z)=i(h(z)+ {\varepsilon}\log z),$$ where ${\varepsilon}=\pm 1$ and $h(z)$ is a holomorphic function on a sufficiently small neighborhood of the origin. 
Then we have that $$g(z):=\frac{\hat g(z)-i}{\hat g(z)+i},\qquad g'(z)=\frac{2(zh'(z)+{\varepsilon})}{ z(h(z)+{\varepsilon}\log z+1)^2}, \qquad \left('=\frac{d}{dz}\right).$$ Since $zh'(z)+{\varepsilon}$ is bounded on a neighborhood of the origin and $$\label{eq:limit} \frac{|(h(z)+{\varepsilon}\log z+1)|}{|\log z|}\to 1$$ as $z\to 0$, there exists a positive constant $c_1$ such that $$\label{eq:limit2} |g'|\le \frac{c_1}{|z|\left|\log z\right|^2}.$$ On the other hand, by we have that $$\label{eq:limit3} |g(z)| = \left| \frac{h(z)+{\varepsilon}\log z-1 }{h(z)+{\varepsilon}\log z+1} \right| \to 1 \qquad (z\to 0).$$ Moreover, since $$1-|g|^2=\frac{4({{{\operatorname{ Re}}}}h+{\varepsilon}\log|z|)}{|\log z+{\varepsilon}(h+1)|^2},$$ it can be easily checked that there is a constant $c_2$ such that $$\label{eq:limit4} \frac{c_2}{|\log z|}\le |1-|g|^2|.$$ Since $ds^2=(1-|g|^2)^2|Q/dg|^2$, yields that there is a constant $c'_2(>0)$ such that $$\label{eq:limit4a} |z|^2|\log z|^2 \left|\frac{Q}{dz}\right|^2 \le c'_2 ds^2,$$ where $Q=\omega\,dg$ is the Hopf differential (cf. [@FRUYY (1.8)]). Then, there exist positive constants $c_3$ and $c'_3$ such that $$\label{eq:hats} d\hat s^2:= (1+|g|^2)^2|\omega|^2 \leq c_3 \left|\frac{Q}{dg}\right|^2 \leq c'_3 \left|\log z\right|^4\left|\frac{Q}{dz}\right|^2.$$ We recall the following inequality shown on [@FRUYY Appendix A] $$\label{eq:limit5} ds^2 \le c_0 |Q/dz|^2,$$ which holds for $g$-regular parabolic ends of the first kind, where $c_0$ is a positive constant. By , and , we have that $$\begin{aligned} d\hat s^2 & \leq c'_3 \left|\log z\right|^4\left|\frac{Q}{dz}\right|^2 \leq c'_3 c'_2 \frac{\left|\log z\right|^2}{|z|^2}ds^2 \leq c'_3 c'_2 c_0 \frac{\left|\log z\right|^2}{|z|^2}\left|\frac{Q}{dz}\right|^2\\ & \leq \frac{c'_3 (c'_2)^2 c_0}{|z|^4}ds^2 \leq c'_3 (c'_2 c_0)^2 \left|\frac{Q}{z^2 dz}\right|^2.\end{aligned}$$ Again, by the equivalency of completeness of the two metrics $ds^2_\#$ and $d\hat s^2$, the weak completeness implies the completeness of $d\hat s^2$. In particular, $|Q/(z^2dz)|^2$ is a complete metric at the origin. Then by the completeness lemma, $Q/(z^2dz)$ (and so $Q$) has at most a pole at $z=0$. We denote by ${{{\operatorname{ ord}}}}_0 Q$ the [*order*]{} of $Q$ at the origin. For example, ${{{\operatorname{ ord}}}}_0 Q=m$ if $Q=z^m\,dz^2$. Firstly, we consider the case that $z=0$ is not a regular end. Then the hyperbolic Gauss map $G(z)$ of $f$ has an essential singular point at $z=0$ and the Schwarzian derivative $S(G)$ has pole of order $<-2$. Since $f$ is $g$-regular, $S(g)$ is of order $-2$ (see [@FRUYY Definition 3.2]), so the identity $2Q=S(g)-S(G)$ (cf. [@FRUYY (1.20)]) implies ${{{\operatorname{ ord}}}}_0 Q\leq -3$. On the other hand, if $z=0$ is a regular end, then ${{{\operatorname{ ord}}}}_0 Q= -2$ by [@FRUYY Lemma 5.1]. Thus, the Hopf differential of $f$ satisfies ${{{\operatorname{ ord}}}}_0 Q\le -2$. In particular, there exists a constant $c_4>0$ such that $$|Q|\ge \frac{c_4|dz|^2}{r^2}\qquad (r:=|z|)$$ holds on $\Delta^*$. By [@FRUYY (3.4)], we have that $$d\sigma^2:=\frac{4\,|dg|^2}{(1-|g|^2)^2}\le \frac{C^2|dz|^2}{r^2(c+\log r)^2},$$ where $C$ and $c$ are positive constants. Thus there exists a constant $c_5>0$ such that (cf. 
[@FRUYY (1.15) and (1.16)]) $$ds^2=4\frac{|Q|^2}{d\sigma^2} \ge 4(c_5)^2r^2|Q|^2(c+\log r)^2 \ge 4(c_4c_5)^2\frac{(c+\log r)^2}{r^2}|dz|^2.$$ Since $|dz|\geq dr$ and $$\int_1^t \frac{(c+\log r)}{r}dr =\frac{(\log t)^2}{2}+c\log t$$ diverges to $\infty$ as $t\to 0$, the metric $ds^2$ is complete at $z=0$, which proves the assertion. \[rmk:Q\] Related to the above proof of our main theorem, we leave here the following: Let $\omega(z)$ be a holomorphic $1$-form on $\Delta^*$ and $n$ a non-negative integer. Suppose that the integral $$\int_{\gamma} \left|\omega(z) (\log z)^n \right|$$ diverges to $\infty$ for all $C^\infty$-paths $\gamma:[0,1)\to \Delta^*$ accumulating at the origin $z=0$. Then does $\omega(z)$ have at most a pole at the origin? When $n=0$, this question reduces to the original lemma. If the answer is affirmative, one can obtain the meromorphicity of the Hopf differential $Q$ directly by applying it to equation , since the weak completeness implies the completeness of the metric $d\hat s^2$. The proof of the completeness lemma given in [@O2] cannot be modified directly, since the estimate of ${{{\operatorname{ Im}}}}(\log z)$ along a path $\gamma$ seems difficult. Fortunately, in our situation, we have succeeded to prove our main result without applying the statement, because of . It should be remarked that this key inequality itself comes from the Gauss equation of the surface. It is known that [CMC-$1$]{} surfaces in $S^3_1$ have a quite similar properties to spacelike maximal surfaces in the Lorentz-Minkowski space ${\boldsymbol{R}}^3_1$ of dimension $3$ with the metric of signature $(-,+,+)$. We consider a fibration $$p_L:{\boldsymbol{C}}^3\ni (\zeta^1,\zeta^2,\zeta^3)\longmapsto {{{\operatorname{ Re}}}}\left(-\sqrt{-1} \zeta^3,\zeta^1,\zeta^2\right)\in {\boldsymbol{R}}^3_1.$$ The projection of null holomorphic immersions into ${\boldsymbol{R}}^3_1$ by $p_L$ gives spacelike maximal surfaces with singularities, called [*maxfaces*]{} (see [@UY3] for details). Here a holomorphic map $F=(F_1,F_2,F_3)\colon\Sigma^2\to {\boldsymbol{C}}^3$ defined on a Riemann surface $\Sigma^2$ is called [*null*]{} if $(F_1')^2+(F_2')^2+ (F_3')^2$ vanishes identically, where $'=d/dz$ denotes the derivative with respect to a local complex coordinate $z$ of $\Sigma^2$. The completeness and the weak completeness for maxfaces are defined in [@UY3]. As in the case of [CMC-$1$]{} faces in $S^3_1$, completeness implies weak completeness, however, the converse is not true. For complete maxfaces, we have an analogue of the Osserman inequality (cf. [@UY3]). On the other hand, recently, the existence of many weakly complete bounded maxfaces in ${\boldsymbol{R}}^3_1$ has been shown, and such surfaces cannot be complete (see [@MUY]). We can prove the following assertion by applying the completeness lemma: [*A maxface in ${\boldsymbol{R}}^3_1$ is complete if and only if it is weakly complete, the singular set is compact and all ends are conformally equivalent to a punctured disk.*]{} The proof of this assertion is easier than the case of CMC-1 faces in $S^3_1$. The ‘only if’ part has been proved in [@UY3 Corollary 4.8], since finiteness of total curvature implies that all ends are conformally equivalent to a punctured disk. So it is sufficient to show the converse. The notations are the same as in [@UY3]. 
Let $f:\Delta^*\to {\boldsymbol{R}}^3_1$ be a spacelike maximal immersion which is weakly complete at $z=0$, namely, the length $L(\gamma)$ with respect to the metric $d\sigma^2=(1+|g|^2)^2|\omega|^2$ given in [@UY3 Definition 2.7] diverges to $\infty$ for all paths $\gamma:[0,1)\to \Delta^*$ accumulating at the origin. Since $f$ is an immersion, we may assume that $|g(z)|\ne 1$ for all $z\in \Delta^*$. In particular, the image $g(\Delta^*)$ has infinitely many exceptional values. Then by the great Picard theorem, $g$ has at most a pole at the origin. Without loss of generality, we may assume that $g(0)\in {\boldsymbol{C}}$. Then there exists $M>0$ such that $|g(z)|<M$ for $z\in \Delta^*$. Thus we have that $$\infty=L(\gamma)= \int_{\gamma}d\sigma \le (1+M^2)\int_{\gamma}|\omega|.$$ By the completeness lemma, $\omega$ has at most a pole at the origin. Since the Weierstrass data $(g,\omega)$ has at most a pole at the origin, $d\sigma^2$ has finite total curvature. Then completeness follows from [@UY3 Corollary 4.8]. [99]{} , ‘Spacelike CMC $1$ surfaces with elliptic ends in de Sitter $3$-space’ [*Hokkaido Math. J.*]{} 35 (2006) 289–320. , ‘Spacelike mean curvature one surfaces in de Sitter $3$-space’, [*Comm. Anal. Geom.*]{} 17 (2009) 383–427. [ J. A. Gálvez, A. Martínez F. Milán ]{}, ‘Flat surfaces in hyperbolic $3$-space’, [*Math. Ann.*]{} 316 (2000) 419–435. [ G. Ishikawa Y. Machida ]{}, ‘Singularities of improper affine spheres and surfaces of constant Gaussian curvature’, [*Int. J. Math.*]{} 17 (2006) 269–293. , ‘Periodicity of the asymptotic curves on flat tori in $S^3$’, [*J. Math. Soc. Japan*]{} 40 (1988) 457–476. , ‘Extrinsic diameter of immersed flat tori in $S^3$’, preprint. , ‘Flat fronts in hyperbolic $3$-space’, [*Pacific J. Math.*]{} 216 (2004) 149–175. , ‘Flat fronts in hyperbolic $3$-space and their caustics’, [*J. Math. Soc. Japan*]{} 59 (2007) 265–299, , ‘Asymptotic behavior of flat surfaces in hyperbolic $3$-space’, [*J. Math. Soc. Japan*]{} 61 (2009) 799–852. , ‘On asymptotic values’, abstract 603-166, [*Notice of the Amer. Math. Soc*]{} 10 (1963) 482–483. , ‘Complete bounded holomorphic curves immersed in ${\boldsymbol{C}}^2$ with arbitrary genus’, [*Proc. Amer.  Math. Soc.*]{} 137 (2009), 3437–3450. , ‘Improper Affine maps’, [*Math. Z.*]{} 249 (2005) 755–766, , ‘Relatives of Flat Surfaces in $H^3$’, Proceedings of [International Workshop on Integrable systems, Geometry and Visualization]{} (November 19–24 at Kyushu University, Fukuoka, Japan), pp. 115–132, 2005. , ‘Application of soliton theory to the construction of pseudo-spherical surfaces in ${\boldsymbol{R}}^3$’, [*Annals of Global Analysis and Geometry*]{} 11 (1993) 65–107. , ‘Flat surfaces with singularities in Euclidean $3$-space’, [*J. of Differential Geometry*]{} 82 (2009) 279–316. D. Nakajo, ‘A representation formula for indefinite improper affine spheres’, [*Result. Math.*]{} 55 (2009), 139–159. , ‘Global Properties of Minimal Surfaces in $E^3$ and $E^n$’, [*Ann. of Math.*]{} 80 (1964), 340–364. , [*A survey of minimal surfaces*]{} (Dover Publications Inc, 1986). , ‘The geometry of fronts’, [*Ann. of Math.*]{} 169 (2009) 491–529. , ‘Maximal surfaces with singularities in Minkowski space’, [*Hokkaido Math. J.*]{} 35 (2006) 13–40.
CROSS-REFERENCES TO RELATED APPLICATIONS STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT REFERENCE TO SEQUENCE LISTING OR COMPUTER PROGRAM LISTING APPENDIX BACKGROUND OF THE INVENTION BRIEF SUMMARY OF THE INVENTION DETAILED DESCRIPTION OF THE INVENTION The present Utility Patent Application is based upon and claims priority from co-pending U.S. Provisional Patent Application No. 61/939,003 filed Feb. 12, 2014 entitled “Screen Protector”. Not Applicable Not Applicable Fingerprint identification sensors that can detect and identify human fingerprints are increasingly used to secure computers, mobile phones and sensitive facilities. In order to identify a fingerprint, these sensors must be able to detect the ridges and features present on a finger that are unique to that finger. Unfortunately, these fingerprint sensors typically cannot be accessed when the device with the sensor is secured within a protective case. Protective touchscreen covers have been developed that allow a touchscreen to be used when it is beneath the protective cover. Unfortunately, a fingerprint sensor cannot read a fingerprint through existing touchscreen covers. Therefore, what is needed is a way to use existing devices with fingerprint identification sensors while the devices are secured within a protective case. An embodiment of the present invention is directed toward a protective cover for a device having a touchscreen and a fingerprint sensor. The protective cover includes a Polyethylene terephthalate touchscreen cover that covers the touchscreen while allowing the touchscreen to be accessed and used. The touchscreen cover includes an opening that corresponds to the location of the fingerprint sensor on the device when the protective cover is installed over the device. A flexible Thermoplastic polyurethane membrane covers the opening in the touchscreen cover. The flexible membrane is preferably bonded to the touchscreen cover to secure it. The flexible membrane is also preferably provided with a matte textured surface that keeps the membrane from clinging to the sensor and has a pleasing appearance. The touchscreen cover is glued to a rigid frame to produce a protective cover which forms part of its larger device case. The provision of a flexible membrane over the fingerprint sensor allows the fingerprint sensor to be used while the device is contained within the case and thus is a substantial improvement upon the prior art. The present invention is directed toward a protective cover construction for an electronic device that includes a touchscreen and a fingerprint identification sensor. The protective cover utilizes a flexible, non-stick membrane material such as Thermoplastic polyurethane (TPU) through which the fingerprint scanner can read fingerprints. This membrane material is bonded to a die cut Polyethylene terephthalate (PET) plastic substrate touchscreen cover that is glued to a rigid injected polycarbonate plastic frame. An opening is cut in the touchscreen cover that roughly corresponds to the size and location of the fingerprint sensor. The flexible membrane material is then bonded over the opening in the relatively rigid touchscreen cover. The above described assembly is used as a protective screen cover for a device case for a touchscreen device that allows for the use of the fingerprint identification sensor through the protective cover. 
The protective screen cover assembly of Thermoplastic polyurethane flexible membrane, die cut Polyethylene terephthalate substrate, and rigid polycarbonate plastic frame is combined with a device case assembly made up of a silicone shell, rigid polycarbonate plastic shell, polyurethane foam, and silicone camera plug to form the completed device case. Referring now to FIG. 1, an illustration of a protective device cover constructed in accordance with the present invention is shown. The device cover includes a touchscreen cover that is glued to a rigid plastic frame. A hole has been cut in the touchscreen cover that corresponds with the position of the fingerprint sensor when the protective device cover is positioned over the front of the touchscreen device. A flexible membrane with enough flexibility to allow the fingerprint sensor to properly function is then bonded to the cover over the opening. In the embodiment shown the entire assembly is then mounted over the device, onto a back cover for the device to form the device case. While the protective cover shown in FIG. 1 is for an iPhone® case, those skilled in the art will appreciate that the present invention can be readily adapted to any device case for a device having a fingerprint sensor. The above described screen protector allows a user to use the touchscreen of a device as well as its fingerprint sensor while the device is enclosed in a protective case and, therefore, represents a substantial improvement upon the prior art. Although there have been described particular embodiments of the present invention of a new and useful screen protector, those skilled in the art will appreciate that the screen protector can be incorporated into a wide variety of protective cases. Therefore, it is not intended that the particular embodiments described be construed as limitations upon the scope of this invention except as set forth in the following claims. BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS FIG. 1 is an illustration of a protective device cover constructed in accordance with an embodiment of the present invention.
# RuneSoft Runesoft GmbH, stylised as RuneSoft (founded as e.p.i.c. interactive entertainment gmbh), is a German publisher founded in 2000 that ports games to alternative platforms such as Linux, Mac OS X, AmigaOS, MorphOS, and magnussoft ZETA. Alongside their own published games, they also ported Software Tycoon and Knights and Merchants: The Shattered Kingdom for Linux Game Publishing. Starting in 2012 the company started to offer some of their game catalogue on the Desura digital distribution service. ## Published titles ### Released Simon the Sorcerer II: The Lion, the Wizard and the Wardrobe (2000) (Mac OS X and AmigaOS) Knights and Merchants: The Shattered Kingdom (2001) (Linux, Mac OS X and MorphOS) Earth 2140 (2001) (Linux, Mac OS X and AmigaOS) Birdie Shoot (2002) (Windows, Mac OS X, and MorphOS) The Feeble Files (2002) (Mac OS X and AmigaOS) Gorky 17 (2002) (Mac OS X) Robin Hood: The Legend of Sherwood (2002) (Linux, Mac OS X, MorphOS, and magnussoft ZETA) Blitzkrieg (2003) (Mac OS X) Chicago 1930 (2003) (Mac OS X) RHEM (2003) (Windows and Mac OS X) Alida (2004) (Mac OS X) Barkanoid 2 (2004) (Linux, Mac OS X, and MorphOS) Airline Tycoon Deluxe (2005) (Linux, Mac OS X, MorphOS, and magnussoft ZETA, iPhone, Raspberry Pi) Cold War (2005) (Mac OS X) RHEM 2: The Cave (2005) (Windows and Mac OS X) Ankh (2005) (Linux and Mac OS X) Ankh: Heart of Osiris (2006) (Linux and Mac OS X) Buku Sudoku (2006) (Mac OS X) Ankh: Battle of the Gods (2007) (Mac OS X) Jack Keane (2007) (Linux and Mac OS X) 101 Puppy Pets (2007) (Mac OS X) Europa Universalis 3 (2007) (Mac OS X) Global Conflicts: Palestine (2007) (Mac OS X) Big Bang Board Games (2008) (Mac OS X) Dream Pinball 3D (2008) (Mac OS X) Global Conflicts: Latin America (2008) (Mac OS X) RHEM 3: The Secret Library (2008) (Windows and Mac OS X) Hearts of Iron III (2009) (Mac OS X) RHEM 4: The Golden Fragments (2010) (Windows and Mac OS X) Patrician IV (2010) (Mac OS X) Buku Kakuro (Mac OS X) Burning Monkey Mahjong (Mac OS X) Burning Monkey Solitaire (Windows and Mac OS X) Dr. Tool (R) Maths Trainer (Mac OS X) Dr. Tool (R): Eye Trainer (Mac OS X) Dr. Tool(R) Brain Jogging Vol. 2 (Mac OS X) Mahjongg Mac (Mac OS X) MangaJONGG (Mac OS X) Metris IV (Mac OS X) Murmeln and More (Mac OS X) Solitaire Mac (Mac OS X) The Legend of Egypt (Mac OS X) The Legend of Rome (Mac OS X) The Legend of the Tolteks (Mac OS X) Toysight (Mac OS X) Pet Doc (Mac OS X) Lemurs (Mac OS X) Northland (Linux and Mac OS X) Software Tycoon (AmigaOS and Linux) Strategy 6 (Mac OS X) The 8th Wonder of the World (Mac OS X) Winter Games (Mac OS X) Officers: World War II (Mac OS X)
https://en.wikipedia.org/wiki/RuneSoft
The minimum participation requirement for Cruzers is to come to at least two practices per week and to attend at least three swim meets during our season (not including the "mini meet" or the All Star meet). All families need to volunteer at three swim meets. Swimmers who are age 5 or younger as of May 1, 2020 and who are new to swim team must come for an evaluation before registering, to ensure that they meet our minimum swim requirements. Evaluations will be held at the Lost Creek pool on Monday through Wednesday, 3:30-4:00, during the week prior to Spring Break (March 9-11), and on Mondays, March 30, April 6, and April 13, from 3:30-4:00. Due to other programming at the pool, we are not able to offer any additional practice times for swimmers missing practices due to VBS, camps, vacations, etc. Swimmers must come to their scheduled practice group for the entire season.
April 20-May 28 School Schedule (times subject to change): Practices held Monday-Thursday
7-8: 4:00-4:30
6 & U: 4:30-5:00
9-10: 5:00-5:45
11 & up: 5:45-6:30
June 1 to June 19 Summer Schedule (times subject to change): Practices held Monday-Friday
6 & Unders: 8:30-9:00
7-8: 9:00-9:45
9-10: 9:45-10:30
11 & ups: 10:30-11:30
Friday Practice Schedule (June 5, June 12, and June 19 only):
https://www.teamunify.com/TabGeneric.jsp?_tabid_=111631&team=recahsllcc
By Mark McConville ESCAPE to the country in this stunning two-bed cottage that featured heavily in the BBC series All Creatures Great and Small, and is now available to rent for just £14 per person per night. Incredible images show the exterior of the quaint cottage as well as the beautiful surrounding countryside. Other stunning pictures show that the cottage has been fully renovated with a modern kitchen, cosy living room and spacious bedrooms. Helwith Cottage is located near Marske in the remote hamlet of Helwith, Yorkshire and sits on the edge of the 7,000-acre Barningham and Holgate Estate, owned by Sir Edward and Lady Milbank, and can be rented on cottages.com for £405 for a week's stay. As the cottage sleeps four this works out at just £14 per person for each night. “The estate, as a whole, has been in the Milbank family for a number of years (since 1690), so when my husband’s parents decided that it was time to pass Barningham onto the next generation, it was a privilege to take on the next phase of the estate’s life,” said Lady Milbank. “Understanding the importance of tourism in the beautiful Dales, we were keen to continue with our programme of restoring the numerous redundant farm buildings. “The hamlet of Helwith used to be a thriving community, however as the lead and tin manufacturing activities in the area declined, so did the population.” All Creatures Great and Small is a British television series based on the books of the British veterinary surgeon Alf Wight, who wrote under the pseudonym James Herriot. The BBC series had two runs: the original (1978 to 1980, based directly on Herriot’s books) was for three series; the second (1988 to 1990, filmed with original scripts) for four. A total of ninety episodes were broadcast.
https://mediadrumworld.com/2019/01/24/yorkshire-cottage/
We studied predation by colonizing wolves on a high density and highly productive moose (Alces alces) population in south-eastern Norway (about 1.5 moose and 0.01 wolves per km2 in winter). As indices to population changes, we used hunter observations. Over the summer, the wolf pack utilized about one tenth of their total territory (530 km2), with the den area as the centre of activity. Of the main prey taken (moose, roe deer, and beaver), moose calves contributed 61% of the biomass ingested by wolves in summer. Hunting statistics and hunters’ observations of moose showed no changes for the territory as a whole after wolves settled there in 1998. However, in the den areas (60 - 80 km2) the number of calves per cow and the total number of moose seen per hunter-day significantly decreased during the year of wolf reproduction. The following year, though, both indices increased again. We speculate that some of the lack of overall effects might be due to increased fecundity in cows that lost their calf. As the wolves changed their den from year to year, den areas were spatially spread over time. The pressure from wolf predation will differ between cohorts in the same area, and landowners should adjust their hunting quotas accordingly. Downloads Published How to Cite Issue Section License - Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal. - Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
https://alcesjournal.org/index.php/alces/article/view/491
Q: Can two characters be added or subtracted?

I came across this snippet of code which outputs the following. I need to know whether two characters can be added or subtracted from each other, and how?

char Ch = 'A';
printf("%c", Ch + 'a' - 'A');

Output: a

A: From the C Standard (6.2.5 Types):

17 The type char, the signed and unsigned integer types, and the enumerated types are collectively called integer types. The integer and real floating types are collectively called real types.

and

18 Integer and floating types are collectively called arithmetic types. Each arithmetic type belongs to one type domain: the real type domain comprises the real types, the complex type domain comprises the complex types.

and (6.5.6 Additive operators):

2 For addition, either both operands shall have arithmetic type, or one operand shall be a pointer to a complete object type and the other shall have integer type. (Incrementing is equivalent to adding 1.)

So in this expression Ch + 'a' - 'A' all operands have arithmetic types. The object Ch has the type char, which in this expression is implicitly converted to the type int due to the integer promotions. The character constants 'a' and 'A' have the type int. So the expression is evaluated using the internal representations of the character constants and of the object Ch converted to the type int. The result of the expression is then output as a character, using the resulting value as the internal representation of an object of the type char.

To make it clearer, consider the following demonstration program.

#include <stdio.h>

int main(void)
{
    char Ch = 'A';

    printf( "%c = %d, "
            "%c = %d, "
            "%c + %c - %c = %d + %d - %d => "
            "%d = %c\n",
            Ch, Ch,
            'a', 'a',
            Ch, 'a', 'A',
            Ch, 'a', 'A',
            Ch + 'a' - 'A',
            Ch + 'a' - 'A' );
}

Its output might look like:

A = 65, a = 97, A + a - A = 65 + 97 - 65 => 97 = a
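As a small illustrative addition (a sketch, not part of the question or the quoted answer), the same integer arithmetic on characters is what makes the common case-conversion and digit-conversion idioms work. The example below assumes an ASCII-compatible execution character set for the case conversion; the contiguity of '0'..'9' is guaranteed by the C standard, while the spacing of the letters is not.

#include <stdio.h>

int main(void)
{
    /* Case conversion: add the fixed offset 'a' - 'A' (32 in ASCII).
       This assumes an encoding in which upper- and lower-case letters
       are separated by one constant offset, which holds for ASCII but
       is not required by the C standard. */
    char upper = 'G';
    char lower = (char)(upper + ('a' - 'A'));   /* 'G' becomes 'g' */

    /* Digit-to-value conversion: '0'..'9' are required to be
       contiguous, so subtracting '0' yields the numeric value. */
    char digit = '7';
    int  value = digit - '0';                   /* 7 */

    printf("%c -> %c, %c -> %d\n", upper, lower, digit, value);
    return 0;
}

For portable case conversion one would normally use tolower or toupper from <ctype.h>; the point here is only that, exactly as in the quoted answer, the char operands are promoted to int before the addition and subtraction are performed.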
Cronin, T. W., & Wing, A. A. (2017). Clouds, Circulation, and Climate Sensitivity in a Radiative-Convective Equilibrium Channel Model. J. Adv. Model. Earth Syst., 9(8), 2883–2905. Fleisher, D. H., Condori, B., Quiroz, R., Alva, A., Asseng, S., Barreda, C., et al. (2017). A potato model intercomparison across varying climates and productivity levels. Glob Change Biol, 23(3), 1258–1281. Follstad Shah, J. J., Kominoski, J. S., Ardón, M., Dodds, W. K., Gessner, M. O., Griffiths, N. A., et al. (2017). Global synthesis of the temperature sensitivity of leaf litter breakdown in streams and rivers. Glob Change Biol, 23(8), 3064–3075. Fronzek, S., Pirttioja, N., Carter, T. R., Bindi, M., Hoffmann, H., Palosuo, T., et al. (2018). Classifying multi-model wheat yield impact response surfaces showing sensitivity to temperature and precipitation change. Agricultural Systems, 159, 209–224. Harrington, J., Chi, H., & Gray, L. P. (2017). Florida tourism. In E. P. Chassignet, J. W. Jones, V. Misra, & J. Obeysekera (Eds.), Florida's climate: Changes, variations, & impacts (pp. 297–309). Gainesville, FL: Florida Climate Institute. Abstract: Tourism is one of the largest economic industries in Florida. In 2015, a record 106.3 million tourists visited Florida (about five visitors per resident), with an economic impact of about $90 billion. Tourism also provides additional benefits for federal, state, and local governments in the form of taxes (e.g., excise, sales, income, and property taxes). In Florida, tourism accounts for over one million direct jobs and an additional 1.5 million indirect and supply chain jobs. The three industries or business sectors most impacted by tourism and currently experiencing substantial growth in the state, include: leisure and hospitality (e.g., hotels, restaurants, museums, amusement parks, entertainment), transportation (e.g., cruise ships, taxis, airports), and retail trade (e.g., gas stations, retail stores). The 106.3 million tourists comprise approximately 91.2 million out-of-state visitors, 3.9 million Canadian visitors, and 11.2 million overseas visitors. The domestic visitors are anticipated to grow by 20% in 2018. Tourism and the associated industries in Florida are highly vulnerable to climate change over time. The state population and real estate markets continue to grow in the coastal areas, with corresponding increases in property values at risk. In addition, there are losses associated with the properties used to mitigate the effects of climate change. In summary, indicators of climate change, such as higher sea levels and more frequent and powerful hurricanes and other extreme weather events, have the potential to severely impact the tourism industry in Florida. Malone, S. L., Barr, J., Fuentes, J. D., Oberbauer, S. F., Staudhammer, C. L., Gaiser, E. E., et al. (2016). Sensitivity to Low-Temperature Events: Implications for CO2 Dynamics in Subtropical Coastal Ecosystems. Wetlands, 36(5), 957–967. Abstract: We analyzed the ecosystem effects of low-temperature events (< 5 A degrees C) over 4 years (2009-2012) in subtropical short and long hydroperiod freshwater marsh and mangrove forests within Everglades National Park. To evaluate changes in ecosystem productivity, we measured temporal patterns of CO2 and the normalized difference vegetation index over the study period. Both water levels and distance from the coast influenced the ecosystem response to low-temperature events. 
Photosynthetic capacity, or the maximum CO2 uptake rate, and sensitivity to low-temperature events were much higher in mangrove forest than in freshwater marsh ecosystems. During low-temperature events photosynthetic capacity was enhanced in freshwater marsh while it declined in mangrove forests, and respiration rates declined across Everglades ecosystems. While the long hydroperiod freshwater marsh gained 0.26 g CO2 m(-2) during low-temperature events, the mangrove forest had the greatest C lost (7.11 g CO2 m(-2) low-temperature event(-1)) followed by the short hydroperiod freshwater marsh (0.37 g CO2 m(-2) low-temperature event(-1)). Results suggest that shifts in the frequency and intensity of weather anomalies with climate change can alter C assimilation rates in Everglades ecosystems through effects on the photosynthetic capacity of existing species, which might lead to changes in species composition and ecosystem productivity in the future. Mu, C., Wu, X., Zhao, Q., Smoak, J. M., Yang, Y., Hu, L., et al. (2017). Relict mountain permafrost area (Loess Plateau, China) exhibits high ecosystem respiration rates and accelerating rates in response to warming. Journal of Geophysical Research: Biogeosciences, 122(10), 2580–2592. Pirttioja, N., Carter, T., Fronzek, S., Bindi, M., Hoffmann, H., Palosuo, T., et al. (2015). Temperature and precipitation effects on wheat yield across a European transect: a crop model ensemble analysis using impact response surfaces. Clim. Res., 65, 87–105. Abstract: This study explored the utility of the impact response surface (IRS) approach for investigating model ensemble crop yield responses under a large range of changes in climate. IRSs of spring and winter wheat Triticum aestivum yields were constructed from a 26-member ensemble of process-based crop simulation models for sites in Finland, Germany and Spain across a latitudinal transect. The sensitivity of modelled yield to systematic increments of changes in temperature (-2 to +9°C) and precipitation (-50 to +50%) was tested by modifying values of baseline (1981 to 2010) daily weather, with CO2 concentration fixed at 360 ppm. The IRS approach offers an effective method of portraying model behaviour under changing climate as well as advantages for analysing, comparing and presenting results from multi-model ensemble simulations. Though individual model behaviour occasionally departed markedly from the average, ensemble median responses across sites and crop varieties indicated that yields decline with higher temperatures and decreased precipitation and increase with higher precipitation. Across the uncertainty ranges defined for the IRSs, yields were more sensitive to temperature than precipitation changes at the Finnish site while sensitivities were mixed at the German and Spanish sites. Precipitation effects diminished under higher temperature changes. While the bivariate and multi-model characteristics of the analysis impose some limits to interpretation, the IRS approach nonetheless provides additional insights into sensitivities to inter-model and inter-annual variability. Taken together, these sensitivities may help to pinpoint processes such as heat stress, vernalisation or drought effects requiring refinement in future model development. Song, X., Zhang, G. J., & Cai, M. (2014). Characterizing the Climate Feedback Pattern in the NCAR CCSM3-SOM Using Hourly Data. J. Climate, 27(8), 2912–2930. 
Abstract: The climate feedback-response analysis method (CFRAM) was applied to 10-yr hourly output of the NCAR Community Climate System Model, version 3, using the slab ocean model (CCSM3-SOM), to analyze the strength and spatial distribution of climate feedbacks and to characterize their contributions to the global and regional surface temperature T-s changes in response to a doubling of CO2. The global mean bias in the sum of partial T-s changes associated with the CO2 forcing, and each feedback derived with the CFRAM analysis is about 2% of T-s change obtained directly from the CCSM3-SOM simulations. The pattern correlation between the two is 0.94, indicating that the CFRAM analysis using hourly model output is accurate and thus is appropriate for quantifying the contributions of climate feedback to the formation of global and regional warming patterns. For global mean T-s, the largest contributor to the warming is water vapor feedback, followed by the direct CO2 forcing and albedo feedback. The albedo feedback exhibits the largest spatial variation, followed by shortwave cloud feedback. In terms of pattern correlation and RMS difference with the modeled global surface warming, longwave cloud feedback contributes the most. On zonal average, albedo feedback is the largest contributor to the stronger warming in high latitudes than in the tropics. The longwave cloud feedback further amplifies the latitudinal warming contrast. Both the land-ocean warming difference and contributions of climate feedbacks to it vary with latitude. Equatorward of 50 degrees, shortwave cloud feedback and dynamical advection are the two largest contributors. The land-ocean warming difference on the hemispheric scale is mainly attributable to longwave cloud feedback and convection. Woli, P., Jones, J. W., Ingram, K. T., & Hoogenboom, G. (2014). Predicting Crop Yields with the Agricultural Reference Index for Drought. J Agro Crop Sci, 200(3), 163–171. Abstract: A generic agricultural drought index, called Agricultural Reference Index for Drought (ARID), was designed recently to quantify water stress for use in predicting crop yield loss from drought. This study evaluated ARID in terms of its ability to predict crop yields. Daily historical weather data and yields of cotton, maize, peanut and soybean were obtained for several locations and years in the south-eastern USA. Daily values of ARID were computed for each location and converted to monthly average values. Using regression analyses of crop yields vs. monthly ARID values during the crop growing season, ARID-yield relationships were developed for each crop. The ability of ARID to predict yield loss from drought was evaluated using the root mean square error (RMSE), the Willmott index and the modelling efficiency (ME). The ARID-based yield models predicted relative yields with the RMSE values of 0.144, 0.087, 0.089 and 0.142 (kg ha&#8722;1 yield per kg ha&#8722;1 potential yield); the Willmott index values of 0.70, 0.92, 0.86 and 0.79; and the ME values of 0.33, 0.73, 0.60 and 0.49 for cotton, maize, peanut and soybean, respectively. These values indicated that the ARID-based yield models can predict the yield loss from drought for these crops with reasonable accuracy.
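The last abstract evaluates predictions with the root mean square error, the Willmott index of agreement and the modelling efficiency. As an illustrative aside, using the standard textbook definitions of these three statistics rather than anything taken from the cited paper, they can be computed from paired predictions and observations as in the following sketch (all sample values are hypothetical):

#include <math.h>
#include <stdio.h>

/* Standard definitions assumed here: RMSE, Willmott's index of
   agreement d, and modelling efficiency (Nash-Sutcliffe form) for
   predictions p[] against observations o[]. */
static void fit_stats(const double *p, const double *o, int n,
                      double *rmse, double *willmott_d, double *me)
{
    double obar = 0.0;
    for (int i = 0; i < n; i++) obar += o[i];
    obar /= n;

    double sse = 0.0, pot = 0.0, var = 0.0;
    for (int i = 0; i < n; i++) {
        double e = p[i] - o[i];
        sse += e * e;                                 /* sum of squared errors */
        double t = fabs(p[i] - obar) + fabs(o[i] - obar);
        pot += t * t;                                 /* Willmott's potential error term */
        var += (o[i] - obar) * (o[i] - obar);         /* variance of the observations */
    }
    *rmse       = sqrt(sse / n);
    *willmott_d = 1.0 - sse / pot;
    *me         = 1.0 - sse / var;
}

int main(void)
{
    /* Hypothetical relative yields (illustration only). */
    double obs[]  = {0.80, 0.65, 0.90, 0.50, 0.70};
    double pred[] = {0.75, 0.70, 0.85, 0.55, 0.72};
    double rmse, d, me;
    fit_stats(pred, obs, 5, &rmse, &d, &me);
    printf("RMSE=%.3f  d=%.3f  ME=%.3f\n", rmse, d, me);
    return 0;
}

Here RMSE carries the units of the variable, while the Willmott index and the modelling efficiency are dimensionless, with 1 indicating perfect agreement.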
https://floridaclimateinstitute.org/refbase/search.php?sqlQuery=SELECT%20author%2C%20title%2C%20type%2C%20year%2C%20publication%2C%20abbrev_journal%2C%20volume%2C%20issue%2C%20pages%2C%20keywords%2C%20abstract%2C%20thesis%2C%20editor%2C%20publisher%2C%20place%2C%20abbrev_series_title%2C%20series_title%2C%20series_editor%2C%20series_volume%2C%20series_issue%2C%20edition%2C%20language%2C%20author_count%2C%20online_publication%2C%20online_citation%2C%20doi%2C%20serial%2C%20area%20FROM%20refs%20WHERE%20keywords%20RLIKE%20%22sensitivity%22%20ORDER%20BY%20first_author%2C%20author_count%2C%20author%2C%20year%2C%20title&client=&formType=sqlSearch&submit=Cite&viewType=&showQuery=0&showLinks=1&showRows=20&rowOffset=&wrapResults=1&citeOrder=&citeStyle=APA&exportFormat=RIS&exportType=html&exportStylesheet=&citeType=html&headerMsg=
Check the historical on-time performance rating for flight Jet Airways India 9W 825 to help avoid frequently delayed or cancelled flights. Jet Airways 9W825 flight status: 9W825 Cochin COK - Chhatrapati Shivaji BOM. A Jet Airways spokesperson said in a statement that the flight was rescheduled to take off at 14:02 hours, with a delay of two hours on account of a security-related matter. Jet Airways domestic flight 9W825 served the route Kochi/Nedumbassery to Mumbai. This is an archived flight and it is currently unavailable. The last known departure date was 26th October, 2019. 9W826 is a domestic flight operated by Jet Airways. 9W826 departs from Dehradun DED, India and arrives at New Delhi DEL, India. The flight distance is about 13,228 km or 8,220 miles and the flight time is 1 hour 20 minutes. Get the latest status of 9W826 / JAI826 here. 9W/JAI Mumbai 1993 - 2019. Man detained for trying to hijack Jet Airways Cochin-Mumbai flight. There was a major scramble at Cochin airport today after a passenger allegedly tried to hijack Jet Airways flight 9W 825. The man has been detained by the CISF.
http://zoneforfriend.com/Jet%20Airways%209w%20825%202021
--- abstract: 'We present deep spectroscopic follow-up observations of the Bremer Deep Field (BDF) where the two $z\sim$7 bright Ly$\alpha$ emitters (LAE) BDF521 and BDF3299 were previously discovered by @Vanzella2011 and where a factor of $\sim$3-4 overdensity of faint LBGs has been found by @Castellano2016. We confirm a new bright Ly$\alpha$ emitter, BDF2195, at the same redshift of BDF521, $z=7.008$, and at only $\sim$90 kpc physical distance from it, confirming that the BDF area is likely an overdense, reionized region. A quantitative assessment of the Ly$\alpha$ fraction shows that the number of detected bright emitters is much higher than the average found at z$\sim$7, suggesting a high Ly$\alpha$ transmission through the inter-galactic medium (IGM). However, the line visibility from fainter galaxies is at odds with this finding, since no Ly$\alpha$ emission is found in any of the observed candidates with $M_{UV}>$-20.25. This discrepancy can be understood either if some mechanism prevents Ly$\alpha$ emission from fainter galaxies within the ionized bubbles from reaching the observer, or if faint galaxies are located outside the reionized area and bright LAEs are solely responsible for the creation of their own HII regions. A thorough assessment of the nature of the BDF region and of its sources of re-ionizing radiation will be made possible by JWST spectroscopic capabilities.' author: - 'M. Castellano, L. Pentericci, E. Vanzella, F. Marchi, A. Fontana, P. Dayal, A. Ferrara, A. Hutter, S. Carniani, S. Cristiani, M. Dickinson, S. Gallerani, E. Giallongo, M. Giavalisco, A. Grazian, R. Maiolino, E. Merlin, D. Paris, S. Pilo, P. Santini' title: Spectroscopic investigation of a reionized galaxy overdensity at z=7 --- Introduction ============ The redshift evolution of the fraction of LBGs showing Ly$\alpha$ emission [e.g., @Stark2010] allows us to put constraints on the Ly$\alpha$ transmission by the IGM. A substantial decrease of the Ly$\alpha$ fraction between z$\sim$6 and $z\sim$7 has been established by many independent analysis and interpreted as indication of a neutral hydrogen fraction $\chi_{HI}\sim$40-50% at $z\sim$7 [e.g. @Fontana2010; @Vanzella2011; @Pentericci2011; @Schenker2012; @Caruana2012; @Pentericci2014]. The analysis of independent lines of sight presented in @Pentericci2014(P14 hereafter) has also shown that the decrease of the Ly$\alpha$ fraction suggests a patchy reionization process. Among the 8 pointings analysed by P14, the BDF [@Lehnert2003] stands out as a peculiar area in the $z\sim$7 Universe. In fact, a single FORS2 slit mask observation of this field yielded the detection of two bright ($L\sim L^*$) Ly$\alpha$ emitting galaxies, namely BDF-3299 and BDF-521, at z=7.109 and z=7.008 respectively [@Vanzella2011 V11 hereafter]. These two objects, originally selected from our sample of VLT/Hawki-I z-dropout LBGs [@Castellano2010b C10b hereafter], show Ly$\alpha$ equivalent widths $>$ 50Å and are separated by a projected distance of only 1.9Mpc, while the distance computed from Ly$\alpha$ redshifts is 4.4Mpc (see V11). The detection of bright Ly$\alpha$ emission from BDF-3299 and BDF-521 can be explained by these sources being embedded in an HII region that allows Ly$\alpha$ photons to redshift away from resonance before they reach the IGM [e.g. @Miralda1998]. 
However, following @Loeb2005 we estimated that these two galaxies alone cannot generate a large enough HII region, suggesting either the existence of additional ionizing sources in their vicinity [@Dayal2009; @Dayal2011] or the contribution of AGN activity. We identified such potential, fainter re-ionizers through a follow-up HST program [@Castellano2016 C16a hereafter]. The dropout selection yielded a total of 6 additional highly reliable $z>$6.5 candidates at S/N(Y105)$>$10, corresponding to a number density $\gtrsim$3-4 times higher than expected on the basis of the z=7 UV luminosity function [@Bouwens2015; @Finkelstein2015]. A stacking of the available HST and VLT images confirmed that these are robust $z\sim$7 sources. A comparison between observations and cosmological simulations [@Hutter2014; @Hutter2015] showed that this BDF overdensity has all expected properties of an early reionized region embedded in a half neutral IGM. In this paper we present deep spectroscopic follow-up of these additional LBGs aimed at estimating their Ly$\alpha$ fraction and redshift. If the BDF hosted a reionized *bubble* we expect to measure a Ly$\alpha$ fraction higher than in average $z\sim$7 lines of sight and more consistent with the one measured at $z\sim$6. *The BDF is the first $z\sim$7 field where a test of this kind can be performed*. The observations are described in Sect. \[sect\_obs\], while results and the estimate of the Ly$\alpha$ fraction are presented in Sect. \[sect\_res\] and \[sect\_lyafrac\], respectively. Finally, we discuss potential interpretations of our findings and directions for future investigations in Sect. \[sect\_disc\]. Throughout the paper, observed and rest–frame magnitudes are in the AB system, and we adopt the $\Lambda$-CDM concordance model ($H_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_M=0.3$, and $\Omega_{\Lambda}=0.7$). Observations {#sect_obs} ============ We observed the HST-selected candidates with FORS2 on the ESO Very Large Telescope and adopting the same setup used in our previous works which proved to be highly successful for confirming $z\sim 7$ galaxies. We used the 600z+23(OG590) grism (resolution R=1390), with slits $1''$ wide and a length in the range 6-12$''$. This setup maximizes the number of observed targets while enabling a robust sky subtraction and the maximum efficiency in the wavelength range $8000-10100$Å. Our primary targets in the BDF overdensity around the two Ly$\alpha$ emitters are the 6 S/N(Y105)$>$10 LBGs presented in C16a plus additional 8 sources at S/N(Y105)$\sim$5-10. All these sources have magnitude in the range Y105$\sim$26-27.3. We also re-observe the two bright emitters from V11. The mask was observed for a total of 29 hrs, resulting in 22.5 hrs of net integration time after excluding overheads and low quality frames. Results {#sect_res} ======= A new confirmed emitter ----------------------- Out of the 16 candidates observed we confirm one new LBG with bright Ly$\alpha$ emission (Fig. \[fig\_spectra2d\]), BDF2195 at mag Y105$\sim$26. This object was also detected in the HAWKI Y-band catalog presented in C10b but not included in the high-redshift sample due to photometric uncertainties. We detect a clearly asymmetric Ly$\alpha$ line at $\lambda$=9737Å (z=7.008, Fig. \[fig\_spectra1d\]), with FWHM$=$240 km/s (gaussian fit, corrected for instrumental broadening), and flux=1.85$\pm0.46\times$10$^{-17}$ erg s$^{-1}$ cm$^{-2}$, corresponding to an EW=50Å. 
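As a quick consistency check, assuming the usual rest-frame Ly$\alpha$ wavelength $\lambda_0=1215.67$Å, the observed line position corresponds to $$z=\frac{\lambda_{\rm obs}}{\lambda_0}-1=\frac{9737}{1215.67}-1\simeq 7.01,$$ in agreement, within the rounding of the quoted wavelength, with the value $z=7.008\pm0.002$ reported above.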
IGM absorption affects 22% of this bandpass; accounting for this, and removing line contribution, the corrected Y105 continuum magnitude is 26.2. Intriguingly, BDF2195 has exactly the same Ly$\alpha$ redshift as BDF521, and the two have a projected physical separation of only 91.3kpc (Fig. \[fig\_spectra2d\]). No additional lines are clearly found from spectra of the other observed candidates. To determine Ly$\alpha$ detection limits for these other objects, we adapted the simulations presented in F10, V11, P11, P14 and @Vanzella2014 to the new observations (Sect. \[subsect\_sim\]). The typical flux limit is $1.5 \times 10^{-18}$ erg s$^{-1}$cm$^{-2}$ in the range 8100 to 10000Å though it varies depending on the exact wavelength, due to the presence of bright sky emission in this spectral range. The corresponding rest-frame EW limit varies between 10 and 30Å across the redshift range z$\simeq$6-7.2. Details of the observed sample are reported in table \[table1\]. ------ ------------ ------------ ---------------- ----------------- ----------------------- -- -- ID RA Dec $Y_{105}$ Redshift Ly$\alpha$ EW$^a$ (Å) 2883 337.028076 -35.160122 25.97$\pm$0.08 - $<$13 2195 336.942352 -35.123257 26.02$\pm$0.04 7.008$\pm$0.002 50$\pm$12 401 337.051239 -35.172020 26.43$\pm$0.08 - $<$11 3299 337.051147 -35.166512 26.52$\pm$0.08 7.109$\pm$0.002 50$\pm$6$^b$ 521 336.944397 -35.118809 26.53$\pm$0.07 7.008$\pm$0.002 64$\pm$6$^b$ 2009 336.933716 -35.124950 26.89$\pm$0.14 - $<$17 994 336.957092 -35.136780 27.11$\pm$0.19 - $<$19 1147 337.027130 -35.163212 27.26$\pm$0.11 - $<$22 2660 336.940186 -35.116970 27.27$\pm$0.10 - $<$22 2980 337.024994 -35.142494 27.30$\pm$0.12 - $<$22 647 337.034332 -35.168716 27.31$\pm$0.15 - $<$23 1310 336.953339 -35.133030 27.32$\pm$0.16 - $<$23 2391 337.051361 -35.149185 27.33$\pm$0.17 - $<$23 187 336.953186 -35.147457 27.33$\pm$0.10 - $<$23 1899 336.958618 -35.126297 27.35$\pm$0.15 - $<$23 1807 337.057861 -35.155842 27.36$\pm$0.09 - $<$24 2192 337.018158 -35.151600 27.40$\pm$0.10 - $<$25 ------ ------------ ------------ ---------------- ----------------- ----------------------- -- -- Limits on NV$\lambda 1240$ emission ----------------------------------- The wavelength range observed by FORS2 covers the region of NV emission, where however no apparent emission signal is found in any of the three Ly$\alpha$ emitters within 500 km/s from the expected position of the line [e.g. @Mainali2018], resulting in limits on the ratio Ly$\alpha$/NV$\gtrsim$8-10. We then built a weighted average spectrum of the three emitters (see Fig. \[fig\_spectra1d\]) using all data of the present program and of our previous observations to compute limits on the NV emission, under the assumption that the shift between Ly$\alpha$ and NV emission is similar in the three sources. The stacked source has Ly$\alpha$ flux of $16.7\times 10^{-18}$ erg s$^{-1}$ cm$^{-2}$ and a NV$<3.36 \times 10^{-19}$ erg s$^{-1}$ cm$^{-2}$, corresponding to Ly$\alpha$/NV $>$ 17. This limit is much higher than the ratios measured in some z$\gtrsim$7 galaxies and considered indicative of AGN emission, ranging from Ly$\alpha$/NV$\sim$1-2 [@Tilvi2016; @Sobral2017a; @Hu2017] to $\simeq$6-9 [@Laporte2017; @Mainali2018]. Our limit is also higher than the average Ly$\alpha$/NV$\sim$12 found in LBG-selected narrow-line AGNs at $z\sim$2-3 by @Hainline2011. However, the latter work also find that the Ly$\alpha$/NV distribution covers a wide range of values and Ly$\alpha$/NV $\gtrsim$20 are found [see also @McCarthy1993; @Humphrey2008]. 
Finally, NV emission might also lack due to a very low metallicity [though BDF3299 is already fairly enriched, as shown by @Carniani2017]. It is thus not possible to rule out that our emitters also host AGN activity. The Ly$\alpha$ visibility in the BDF region {#sect_lyafrac} =========================================== Simulations of the Ly$\alpha$ population at high-redshift {#subsect_sim} --------------------------------------------------------- Under the scenario where the BDF region is highly ionized compared to the average z=7 universe, our expectation was to detect Ly$\alpha$ also in several faint galaxies. Instead, we only confirmed one new bright source. To assess the significance of this result we run Monte carlo simulations to determine the expected number of objects we should have detected if the BDF region was similar in terms of Ly$\alpha$ visibility to the average z=7 Universe or to the average z=6 one (i.e., with a greater Ly$\alpha$ visibility). [lccc]{} Sample &Total & Bright & Faint\ Observed & 17 & 5 & 12\ Detected in Ly$\alpha$ & 3 & 3 & 0\ [lcccccccc]{} PDF(z) & Ly$\alpha$ & & Probability & & & Expected Number &\ & visibility &$P(tot=3)$&$P(bright=3)$&$P(faint=0)$&$<N_{tot}>$&$<N_{bright}>$&$<N_{faint}>$\ Flat & z=7 & 0.21 & 0.009 & 0.17& 2.1 &0.7 &1.4\ P(z,Y) & z=7 & 0.18& 0.009& 0.22&1.9 & 0.7 & 1.2\ Flat & z=6 & 0.08& 0.035& 0.002 &5.5 & 1.2 &4.3\ P(z,Y) & z=6 & 0.11& 0.036& 0.004&5.0 & 1.2 & 3.8\ We consider the 17 sources presented in Table \[table1\], namely the 16 targets discussed in Sect. \[sect\_obs\] plus another bright object (BDF2883 at Y=25.97) that was observed in the same region with the old FORS2 mask (P11 and V11). First, for each object without a confirmed redshift we extract randomly a redshift according to two cases: a) we assume that the redshift distributions of the candidates follow those derived from the LBG color selection from C16a. These distributions ($P(z,Y)$ in Table \[table2\]) are derived from simulations for three $\Delta mag=0.5$ bins at Y=26-27.5, and peak at $z\sim$6.9 with magnitude-dependent tails covering the range $z\sim$6.0-7.8; b) we assume a *flat* redshift distribution in the small redshift range \[6.95:7.15\] approximately corresponding to a size of 10 Mpc, thus assuming all sources to be part of a unique, localized structure. When a redshift is assigned, we calculate the rest-frame $M_{UV}$ (at $\lambda=$1500Å) of the galaxy on the basis of the observed magnitude assuming a flat spectrum, and we determine the limiting flux at 3$\sigma$ from the expected position of the Ly$\alpha$ line. We then calculate the limiting Ly$\alpha$ line EW ($EW_{lim})$ on the basis of the limiting flux and the observed magnitude. For all objects with a Ly$\alpha$ detection we fix the redshift at the spectroscopic value and determine $M_{UV}$ and $EW_{lim}$ as above. The observed continuum flux is computed from the observed Y105 magnitude for all sources with no line detection. In the case of BDF521 and BDF2195 we adopt as reference the J125 magnitude [@Cai2015] which samples the UV at 1500Å and is not affected by IGM and Ly$\alpha$ emission, while for BDF3299 we correct the observed Y105 magnitude by subtracting line emission and accounting for the portion of filter (27%) sampling IGM. 
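The following is a minimal sketch of this per-object Monte Carlo test (the intrinsic-EW draw and the detection criterion are the ones described in the next paragraph); every distribution, magnitude and flux limit below is a placeholder standing in for the actual $P(z,Y)$ distributions, survey sensitivities and observed EW distributions, which are not reproduced here:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N_OBJ   17
#define N_TRIAL 100000

/* Placeholders for the real ingredients of the simulation. */
static double draw_redshift(void)           { return 6.95 + 0.20 * (rand() / (double)RAND_MAX); } /* "flat" case */
static double limiting_flux(double lambda)  { (void)lambda; return 1.5e-18; }  /* erg s^-1 cm^-2, 3-sigma */
static double draw_intrinsic_ew(int bright) { /* toy exponential rest-frame EW distribution, Angstrom */
    double u = rand() / (double)RAND_MAX;
    return (bright ? 20.0 : 40.0) * -log(1.0 - u + 1e-12);
}

int main(void)
{
    srand(42);
    double y105[N_OBJ];                      /* hypothetical continuum magnitudes of the 17 targets */
    for (int i = 0; i < N_OBJ; i++) y105[i] = 26.0 + 1.4 * i / (N_OBJ - 1);

    double sum_det = 0.0;
    for (int t = 0; t < N_TRIAL; t++) {
        int ndet = 0;
        for (int i = 0; i < N_OBJ; i++) {
            double z = draw_redshift();
            if (z > 7.3) continue;                            /* Ly-alpha falls outside the FORS2 range */
            double lam  = 1215.67 * (1.0 + z);                /* observed line position, Angstrom */
            double fnu  = pow(10.0, -0.4 * (y105[i] + 48.6)); /* AB magnitude -> f_nu */
            double flam = fnu * 2.998e18 / (lam * lam);       /* f_nu -> f_lambda */
            double ew_lim = limiting_flux(lam) / flam / (1.0 + z);  /* rest-frame EW limit */
            int bright = (y105[i] < 26.6);                    /* placeholder for the M_UV < -20.25 split */
            if (draw_intrinsic_ew(bright) > ew_lim) ndet++;   /* counted as a detection */
        }
        sum_det += ndet;
    }
    printf("expected detections per realization: %.2f\n", sum_det / N_TRIAL);
    return 0;
}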
We then extract an intrinsic Ly$\alpha$ EW ($EW_{intr}$) for each object: this is randomly drawn from the observed EW distributions, which are derived separately for faint and bright galaxies ($M_{UV}<-20.25$) from more than 160 $z\sim$6-7 LBGs in the CANDELS fields [@DeBarros2017; @Castellano2017 and Pentericci et al. submitted]. If $EW_{intr}> EW_{lim}$ then the galaxy is counted as a detection; otherwise it is counted as a non-detection. If the extracted redshift is beyond the FORS2 range ($z\lesssim$7.3), it is automatically counted as a non-detection. The procedure is repeated 10$^5$ times for all 17 input objects. As an additional input parameter, we can allow a fraction of the objects to be undetected because they are lower-redshift interlopers. We obtain: 1) the fraction and total number of expected detections at bright and faint magnitudes; 2) the probability of having a total of 0, 1, 2, etc. detections in each sample; and 3) the EW distribution of the detected objects.

Prevalence of bright Ly$\alpha$ emitters
----------------------------------------

We find that the number of bright detected objects (3) points to a very high line visibility in this region. In fact, it is higher than expected at $z\sim$7 and even higher than the $z\sim$6 statistics, though more consistent with the latter scenario of a “clean” $z\sim$6-like visibility: the probability of finding 3 bright emitters given the known $z\sim$7 Ly$\alpha$ visibility is less than 1%. However, the Ly$\alpha$ fraction among faint galaxies is strikingly at odds with the expectations for the “reionized” case. We expect $\sim$4 detections in the faint sample for a $z\sim$6 EW distribution, and the probability of finding none is 0.2-0.4%. No appreciable difference is found between the “flat” and “P(z,Y)” cases. The results are summarized in Table \[table2\]. Adopting a 5$\sigma$ threshold, the number of expected detections decreases by 25%-35% for bright sources and 40-100% for faint ones, depending on the redshift distributions, but remains inconsistent with the observations. We have so far assumed that all our sources are genuine high-redshift galaxies. Only under the extreme assumption of a $\gtrsim$50-70% fraction of interlopers at the faint end (at 5$\sigma$ and 3$\sigma$ thresholds for line identification, respectively) can the null Ly$\alpha$ detection rate among faint galaxies be reconciled with a $z\sim$6 EW distribution at both bright and faint fluxes. We consider this a very unlikely possibility, given the conservative selection criteria adopted and the fact that in none of the unconfirmed galaxies do we detect other features that could point to a low-redshift nature.

Discussion and conclusions {#sect_disc}
==========================

The high detection rate of Ly$\alpha$ emission in the BDF bright sources supports the scenario from C16a, namely that the BDF hosts a reionized bubble where Ly$\alpha$ visibility is enhanced. However, the lack of Ly$\alpha$ detections in faint galaxies is apparently at odds with such a picture. Our observations could imply that, contrary to the reference scenario outlined in C16a, the faint galaxies are actually outside the bubbles, while the bubbles are created by the bright galaxies alone, or thanks to the contribution of objects beyond the current detection limit [as the z$\sim$6 clustered ultra-faint dwarfs observed by @Vanzella2017a; @Vanzella2017b].
The faint galaxies might be part of a superstructure which includes the reionized regions, but their Ly$\alpha$ might be undetected because they lie outside the patches with low neutral fraction. Unfortunately, the available HST imaging observations do not cover the full BDF region ($\sim 2.4\times 2.4$ Mpc at $z \sim$7) but only two $\sim 0.7 \times 0.7$ Mpc areas centred on the emitters, thus preventing detailed constraints on the extent and geometry of the overdensity. To ascertain whether the BDF emitters are capable of reionizing their surroundings, we performed SED-fitting on the available photometry (see C16a for details) and estimated the SFR and ionizing flux of the BDF emitters with our $\chi^{2}$ minimization code *zphot.exe* [@Fontana2000]. The SFR and the age of the galaxies are then used to measure the size ($R_{bubble}$) of the resulting ionized bubbles, assuming a hydrogen clumping factor C=2 and an average neutral hydrogen fraction $\chi_{HI}=0.5$ surrounding the sources at the onset of star-formation [see, e.g., @Shapiro-giroux1987; @Madau1999]. We used both BC03 [@Bruzual2003] and BPASSV2.0 [@Eldridge2009; @Stanway2016] templates with constant SFR, age from 1Myr to the age of the universe at the given redshift, E(B-V) in the range 0.0-1.0 [assuming the @Calzetti2000 extinction curve] and metallicity from 0.02$Z_{\odot}$ to solar. In Fig. \[fig\_bubbles\] we show the $R_{bubble}$ of the ionized regions created by BPASS SED models within 68% c.l. of the best-fit for the three emitters, as a function of the age of the stellar population and for different values of $f_{esc}$. We also show the $R_{bubble}$ ranges for the case where we summed together the ionizing fluxes of the two sources BDF521 and BDF2195, which form a close pair at only $\sim$90 kpc projected separation. The size $R_{bubble}$ must be compared to the dimension $R_{min}$=1.1Mpc [estimated as in @Loeb2005] that enables Ly$\alpha$ to be redshifted enough to reach the observer. On the one hand, BDF521 and BDF2195 would require a high $f_{esc}\gtrsim$20-60% to create a large enough bubble, while BDF3299 is unable to create its own bubble even assuming a 100% escape fraction. On the other hand, when summing the two contributions, the BDF521-BDF2195 pair can create a large enough bubble with $f_{esc}\gtrsim$10-15% (BC03 and BPASSV2, respectively) and constant star-formation for $\gtrsim$400Myr. We do not find solutions that allow age $<$20Myr, which is also consistent with supernovae requiring 3.5-28 Myr to build channels that can allow LyC photons to escape [@Ferrara2013]. We find that results obtained with the BPASSV2 library point to slightly higher reionizing capabilities compared to BC03 ones, as the slightly smaller fitted SFRs partially compensate for the higher ionizing photon production rate of the BPASS models. The aforementioned $R_{min}$ assumes that the Ly$\alpha$ escapes from the galaxies at the systemic redshift. However, line visibility from smaller HII regions is possible in the presence of strong outflows: a 220 km/s shift, which is the median value for galaxies in massive halos from @Mason2018, results in $R_{min}\sim 0.85$ Mpc, which can be reached in a few hundred Myr by the BDF521-BDF2195 pair with $f_{esc}\lesssim$10%, but is still out of reach for BDF3299 without an extreme $f_{esc}$. The case described above considers star formation as the only source of ionizing photons.
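To give a feeling for the numbers entering this estimate, the rough sketch below computes the radius of the ionized region carved by a source of constant SFR after a time $t$. It is not the machinery actually used for Fig. \[fig\_bubbles\]: it neglects recombinations, clumping and the expansion of the Universe, and the ionizing-photon yield per unit SFR is an assumed, IMF-dependent round number rather than a fitted quantity.

```python
import numpy as np

MPC_CM = 3.086e24      # cm per Mpc
YR_S = 3.156e7         # seconds per year

def r_bubble_mpc(sfr, age_myr, f_esc, z=7.0, chi_HI=0.5, nion_per_sfr=2e53):
    """
    Rough physical radius (in Mpc) of the HII region produced by a galaxy with
    constant star-formation rate `sfr` (Msun/yr) after `age_myr` Myr.
    `nion_per_sfr` is an assumed ionizing-photon rate per unit SFR (s^-1 per Msun/yr);
    recombinations and cosmological expansion are ignored.
    """
    n_H = 1.9e-7 * (1.0 + z) ** 3                         # mean H density, cm^-3 (physical)
    n_photons = nion_per_sfr * sfr * f_esc * age_myr * 1e6 * YR_S
    volume = n_photons / (n_H * chi_HI)                   # ionized volume, cm^3
    return (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0) / MPC_CM

# e.g. a close pair with a combined SFR of ~10 Msun/yr:
print(r_bubble_mpc(sfr=10, age_myr=400, f_esc=0.15))      # ~0.9 Mpc, of order R_min
```

With these assumed numbers the pair only reaches a radius of order $R_{min}$ after several hundred Myr of star formation, which is the qualitative behaviour discussed above.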
However, we cannot exclude that the BDF emitters host AGN that could provide a substantial contribution to the ionizing budget, or that the bubbles have been created by past AGN activity. In such a case, bright emitters including BDF3299 could be solely responsible for the creation of reionized regions, assuming lower $f_{esc}$ and/or ages for their stellar populations. As an alternative to scenarios where the ionizing flux is generated by bright galaxies alone, some mechanism must be in place to prevent Ly$\alpha$ from faint galaxies from reaching the observer. A possible explanation can be found in an accelerated evolution of overdensity members compared to the normal field population. The bright emitters are young, relatively dust-free sources [consistent with the ALMA results from @Maiolino2015] experiencing a bursty episode of star-formation. Intense bursts of star-formation favoring the escape of Ly$\alpha$ photons are stimulated by an enhanced rate of mergers and interactions within the overdensity. In this picture, all faint LBGs are actually more evolved objects, with intrinsically fainter line emission, that have already experienced such bursty star-formation episodes in the past. Recombination of neutral hydrogen in the regions close to overdensity members can provide an additional mechanism explaining the lack of line emission from faint galaxies, as only in bright galaxies with large circular velocities do Ly$\alpha$ photons acquire a frequency shift enabling their escape from the circum-galactic medium. Indeed, as discussed by @Mason2018, Ly$\alpha$ emission from UV-bright galaxies residing in reionized overdensities can be further boosted by their higher velocity offsets, which reduce the damping-wing absorption by cosmic neutral hydrogen. This effect, possibly along with enhanced Ly$\alpha$ photon production, has been proposed as a physical explanation for the increased Ly$\alpha$ visibility in very bright ($M_{UV}<$-22) $z>$7 galaxies found by @Stark2017. While the three BDF emitters at $M_{UV}\gtrsim$-21 are not as bright, the combination of a large enough HII region around them and of frequency shifts induced by their circular velocities likely plays a role in enhancing their Ly$\alpha$ visibility with respect to $z\sim$6 LBGs. Fortunately, a thorough examination of the aforementioned scenarios will soon be made possible by observations with JWST. It will be possible to: 1) confirm a very low neutral fraction in the region surrounding the bright emitters by looking for blue wings in high-resolution Ly$\alpha$ spectra [e.g. @Hu2016]; 2) clarify the nature of the bright emitters through a more accurate measurement of SFR, extinction and age (H$\alpha$ luminosity, H$\alpha$/H$\beta$ and H$\alpha$/UV ratios), and by probing signatures of a high escape fraction [EW of Balmer lines or the $O_{32}$ ratio, e.g., @Castellano2017; @DeBarros2016; @Chisholm2018], AGN emission and hard ionizing stellar spectra [e.g., @Mainali2017; @Senchyna2017]; 3) assess whether the faint candidates are members of a localized overdensity at $z\simeq$7.0-7.1 like the bright ones, or lie just outside such a region, or are low-z interlopers in the sample, by measuring their redshift from optical emission lines; and 4) measure velocity shifts between Ly$\alpha$ and UV/optical lines that trace the systemic redshift of the bright emitters.
A systematic analysis of this kind carried out with JWST on $z\gtrsim$7 lines-of-sight with different levels of Ly$\alpha$ visibility will eventually shed light on the processes responsible for the creation of the first reionized regions. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme 099.A-0671(A). PD acknowledges support from the European Research Council’s starting grant ERC StG-717001 and from the European Commission’s and University of Groningen’s CO-FUND Rosalind Franklin program. natexlab\#1[\#1]{} , R. J., [Illingworth]{}, G. D., [Oesch]{}, P. A., [et al.]{} 2015, , 803, 34 , G., & [Charlot]{}, S. 2003, , 344, 1000 , Z., [Fan]{}, X., [Jiang]{}, L., [et al.]{} 2015, , 799, L19 , D., [Armus]{}, L., [Bohlin]{}, R. C., [et al.]{} 2000, , 533, 682 , S., [Maiolino]{}, R., [Pallottini]{}, A., [et al.]{} 2017, , 605, A42 , J., [Bunker]{}, A. J., [Wilkins]{}, S. M., [et al.]{} 2012, , 427, 3055 , M., [Fontana]{}, A., [Paris]{}, D., [et al.]{} 2010, , 524, A28 , M., [Dayal]{}, P., [Pentericci]{}, L., [et al.]{} 2016, , 818, L3 , M., [Pentericci]{}, L., [Fontana]{}, A., [et al.]{} 2017, , 839, 73 , J., [Gazagnes]{}, S., [Schaerer]{}, D., [et al.]{} 2018, ArXiv e-prints, arXiv:1803.03655 , P., [Ferrara]{}, A., [Saro]{}, A., [et al.]{} 2009, , 400, 2000 , P., [Maselli]{}, A., & [Ferrara]{}, A. 2011, , 410, 830 , S., [Vanzella]{}, E., [Amor[í]{}n]{}, R., [et al.]{} 2016, , 585, A51 , S., [Pentericci]{}, L., [Vanzella]{}, E., [et al.]{} 2017, , 608, A123 , J. J., & [Stanway]{}, E. R. 2009, , 400, 1019 , A., & [Loeb]{}, A. 2013, , 431, 2826 , S. L., [Ryan]{}, Jr., R. E., [Papovich]{}, C., [et al.]{} 2015, , 810, 71 , A., [D’Odorico]{}, S., [Poli]{}, F., [et al.]{} 2000, , 120, 2206 , A., [Vanzella]{}, E., [Pentericci]{}, L., [et al.]{} 2010, , 725, L205 , K. N., [Shapley]{}, A. E., [Greene]{}, J. E., & [Steidel]{}, C. C. 2011, , 733, 31 , E. M., [Cowie]{}, L. L., [Songaila]{}, A., [et al.]{} 2016, , 825, L7 , W., [Wang]{}, J., [Zheng]{}, Z.-Y., [et al.]{} 2017, , 845, L16 , A., [Villar-Mart[í]{}n]{}, M., [Vernet]{}, J., [et al.]{} 2008, , 383, 11 , A., [Dayal]{}, P., & [M[ü]{}ller]{}, V. 2015, , 450, 4025 , A., [Dayal]{}, P., [Partl]{}, A. M., & [M[ü]{}ller]{}, V. 2014, , 441, 2861 , N., [Nakajima]{}, K., [Ellis]{}, R. S., [et al.]{} 2017, , 851, 40 , M. D., & [Bremer]{}, M. 2003, , 593, 630 , A., [Barkana]{}, R., & [Hernquist]{}, L. 2005, , 620, 553 , P., [Haardt]{}, F., & [Rees]{}, M. J. 1999, , 514, 648 , R., [Kollmeier]{}, J. A., [Stark]{}, D. P., [et al.]{} 2017, , 836, L14 , R., [Zitrin]{}, A., [Stark]{}, D. P., [et al.]{} 2018, ArXiv e-prints, arXiv:1804.00041 , R., [Carniani]{}, S., [Fontana]{}, A., [et al.]{} 2015, , 452, 54 , C. A., [Treu]{}, T., [de Barros]{}, S., [et al.]{} 2018, , 857, L11 , P. J. 1993, , 31, 639 , J. 1998, , 501, 15 , L., [Fontana]{}, A., [Vanzella]{}, E., [et al.]{} 2011, , 743, 132 , L., [Vanzella]{}, E., [Fontana]{}, A., [et al.]{} 2014, , 793, 113 , M. A., [Stark]{}, D. P., [Ellis]{}, R. S., [et al.]{} 2012, , 744, 179 , P., [Stark]{}, D. P., [Vidal-Garc[í]{}a]{}, A., [et al.]{} 2017, , 472, 2608 , P. R., & [Giroux]{}, M. L. 1987, , 321, L107 , D., [Matthee]{}, J., [Brammer]{}, G., [et al.]{} 2017, ArXiv e-prints, arXiv:1710.08422 , E. R., [Eldridge]{}, J. J., & [Becker]{}, G. D. 2016, , 456, 485 , D. P., [Ellis]{}, R. S., [Chiu]{}, K., [Ouchi]{}, M., & [Bunker]{}, A. 2010, , 408, 1628 , D. P., [Ellis]{}, R. 
S., [Charlot]{}, S., [et al.]{} 2017, , 464, 469 , V., [Pirzkal]{}, N., [Malhotra]{}, S., [et al.]{} 2016, , 827, L14 , E., [Pentericci]{}, L., [Fontana]{}, A., [et al.]{} 2011, , 730, L35+ , E., [Fontana]{}, A., [Zitrin]{}, A., [et al.]{} 2014, , 783, L12 , E., [Castellano]{}, M., [Meneghetti]{}, M., [et al.]{} 2017, , 842, 47 , E., [Calura]{}, F., [Meneghetti]{}, M., [et al.]{} 2017, , 467, 4304
--- abstract: 'We discuss unitarity tests of the neutrino mixing (PMNS) matrix. We show that the combination of solar neutrino experiments, medium-baseline and short-baseline reactor antineutrino experiments make it possible to perform the first direct unitarity test of the PMNS matrix. In particular, the measurements of Daya Bay and JUNO (a next generation medium-baseline reactor experiment) will lay the foundation of a precise unitarity test of $|U_{e1}|^2 + |U_{e2}|^2 + |U_{e3}|^2 = 1 $. Furthermore, the precision measurement of $\sin^22\theta_{13}$ in both the $\bar{\nu}_e$ disappearance and the $\nu_e$ appearance (from a $\nu_{\mu}$ beam) channels will provide an indirect unitarity test of the PMNS matrix. Together with the search for appearance/disappearance at very short distances, these tests could provide important information about the possible new physics beyond the three-neutrino model.' author: - 'X. Qian' - 'C. Zhang' - 'M. Diwan' - 'P. Vogel' bibliography: - 'unitarity.bib' title: Unitarity Tests of the Neutrino Mixing Matrix --- \#1\#2\#3\#4[[\#1]{} [**\#2**]{}, \#3 (\#4)]{} #### **Introduction:** In the past decades our understanding of neutrinos has advanced dramatically. Initially, neutrinos were thought to be massless, since only left-handed neutrinos and right-handed antineutrinos were detected in experiments [@helicity]. The existence of non-zero neutrino masses and the neutrino mixing were then successfully established through the observation of neutrino flavor oscillations. Recent reviews can be found e.g. in Ref. [@PDG; @mckeown_review]. In the three-neutrino framework, the oscillations are characterized by the neutrino mixing (commonly referred to as the Pontecorvo-Maki-Nakagawa-Sakata or PMNS in short) matrix [@ponte1; @ponte2; @Maki] and two neutrino mass-squared differences ($\Delta m^2_{32} = m^2_{3}-m^2_{2}$ and $\Delta m^2_{21} = m^2_{2}-m^2_{1}$). The PMNS matrix $U_{PMNS}$ (or $U$ in short), $$\left ( \begin{matrix} \nu_{e} \\ \nu_{\mu} \\ \nu_{\tau} \end{matrix} \right) = \left( \begin{matrix} U_{e1} & U_{e2} & U_{e3} \\ U_{\mu 1} & U_{\mu 2} & U_{\mu 3} \\ U_{\tau 1} & U_{\tau 2} & U_{\tau 3} \end{matrix} \right) \cdot \left( \begin{matrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{matrix} \right),$$ describes the mixing between the neutrino flavor ($\nu_e$, $\nu_\mu$, $\nu_\tau$) and mass eigenstates ($\nu_1$, $\nu_2$, and $\nu_3$ with masses $m_1$, $m_2$, and $m_3$, respectively). Components of the PMNS matrix can be determined through measurements of neutrino oscillations. For neutrinos with energy $E$ and flavor $l$, the probability of its transformation to flavor $l'$ after traveling a distance $L$ in vacuum is expressed as: $$\begin{aligned} \label{eq:osc_dis} P(\nu_l\rightarrow \nu_{l'}) = \left |\sum_{i} U_{li}U^{*}_{l'i}e^{-i(m_{i}^2/2E)L} \right | ^2 \nonumber \\ = \sum_{i}|U_{li}U^*_{l'i}|^2 + \Re \sum_{i} \sum_{j \neq i} U_{li} U^{*}_{l'i} U^{*}_{lj} U_{l'j} e^{i\frac{\Delta m^2_{ij} L}{2E}}.\end{aligned}$$ The unitarity tests of the PMNS matrix refer to establishing whether $U \times U^* \stackrel{?}{=} I$ and $U^* \times U \stackrel{?}{=} I$, where $I$ is the 3$\times$3 unit matrix. 
These conditions are represented by twelve equations in total: $$\begin{aligned} |U_{l1}|^2 + |U_{l2}|^2 + |U_{l3}|^2 &\stackrel{?}{=}& 1 |_{l=e,\mu,\tau} \label{eq:uni1}\\ U_{l1}U^{*}_{l'1} + U_{l2}U^{*}_{l'2} + U_{l3}U^{*}_{l'3} &\stackrel{?}{=}& 0 |_{l,l'=e,\mu,\tau; l'\neq l} \label{eq:uni2}\\ |U_{e i}|^2 + |U_{\mu i}|^2 + |U_{\tau i}|^2 &\stackrel{?}{=}& 1 |_{i=1,2,3} \label{eq:uni3}\\ U_{e i}U^{*}_{e j} + U_{\mu i}U^{*}_{\mu j} + U_{\tau i}U^{*}_{\tau j} &\stackrel{?}{=}& 0 |_{i,j=1,2,3;i\neq j}.\label{eq:uni4}\end{aligned}$$ The PMNS matrix is conventionally written as explicitly unitary: $$\left( \begin{matrix} c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta} \\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta} & c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13} \\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} & -c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta} & c_{23}c_{13} \end{matrix} \right), \nonumber$$ with three mixing angles $\theta_{12}$, $\theta_{23}$, $\theta_{13}$ ($s_{ij} = \sin\theta_{ij}$ and $c_{ij} = \cos\theta_{ij}$), and a phase $\delta$, commonly referred to as the CP phase in the leptonic sector. Super-Kamiokande [@SuperK], K2K [@K2K], MINOS [@MINOS], T2K [@T2K_muon], and IceCube [@icecube] experiments determined the angle $\theta_{23}$ and the mass difference $|\Delta m^2_{32}|$ using the $\nu_\mu$ disappearance channel with atmospheric and accelerator neutrinos. The KamLAND [@KamLAND] and SNO [@SNO] experiments measured $\theta_{12}$ and $\Delta m^2_{21}$ with $\bar{\nu}_e$ disappearance channel using reactor antineutrinos and $\nu_e$ disappearance channel using solar neutrinos [^1], respectively. Recently, the Daya Bay [@dayabay; @dayabay_cpc], Double Chooz [@doublec], and RENO [@reno] measured $\theta_{13}$ and are on their ways to measure $|\Delta m^2_{31}|$ with $\bar{\nu}_e$ disappearance using reactor antineutrinos. The current knowledge of the mixing angles and mass squared differences [@PDG] are [^2]: $$\begin{aligned} \sin^2 2\theta_{12} &=& 0.857\pm0.024 \nonumber \\ \sin^2 2\theta_{23} &>& 0.95 \nonumber \\ \sin^2 2\theta_{13} &=& 0.095 \pm 0.010 \nonumber \\ \Delta m^2_{21} &=& (7.5\pm 0.2)\times 10^{-5} eV^2 \nonumber \\ |\Delta m^2_{32}| &=& (2.32^{+0.12}_{-0.08})\times 10^{-3} eV^2.\end{aligned}$$ Besides the disappearance channels listed above, the appearance channel is becoming a powerful tool to determine the matrix elements of $U_{PMNS}$. The MINOS [@MINOS_ap] and T2K [@T2K_ap; @T2K_ap1] measured the $\nu_\mu$ to $\nu_e$ appearance probability with accelerator neutrinos. In particular, T2K [@T2K_ap1] established the $\nu_e$ appearance at a 7.5$\sigma$ level. OPERA [@opera] and Super-Kamiokande [@SK_tau] experiments observed the $\nu_\mu$ to $\nu_{\tau}$ appearance with accelerator and atmospheric neutrinos, respectively. In the neutrino sector the determination of the remaining unknown quantities, including the value of CP phase $\delta$ and the sign of $|\Delta m^2_{32}|$ (neutrino mass hierarchy), are the goals of the current (No$\nu$a [@nova] and T2K), and next generation neutrino oscillation experiments (LBNE [@LBNE; @LBNE_off], JUNO [@JUNO1; @JUNO2], Hyper-K [@hyperK], and PINGU [@PINGU]). With future precise measurements of neutrino oscillation characteristics, unitarity tests of the PMNS matrix become possible. In the following, we will discuss the direct and indirect unitarity tests of the PMNS matrix. #### **Direct unitarity test of the first row:** In a direct unitarity test, individual components of the PMNS matrix are measured. Eqs. 
(\[eq:uni1\])-(\[eq:uni4\]) may then be tested directly. The most promising one is the test of the first row, $|U_{e1}|^2+|U_{e2}|^2+|U_{e3}|^2 \stackrel{?}{=} 1$, to be discussed in detail below. The SNO measurement of $\nu_e$ disappearance with solar neutrinos provides the first constraint. The higher energy $^8B$ solar neutrinos detected in SNO can be well approximated as the mass eigenstates due to the MSW effect [@msw1; @msw2; @msw3; @msw4]. Therefore, by comparing the charged current ($\nu_e$ only) and the neutral current (sum of all $\nu_{l}$) events, a direct constraint on $\cos^{4}\theta_{13}\sin^2\theta_{12} + \sin^{4}\theta_{13}$, or in terms of the PMNS matrix elements on the combination $|U_{e2}|^2\cdot(|U_{e1}|^2+|U_{e2}|^2) + |U_{e3}|^4$, is provided. In addition, $|U_{e1}|^2$, $|U_{e2}|^2$, and $|U_{e3}|^2$ can be constrained using the $\bar{\nu}_e$ disappearance with reactor experiments. In particular, $4|U_{e1}|^2\cdot|U_{e2}|^2$ was first determined by the KamLAND experiment, and will be significantly improved by the next generation medium-baseline reactor antineutrino experiments (e.g., the upcoming “Jiangmen Underground Neutrino Observatory” (JUNO) experiment, and the RENO-50 experiment). Meanwhile, the currently running Daya Bay experiment (together with RENO and Double Chooz) will provide the most precise measurement of the $\bar{\nu}_e$ disappearance for the oscillations governed by $\Delta m^2_{3x}$ ($\Delta m^2_{31}$ and $\Delta m^2_{32}$), constraining $4|U_{e3}|^2\cdot(|U_{e1}|^2+|U_{e2}|^2)$. However, due to the tiny difference between $\Delta m^2_{31}$ and $\Delta m^2_{32}$, the Daya Bay experiment cannot determine $4|U_{e3}|^2 \cdot |U_{e1}|^2$ and $4|U_{e3}|^2 \cdot |U_{e2}|^2$ separately. With three independent constraints, the three unknowns $|U_{e1}|^2$, $|U_{e2}|^2$, and $|U_{e3}|^2$ can be completely determined. Therefore, the combination of medium-baseline reactor experiments (KamLAND, JUNO and RENO-50), short-baseline reactor experiments (Daya Bay, RENO, and Double Chooz), and the SNO solar neutrino results makes possible the first direct unitarity test of the PMNS matrix. In the following, as an example, we study the expected sensitivity of testing $|U_{e1}|^2+|U_{e2}|^2+|U_{e3}|^2 \stackrel{?}{=} 1$ with SNO, Daya Bay, and JUNO. We adapted the fitter developed in Ref. [@JUNO1], which is used to study the physics capability of JUNO. The expected results from Daya Bay and the results of SNO are taken from Ref. [@dayabay_proceeding] and Ref. [@SNO], respectively. For JUNO, a 20 kt fiducial volume liquid scintillator detector is assumed at a distance of 55 km from the reactor complex, with a total thermal power of 40 GW and five years of running time. The experimental uncertainties in the absolute normalization, in both the detection efficiency and the neutrino flux, have to be taken into account for Daya Bay and JUNO. In particular, the debate regarding the “reactor anomaly” [@anom; @anom1] shows that the uncertainty in the reactor flux can be as large as 6-8%. In Daya Bay, the uncertainty in the reactor flux is mitigated by using the ratio method [@ratio] with near and far detectors. In JUNO, the constraint on $4|U_{e1}|^2\cdot|U_{e2}|^2$ comes mainly from the spectrum distortion [@JUNO1] due to the $\Delta m^2_{21}$ oscillation.
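To make the counting of constraints concrete, the short sketch below solves for $|U_{e1}|^2$, $|U_{e2}|^2$ and $|U_{e3}|^2$ from one representative set of central values: the SNO and Daya Bay numbers quoted in the surrounding text, and $\sin^22\theta^{eff}_{12}\simeq 0.857$ for the medium-baseline constraint. It only illustrates that the three measurements close the system (the starting point selects the physical branch with $|U_{e1}|^2>|U_{e2}|^2$); it is not the $\chi^2$ fit used in this work.

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative central values (not a fit):
SNO = 0.311   # |Ue2|^2 (|Ue1|^2 + |Ue2|^2) + |Ue3|^4            (solar CC/NC)
DYB = 0.090   # 4 |Ue3|^2 (|Ue1|^2 + |Ue2|^2) / S^2              (short-baseline reactors)
MBL = 0.857   # 4 |Ue1|^2 |Ue2|^2 / S^2                          (medium-baseline reactors)

def equations(x):
    e1, e2, e3 = x                       # e_i stands for |U_ei|^2
    S = e1 + e2 + e3
    return [e2 * (e1 + e2) + e3**2 - SNO,
            4.0 * e3 * (e1 + e2) / S**2 - DYB,
            4.0 * e1 * e2 / S**2 - MBL]

sol = fsolve(equations, x0=[0.68, 0.30, 0.02])
print(sol, "  sum =", sol.sum())
```

With these particular central values the sum comes out slightly below unity, well within the quoted experimental uncertainties; in the actual analysis the deviation from unity is instead scanned and tested against the data via $\Delta\chi^2$, as described below.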
In both cases (Daya Bay and JUNO), the oscillation formula needs to be modified, becoming $$\begin{aligned} \label{eq:osc1} P &=& \left(|U_{e1}|^2+|U_{e2}|^2+|U_{e3}|^2\right)^2 \nonumber \\ &\cdot& ( 1 - \frac{4|U_{e1}|^2|U_{e2}|^2}{ (|U_{e1}|^2+|U_{e2}|^2 + |U_{e3}|^2)^2} \sin^2 \left( \frac{\Delta m^{2}_{21} L }{4E} \right) \nonumber \\ &-& \frac{4|U_{e3}|^2(|U_{e1}|^2+|U_{e2}|^2)}{(|U_{e1}|^2+|U_{e2}|^2 + |U_{e3}|^2)^2} \sin^2 \left( \frac{\Delta m^{2}_{3x} L }{4E} \right) ),\end{aligned}$$ in which the overall $(|U_{e1}|^2+|U_{e2}|^2+|U_{e3}|^2)^2$ term cannot be separated from the uncertainty in the absolute normalization. Therefore, instead of constraining $4|U_{e3}|^2\cdot(|U_{e1}|^2+|U_{e2}|^2)$ and $4|U_{e1}|^2\cdot|U_{e2}|^2$, the Daya Bay and JUNO experiments in fact constrain, and will constrain, $\frac{4|U_{e3}|^2\cdot(|U_{e1}|^2+|U_{e2}|^2)}{(|U_{e1}|^2+|U_{e2}|^2 + |U_{e3}|^2)^2}$ and $\frac{4|U_{e1}|^2\cdot|U_{e2}|^2}{(|U_{e1}|^2+|U_{e2}|^2 + |U_{e3}|^2)^2}$, respectively. For example, if the 6% reactor anomaly is due to the existence of heavy sterile neutrinos, the impact of the fast oscillations of the sterile neutrino components will be absorbed into $(|U_{e1}|^2+|U_{e2}|^2+|U_{e3}|^2)^2 \simeq 0.94$. The $\theta_{13}$ ($\theta_{12}$) angle measured by Daya Bay (JUNO) would therefore be an effective angle: $\sin^22\theta^{eff}_{13} = \frac{4|U_{e3}|^2\cdot \left(|U_{e1}|^2+|U_{e2}|^2 \right)}{0.94}$ instead of $4|U_{e3}|^2\cdot \left(|U_{e1}|^2+|U_{e2}|^2 \right)$ ($\sin^22\theta^{eff}_{12} \approx \frac{4|U_{e1}|^2|U_{e2}|^2}{0.94}$). ![Direct unitarity test of $|U_{e1}|^2+|U_{e2}|^2+|U_{e3}|^2 \stackrel{?}{=} 1$ by combining JUNO, Daya Bay, and solar results. We considered two scenarios i) current SNO constraint and ii) a five times better constraint than SNO. In addition, the red line shows the suggested value of $|U_{e1}|^2+|U_{e2}|^2+|U_{e3}|^2$ given a 6% reactor anomaly. See the text for more discussions.[]{data-label="fig:juno_direct"}](JUNO_direct.eps){width="90mm"} In Fig. \[fig:juno\_direct\], the sensitivity of the direct unitarity test of $|U_{e1}|^2+|U_{e2}|^2+|U_{e3}|^2$ is shown after combining the expected results of JUNO and Daya Bay with the current SNO results. It is assumed that Daya Bay will reach $\frac{4|U_{e3}|^2\cdot(|U_{e1}|^2+|U_{e2}|^2)}{(|U_{e1}|^2+|U_{e2}|^2 + |U_{e3}|^2)^2} = 0.09 \pm 0.0035$ [@dayabay_proceeding]. For the purpose of this study, we approximate the SNO results as $|U_{e2}|^2\cdot(|U_{e1}|^2+|U_{e2}|^2) + |U_{e3}|^4=0.311 \pm 0.037$ [@SNO]. The experimental normalization uncertainty is assumed to be 10% in JUNO, and Eq. (\[eq:osc1\]) is used as the oscillation formula. The true MC spectrum is generated assuming $|U_{e1}|^2+|U_{e2}|^2+|U_{e3}|^2 = 1$. Hypotheses in which $|U_{e1}|^2+|U_{e2}|^2+|U_{e3}|^2$ deviates from unity are then tested by fitting the MC data. The results are presented as $\Delta \chi^2 = \chi^2_{|U_{e1}|^2+|U_{e2}|^2+|U_{e3}|^2} -\chi^2_{unity}$ in Fig. 1. At 68% C.L. ($\Delta \chi^2<1$), the combination of JUNO, Daya Bay, and SNO results would provide a unitarity test at about the 4% level. This unitarity test can be significantly improved with a stronger constraint from solar neutrino experiments. For example, with a five times improved constraint on $|U_{e2}|^2\cdot(|U_{e1}|^2+|U_{e2}|^2) + |U_{e3}|^4$, the unitarity test can be improved to about the 1.2% level, which will be sufficiently accurate to test the reactor anomaly [^3]. #### **Other unitarity tests:** The rest of the equations in Eq.
(\[eq:uni1\]) (for the $\mu$ and $\tau$ flavor neutrinos) are more difficult to test. First, the only oscillation to be precisely measured in the foreseeable future is the $\nu_{\mu}$ disappearance in the $\Delta m^2_{3x}$ oscillations. Second, due to the small difference between $\Delta m^2_{31}$ and $\Delta m^2_{32}$, one would need a third independent constraint, in analogy to the solar neutrino measurements, even if the $\nu_{\mu}$ disappearance of the $\Delta m^2_{21}$ oscillation is determined. Finally, the unitarity tests would also suffer from the uncertainties in experimental absolute normalization, which however could be improved with a future neutrino factory [@nustorm]. Direct unitarity tests of Eq. (\[eq:uni2\]) can be accomplished by combining information from disappearance and appearance channels. For example, we can square both sides of $U_{e1}U^{*}_{\mu1} +U_{e2}U^{*}_{\mu2} +U_{e3}U^{*}_{\mu3} \stackrel{?}{=} 0$: $$\begin{aligned} \label{eq:sq} 0 &\stackrel{?}{=}& |U_{e1}|^2|U_{\mu1}|^2 + |U_{e2}|^2|U_{\mu2}|^2 + |U_{e3}|^2|U_{\mu3}|^2 \nonumber \\ &+& 2 \Re \left( U_{e1}U^{*}_{\mu1} U_{\mu 2} U^{*}_{e 2} \right) + 2 \Re \left( U_{e1}U^{*}_{\mu1} U_{\mu 3} U^{*}_{e 3} \right) \nonumber \\ &+& 2 \Re \left( U_{e2}U^{*}_{\mu2} U_{\mu 3} U^{*}_{e 3} \right).\end{aligned}$$ In order to directly test the above equation, one would need to measure $|U_{ei}|^2|_{i=1,2,3}$ as well as $|U_{\mu i}|^2|_{i=1,2,3}$ from disappearance channels. The latter three terms can be in principle accessed through the measurement of $\nu_{\mu}$ to $\nu_e$ appearance probability. However, the current (No$\nu$a and T2K) and the next generation (LBNE and Hyper-K) experiments will only focus on the $\Delta m^2_{3x}$ oscillations, which leaves the $\Delta m^2_{21}$ oscillations unconstrained. Eqs. (\[eq:uni3\]) and  (\[eq:uni4\]) can be tested by combining measurements of $\nu_{l}$ disappearance and $\nu_{l}$ to $\nu_{l'}$ appearance. For example, the constant term of the summation of $\nu_{\mu}$ disappearance, $\nu_{\mu}$ to $\nu_e$ appearance, and $\nu_{\mu}$ to $\nu_{\tau}$ appearance oscillation probabilities in vacuum would be the $\sum_i (|U_{ei}|^2 + |U_{\mu i}|^2 + |U_{\tau i}|^2) \cdot |U_{\mu i}|^2$, which is related to the Eq. (\[eq:uni3\]). However, this measurement would rely on an accurate determination of the experimental absolute normalization factor. Eq. (\[eq:uni4\]) can be tested by searching for the absence of $L/E$ dependence in the summation of oscillation probabilities from all three channels. However, in practice, these tests suffer from the matter effects, from the tiny difference between $\Delta m^2_{32}$ and $\Delta m^2_{31}$, as well as from the limited precision of $\nu_{\mu}$ to $\nu_{\tau}$ appearance channel. Therefore, it is actually more practical to perform an indirect unitarity test through measurements of the $\theta_{23}$ or the $\theta_{13}$ mixing angles, as discussed in the following. #### **Indirect unitarity tests:** While the direct unitarity tests appear to be extremely difficult, the violation of unitarity may be also naturally indicated in the next generation experiments searching for sterile neutrinos (e.g. Ref. [@nucifer; @*sbl_karsten; @*kamland_st; @*dayabay_st; @*broxeno_st; @*TPC_st; @*sage; @*isodar_st; @*oscsns; @*LAR1_st]). A discovery of sterile neutrino could be established by unambiguously observing new oscillation patterns (different from the known $\Delta m^2_{12}$ and $\Delta m^2_{3x}$ oscillations). 
On the other hand, if sterile neutrinos or other new physics existed, the currently measured mixing angles in the PMNS matrix would be effective angles, as discussed above, whose values would be process dependent. This point has been raised before, for example in Refs. [@st1; @st2]. In such tests, the “proof by contradiction” principle is utilized. First, the mixing angles are extracted from the data by assuming unitarity. If the values of the same mixing angle measured by two different processes are inconsistent, unitarity is then shown to be violated. Otherwise, the phase space of new physics will be constrained. Here, the word “indirect” comes from the fact that the components in Eqs. (\[eq:uni1\])-(\[eq:uni4\]) are not measured. There are currently three possibilities for such indirect unitarity tests: $\theta_{23}$, $\theta_{12}$, and $\theta_{13}$. The $\theta_{23}$ indirect test can be achieved by comparing the $\nu_{\mu}$ disappearance and $\nu_{\mu}$ to $\nu_{\tau}$ appearance. The precision will be limited by the $\nu_{\tau}$ appearance channel. For $\theta_{12}$, the indirect test between solar neutrino and medium-baseline reactor experiments is not necessary, as the direct test can be carried out as illustrated above. Therefore, the most promising candidate for such an indirect unitarity test is $\theta_{13}$, which can be measured with the $\nu_{\mu}$ to $\nu_e$ appearance (T2K, No$\nu$a, LBNE, and Hyper-K) with accelerator neutrinos, as well as with the $\bar{\nu}_e$ disappearance with reactor neutrinos (Daya Bay, RENO, and Double Chooz). For example, one can test the hypothesis that the 6% reactor anomaly is due to a fourth-generation sterile neutrino. From Ref. [@st2], the $\nu_\mu \rightarrow \nu_e$ appearance probability in vacuum will be altered by the additional sterile (fourth) neutrino as: $$\begin{aligned} \label{eq:ap1} P &=& 4|U_{\mu3}|^2|U_{e3}|^2\sin^2\left ( \frac{\Delta m^2_{31} L}{4E}\right) \nonumber\\ &+& 4|U_{\mu2}|^2|U_{e2}|^2\sin^2\left ( \frac{\Delta m^2_{21} L}{4E}\right) \nonumber\\ &+& 8|U_{\mu3}| |U_{e3}| |U_{\mu2}| |U_{e2}| \nonumber\\ &\times& \sin \left ( \frac{\Delta m^2_{31} L}{4E}\right) \sin \left ( \frac{\Delta m^2_{21} L}{4E}\right) \cos \left ( \frac{\Delta m^2_{32} L}{4E} -\delta_3 \right) \nonumber \\ &+&4|U_{\mu3}||U_{e3}||\beta''| \sin \left ( \frac{\Delta m^2_{31} L}{4E}\right) \sin \left ( \frac{\Delta m^2_{31} L}{4E} - \delta_1\right) \nonumber \\ &+&4|U_{\mu2}||U_{e2}||\beta''| \sin \left ( \frac{\Delta m^2_{21} L}{4E}\right) \sin \left ( \frac{\Delta m^2_{21} L}{4E} - \delta_2\right) \nonumber \\ &+& 2|U_{\mu4}|^2|U_{e4}|^2,\end{aligned}$$ where $\beta''= U^*_{\mu4} U_{e4}$, $\delta_1 = -\arg(U_{\mu3}U^*_{e3}\beta'')$, $\delta_2 = -\arg(U_{\mu2}U^*_{e2}\beta'')$, and $\delta_3 = \arg(U^*_{\mu3} U_{e3} U_{\mu 2} U^*_{e2})$. The only approximation made here is to average over terms containing large sterile mass-squared differences. If we neglect the $\Delta m^2_{21}$ oscillations and assume that $\delta_1 = 0$, the change in the effective $\sin^22\theta_{13}$ from the long-baseline $\nu_e$ appearance experiment can be estimated as $\frac{\Delta \sin^2 2\theta_{13}}{ \sin^2 2\theta_{13}} = \frac{|U_{\mu4}||U_{e4}|}{ |U_{\mu3}| |U_{e3}|}$. If we assume $U^*_{\mu4} U_{e4}=0.04$, which satisfies the 90% C.L. constraint from the latest ICARUS experiment [@icarus] ($2|U_{\mu4}|^2|U_{e4}|^2 < 3.4\times 10^{-3}$), the effective $\sin^22\theta^{eff}_{13}$ in the appearance channel could be higher than the true one by as much as 40%.
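For orientation, plugging in representative values (assumed here: $|U_{\mu3}|^2\simeq0.5$ from maximal $\theta_{23}$ mixing and $|U_{e3}|^2\simeq0.023$ from $\sin^22\theta_{13}\simeq0.09$) reproduces the quoted $\sim$40% shift:

```python
import numpy as np

U_mu3 = np.sqrt(0.5)     # assumed |U_mu3|, from sin^2(theta_23) ~ 0.5
U_e3  = np.sqrt(0.023)   # assumed |U_e3|, from sin^2(2 theta_13) ~ 0.09
beta  = 0.04             # |U_mu4 U_e4|, the value assumed in the text

print(beta / (U_mu3 * U_e3))   # fractional shift of sin^2(2 theta_13); ~0.37
```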
On the other hand, given the 6% reactor anomaly, the effective $\sin^22\theta^{eff}_{13}$ obtained through the reactor antineutrino disappearance experiments could be only about a few percent higher than the true one (e.g. $\sin^22\theta^{eff}_{13} = \frac{\sin^22\theta_{13}}{0.94}$ with $\sin^22\theta_{13} := 4|U_{e3}|^2\cdot \left(|U_{e1}|^2+|U_{e2}|^2 \right)$) [^4]. In comparison, the projected precision of the Daya Bay experiment [@dayabay_proceeding], LBNE10 [^5], and full LBNE is about $<$4%, $\sim$10%, and $<$5%, respectively. Therefore, by comparing the measured $\sin^22\theta_{13}$ value from the reactor experiments to that measured in accelerator experiments, one would rule out the specific hypothesis described above, given the unitarity is truly conserved. The recent T2K $\nu_e$ appearance results [@T2K_ap1] favors a larger value of $\sin^22\theta_{13}$ than that from the reactor $\bar{\nu}_e$ disappearance results. The statistical significance is at about 2$\sigma$ level, whose actual value would depend on the assumption of the mass hierarchy and the value of CP phase $\delta$. Such a difference at present is consistent with an explanation of a statistical fluctuation. If the difference persists with increased statistics, the hypothesis of existence of new physics would be favored. Otherwise, the phase space of new physics can be strongly constrained. Furthermore, as shown in Eq. (\[eq:ap1\]), the existence of the fourth generation of sterile neutrino will likely not only change the effective mixing angle, but will also introduce additional spectrum distortion through non-zero $\delta_1$ or $\delta_2$ phases. Therefore, the wide band beam of LBNE together with its high statistics measurement of disappearance/appearance spectra would provide stringent tests for new physics. There is actually another group of indirect unitarity tests. For example, one can see that Eq. (\[eq:sq\]) is the same one as $P(\nu_{\mu}\rightarrow \nu_e)$ at $L=0$. Therefore, the search for appearance of $\nu_e$ with low backgrounds at very short baseline would effectively test unitarity. Such an experiment (e.g. ICARUS [@icarus]) is indeed very powerful in constraining the phase space of sterile neutrino models, which are motivated by the LSND [@LSND], MiniBooNe [@miniboone], and reactor [@anom] anomalies. #### **Conclusions:** In this paper we illustrate the direct and indirect unitarity tests of the PMNS matrix with a few simple examples. In order to calculate the sensitivity of direct unitarity test of the first row $|U_{e1}|^2+|U_{e2}|^2+|U_{e3}|^2 \stackrel{?}{=} 1$, we approximate SNO results as a measurement of $|U_{e2}|^2\cdot(|U_{e1}|^2+|U_{e2}|^2) + |U_{e3}|^4$. A critical assessment of this formula can be found in Ref. [@baha]. We also neglect the matter effects in the long baseline $\nu_e$ appearance measurement in illustrating the power of indirect unitarity tests with $\theta_{13}$. Although direct unitarity tests appear to be extremely challenging given limited experimentally available oscillation channels, we show that the combination of the medium-baseline reactor experiment, short-baseline reactor experiments, and the SNO solar results will make it possible to perform the first direct and model independent unitarity test of the PMNS matrix. At 68% C.L., the combination of JUNO, Daya Bay, and SNO will test $|U_{e1}|^2+|U_{e2}|^2+|U_{e3}|^2=1$ at a 4% level. This level of accuracy can be substantially reduced with an improved constraint from solar neutrino measurements. 
In addition, by comparing the $\sin^22\theta_{13}$ values measured by the current generation reactor neutrino experiment vs. current/next generation accelerator neutrino experiments, one can perform an indirect unitarity test, which would put strong constraints on the possible new physics (e.g. sterile neutrino, non-standard interaction etc.) beyond the three-neutrino model. Such constraints will be further enhanced by the precision measurement of disappearance/appearance spectra with a wide band beam. #### **Acknowledgments:** We would like to thank B. Viren and R. D. McKeown for fruitful discussions. This work was supported in part by the National Science Foundation, and the Department of Energy under contract DE-AC02-98CH10886. [^1]: [^2]: The uncertainties represent the 68% confidence intervals. The limit quoted for $\sin^22\theta_{23}$ corresponds to the projection of the 90% confidence interval in the $\sin^22\theta_{23}$-$\Delta m^2_{23}$ plane onto the $\sin^22\theta_{23}$ axis. [^3]: [^4]: [^5]: LBNE10 represents the phase I of the LBNE program. LBNE10 contains a 10 kt liquid argon time projection chamber. The running time includes 5 years neutrino and 5 years antineutrino running with a 708 kW beam.
Q: SSRS - Dataset using different dates (Table)

I'm creating a report with all of the sales made for each product category. I added a table with the months on the column side and the product category (i.e. Chairs, Tables, etc.) on the rows, so it should look something like this:

Product | January | February | March | April
---------------------------------------------
Chairs  | value   | value    | value | value
Tables  | value   | value    | value | value

The thing is that I need to calculate the total amount of chairs sold each month. I created the query:

SELECT SUM(Quantity) FROM Sales WHERE PurchaseDate BETWEEN '2015-01-01' AND '2015-01-31'

That query is inside a Dataset, but I'm looking for a way to reuse this same Dataset while passing in or changing the PurchaseDate range, in order to calculate the rest of the sales per month.

A: Add a Matrix with the following fields and group settings: in Row Groups add the Category field, and in Column Groups add a Parent Group using the following expression:

=MONTHNAME(MONTH(Fields!PurchaseDate.Value))

Use the above expression in the column to the right of the Category column. It will preview the following matrix:

Note my example only includes dates in January and February; in your case it should show all months returned by your dataset.

Let me know if this helps.
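If you would rather keep the aggregation in SQL than rely solely on the matrix grouping, one common approach is to return one row per category and month from a single dataset and let the matrix pivot it. A rough sketch for a SQL Server source, assuming your Sales table has Category, PurchaseDate and Quantity columns (adjust the names to your schema):

SELECT
    Category,
    DATENAME(MONTH, PurchaseDate) AS SaleMonth,       -- 'January', 'February', ...
    MONTH(PurchaseDate)           AS SaleMonthNumber, -- useful for sorting the column group
    SUM(Quantity)                 AS TotalQuantity
FROM Sales
WHERE PurchaseDate >= '2015-01-01'
  AND PurchaseDate <  '2016-01-01'                    -- whole year; widen or parameterise as needed
GROUP BY Category, DATENAME(MONTH, PurchaseDate), MONTH(PurchaseDate);

The column group then simply groups on SaleMonth (sorted by SaleMonthNumber) and the detail cell shows Sum(TotalQuantity), so you never have to hard-code a single month's date range in the dataset.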
The first three terms are Lynch-Bell numbers (A115569). Table of n, a(n) for n=1..9. a(4) = 12324 as it is the least positive integer including at least one of each decimal digit 1, 2, 3 and 4, which is also divisible by each of these same numbers. Cf. A120674 (same but require distinct digits), A115569.
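A quick brute-force check of the example above (a sketch only; it simply tests every integer in turn, which is fine for small n):

def a(n):
    # least positive integer containing each of the digits 1..n at least once
    # and divisible by each of 1..n
    required = set(str(d) for d in range(1, n + 1))
    k = 0
    while True:
        k += 1
        if required <= set(str(k)) and all(k % d == 0 for d in range(1, n + 1)):
            return k

print(a(4))  # 12324, matching the example above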
http://oeis.org/A120673
Charming pencil drawing "Doctoring Chickens." The frame measures 12.25 inches wide by 10.25 inches high; the image measures 9.5 inches wide by 7.5 inches high. Fannie (B. 1858 D. 1931) and Jennie (B. 1872 D. 1961) Burr were born into Monroe, CT's most prominent family. Their father was a very successful farmer. Both sisters graduated from Mt. Holyoke College and the Yale School of Fine Arts. They also both studied at the Art Students League in NYC. These accomplishments were very rare for young women in those days. Listed in Who Was Who in American Art by Falk. Exhibition at the New Britain Museum of American Art, New Britain, CT. Catalogue of work produced by The Connecticut Gallery, Inc. If you wish to browse our entire available inventory please go to OneofaKindAntiques.com. We also offer a consultation service AntiquesConsultant.com, ... as well as an online price guide at TheBestAntiquesPriceGuide.com. Antique English watercolour landscape painted by Henry Simpson (born 1853 died 1921). The bucolic scene depicts cows in a meadow with distant views. The frame measures 13.5 inches by 16.75 inches; the images measures 7.5 inches by 10.5 inches. Signed H Simpson at lower right. The condition is excellent with no fading.
https://oneofakindantiques.com/category/26/fineart-antiques-for-sale
Labour MPs have written to government officials urging them to provide schools and children with the digital provisions necessary to ensure every child can properly work from home during lockdown.

Labour has urged the government to take action to ensure all students, especially those from disadvantaged backgrounds, have access to education during the UK's pandemic lockdown. Labour MPs Chi Onwurah and Wes Streeting wrote to education secretary Gavin Williamson and digital secretary Oliver Dowden to outline some of the ways the government could help young people access “excellent online education” while staying at home to reduce the spread of Covid-19, including ensuring children who are unable to access the internet are provided with connectivity, and making people available to help with technical support if needed. The call comes as the UK's various lockdowns have forced children to learn from home, many of whom won't have access to the internet or the devices needed to enable home schooling. Wes Streeting, Labour's shadow schools minister, said: “The government has had nine months since the start of the pandemic to tackle the digital divide in children's learning, yet thousands of pupils are still unable to access online education. If ministers do not urgently adopt Labour's proposals, the digital divide in access to education risks failing a generation.” When schools were originally told to switch to online learning during the UK's first lockdown in March 2020, the government began issuing devices to children who needed them for home schooling, and has said up to one million devices will be provided to children by spring this year. The government also launched Get Help with Technology resources in 2020 to help schools claim laptops and internet access for disadvantaged students, get support for setting up digital learning provisions, apply for funding to set up a digital education platform, and access training to deploy and use technology. To ensure as many children as possible have access to education, Labour advised the government to give devices to every child who needs one, to use the Get Help with Technology programme to ensure every child has internet access, and to establish a “minimum contact time” between schools and pupils to ensure children are receiving guidance. Labour also suggested ensuring there are people available to identify the need for and provide technical support, and to “zero rate” websites used for education so that people are not charged for the data used when accessing educational resources. Abolishing data charges for education-related sites or giving families a higher data allowance could help people from under-privileged backgrounds to access school resources – Ofcom found that 7% of UK households rely on mobile data for internet access. Chi Onwurah, Labour's shadow digital minister, said: “Labour has continually warned about the dangers of the digital divide which risks leaving so many children and young people behind. The government has yet again failed to deliver on digital provision for those who need it most.” While it shone a light on resources available to help people in the UK use digital services to access school and work while at home, industry body TechUK also pointed out there is more to be done to close the UK's digital divide, saying it would be working alongside government, the tech sector and education providers to do so.
During a statement to the House of Commons, prime minister Boris Johnson outlined some of the government’s work to provide young people with devices for remote learning. He claimed 560,000 laptops and tablets were provided to pupils in 2020, a further 50,000 were delivered to schools on 4 January 2021, and more than 100,000 would be deployed in total during the first week of term. Johnson thanked the BBC for providing educational resources to primary and secondary school children, and mobile operators for providing free mobile data to disadvantaged families to allow them to access online learning. “I know the whole House will join me in paying tribute to all the teachers, all the pupils and parents who are now making the rapid move to remote learning, and we will do everything possible to support that process,” he added.
Feeding the family is a daily challenge. You strive for balance, flavour and variety. This smoked sausage couscous recipe checks off all the boxes for flavour and nutrition! For a more sophisticated twist, replace the smoked sausage with Toulouse or merguez. ingredients - 225 g package Olymel Smoked Sausages, cut into 5-cm (2-inch) slices - 2 cloves garlic, chopped - 1 onion, chopped - 2 carrots, diced - 1 red pepper, diced - Salt and freshly ground pepper to taste - 310 mL (1 1/4 cups) of chicken stock - 80 mL (1/3 cup) of raisins - 250 mL (1 cup) of frozen peas - 250 mL (1 cup) of corn kernels - 310 mL (1 1/4 cups) of medium wheat semolina (couscous) - 15 mL (1 tablespoon) of butter - 60 mL (1/4 cup) of maple syrup - 30 mL (2 tablespoons) of Dijon mustard - 45 mL (3 tablespoons) of water instructions - Heat 15 mL (1 tablespoon) oil in a pan over medium-high heat and sauté garlic with onion, carrots and red pepper. Cook for 3 to 4 minutes and season with salt and pepper. Add stock and raisins and bring to a boil. - Remove from heat and stir in peas, corn and couscous. Set aside for 5 minutes, covered, or until all the chicken stock has been absorbed. - Melt butter in a pan and brown sausages. Mix maple syrup with Dijon mustard and water, add to sausages and cook for 5 minutes. Serve sausages on top of couscous with vegetables.
https://www.olymel.com/en/quick-recipes/couscous-with-smoked-sausages/
Raleigh, N.C. — Raleigh could be forced to spend more than $250 million on a new water treatment plant because increasing pollution is overwhelming the current plant, according to city officials. Falls Lake provides drinking water for more than 450,000 Wake County residents, but runoff from farm fields and storm drains in Durham and Granville counties, near the lake's headwaters, has led to excessive algae growth and sediment. In a Dec. 28 report to City Manager Russell Allen, the city's Public Utilities Department said the E. M. Johnson Water Treatment Plant cannot handle the growing amount of carbon in the lake that is being produced by runoff from developed areas and the algae and bacteria in the lake that thrive on other nutrients in the runoff. "The consequence of declining water quality in Falls Lake will invariably lead to greater operational and capital expenditures to ensure compliance with Safe Drinking Water Act regulations," officials said in the report. "Additional (organic carbon) removal would require the construction of one or more 'advanced' treatment processes." Algal blooms also have the potential to clog the filters at the treatment plant's intake pipe in the lake, diminishing its water treatment capacity, officials said. Expanding the plant to handle 100 million gallons a day, from the current capacity of 86 million gallons a day, and upgrading it with various options to treat the pollution could cost the city from $265 million to $341 million, according to the report. The state Environmental Management Commission and the state Department of Environment and Natural Resources have set a January 2011 deadline for putting a plan in place to clean up the lake. Raleigh officials have said they would like all pollution cleared from the lake by 2016.
https://www.wral.com/news/local/story/6746623/
Powerlifter Julius Maddox, the world’s strongest bench presser, has been vocal about his ambition to be the first person to successfully lift 800 pounds in the raw bench press. This weekend, after months of training, he will finally make his attempt. The event will go out on CoreSports’ ‘Beasts of the Bench’ live-stream on Saturday June 20, at 12 p.m. EST, as part of its ongoing World’s Ultimate Strongman series. You can also watch live via the Rogue Fitness YouTube channel. If Maddox succeeds in his attempt, he’ll be taking the bench press world record from himself—a second time. He first set the record in August 2019 when he lifted 739.6 pounds, and then beat his own record at the Arnold Classic this year with a press of 770 pounds. Maddox has been keeping fans apprised of how he’s progressing in his bench press training, posting regular gym updates as he works his way up to 800 pounds. Just a week ago, he shared a clip of a bench session where he cranked out two reps at 725 pounds—where the world record stood just five years ago.
https://greenhealthlive.com/top/how-to-watch-julius-maddox039s-800-pound-bench-press-world-record-attempt/
These clients wanted a landscape that had a sense of flow, of unfolding, of mysteries revealed. They love art and were open to ideas beyond the traditional themes, and so the swirls and circles of artist Sonia Delaunay, became the inspiration for this gorgeous coastal property in Cohasset, MA. The painting Composition with Discs, by Sonia Delaunay became the inspiration for this garden. The concentric circles in rhythmic patterns really captured the dynamic feeling the clients were looking for. Along the entire length of the house we made a sweeping serpentine stone wall that swerves around like a great dragon tail. Notice how the paved landing is repeating the theme of shapes in the doorway. The circle of bluestone has specially chosen brown pieces to mimic the wooden blocks around the circle in the doorway. They also have a rippling texture, which is reminiscent of the shallow tide waters near their home. These rippling circles and echoing arcs become the serpentining stone wall, round patio entrance and swirling plant beds, while the yellow, blues and greens inspire the plant palette. The deep beds are filled with Euphorbia, Lavender, Geranium, Coreopsis, Salvia and Hakonechloa grass. They sing with blues, purples, and yellows that the homeowners love. At the far end of the serpentine wall we created a sculptural element with a large boulder we unearthed when excavating. I had the idea to repeat the circle theme pre-existent in the front door, so a hole was drilled in the boulder, creating a strong focal point and a unifying theme . The Japanese Maple, which was moved from another part of the property, balanced the strong composition perfectly. Lavender, Coreopsis and "Soft Touch" hollies create a contrast of form and texture with the stone wall and Japanese Maple. Plant forms create a sculptural sense of space as they interact with the structure of the stone wall. As one walks throughout the spaces, some viewpoints have a simple composition, such as this one with Hakonechloa grass and Japanese Maples, offering a moment of simplicity and calm. A granite bench sits between two cherry trees with a stunning view of the harbor. The entrance to the house meanders up this curving stone staircase, where surrounding plants act as a threshold one passes through before entering the more open space at the top. The gazebo, with its columns that echo the shape of the tree trunks, is surrounded by shaggy grasses and views of the harbor. Swaths of native plants surround the property, offering bright colors for the homeowners and habitat for the birds, bees and butterflies.
http://amymartinlandscape.com/delaunay-garden/
# Circle packing in a circle Circle packing in a circle is a two-dimensional packing problem with the objective of packing unit circles into the smallest possible larger circle. ## Table of Solutions, 1 ≤ n ≤ 20 If more than one equivalent solution exists, all are shown. ## Special Cases Only 26 optimal packings are thought to be rigid (with no circles able to "rattle"). Numbers in bold are prime: Proven for n = 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 19 Conjectured for n = 14, 15, 16, 17, 18, 22, 23, 27, 30, 31, 33, 37, 61, 91 Of these, solutions for n = 2, 3, 4, 7, 19, and 37 achieve a packing density greater than any smaller number > 1. (Higher density records all have rattles.)
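As a small illustration of the density remark above, the sketch below uses enclosing radii for a few values of n (assumed from the known optimal packings; for a single ring of n circles around the centre, n ≤ 6, the radius is 1 + 1/sin(π/n)) and prints the fraction of the enclosing disc covered by the unit circles:

```python
import math

# Enclosing radii R(n) for n unit circles, assumed from the known optimal packings.
R = {1: 1.0,
     2: 2.0,
     3: 1 + 2 / math.sqrt(3),
     4: 1 + math.sqrt(2),
     5: 1 + 1 / math.sin(math.pi / 5),
     6: 3.0,
     7: 3.0}

def density(n):
    """Fraction of the enclosing disc of radius R(n) covered by n unit circles."""
    return n / R[n] ** 2

for n in sorted(R):
    print(n, round(R[n], 4), round(density(n), 4))
# n = 2, 3, 4 and 7 each set a new density record, while n = 5 and 6 do not,
# consistent with the statement above.
```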
https://en.wikipedia.org/wiki/Circle_packing_in_a_circle
Aerospace Electrical Assembly Technician Program Description: Today’s aircraft are highly complex machines with hundreds of electronic components and miles of wiring. Aerospace electrical assembly technicians ensure the electronic systems on modern aircraft operate at peak performance. They prepare wiring layouts, select and install a wide range of electrical components, perform scheduled maintenance, and complete inspections. Repairing, diagnosing and assembling the electronic components, they play a crucial role in ensuring worry-free flight. Access: Immediately Available (up to 1 year to complete) Duration: 161 Lesson Modules (approximately 180-200 hours of time to complete the training) Assessed Credit: 6 Hours of General Elective Credit (Certification exam must be completed for full 6 hours of college credit - this is an additional $35) Skills Include:
https://learning.indwes.edu/product?catalog=TL-180Aerospace-Electrical-Assembly-Technician
Dogs are affectionately known as man's best friend. And as well as providing years of loyal companionship, they can boost health among the older generation, a university study has revealed. Researchers from the University of East Anglia (UEA) and Centre for Diet and Activity Research (CEDAR) at the University of Cambridge found that owning or walking a dog boosted levels of physical activity in older people, especially during the winter. Just over 3,000 people from Norfolk took part in the study. It found that dog owners were inactive for 30 minutes less per day on average when compared to people who did not own dogs. Project lead Prof Andy Jones, from UEA's Norwich School of Medicine, said: 'We were amazed to find that dog walkers were on average more physically active and spent less time sitting on the coldest, wettest, and darkest days than non-dog owners were on long, sunny, and warm summer days. 'The size of the difference we observed between these groups was much larger than we typically find for interventions such as group physical activity sessions that are often used to help people remain active.' People who took part in the study were asked if they owned a dog and if they walked one. They also wore an accelerometer - a small electronic device that constantly measured their physical activity levels over seven days. The information was then linked to the weather conditions, sunrise and sunset times during each day of the study. Bad weather and short days are known to be one of the biggest barriers to staying active outdoors. The experts found out that on shorter days that were colder and wetter, all participants tended to be less physically active and spent more time sitting down. However, the dog walkers who were analysed were much less impacted by bad weather. Prof Jones added: 'Physical activity interventions typically try and support people to be active by focussing on the benefits to themselves, but dog walking is also driven by the needs of the animal. Being driven by something other than our own needs might be a really potent motivator and we need to find ways of tapping into it when designing exercise interventions in the future.' Views of Norwich dog owners Dog walking makes people more active and social as well as change lifestyle habits. That was the consensus from a group of dog owners taking their pets for a walk on Earlham Park, Norwich. One woman said: 'I have had times in my life when I have been dog-less and I have not taken near as much exercise as I do now. It definitely makes me more active. I have to exercise my dog whatever the weather.' Another dog owner, said: 'I go out twice a day every day, before and after work, and do long walks during the weekend. During frosty times we go on long walks and wrap up for them.'
She added that having a dog had changed her habits in terms of general activity levels and even when she was not with her dog she would take long walks while on holiday. One dog owner said that as well as getting out every day with her pet, going on a walk made her talk to more people in her community.
https://www.edp24.co.uk/news/health/new-uea-study-proves-dog-walking-benefits-physical-health-in-1062256
---
abstract: |
  When using cylindrical algebraic decomposition (CAD) to solve a problem with respect to a set of polynomials, it is likely not the signs of those polynomials that are of paramount importance but rather the truth values of certain quantifier free formulae involving them. This observation motivates our article and definition of a Truth Table Invariant CAD (TTICAD). In ISSAC 2013 the current authors presented an algorithm that can efficiently and directly construct a TTICAD for a list of formulae in which each has an equational constraint. This was achieved by generalising McCallum's theory of reduced projection operators. In this paper we present an extended version of our theory which can be applied to an arbitrary list of formulae, achieving savings if at least one has an equational constraint. We also explain how the theory of reduced projection operators can allow for further improvements to the lifting phase of CAD algorithms, even in the context of a single equational constraint. The algorithm is implemented fully in [Maple]{} and we present both promising results from experimentation and a complexity analysis showing the benefits of our contributions.
address:
- 'Department of Computer Science, University of Bath, Bath, BA2 7AY, UK'
- 'School of Computing, Electronics and Maths, Faculty of Engineering, Environment and Computing, Coventry University, Coventry, CV1 5FB, UK'
- 'Department of Computing, Macquarie University, NSW 2109, Australia'
author:
- Russell Bradford
- 'James H. Davenport'
- Matthew England
- Scott McCallum
- David Wilson
bibliography:
- 'CAD.bib'
title: |
  Truth Table Invariant\
  Cylindrical Algebraic Decomposition
---

[^1] **Keywords:** cylindrical algebraic decomposition, equational constraint. **MSC:** 68W30, 03C10.

Introduction {#sec:Intro} ============ A *cylindrical algebraic decomposition* (CAD) is a decomposition of $\R^n$ into cells arranged cylindrically (meaning the projections of any pair of cells are either equal or disjoint) each of which is (semi-)algebraic (describable using polynomial relations). CAD is a key tool in real algebraic geometry, offering a method for quantifier elimination in real closed fields. Applications include the derivation of optimal numerical schemes [@EH14], parametric optimisation [@FPM05], robot motion planning [@SS83II], epidemic modelling [@BENW06], theorem proving [@Paulson2012] and programming with complex functions [@DBEW12]. Traditionally CADs are produced *sign-invariant* to a given set of polynomials, (the signs of the polynomials do not vary within each cell). However, this gives far more information than required for most applications. Usually a more appropriate object is a *truth-invariant* CAD (the truth of a logical formula does not vary within cells). In this paper we generalise to define *truth table invariant* CADs (the truth values of a list of quantifier-free formulae do not vary within cells) and give an algorithm to compute these directly. This can be a tool to efficiently produce a truth-invariant CAD for a parent formula (built from the input list), or indeed the required object for solving a problem involving the input list. Examples of both such uses are provided following the formal definition in Section \[subsec:TTICAD\]. We continue the introduction with some background on CAD, before defining our object of study and introducing some examples to demonstrate our ideas which we will return to throughout the paper.
We then conclude the introduction by clarifying the contributions and plan of this paper. Background on CAD {#subsec:Background} ----------------- A *Tarski formula* $F(x_1,\ldots,x_n)$ is a Boolean combination ($\land,\lor,\neg,\rightarrow$) of statements about the signs, ($=0,>0,<0$, but therefore $\ne0,\ge0,\le0$ as well), of certain polynomials $f_i(x_1,\ldots,x_n)$ with integer coefficients. Such statements may involve the universal or existential quantifiers ($\forall, \exists$). We denote by QFF a *quantifier-free Tarski formula*. Given a quantified Tarski formula $$\label{eqn:QT} Q_{k+1}x_{k+1}\ldots Q_nx_n F(x_1,\ldots,x_n)$$ (where $Q_i\in\{\forall,\exists\}$ and $F$ is a QFF) the *quantifier elimination problem* is to produce $\psi(x_1,\ldots,x_k)$, an equivalent QFF to . Collins developed CAD as a tool for quantifier elimination over the reals. He proposed to decompose $\R^n$ cylindrically such that each cell was sign-invariant for all polynomials $f_i$ used to define $F$. Then $\psi$ would be the disjunction of the defining formulae of those cells $c_i$ in $\R^k$ such that (\[eqn:QT\]) was true over the whole of $c_i$, which due to sign-invariance is the same as saying that (\[eqn:QT\]) is true at any one *sample point* of $c_i$. A complete description of Collins’ original algorithm is given by [@ACM84I]. The first phase, *projection*, applies a projection operator repeatedly to a set of polynomials, each time producing another set in one fewer variables. Together these sets contain the *projection polynomials*. These are used in the second phase, *lifting*, to build the CAD incrementally. First $\R$ is decomposed into cells which are points and intervals corresponding to the real roots of the univariate polynomials. Then $\R^2$ is decomposed by repeating the process over each cell in $\R$ using the bivariate polynomials at a sample point. Over each cell there are [*sections*]{} (where a polynomial vanishes) and [*sectors*]{} (the regions between) which together form a *stack*. Taking the union of these stacks gives the CAD of $\R^2$. This is repeated until a CAD of $\R^n$ is produced. At each stage the cells are represented by (at least) a sample point and an index: a list of integers corresponding to the ordered roots of the projection polynomials which locates the cell in the CAD. To conclude that a CAD produced in this way is sign-invariant we need delineability. A polynomial is *delineable* in a cell if the portion of its zero set in the cell consists of disjoint sections. A set of polynomials are *delineable* in a cell if each is delineable and the sections of different polynomials in the cell are either identical or disjoint. The projection operator used must be defined so that over each cell of a sign-invariant CAD for projection polynomials in $r$ variables (the word *over* meaning we are now talking about an $(r+1)$-dim space) the polynomials in $r+1$ variables are delineable. The output of this and subsequent CAD algorithms (including the one presented in this paper) depends heavily on the variable ordering. We usually work with polynomials in $\mathbb{Z}[{\bf x}]=\mathbb{Z}[x_1,\ldots,x_n]$ with the variables, ${\bf x}$, in ascending order (so we first project with respect to $x_n$ and continue to reach univariate polynomials in $x_1$). The *main variable* of a polynomial (${\rm mvar}$) is the greatest variable present with respect to the ordering. CAD has doubly exponential complexity in the number of variables [@BD07; @DH88]. 
There now exist algorithms with better complexity for some CAD applications (see for example [@BPR96]) but CAD implementations often remain the best general purpose approach. There have been many developments to the theory since Collins' treatment, including the following:

- Improvements to the projection operator [@Hong1990; @McCallum1988; @McCallum1998; @Brown2001a; @HDX14], reducing the number of projection polynomials computed.
- Algorithms to identify the adjacency of cells in a CAD [@ACM84II; @ACM88] and following from this the idea of clustering [@Arnon1988] to minimise the lifting.
- Partial CAD, introduced by [@CH91], where the structure of $F$ is used to lift less of the decomposition of $\R^k$ to $\R^n$, if it is sufficient to deduce $\psi$.
- The theory of equational constraints, [@McCallum1999; @McCallum2001; @BM05], also aiming to deduce $\psi$ itself, this time using more efficient projections.
- The use of certified numerics in the lifting phase to minimise the amount of symbolic computation required [@Strzebonski2006; @IYAY09].
- New approaches which break with the normal projection and lifting model: local projection [@Strzebonski2014a], the building of single CAD cells [@Brown2013; @JdM12] and CAD via Triangular Decomposition [@CMXY09]. The latter is now used for the CAD command built into <span style="font-variant:small-caps;">Maple</span>, and works by first creating a cylindrical decomposition of complex space.

TTICAD {#subsec:TTICAD} ------ [@Brown1998] defined a *truth-invariant CAD* as one for which a formula had invariant truth value on each cell. Given a QFF, a sign-invariant CAD for the defining polynomials is trivially truth-invariant. Brown considered the refinement of sign-invariant CADs whilst maintaining truth-invariance, while some of the developments listed above can be viewed as methods to produce truth-invariant CADs directly. We define a new but related type of CAD, the topic of this paper. Let $\{ \phi_i\}_{i=1}^t$ refer to a list of QFFs. We say a cylindrical algebraic decomposition $\mathcal{D}$ is a [*Truth Table Invariant*]{} CAD for the QFFs (TTICAD) if the Boolean value of each $\phi_i$ is constant (either true or false) on each cell of $\mathcal{D}$. A sign-invariant CAD for all polynomials occurring in a list of formulae would clearly be a TTICAD for the list. However, we aim to produce smaller TTICADs for many such lists. We will achieve this by utilising the presence of equational constraints, a technique first suggested by [@Collins1998] with key theory developed by [@McCallum1999]. Suppose some quantified formula is given: $$\phi^* = (Q_{k+1} x_{k+1})\cdots(Q_n x_n) \phi({\bf x})$$ where the $Q_i$ are quantifiers and $\phi$ is quantifier free. An equation $f=0$ is an [*equational constraint*]{} (EC) of $\phi^*$ if $f=0$ is logically implied by $\phi$ (the quantifier-free part of $\phi^*$). Such a constraint may be either explicit (an atom of the formula) or otherwise implicit. In Sections \[sec:TTIProj\] and \[sec:Algorithm\] we will describe how TTICADs can be produced efficiently when there are ECs present in the list of formulae. There are two reasons to use this theory. 1. *As a tool to build a truth-invariant CAD efficiently:* If a parent formula $\phi^{*}$ is built from the formulae $\{\phi_i\}$ then any TTICAD for $\{\phi_i\}$ is also truth-invariant for $\phi^{*}$. We note that for such a formula a TTICAD may need to contain more cells than a truth-invariant CAD.
For example, consider a cell in a truth-invariant CAD for $\phi^{*} = \phi_1 \lor \phi_2$ within which $\phi_1$ is always true. If $\phi_2$ changed truth value in such a cell then it would need to be split in order to achieve a TTICAD, but this is unnecessary for a truth-invariant CAD of $\phi^*$. Nevertheless, we find that our TTICAD theory is often able to produce smaller truth-invariant CADs than any other available approach. We demonstrate the savings offered via worked examples introduced in the next subsection. 2. *When given a problem for which truth table invariance is required:* That is, a problem for which the list of formulae are not derived from a larger parent formula and thus a truth-invariant CAD for their disjunction may not suffice. For example, decomposing complex space according to a set of branch cuts for the purpose of algebraic simplification [@BD02; @PBD10]. Here the idea is to represent each branch cut as a semi-algebraic set to give input admissible to CAD, (recent progress on this has been described by [@EBDW13]). Then a TTICAD for the list of formulae these sets define provides the necessary decomposition. Example \[ex:Kahan\] is from this class. Worked examples {#subsec:WE1} --------------- To demonstrate our ideas we will provide details for two worked examples. Assume we have the variable ordering $x \prec y$ (meaning 1-dimensional CADs are with respect to $x$) and consider the following polynomials, graphed in Figure \[fig:WE1\]. $$\begin{aligned} f_1 := x^2+y^2-1 \qquad \qquad \qquad & g_1 := xy - \tfrac{1}{4} \\ f_2 := (x-4)^2+(y-1)^2-1 \quad & g_2 := (x-4)(y-1) - \tfrac{1}{4}\end{aligned}$$ Suppose we wish to find the regions of $\R{}^2$ where the following formula is true: $$\label{eqn:ExPhi} \Phi:= \left(f_1 = 0 \land g_1 < 0 \right)\lor \left( f_2 = 0 \land g_2 < 0 \right).$$ Both <span style="font-variant:small-caps;">Qepcad</span> [@Brown2003a] and <span style="font-variant:small-caps;">Maple</span> 16 [@CMXY09] produce a sign-invariant CAD for the polynomials with 317 cells. Then by testing the sample point from each region we can systematically identify where the formula is true. ![The polynomials from the worked examples of Section \[subsec:WE1\]. The solid curves are $f_1$ and $g_1$ while the dashed curves are $f_2$ and $g_2$.[]{data-label="fig:WE1"}](WE1.jpg) At first glance it seems that the theory of ECs is not applicable to $\Phi$ as neither $f_1 = 0$ nor $f_2 = 0$ is logically implied by $\Phi$. However, while there is no explicit EC we can observe that $f_1f_2 = 0$ is an [*implicit*]{} constraint of $\Phi$. Using <span style="font-variant:small-caps;">Qepcad</span> with this declared (an implementation of [@McCallum1999]) gives a CAD with 249 cells. Later, in Section \[subsec:WE2\] we demonstrate how a TTICAD with 105 cells can be produced. We also consider the related problem of identifying where $$\label{eqn:ExPsi} \Psi:= \left(f_1 = 0 \land g_1 < 0 \right) \lor \left( f_2 > 0 \land g_2 < 0 \right)$$ is true. As above, we could use a sign-invariant CAD with 317 cells, but this time there is no implicit EC. In Section \[subsec:WE2\] we produce a TTICAD with 183 cells. Contributions and plan of the paper {#subsec:Plan} ----------------------------------- We review the projection operators of [@McCallum1998; @McCallum1999] in Section \[sec:ExistingProj\]. The former produces sign-invariant CADs[^2]and the latter CADs truth-invariant for a formula with an EC. The review is necessary since we use some of this theory to verify our new algorithm. 
It also allows us to compare our new contribution to these existing approaches. For this purpose we provide new complexity analyses of these existing theories in Section \[subsec:CA1\]. Sections \[sec:TTIProj\] and \[sec:Algorithm\] present our new TTICAD projection operator and verified algorithm. They follow Sections 2 and 3 of our ISSAC 2013 paper [@BDEMW13], but instead of requiring all QFFs to have an EC the theory here is applicable to all QFFs (producing savings so long as one has an EC). The strengthening of the theory means that a TTICAD can now be produced for $\Psi$ in Section \[subsec:WE1\] as well as $\Phi$. This extension is important since it means TTICAD theory now applies to cases where there can be no overall implicit EC for a parent formula. In these cases the existing theory of ECs is not applicable and so the comparative benefits offered by TTICAD are even higher. In Section \[sec:ImprovedLifting\] we discuss how the theory of reduced projection operators also allows for improvements in the lifting phase. This is true for the existing theory also but the discovery was only made during the development of TTICAD. In Section \[sec:CA\] we present a complexity analysis of our new contributions from Sections \[sec:TTIProj\] $-$ \[sec:ImprovedLifting\], demonstrating their benefit over the existing theory from Section \[sec:ExistingProj\]. We have implemented the new ideas in a <span style="font-variant:small-caps;">Maple</span> package, discussed in Section \[sec:Implementation\]. In particular, Section \[sec:Formulation\] summarises [@BDEW13] on the choices required when using TTICAD and heuristics to help. Experimental results for our implementation (extending those in our ISSAC 2013 paper) are given in Section \[sec:Experiment\], before we finish in Section \[sec:Conclusion\] with conclusions and future work. **Data access statement:** Data directly supporting this paper (code, <span style="font-variant:small-caps;">Maple</span> and <span style="font-variant:small-caps;">Qepcad</span> input) is openly available from `http://dx.doi.org/10.15125/BATH-00076`. Existing CAD projection operators {#sec:ExistingProj} ================================= Review: Sign-invariant CAD {#subsec:SI} -------------------------- Throughout the paper we let $\cont, \prim, \disc, \coeff$ and $\ldcf$ denote the content, primitive part, discriminant, coefficients and leading coefficient of polynomials respectively (in each case taken with respect to a given main variable). Similarly, we let $\res$ denote the resultant of a pair of polynomials. When applied to a set of polynomials we interpret these as producing sets of polynomials, so for example $$\res(A)=\left\{\res(f_i,f_j) \, | \, f_i \in A, f_j \in A, f_j \neq f_i \right\}.$$ The first improvements to Collins' original projection operator were given by [@McCallum1988] and [@Hong1990]. They were both subsets of Collins' operator, meaning fewer projection polynomials, fewer cells in the CADs produced and quicker computation time. McCallum's is actually a strict subset of Hong's, however, it cannot be guaranteed correct (incorrectness is detected in the lifting process) for a certain class of (statistically rare) input polynomials, where Hong's can. Additional improvements have been suggested by [@Brown2001a] and [@Lazard1994]. The former required changes to the lifting phase while the latter had a flawed proof of validity (with current unpublished work suggesting it can still be safely used in many cases).
In this paper we will focus on McCallum's operators, noting that the alternatives could likely be extended to TTICAD theories too if desired. McCallum's theory is based around the following condition, which implies sign-invariance. \[def:OI\] A CAD is *order-invariant* with respect to a set of polynomials if each polynomial has constant order of vanishing within each cell. Recall that a set $A \subset \mathbb{Z}[{\bf x}]$ is an *irreducible basis* if the elements of $A$ are of positive degree in the main variable, irreducible and pairwise relatively prime. Let $A$ be a set of polynomials and $B$ an irreducible basis of the primitive part of $A$. Then $$\label{eq:P} P(A):=\cont(A) \cup \coeff(B) \cup \disc(B) \cup \res(B)$$ defines the operator of [@McCallum1988]. We can assume some trivial simplifications such as the removal of constants and exclusion of entries identical to a previous one (up to constant multiple). The main theorem underlying the use of $P$ follows. \[thm:McC1\] Let $A$ be an irreducible basis in $\mathbb{Z}[{\bf x}]$ and let $S$ be a connected submanifold of $\mathbb{R}^{n-1}$. Suppose each element of $P(A)$ is order-invariant in $S$. Then each element of $A$ either vanishes identically on $S$ or is analytic delineable on $S$ (a slight variant on traditional delineability, see [@McCallum1998]). Further, the sections of $A$ not identically vanishing are pairwise disjoint, and each element of $A$ not identically vanishing is order-invariant in such sections. Theorem \[thm:McC1\] means that we can use $P$ in place of Collins' projection operator to produce sign-invariant CADs so long as none of the projection polynomials with main variable $x_k$ vanishes on a cell of the CAD of $\R^{k-1}$; a condition that can be checked when lifting. Input with this property is known as *well-oriented*. Note that although McCallum's operator produces order-invariant CADs, a stronger property than sign-invariance, it is actually more efficient than the pre-existing sign-invariant operators. We examine the complexity of CAD using this operator in Section \[subsec:CA1\]. Review: CAD invariant with respect to an equational constraint {#subsec:EC} -------------------------------------------------------------- The main result underlying CAD simplification in the presence of an EC follows. \[thm:McC2\] Let $f({\bf x}), g({\bf x})$ be integral polynomials with positive degree in $x_n$, let $r(x_1,\ldots,x_{n-1})$ be their resultant, and suppose $r \neq 0$. Let $S$ be a connected subset of $\mathbb{R}^{n-1}$ such that $f$ is delineable on $S$ and $r$ is order-invariant in $S$. Then $g$ is [*sign-invariant*]{} in every section of $f$ over $S$. ![Graphical representation of Theorem \[thm:McC2\].[]{data-label="fig:Theorem"}](smccfig1a) Figure \[fig:Theorem\] gives a graphical representation of the question answered by Theorem \[thm:McC2\]. Here we consider polynomials $f(x,y,z)$ and $g(x,y,z)$ of positive degree in $z$ whose resultant $r$ is non-zero, and a connected subset $S \subset \mathbb{R}^2$ in which $r$ is order-invariant. We further suppose that $f$ is delineable on $S$ (noting that Theorem \[thm:McC1\] with $n=3$ and $A = \{f\}$ provides sufficient conditions for this). We ask whether $g$ is sign-invariant in the sections of $f$ over $S$. Theorem \[thm:McC2\] answers this question affirmatively: the real variety of $g$ either aligns with a given section of $f$ exactly (as for the bottom section of $f$ in Figure \[fig:Theorem\]), or has no intersection with such a section (as for the top).
The situation at the middle section of $f$ cannot happen. Theorem \[thm:McC2\] thus suggests a reduction of the projection operator $P$ relative to an EC $f = 0$: take only $P(f)$ together with the resultants of $f$ with the non-ECs. Let $A$ be a set of polynomials, $E \subset A$ contain only the polynomial defining the EC, $F$ be a square free basis of $A$, and $B$ be the subset of $F$ which is a square-free basis for $E$. The operator $$\label{eq:ECProj} P_{E}(A) := \cont(A) \cup P(B) \cup \{ {\rm res}_{x_n}(f,g) \mid f \in B, g \in F \setminus B \}$$ was presented by [@McCallum1999] along with an algorithm to produce a CAD truth-invariant for the EC and sign-invariant for the other polynomials when the EC was satisfied. It worked by applying first $P_E(A)$ and then building an order-invariant CAD of $\mathbb{R}^{n-1}$ using $P$. We call such CADs *invariant with respect to an equational constraint*. Note that as with [@McCallum1998] the algorithm only works for input satisfying a well-orientedness condition. Full details of the verification are given by [@McCallum1999] and a complexity analysis is given in the next subsection. New complexity analyses {#subsec:CA1} ----------------------- We provide complexity analyses of the algorithms from [@McCallum1998; @McCallum1999] for comparison with our new contributions later. An analysis for the latter has not been published before, while the analysis for the former differs substantially from the one in [@McCallum1985]: instead of focusing on computation time, we examine the number of cells in the CAD of $\mathbb{R}^n$ produced: the *cell count*. We compare the dominant terms in a cell count bound for each algorithm studied. This focus avoids calculations with less relevant parameters, identical for all the algorithms. We note that all CAD experimentation shows a strong correlation between the number of cells produced and the computation time. Our key parameters are the number of variables $n$, the number of polynomials $m$ and their maximum degree $d$ (in any one variable). Note that these are all restricted to positive integer values. We make much use of the following concepts. \[def:cd\] Consider a set of polynomials $p_j$. The **combined degree** of the set is the maximum degree (taken with respect to each variable) of the product of all the polynomials in the set: $ \textstyle \max_{i} \left( \deg_{x_i}\left( \prod_j p_j \right)\right). $ So for example, the set $\{x^2+1, x^2+y^3\}$ has combined degree $4$ (since the product has degree $4$ in $x$ and degree $3$ in $y$). \[def:md\] A set of polynomials has the $\bm{(m,d)}$**-property** if it can be partitioned into $m$ sets, such that each set has maximum combined degree $d$. So for example, the set of polynomials $ \{xy^3-x, x^4-xy, x^4-y^4+1\} $ has combined degree $9$ and thus the $(1,9)$-property. However, by partitioning it into three sets of one polynomial each, it also has the $(3,4)$-property. Partitioning into 2 sets will show it to have the $(2, 5)$, $(2,7)$ and $(2,8)$-properties also. The following result follows simply from the definitions. \[Prop:MD1\] If $A$ has the $(m,d)$-property then so does any squarefree basis of $A$. This contrasts with the fact that taking a square-free basis may not reduce the combined degree, but may cause exponential blow-up in the number of polynomials. \[Prop:MD2\] Suppose a set has the $(m,d)$-property.
Then, by taking the union of groups of $\ell$ sets from the partition, it also has the $\left( \left\lceil \tfrac{m}{\ell} \right\rceil, \ell d \right)$-property. Note that in the case $\ell=2$ we have $\left\lceil \tfrac{m}{2} \right\rceil = \left\lfloor \tfrac{m+1}{2} \right\rfloor$. \[ex:md\] Let $S = \{ x^2y^4-x^3, x^2y^4+x^3 \}$ be a set of polynomials. Then $S$ has the $(2,4)$ and $(1,8)$-properties. A squarefree basis of $S$ is given by $S' = \{x^2, y^4-x, y^4+x\}$ which has the $(3,4)$ and $(1,8)$-properties. Proposition \[Prop:MD2\] states that $S'$ must also have the $(2,8)$-property, which can be checked by partitioning $S'$ so that $x^2$ is in a set of its own. However, from Proposition \[Prop:MD1\] we also know that $S'$ must have the $(2,4)$-property, which is obtained from either of the other partitions into two sets. $S'$ demonstrates the strength of the $(m,d)$-property. The trivial partition into sets of one polynomial is equivalent to the simple approach of just tracking the number of polynomials and maximum degree. In this example such an approach would lead us to 3 polynomials of degree 4, contributing a possible 12 real roots. However, by using more sophisticated partitions we replace this by 2 sets, for each of which the product of polynomial entries has degree 4, and so at most 8 real roots contributed. Though not used in this paper, we note an advantage of the $(m,d)$-property over the $(1,md)$-property is a better bound on root separation: any two roots require $O(2d)$ subdivisions to isolate, rather than the $O(md)$ implied by considering the product of all polynomials. We also recall the following classic identities for polynomials $f,g,h$: $$\begin{aligned} \res(fg,h) &= \res(f,h)\res(g,h); \label{eqn:ResProp1} \\ \disc(fg) &= \disc(f)\disc(g)\res(f,g)^2; \label{eqn:ResProp2} \\ \disc(f) &=(-1)^{\frac{1}{2}d(d-1)}\tfrac{1}{a_d}\res(f,f') \label{eqn:ResProp3} \end{aligned}$$ where $d$ is the degree of $f$, $f'$ its derivative and $a_d$ its leading coefficient (all taken with respect to the given main variable). \[L:SINew\] Suppose $A$ is a set of polynomials in $n$ variables with the $(m,d)$ property. Then $P(A)$ has the $(M,2d^2)$ property with $$\label{eq:SI-M} M = \left \lfloor \frac{(m+1)^2}2 \right\rfloor.$$ Partition $A$ as $S_1\cup\cdots\cup S_m$ according to its $(m,d)$-property. Let $B$ be a square-free basis for $\prim(A)$, $T_1$ the set of elements of $B$ which divide some element of $S_1$, and $T_i$ be those elements of $B$ which divide some element of $S_i$ but which have not already occurred in some $T_j: j<i$. 1. We first claim that each set $$\label{eq:T} \cont(S_i)\cup\ldcf(T_i)\cup\disc(T_i)\cup\res(T_i)$$ for $i = 1, \dots m$ has the $(1,2d^2)$ property. Let $c$ be the product of the elements of $\cont(S_i)$, $T_i=\{F_1,\ldots,F_{\mathfrak{t}}\}$ for some $\mathfrak{t}$ and $F:=cF_1,\ldots F_{\mathfrak{t}}$. Then $F$ divides the product of the elements of $S_i$ and so has degree at most $d$. Thus $\res(F,F')$ must have degree at most $2d^2$ because it is the determinant of a $(2d-1 \times 2d-1)$ matrix in which each element has degree at most $d$. Then by (\[eqn:ResProp3\]) and repeated application of (\[eqn:ResProp1\]) and (\[eqn:ResProp2\]) we see $\res(F,F')$ is a (non-trivial) power of $c$ multiplied by $$\textstyle \prod_{j=1}^{t} \ldcf(F_j) \prod_{j=1}^{t} \disc(F_j) \prod_{j<k}^{t} \res(F_j,F_k)^2.$$ Since this includes all the elements of (\[eq:T\]) the claim is proved. 2. 
We are still missing from $P(A)$ the $\res(f,g)$ where $f \in T_i, g \in T_j$ and $i \ne j$. For fixed $i,j$ consider $\res\left(\prod_{f\in T_i}f,\prod_{g\in T_j}g\right)$, which by (\[eqn:ResProp1\]) is the product of the missing resultants. This is the resultant of two polynomials of degree at most $d$ and hence will have degree at most $2d^2$. Thus for fixed $i,j$ the set of missing resultants has the $(1,2d^2)$-property, and so the union of all such sets the $\left( \tfrac{1}{2}m(m-1), 2d^2 \right)$-property. 3. We are now missing from $P(A)$ only the non-leading coefficients of $B$. The polynomials in the set $T_i$ have degree at most $d$ when multiplied together, and so, separately or together, have at most $d$ *non-leading* coefficients, each of which has degree at most $d$. Hence this set of *non-leading* coefficients has the $(1,d^2)$ property. This is the case for $i$ from $1$ to $m$ and thus together the non-leading coefficients of $B$ have the $(m,d^2)$-property. We can then pair up these sets to get a partition with the $(\lceil m/2\rceil, 2d^2)$-property (Proposition \[Prop:MD2\]). Hence $P(A)$ can be partitioned into $$m + \frac{m(m-1)}2 + \left\lceil \frac{m}{2} \right\rceil = \frac{m(m+1)}2 + \left\lfloor \frac{m+1}{2} \right\rfloor = \left \lfloor \frac{(m+1)^2}2 \right\rfloor$$ sets (where the final equality follows from $m(m+1)$ always being even) each with combined degree $2d^2$. This concerns a single projection, and we must apply it recursively to consider the full set of projection polynomials. Weakening the bound as in the following allows for a closed form solution. \[cor:SI2\] If $A$ is a set of polynomials with the $(m,d)$ property where $m>1$, then $P(A)$ has the $(m^2,2d^2)$-property. \[rem:SIComplexity\] 1. Note that if $A$ has the $(1,d)$-property then $P(A)$ has the $(2,2d^2)$ property and hence the need for $m>1$ to apply Corollary \[cor:SI2\]. As our paper continues we present new theory that applies to the first projection only. Hence for a fair and accurate complexity comparison we will use Lemma \[L:SINew\] for the first projection and then Corollary \[cor:SI2\] for subsequent ones, (applicable since even if we start with $m=1$ polynomial for the first projection, we can assume $m \geq 2$ thereafter). 2. The analysis so far resembles Section 6.1 of [@McCallum1985]. However, that thesis leads us to the $(m^2d,2d^2)$-property in place of Corollary \[cor:SI2\]. The extra dependency on $d$ was avoided by an improved analysis in the proof of Lemma \[L:SINew\] part (3). We consider the growth in projection polynomials and their degree when using the operator $P$ in Table \[tab:GeneralProjection\]. Here the column headings refer not to the number of polynomials and their degree, but to the number of sets and their combined degree when applying Definition \[def:md\]. We start with $m$ polynomials of degree $d$ and after one projection have a set with the $(M, 2d^2)$ property, using $M$ from Lemma \[L:SINew\]. We then use Corollary \[cor:SI2\] to model the growth in subsequent projections, and a simple induction to fill in the table. 
  Variables   Number             Degree                       Product
  ----------- ------------------ ---------------------------- ----------------------------------------
  $n$         $m$                $d$                          $md$
  $n-1$       $M$                $2d^2$                       $2Md^2$
  $n-2$       $M^2$              $8d^4$                       $2^3M^2d^4$
  $n-3$       $M^4$              $128d^8$                     $2^{7}M^4d^8$
  $n-r$       $M^{2^{r-1}}$      $2^{2^r-1}d^{2^r}$           $2^{2^r-1}d^{2^r}M^{2^{r-1}}$
  1           $M^{2^{n-2}}$      $2^{2^{n-1}-1}d^{2^{n-1}}$   $2^{2^{n-1}-1}d^{2^{n-1}}M^{2^{n-2}}$
  Product     $M^{2^{n-1}-1}m$   $2^{2^{n}-1-n}d^{2^{n}-1}$   $2^{2^{n}-n-1}d^{2^n-1}M^{2^{n-1}-1}m$

  : Expression growth for CAD projection where: after the first projection we have polynomials with the ($M, 2d^2$)-property and thereafter we measure growth using Corollary \[cor:SI2\]. The value of $M$ could be (\[eq:SI-M\]), (\[eq:EC-M\]), (\[eq:TTI-M\]), (\[eq:ECImplicit-M\]) or (\[eq:TTIGeneral-M\]) depending on which projection scheme we are analysing.[]{data-label="tab:GeneralProjection"}

The size of the CAD produced depends on the number of real roots of the projection polynomials. We can hence bound the number of real roots in a set of polynomials with the $(m,d)$-property by $md$ (in practice many of them will be strictly complex). We can therefore bound the number of real roots of the univariate projection polynomials by the product of the two entries in the row of Table \[tab:GeneralProjection\] for 1 variable. The number of cells in the CAD of $\R^1$ is bounded by twice this plus 1. Similarly, the total number of cells in the CAD of $\R^n$ is bounded by the product of $2K+1$ where $K$ varies through the Product column of Table \[tab:GeneralProjection\], i.e. by $$(2Md + 1) \prod_{r=1}^{n-1} \left[ 2 \left( 2^{2^r-1}d^{2^r}M^{2^{r-1}} \right) + 1 \right].$$ Omitting the $+1$ will leave us with the dominant term of the bound, which can be calculated explicitly as $$\begin{aligned} &\qquad 2^{2^{n}-1}d^{2^{n}-1}M^{2^{n-1}-1}m, \label{bound:All} \\ &\leq 2^{2^{n}-1}d^{2^{n}-1}\left( \tfrac{1}{2}(m+1)^2 \right)^{2^{n-1}-1}m %\nonumber \\ = 2^{2^{n-1}}d^{2^{n}-1}(m+1)^{2^n-2}m. \label{bound:SINew}\end{aligned}$$ where the inequality was introduced by omitting the floor function in (\[eq:SI-M\]). This may be compared with the bound in Theorem 6.1.5 of [@McCallum1985], with the main differences explained by Remark \[rem:SIComplexity\](2). We now turn our focus to CAD invariant with respect to an EC. Recall that we use operator $P_E(A)$ for the first projection only and $P(A)$ thereafter. Hence we use Corollary \[cor:SI2\] for the bulk of the analysis, and the next lemma when considering the first projection. \[L:EC\] Suppose $A$ is a set of $m$ polynomials in $n$ variables each with maximum degree $d$, and that $E \subseteq A$ contains a single polynomial. Then the reduced projection $P_E(A)$ has the $(M,2d^2)$-property with $$\label{eq:EC-M} M = \left\lfloor \tfrac{1}{2}(3m+1) \right\rfloor.$$ Since $E$ contains a single polynomial its squarefree basis $F$ has the $(1,d)$-property. 1. The contents, leading coefficients and discriminants from $F$ form a set $R_1$ with combined degree $2d^2$ (see proof of Lemma \[L:SINew\] step 1) and the other coefficients a set $R_2$ with combined degree $d^2$ (see proof of Lemma \[L:SINew\] step 3). 2. The set of remaining contents $R_3 = \cont(A) \setminus \cont(E)$ has the $(m-1,d)$-property and thus trivially, the $(m-1,d^2)$-property. Then $R_2 \cup R_3$ has the $(m,d^2)$-property and thus also the $\left( \lceil \tfrac{m}{2} \rceil, 2d^2 \right)$-property (Proposition \[Prop:MD2\]). 3. It remains to consider the final set of resultants in (\[eq:ECProj\]).
Following the approach from the proof of Lemma \[L:SINew\] step 2, we conclude that each of the $m-1$ polynomials in $A \setminus E$ contributes a set with the $(1,2d^2)$-property. So together they form a set $R_4$ with the $(m-1, 2d^2)$-property. Hence $P_E(A)$ is contained in $R_1 \cup ( R_2 \cup R_3 ) \cup R_4$ which may be partitioned into $$1 + \left\lceil \tfrac{m}{2} \right\rceil + (m-1) = \left\lfloor \tfrac{1}{2}(m+1) \right\rfloor + m = \left\lfloor \tfrac{1}{2}(3m+1) \right\rfloor$$ sets of combined degree $2d^2$. We can use Table \[tab:GeneralProjection\] to model the growth in projection polynomials for the algorithm in [@McCallum1999] as well, since the only difference will be the number of polynomials produced by the first projection, and thus the value of $M$. Hence the dominant term in the bound on the total number of cells is given again by (\[bound:All\]), which in this case becomes (upon omitting the floor) $$\begin{aligned} &\qquad 2^{2^{n}-1}d^{2^{n}-1}( \tfrac{1}{2}(3m+1) )^{2^{n-1}-1}m %\nonumber \\ & = 2^{2^{n-1}}d^{2^{n}-1}(3m+1)^{2^{n-1}-1}m. \label{bound:EC}\end{aligned}$$ Since $P_E(A)$ is a subset of $P(A)$ a CAD invariant with respect to an EC should certainly be simpler than a sign-invariant CAD for the polynomials involved. Indeed, comparing the different values of $M$ we see that $$\tfrac{1}{2}(m+1)^2 > \tfrac{1}{2}(3m+1) \qquad \mbox{(strictly so for } m>1 \mbox{).}$$ Comparing the dominant terms in the cell count bounds, (\[bound:EC\]) and (\[bound:SINew\]), we see the main effect is a decrease in one of the double exponents by $1$. A projection operator for TTICAD {#sec:TTIProj} ================================ New projection operator {#subsec:TTIProjOp} ----------------------- In [@McCallum1999] the central concept is the reduced projection of a set of polynomials $A$ relative to a subset $E$ (defining the EC). The full projection operator is applied to $E$ and then supplemented by the resultants of polynomials in $E$ with those in $A \setminus E$, since the latter group only affect the truth of the formula when they share a root with the former. We extend this idea to define a projection for a list of sets of polynomials (derived from a list of formulae), some of which may have subsets (derived from ECs). For simplicity in [@McCallum1999] the concept is first defined for the case when $A$ is an irreducible basis. We emulate this approach, generalising for other cases by considering contents and irreducible factors of positive degree when verifying the algorithm in Section \[sec:Algorithm\]. So let $\mathcal{A} = \{ A_i\}_{i=1}^t$ be a list of irreducible bases $A_i$ and let $\mathcal{E} = \{ E_i \}_{i=1}^t$ be a list of subsets $E_i \subseteq A_i$. Put $A = \bigcup_{i=1}^t A_i$ and $E = \bigcup_{i=1}^t E_i$. Note that we use the convention of uppercase Roman letters for sets of polynomials and calligraphic letters for lists of these.
\[def:TTIProj\] With the notation above the [*reduced projection of $\mathcal{A}$ with respect to $\mathcal{E}$*]{} is $$\label{eqn:TTIProj-S} P_{\mathcal{E}}(\mathcal{A}) := \textstyle{\bigcup_{i=1}^t} P_{E_i}(A_i) \cup {\rm RES}^{\times} (\mathcal{E})$$ where ${\rm RES}^{\times} (\mathcal{E}) $ is the cross resultant set $$\begin{aligned} {\rm RES}^{\times} (\mathcal{E}) &= \{ {\rm res}_{x_n}(f,\hat{f}) \mid \exists \, i,j \,\, \mbox{such that } f \in E_i, \hat{f} \in E_j, i<j, f \neq \hat{f} \} \label{eqn:RESX}\end{aligned}$$ and $$\begin{aligned} P_{E}(A) = P(E) \cup \left\{ {\rm res}_{x_n}(f,g) \mid f\in E, g \in A, g \notin E \right\}, %\label{eqn:McCProjEC} \\ P(A) = \{ {\rm coeffs}(f), {\rm disc}(f), {\rm res}_{x_n}(f,g) \, | \, f,g \in A, f \neq g \}. %\label{eqn:McCProj}\end{aligned}$$ \[thm:Main\] Let $S$ be a connected submanifold of $\mathbb{R}^{n-1}$. Suppose each element of $P_{\mathcal{E}}(\mathcal{A})$ is order invariant in $S$. Then each $f \in E$ either vanishes identically on $S$ or is analytically delineable on $S$; the sections over $S$ of the $f \in E$ which do not vanish identically are pairwise disjoint; and each element $f \in E$ which does not vanish identically is order-invariant in such sections. *Moreover*, for each $i$, in $1 \leq i \leq t$ every $g \in A_i \setminus E_i$ is sign-invariant in each section over $S$ of every $f \in E_i$ which does not vanish identically. The crucial observation for the first part is that $P(E) \subseteq P_{\mathcal{E}}(\mathcal{A})$. To see this, recall equation and note that we can write $$P(E) = {\textstyle \bigcup_i }P(E_i) \cup {\rm RES}^{\times}(\mathcal{E}).$$ We can therefore apply Theorem \[thm:McC1\] to the set $E$ and obtain the first three conclusions immediately, leaving only the final conclusion to prove. Let $i$ be in the range $1 \leq i \leq t$, let $g \in A_i \setminus E_i$ and let $f \in E_i$. Suppose that $f$ does not vanish identically on $S$. Now ${\rm res}_{x_n}(f,g) \in P_{\mathcal{E}}(\mathcal{A})$, and so is order-invariant in $S$ by hypothesis. Further, we already concluded that $f$ is delineable. Therefore by Theorem \[thm:McC2\], $g$ is sign-invariant in each section of $f$ over $S$. Theorem \[thm:Main\] is the key tool for the verification of our TTICAD algorithm in Section \[sec:Algorithm\]. It allows us to conclude the output is correct so long as no $f \in E$ vanishes identically on the lower dimensional manifold, $S$. A polynomial $f$ in $r$ variables that vanishes identically at a point $\alpha \in \R{}^{r-1}$ is said to be *nullified* at $\alpha$. The theory of this subsection appears identical to the work in [@BDEMW13]. The difference is in the application of the theory in Section \[sec:Algorithm\]. We suppose that the input is a list of QFFs, $\{\phi_i\}$, with each $A_i$ defined from the polynomials in each $\phi_i$. In [@BDEMW13] there was an assumption (no longer made) that each of these formulae had a designated EC $f_i=0$ from which the subsets $E_i$ are defined. Instead, we define $E_i$ to be a basis for $\{f_i\}$ if there is such a designated EC and define $E_i = A_i$ otherwise. That is, we need to treat all the polynomials in QFFs with no EC with the importance usually reserved for ECs. Comparison with using a single implicit equational constraint {#subsec:Implicit} ------------------------------------------------------------- It is clear that in general the reduced projection $P_{\mathcal{E}}(\mathcal{A})$ will lead to fewer projection polynomials than using the full projection $P$. 
However, a comparison with the existing theory of equational constraints requires a little more care. First, we note that the TTICAD theory is applicable to a sequence of formulae while the theory of [@McCallum1999] is applicable only to a single formula. Hence if the truth value of each QFF is needed then TTICAD is the only option; a truth-invariant CAD for a parent formula will not necessarily suffice. Second, we note that even if the formulae do form a parent formula then this must have an overall EC to use [@McCallum1999] while the TTICAD theory is applicable even if this is not the case. Let us consider the situation where both theories are applicable, i.e. we have a sequence of formulae (forming a parent formula) for which each has an EC and thus the parent formula an implicit EC (their product). In the context of Section \[subsec:TTICAD\] this corresponds to using $\prod_i f_i$ as the EC. The implicit EC approach would correspond to using the reduced projection $P_E(A)$ of [@McCallum1999], with $E=\cup_i E_i$ and $A=\cup_i A_i$. We make the simplifying assumption that $A$ is an irreducible basis. In general $P_{\mathcal{E}}(\mathcal{A})$ will still contain fewer polynomials than $P_E(A)$ since $P_E(A)$ contains all resultants res$(f,g)$ where $f \in E_i, g \in A_j$ (and $g \notin E$), while $P_{\mathcal{E}}(\mathcal{A})$ contains only those with $i=j$ (and $g \notin E_i$). Thus even in situations where the previous theory applies there is an advantage in using the new TTICAD theory. These savings are highlighted by the worked examples in the next subsection and the complexity analysis later. Worked examples {#subsec:WE2} --------------- In Section \[sec:Algorithm\] we define an algorithm for producing TTICADs. First we illustrate the savings with our worked examples from Section \[subsec:WE1\], which satisfy the simplifying assumptions from Section \[subsec:TTIProjOp\]. We start by considering $\Phi$ from equation (\[eqn:ExPhi\]). In the notation above we have: $$\begin{aligned} A_1 &:= \{f_1,g_1\}, \qquad E_1:=\{ f_1 \}; \\ A_2 &:= \{f_2,g_2\}, \qquad E_2:=\{ f_2 \}.\end{aligned}$$ We construct the reduced projection sets for each $\phi_i$, $$\begin{aligned} P_{E_1}(A_1) &= \left\{ x^2-1, x^4 - x^2 + \tfrac{1}{16} \right\}, \\ P_{E_2}(A_2) &= \left\{ x^2 - 8x +15, x^4 -16x^3 + 95x^2-248x + \tfrac{3841}{16} \right\},\end{aligned}$$ and the cross-resultant set $${\rm Res}^{\times} (\mathcal{E}) = \{{\rm res}_{y}(f_1,f_2)\} = \{ 68x^2 -272x + 285\}.$$ $P_{\mathcal{E}}(\mathcal{A})$ is then the union of these three sets. In Figure \[fig:WE2\] we plot the polynomials (solid curves) and identify the 12 real solutions of $P_{\mathcal{E}}(\mathcal{A})$ (solid vertical lines). We can see the solutions align with the turning points in $x$ of the circles $f_i$ (where they have vertical tangents) and the important intersections (those of $f_1$ with $g_1$ and $f_2$ with $g_2$).
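These generators are easy to check directly in a computer algebra system. The short session below is a sketch using SymPy, chosen purely for illustration (it is not the <span style="font-variant:small-caps;">Maple</span> implementation discussed in Section \[sec:Implementation\]); it reproduces each of the sets above up to constant multiples.

```python
# Verifying the reduced projection sets for Phi with SymPy (illustration only).
from sympy import symbols, discriminant, resultant, factor, Rational

x, y = symbols('x y')
f1 = x**2 + y**2 - 1
g1 = x*y - Rational(1, 4)
f2 = (x - 4)**2 + (y - 1)**2 - 1
g2 = (x - 4)*(y - 1) - Rational(1, 4)

# P_{E_1}(A_1): data of the EC f1 plus its resultant with the non-EC g1.
print(factor(discriminant(f1, y)))   # -4*(x - 1)*(x + 1), i.e. x^2 - 1 up to a constant
print(resultant(f1, g1, y))          # x**4 - x**2 + 1/16

# P_{E_2}(A_2): the same construction for the second clause.
print(factor(discriminant(f2, y)))   # -4*(x - 5)*(x - 3), i.e. x^2 - 8x + 15 up to a constant
print(resultant(f2, g2, y))          # x**4 - 16*x**3 + 95*x**2 - 248*x + 3841/16

# RES^x(E): the cross resultant of the two equational constraints.
print(resultant(f1, f2, y))          # 68*x**2 - 272*x + 285
```

The constant factors on the discriminants are removed by the trivial simplifications mentioned in Section \[subsec:SI\].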
![The polynomials from $\Phi$ in equation (\[eqn:ExPhi\]) along with the roots of $P_{\mathcal{E}}(\mathcal{A})$ (solid lines), $P_E(A)$ (dashed lines) and $P(A)$ (dotted lines).[]{data-label="fig:WE2"}](WE2){width="4.7in"} ![Magnified region of Figure \[fig:WE2\].[]{data-label="fig:WE3"}](WE3){width="2.5in"} ![The polynomials from $\Psi$ in equation (\[eqn:ExPsi\]) along with the roots of $P_{\mathcal{E}}(\mathcal{A})$.[]{data-label="fig:WE4"}](WE4){width="4.7in"} ![Magnified region of Figure \[fig:WE4\].[]{data-label="fig:WE5"}](WE5){width="2.5in"} If we were to instead use a projection operator based on an implicit EC $f_1f_2=0$ then in the notation above we would construct $P_E(A)$ from $A=\{f_1,f_2,g_1,g_2\}$ and $E=\{f_1,f_2\}$. This set provides an extra 4 solutions (the dashed vertical lines) which align with the intersections of $f_1$ with $g_2$ and $f_2$ with $g_1$. Finally, if we were to consider $P(A)$ then we gain a further 4 solutions (the dotted vertical lines) which align with the intersections of $g_1$ and $g_2$ and the asymptotes of the $g_i$'s. In Figure \[fig:WE3\] we magnify a region to show explicitly that the point of intersection between $f_1$ and $g_1$ is identified by $P_{\mathcal{E}}(\mathcal{A})$, while the intersections of $g_2$ with both $f_1$ and $g_1$ are ignored. The 1-dimensional CAD produced using $P_{\mathcal{E}}(\mathcal{A})$ has 25 cells compared to 33 when using $P_E(A)$ and 41 when using $P(A)$. However, it is important to note that this reduction is amplified after lifting (using Theorem \[thm:Main\] and Algorithm \[alg:TTICAD\]). The 2-dimensional TTICAD has 105 cells and the sign-invariant CAD has 317. Using [Qepcad]{} to build a CAD invariant with respect to the implicit EC gives us 249 cells. Next we consider determining the truth of $\Psi$ from equation (\[eqn:ExPsi\]). This time $$\begin{aligned} A_1 &:= \{f_1,g_1\}, \,\, E_1:=\{f_1\}, \\ A_2 &:= \{f_2,g_2\}, \,\, E_2:=\{f_2,g_2\},\end{aligned}$$ and so $P_{E_1}(A_1)$ is as above but $P_{E_2}(A_2)$ contains an extra polynomial $x-4$ (the coefficient of $y$ in $g_2$). The cross-resultant set ${\rm RES}^{\times} (\mathcal{E})$ also contains an extra polynomial, $${\rm res}_{y}(f_1,g_2) = x^4-8x^3+16x^2+\tfrac{1}{2}x-\tfrac{31}{16}.$$ These two extra polynomials provide three extra real roots and hence the 1-dimensional CAD produced using $P_{\mathcal{E}}(\mathcal{A})$ this time has 31 cells. In Figure \[fig:WE4\] we again graph the four curves this time with solid vertical lines highlighting the real solutions of $P_{\mathcal{E}}(\mathcal{A})$. By comparing with Figure \[fig:WE2\] we see that more points in the CAD of $\R^1$ have been identified for the TTICAD of $\Psi$ than the TTICAD of $\Phi$ (15 instead of 12) but that there is still a saving over the sign-invariant CAD (which had 20, the five extra solutions indicated by dotted lines). The lack of an EC in the second clause has meant that the asymptote of $g_2$ and its intersections with $f_1$ have been identified. However, note that the intersections of $g_1$ with $f_2$ and $g_2$ have not been. Figure \[fig:WE5\] magnifies a region of Figure \[fig:WE4\]. Compare with Figure \[fig:WE3\] to see the dashed line has become solid, while the dotted line remains unidentified by the TTICAD. Note that we are unable to use [@McCallum1999] to study $\Psi$ as there is no polynomial equation logically implied (either explicitly or implicitly) by this formula.
Hence there are no dashed lines and the choice is between the sign-invariant CAD with 317 cells or the TTICAD, which for this example has 183 cells. Algorithm {#sec:Algorithm} ========= Description and Proof {#subsec:Algorithm} --------------------- We now carefully describe Algorithm \[alg:TTICAD\]. This will create a TTICAD of $\R{}^n$ for a list of QFFs $\{ \phi_i \}_{i=1}^t$ in variables ${\bf x} = x_1 \prec x_2 \prec \cdots \prec x_n$, where each $\phi_i$ has at most one designated EC $f_i = 0$ of positive degree (there may be other non-designated ECs). It uses a subalgorithm `CADW`, which was validated by [@McCallum1998]. The input of [CADW]{} is: $r$, a positive integer and $A$, a set of $r$-variate integral polynomials. The output is a boolean $w$ which if true is accompanied by an order-invariant CAD for $A$ (represented as a list of indices $I$ and sample points $S$). Let $A_i$ be the set of all polynomials occurring in $\phi_i$. If $\phi_i$ has a designated EC then put $E_i = \{f_i\}$ and if not put $E_i=A_i$. Let $\mathcal{A}$ and $\mathcal{E}$ be the lists of the $A_i$ and $E_i$ respectively. Our algorithm effectively defines the reduced projection of $\mathcal{A}$ with respect to $\mathcal{E}$ in terms of the special case of this definition from the previous section. The definition amounts to $$\label{eqn:TTIProj-G} P_{\mathcal{E}}(\mathcal{A}) := C \cup P_{\mathcal{F}}(\mathcal{B}).$$ Here $C$ is the set of contents of all the elements of all $A_i$; $\mathcal{B}$ the list $\{B_i\}_{i=1}^t$ such that $B_i$ is the finest[^3] squarefree basis for the set ${\rm prim}(A_i)$ of primitive parts of elements of $A_i$ which have positive degree; and $\mathcal{F}$ is the list $\{F_i\}_{i=1}^t$, such that $F_i$ is the finest squarefree basis for ${\rm prim}(E_i)$. (The reader may notice that this notation and the definition of $P_\mathcal{E}(\mathcal{A})$ here is analogous to the work in Section 5 of [@McCallum1999].) \[alg:TTICAD\] Set $F \leftarrow \cup_{i=1}^t F_i$ We shall prove that, provided the input satisfies the condition of well-orientedness given in Definition \[def:WO\], the output of Algorithm \[alg:TTICAD\] is indeed a TTICAD for $\{\phi_i\}$. We first recall the more general notion of well-orientedness from [@McCallum1998]. The boolean output of `CADW` is false if the input set was not well-oriented in this sense. \[def:WO-original\] A set $A$ of $n$-variate polynomials is said to be *well oriented* if whenever $n > 1$, every $f \in {\rm prim}(A)$ is nullified by at most a finite number of points in $\R^{n-1}$, and (recursively) $P(A)$ is well-oriented. This condition is required for `CADW` since the validity of this algorithm relies on Theorem \[thm:McC1\], which holds only when polynomials do not vanish identically. The condition allows for a finite number of these nullifications since each then occurs only over a zero cell, that is a single point. In such cases it is possible to replace the nullified polynomial by a so called *delineating polynomial* which is not nullified and can be used in its place to ensure the delineability of the others. The use of these is part of the verified algorithm `CADW` [@McCallum1998] and they are studied in detail by [@Brown2005a]. We now define our new notion of well-orientedness for the lists of sets $\mathcal{A}$ and $\mathcal{E}$.
\[def:WO\] We say that $\mathcal{A}$ is [*well oriented with respect to*]{} $\mathcal{E}$ if, whenever $n > 1$, every polynomial $f \in E$ is nullified by at most a finite number of points in $\R^{n-1}$, and $P_{\mathcal{F}}(\mathcal{B})$ is well-oriented in the sense of Definition \[def:WO-original\]. It is clear that Algorithm \[alg:TTICAD\] terminates. We now prove that it is correct using the theory developed in Section \[sec:TTIProj\]. The output of Algorithm \[alg:TTICAD\] is as specified. We must show that when the input is well-oriented the output is a TTICAD (each $\phi_i$ has constant truth value in each cell of $\mathcal{D}$), and **FAIL** otherwise. If the input was univariate then it is trivially well-oriented. The algorithm will construct a CAD $\mathcal{D}$ of $\R^1$ using the roots of the irreducible factors of the polynomials in $E$ (steps \[step:base1\] to \[step:base2\]). At each 0-cell all the polynomials in each $\phi_i$ trivially have constant signs, and hence every $\phi_i$ has constant truth value. In each 1-cell no EC can change sign and so every $\phi_i$ has constant truth value $false$, unless there are no ECs in any clause. In this case the algorithm would have constructed a CAD using all the polynomials and hence on each 1-cell no polynomial changes sign and so each clause has constant truth value. From now on suppose $n > 1$. If $\mathfrak{P} = C \cup P_{\mathcal{F}}(\mathcal{B})$ is not well-oriented in the sense of Definition \[def:WO-original\] then `CADW` returns $w'$ as false. In this case the input is not well oriented in the sense of Definition \[def:WO\] and Algorithm \[alg:TTICAD\] correctly returns **FAIL** in step \[step:notWO1\]. Otherwise, we have $w'= true$ with $I'$ and $S'$ specifying a CAD, $\mathcal{D}'$, which is order-invariant with respect to $\mathfrak{P}$ (by the correctness of `CADW`, as proved in [@McCallum1998]). Let $c$, a submanifold of $\R{}^{n-1}$, be a cell of $\mathcal{D}'$ and let $\alpha$ be its sample point. We suppose first that the dimension of $c$ is positive. If any polynomial $f \in E$ vanishes identically on $c$ then the input is not well oriented in the sense of Definition \[def:WO\] and the algorithm correctly returns **FAIL** at step \[step:notWO2\]. Otherwise, we know that the input list was certainly well-oriented. Since no polynomial $f \in E$ vanishes then no element of the basis $F$ vanishes identically on $c$ either. Hence, by Theorem \[thm:Main\], applied with $\mathcal{A} = \mathcal{B}$ and $\mathcal{E} = \mathcal{F}$, each element of $F$ is delineable on $c$, and the sections over $c$ of the elements of $F$ are pairwise disjoint. Thus the sections and sectors over $c$ of the elements of $F$ comprise a stack $\Sigma$ over $c$. Furthermore, the last conclusion of Theorem \[thm:Main\] assures us that, for each $i$, every element of $B_i \setminus F_i$ is sign-invariant in each section over $c$ of every element of $F_i$. Let $1 \le i \le t$. We shall show that each $\phi_i$ has constant truth value in both the sections and sectors of $\Sigma$. If $\phi_i$ has a designated EC then let $f_i$ denote the constraint polynomial; otherwise let $f_i$ denote an arbitrary element of $A_i$. Consider first a section $\sigma$ of $\Sigma$. Now $f_i$ is a product of its content ${\rm cont}(f_i)$ and some elements of the basis $F_i$. But ${\rm cont}(f_i)$, an element of $\mathfrak{P}$, is sign-invariant (indeed order-invariant) in the whole cylinder $c \times \R$ and hence, in particular, in $\sigma$.
Moreover all of the elements of $F_i$ are sign-invariant in $\sigma$, as was noted previously. Therefore $f_i$ is sign-invariant in $\sigma$. If $\phi_i$ has no constraint (and so $f_i$ denotes an arbitrary element of $A_i$) then this implies that $\phi_i$ has constant truth value in $\sigma$. So consider from now on the case in which $f_i = 0$ is the designated constraint polynomial of $\phi_i$. If $f_i$ is positive or negative in $\sigma$ then $\phi_i$ has constant truth value $false$ in $\sigma$. So suppose that $f_i = 0$ throughout $\sigma$. It follows that $\sigma$ must be a section of some element of the basis $F_i$. Let $g \in A_i \setminus E_i$ be a non-constraint polynomial in $A_i$. Now, by the definition of $B_i$, we see $g$ can be written as $$g = {\rm cont}(g) h_1^{p_1} \cdots h_k^{p_k}$$ where $h_j \in B_i, p_j \in \mathbb{N}$. But ${\rm cont}(g)$, in $\mathfrak{P}$, is sign-invariant (indeed order-invariant) in the whole cylinder $c \times \R$, and hence in particular in $\sigma$. Moreover each $h_j$ is sign-invariant in $\sigma$, as was noted previously. Hence $g$ is sign-invariant in $\sigma$. (Note that in the case where $g$ does not have main variable $x_n$ then $g = {\rm cont}(g)$ and the conclusion still holds). Since $g$ was an arbitrary element of $A_i \setminus E_i$, it follows that all polynomials in $A_i$ are sign-invariant in $\sigma$, hence that $\phi_i$ has constant truth value in $\sigma$. Next consider a sector $\sigma$ of the stack $\Sigma$, and notice that at least one such sector exists. As observed above, ${\rm cont}(f_i)$ is sign-invariant in $c$, and $f_i$ does not vanish identically on $c$. Hence ${\rm cont}(f_i)$ is non-zero throughout $c$. Moreover each element of the basis $F_i$ is delineable on $c$. Hence $f_i$ is nullified by no point of $c$. It follows from this that the algorithm does not return **FAIL** during the lifting phase. It follows also that $f_i \neq 0$ throughout $\sigma$. Hence $\phi_i$ has constant truth value $false$ in $\sigma$. It remains to consider the case in which the dimension of $c$ is 0. In this case the roots of the polynomials in the lifting set $L_c$ constructed by the algorithm determine a stack $\Sigma$ over $c$. Each $\phi_i$ trivially has constant truth value in each section (0-cell) of this stack, and the same can routinely be shown for each sector (1-cell) of this stack. TTICAD via the ResCAD Set {#subsec:ResCAD} ------------------------- When no $f \in E$ is nullified there is an alternative implementation of TTICAD which would be simple to introduce into existing CAD implementations. Define $$\mathcal{R}(\{ \phi_i\}) = E \cup {\textstyle \bigcup_{i=1}^t} \left\{ {\rm res}_{x_n}(f,g) \mid f\in E_i, g \in A_i, g \notin E_i \right\}.$$ to be the [*ResCAD Set*]{} of $\{\phi_i\}$. \[thm:ResCAD\] Let $\mathcal{A} = ( A_i)_{i=1}^t$ be a list of irreducible bases $A_i$ and let $\mathcal{E} = ( E_i )_{i=1}^t$ be a list of non-empty subsets $E_i \subseteq A_i$. Then we have $${P}(\mathcal{R}(\{ \phi_i\})) = {P}_{\mathcal{E}}(\mathcal{A}).$$ The proof is straightforward and so omitted here. \[cor:ResCAD\] If no $f \in E$ is nullified by a point in $\R{}^{n-1}$ then inputting $\mathcal{R}(\{ \phi_i\})$ into any algorithm which produces a sign-invariant CAD using McCallum’s projection operator $P$ will result in the TTICAD for $\{ \phi_i\}$ produced by Algorithm \[alg:TTICAD\]. Corollary \[cor:ResCAD\] gives a simple way to compute TTICADs using existing CAD implementations based on McCallum’s approach, such as [Qepcad]{}. 
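To make Corollary \[cor:ResCAD\] concrete, the sketch below (SymPy again, for illustration only) assembles the ResCAD set for the formula $\Phi$ of Section \[subsec:WE1\]; the four polynomials printed are the input one would pass to such a sign-invariant implementation.

```python
# Assembling the ResCAD set R({phi_1, phi_2}) for Phi (SymPy, illustration only).
from sympy import symbols, resultant, Rational

x, y = symbols('x y')
f1, g1 = x**2 + y**2 - 1, x*y - Rational(1, 4)
f2, g2 = (x - 4)**2 + (y - 1)**2 - 1, (x - 4)*(y - 1) - Rational(1, 4)

E = [f1, f2]                               # the designated ECs, E = E_1 union E_2
within_clause_res = [resultant(f1, g1, y), # res_y(f, g) with f in E_i and g in A_i \ E_i
                     resultant(f2, g2, y)]
rescad = E + within_clause_res             # R({phi_i}) consists of just these four polynomials

for p in rescad:
    print(p.expand())
```

By Theorem \[thm:ResCAD\], a sign-invariant CAD built from these four polynomials using McCallum's operator $P$ coincides with the TTICAD produced by Algorithm \[alg:TTICAD\] (provided no EC is nullified).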
Utilising projection theory for improvements to lifting {#sec:ImprovedLifting}
========================================================

Consider the case when the input to Algorithm \[alg:TTICAD\] is a single QFF $\{\phi\}$ with a declared EC. In this case the reduced projection operator $P_{\mathcal{E}}(\mathcal{A})$ produces the same polynomials as the operator $P_{E}(A)$ and so one may expect the TTICAD produced to be the same as the CAD produced by an implementation of [@McCallum1999] such as [Qepcad]{}. In practice this is not the case because Algorithm \[alg:TTICAD\] makes use of the reduced projection theory in the lifting phase as well as the projection phase.

[@McCallum1999] discussed how the theory of a reduced projection operator would improve the projection phase of CAD, by creating fewer projection polynomials. The only modification to the lifting phase of Collins’ CAD algorithm described was the need to check the well-orientedness condition of Definition \[def:WO-original\]. In this section we note two subtleties in the lifting phase of Algorithm \[alg:TTICAD\] which result in efficiencies that could be replicated for use with the original theory. In fact, the `ProjectionCAD` package [@EWBD14] discussed in Section \[subsec:PCAD\] has commands for building CADs invariant with respect to a single EC which does this.

A finer check for well-orientedness {#subsec:IL_finerWO}
-----------------------------------

Theorem 2.3 of [@McCallum1999] verified the use of $P_E(A)$. The proof uses Theorem \[thm:McC1\] to conclude sign-invariance for the polynomial defining the EC, and Theorem \[thm:McC2\] to conclude sign-invariance for the other polynomials only when the EC was satisfied. To apply Theorem \[thm:McC1\] here we need the EC polynomial and the projection polynomials obtained by repeatedly applying $P$ to have a finite number of nullification points. Meanwhile, the application of Theorem \[thm:McC2\] requires that the resultants of the EC polynomial with the other polynomials have no nullification points. Both these requirements are guaranteed by the input satisfying Definition \[def:WO-original\], the condition used in [@McCallum1999]. However, this also requires the other projection polynomials, including the non-ECs in the input, to have no nullification points.

In Algorithm \[alg:TTICAD\], step \[step:CleverCheck\] only checks for nullification of the polynomials in $E_i$ (in this context meaning only the EC). Hence this algorithm is checking the necessary conditions but not whether the non-ECs (in the main variable) are nullified.

Assume the variable ordering $x \prec y \prec z \prec w$ and consider the polynomials $$f = x+y+z+w, \qquad g = zy - x^2w$$ forming the formula $f=0 \wedge g<0$. We could analyse this using a sign-invariant CAD with 557 cells but it is more efficient to make use of the EC. Our implementation of Algorithm \[alg:TTICAD\] produces a CAD with 165 cells, while declaring the EC in QEPCAD results in 221 cells (the higher number is explained in subsection \[subsec:IL\_smallerL\]). <span style="font-variant:small-caps;">Qepcad</span> also prints:

    Error! Delineating polynomial should be added over cell(2,2)!

indicating the output may not be valid. The error message was triggered by the nullification of $g$ when $x=y=0$, which does not actually invalidate the theory. <span style="font-variant:small-caps;">Qepcad</span> is checking for nullification of all projection polynomials, leading to unnecessary errors.
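For completeness (our computation, not part of the original example): with this ordering the only resultant contributed to the reduced projection set by $g$ is
$${\rm res}_{w}(f,g) \;=\; {\rm res}_{w}\big(w + (x+y+z),\; -x^{2}w + yz\big) \;=\; yz + x^{2}(x+y+z),$$
and the EC $f$ itself has leading coefficient $1$ in $w$, so it is nullified by no point of $\R^{3}$. Definition \[def:WO\] is therefore satisfied regardless of the behaviour of $g$, which is why the nullification of $g$ at $x=y=0$ is harmless for Algorithm \[alg:TTICAD\].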
In fact, we can take this idea further in the case where $E_i=A_i$ for some $i$: in such a case we do not need to check any elements of (that particular) $E_i$ for nullification (since we are using the theory of [@McCallum1998] and it is the final lift, meaning only sign- (rather than order-) invariance is required).

Smaller lifting sets {#subsec:IL_smallerL}
--------------------

Traditionally in CAD algorithms the projection phase identifies a set of projection polynomials, which are then used in the lifting phase to create the stacks. However, when making use of ECs we can actually be more efficient by discarding some of the projection polynomials before lifting. The non-ECs (in the main variable) are part of the set of projection polynomials, required in order to produce subsequent projection polynomials (when we take their resultant with the EC). However, these polynomials are not (usually) required for the lifting since Theorem \[thm:McC2\] can (usually) be used to conclude them sign-invariant in those sections produced when lifting with the EC.

Note that in Algorithm \[alg:TTICAD\] the projection polynomials are formed from the input polynomials (in the main variable) and the set of polynomials $\mathfrak{P}$ constructed in step \[step:mfP\] which are not in the main variable. The lower dimensional CAD $D$ constructed in step \[step:cadw\] is guaranteed to be sign-invariant for $\mathfrak{P}$. In particular, $\mathfrak{P}$ contains the resultants of the EC with the other constraints and thus $D$ is already decomposing the domain into cells such that the presence of an intersection of the EC $f$ with another constraint $g$ is invariant in each cell. Hence for the final lift we need only build stacks with respect to the EC $f$. The following examples demonstrate these efficiencies.

Consider from Section \[subsec:WE1\] the circle $f_1$, hyperbola $g_1$ and sub-formula $\phi_1 := f_1=0 \land g_1<0$. Building a sign-invariant CAD for these polynomials uses 83 cells with the induced CAD of $\R$ identifying 7 points. Declaring the EC in QEPCAD results in a CAD with 69 cells while using our implementation of Algorithm \[alg:TTICAD\] produces a CAD with 53 cells. Both implementations give the same induced CAD of $\R$ identifying 6 points but <span style="font-variant:small-caps;">Qepcad</span> uses more cells for the CAD of $\R^2$. In particular, `ProjectionCAD` has a cell where $x<-2$ and $y$ is free while <span style="font-variant:small-caps;">Qepcad</span> uses three cells, splitting where $g_1$ changes sign. The splitting is not necessary for a CAD invariant with respect to the EC since $f_1$ is non-zero (and $\phi_1$ hence false) for all $x<-2$.

Now consider all four polynomials from Section \[subsec:WE1\] and the formula $\Phi$ from equation (\[eqn:ExPhi\]). In Section \[subsec:WE2\] we reported that a TTICAD could be built with 105 cells compared to a CAD with 249 cells built invariant with respect to the implicit EC $f_1f_2=0$ using [Qepcad]{}. The improved projection resulted in the induced CAD of $\R$ identifying 12 points rather than 16. We now observe that some of the cell savings were actually due to using smaller sets of lifting polynomials. We may simulate the projection with respect to the implicit EC via Algorithm \[alg:TTICAD\] by inputting a set consisting of the single formula $$\Phi' = f_1f_2=0 \land \Phi$$ (note that logically $\Phi = \Phi'$). The implementation in `ProjectionCAD` would then produce a CAD with 145 cells.
So we may conclude that improved lifting allowed for a saving of 104 cells and improved projection a further saving of 40 cells. In this example 72% of the cell saving came from improved lifting and 28% from improved projection, but we should not conclude that the former is more important. The improvement is to the final lift (from a CAD of $\R^{n-1}$ to one of $\R^n$) and the first projection (from polynomials in $n$ variables to those with $n-1$). Hence the savings from improved projection get magnified throughout the rest of the algorithm, and so as the number of variables in a problem increases so will the importance of this. \[ex:3dPhi\] We consider a simple 3d generalisation of the previous example. Let $$\begin{aligned} \Phi^{3d} &= \big( x^2+y^2+z^2-1=0 \land xyz - \tfrac{1}{4}<0 \big) \\ &\qquad \lor \big( (x-4)^2+(y-1)^2+(z-2)^2-1=0 \land (x-4)(y-1)(z-2) - \tfrac{1}{4}<0 \big)\end{aligned}$$ and assume variable ordering $x \prec y \prec z$. Using Algorithm \[alg:TTICAD\] on the two QFFs joined by disjunction gives a CAD with 109 cells while declaring the implicit EC in [Qepcad]{} gives 739 cells. Using Algorithm \[alg:TTICAD\] on the single formula conjuncted with the implicit EC gave a CAD with 353 cells. So in this case the improved lifting saves 386 cells and the improved projection a further 244 cells. Moving from 2 to 3 variables has increased the proportion of the saving from improved projection from 28% to 39%. The complexity analysis in the next section will further demonstrate the importance of improved projection, especially for the problem classes where no implicit EC exists (see also the experiments in Section \[subsec:IncreasedBenefit\]). Complexity analyses of new contributions {#sec:CA} ======================================== In this Section we closely follow the approach of our new analysis for the existing theory given in Section \[subsec:CA1\]. We will first study the special case of TTICAD when every QFF has an EC, before moving to the general case. This is because such formulae may be studied using [@McCallum1999] and so our comparison must be with this as well as [@McCallum1998] in order to fully clarify the advantages of our new projection operator. When every QFF has an equational constraint {#subsec:CA-TTI1} ------------------------------------------- We consider a sequence of $t$ QFFs which together contain $m$ constraints and are thus defined by at most $m$ polynomials. We suppose further that each QFF has at least one EC, and that the maximum degree of any polynomial in any variable is $d$. Let $\mathcal{A}$ be the sequence of sets of polynomials $A_i$ defining each formula, $\mathcal{E}$ the sequence of subsets $E_i \subset A_i$ defining the ECs, and denote the irreducible bases of these by $B_i$ and $F_i$. \[L:TTICAD1\] Under the assumptions above, $P_{\mathcal{E}}(\mathcal{A})$ has the $(M,2d^2)$-property with $$\label{eq:TTI-M} M = \left\lfloor \tfrac{1}{2}(3m+1) \right\rfloor + \tfrac{1}{2}(t-1)t.$$ From equations (\[eqn:TTIProj-G\]) and (\[eqn:TTIProj-S\]) we have $$P_{\mathcal{E}}(\mathcal{A}) = \cont(\mathcal{A}) \cup \textstyle{\bigcup_{i=1}^t} P_{F_i}(B_i) \cup {\rm Res}^{\times} (\mathcal{F}). \label{eq:TTIProj}$$ 1. Consider first the cross resultant set. Let $T_1$ be the set of elements of $B_i$ which divide some element of $F_1$, and $T_i, i=2, \dots, t$ be those elements of $B_i$ which divide some element of $F_i$ and do not already occur in some $T_j: j<i$. 
Then using the same argument as in the proof of Lemma \[L:SINew\] step 2 we see that the cross-resultant set can be partitioned into $\tfrac{1}{2}(t-1)t$ sets of combined degrees at most $2d^2$. 2. We now consider the $P_{E_i}(A_i)$ since $$\label{eq:ttiProof} \cont(\mathcal{A}) \cup \textstyle{\bigcup_{i=1}^t} P_{F_i}(B_i) = \textstyle{\bigcup_{i=1}^t} P_{E_i}(A_i).$$ 1. Let $m_i$ be the polynomials defining $A_i$. We follow Lemma \[L:EC\] to say that for each $i$: the contents, leading coefficients and discriminants for $E_i$ form a set $R_{i,1}$ with combined degree $2d^2$; the other coefficients for $E_i$ form a set $R_{i,2}$ with combined degree $d^2$; the remaining contents of each $A_i$ form a set $R_{i,3} = \cont(A_i) \setminus \cont(R_{i,1})$ with the $(m_i-1,d^2)$-property; the final set of resultants in (\[eq:ECProj\]) for each $i$ form a set $R_{i,4}$ with the $(m_i-1, 2d^2)$-property. 2. $R_1 = \textstyle{\bigcup_{i=1}^t} R_{i,1}$ has the $(t, 2d^2)$-property while $R_4 = \textstyle{\bigcup_{i=1}^t} R_{i,4}$ may be partitioned into $\textstyle{\sum_{i=1}^t} m_i -1 = m - t$ sets of combined degree $2d^2$. 3. The union $R_{23} = \textstyle{\bigcup_{i=1}^t} R_{i,2} \cup R_{i,3}$ may be partitioned into $$%$ \textstyle{\sum_{i=1}^t} m_i - 1 + 1 = m %$$$ sets of combined degree $d^2$, and so has the $\big( \lfloor \tfrac{1}{2}(m+1) \rfloor, 2d^2\big)$-property. Hence (\[eq:ttiProof\]), which equals $R_1 \cup R_{23} \cup R_4$, has the $\left( \left\lfloor \tfrac{1}{2}(3m+1) \right\rfloor, 2d^2 \right)$ property. So together we see that (\[eq:TTIProj\]) has the $(M,2d^2)$-property with $M$ as given in (\[eq:TTI-M\]). To analyse Algorithm \[alg:TTICAD\] we will apply Lemma \[L:TTICAD1\] once and then Corollary \[cor:SI2\] repeatedly. The growth in factors is given by Table \[tab:GeneralProjection\], with $M$ this time representing (\[eq:TTI-M\]). Thus the dominant term in the bound is calculated from (\[bound:All\]) (omitting the floor in $M$) as $$\begin{aligned} &\qquad 2^{2^{n}-1}d^{2^{n}-1}( \tfrac{1}{2}(3m+1) + \tfrac{1}{2}(t-1)t )^{2^{n-1}-1}m \nonumber \\ &= 2^{2^{n-1}}d^{2^{n}-1}(3m+t^2-t+1)^{2^{n-1}-1}m. \label{bound:TTIbasic}\end{aligned}$$ Actually, this bound can be lowered by noting that for the final lift we use only the $t$ ECs rather than all $m$ of the input polynomials, reducing the bound to $$\label{bound:TTI1} 2^{2^{n-1}}d^{2^{n}-1}(3m+t^2-t+1)^{2^{n-1}-1}t.$$ Observe that if $t=1$ then the value of $M$ for TTICAD in (\[eq:TTI-M\]) becomes (\[eq:EC-M\]), the value for a CAD invariant with respect to an EC. Similarly, if $t=m$ then (\[eq:TTI-M\]) becomes (\[eq:SI-M\]), the value for sign-invariant CAD. Actually, in these two situations the TTICAD projection operator reverts to the previous ones. These are the extremal values of $t$ and provide the best and worse cases respectively. We can conclude from the remark that TTICAD is superior to sign-invariant CAD (strictly so unless $t=m$). Comparing the bounds (\[bound:TTI1\]) and (\[bound:SINew\]) we see the effect is a reduction in the double exponent of the factor dependent on $m$ for $t \ll m$, which gradually reduces as $t$ gets closer to $m$. It would be incorrect to conclude from the remark that the theory of [@McCallum1999] is superior to TTICAD. In the case $t=1$ the algorithms and their analysis are equal up to the final lifting stage. 
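To give these quantities some scale (the numbers are ours, taken for the worked example of Section \[subsec:WE1\] where $m=4$, $t=2$ and $d=2$):
$$M_{\rm TTI} = \left\lfloor \tfrac{1}{2}(3\cdot 4+1) \right\rfloor + \tfrac{1}{2}(2-1)\cdot 2 = 7, \qquad M_{\rm SI} = \left\lfloor \tfrac{1}{2}(4+1)^2 \right\rfloor = 12,$$
both measured against sets of combined degree at most $2d^2 = 8$.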
As discussed in Section \[sec:ImprovedLifting\] this can be applied to the case $t=1$ also, with the effect of reducing the bound (\[bound:EC\]) by a factor of $m$ to $$\label{bound:ECImproved} 2^{2^{n-1}}d^{2^{n}-1}(3m+1)^{2^{n-1}-1}.$$ If $t>1$ then [@McCallum1999] cannot be applied directly since it requires a single formula with an EC. However, it can be applied indirectly by considering the parent formula formed by the disjunction of the individual QFFs which has the product of the individual ECs as an implicit EC. A CAD for this parent formula produced using [@McCallum1999] would also be a TTICAD for the sequence of QFFs. Thus we provide a complexity analysis for this case. ### With a parent formula and implicit EC-CAD By working with the extra implicit EC we are starting with one extra polynomial, whose degree is $td$. However, we know the factorisation into $t$ polynomials so suppose we start from here (indeed, this is what our implementation does). \[L:ECImplicit\] Consider a set $A$ of $m$ polynomials in $n$ variables with maximum degree $d$, and a subset $E=\{f_1, \dots, f_t\} \subseteq A$. Then $P_E(A)$, has the $(M,2d^2)$-property with $$\label{eq:ECImplicit-M} %M = \tfrac{1}{2} \big( t(2m-t+1)+m+1 \big). M = \tfrac{1}{2}(2m-t+1)t + \left\lfloor \tfrac{1}{2}(m+1) \right\rfloor$$ Partition $E$ into subsets $S_i=\{f_i\}$ for $i=1, \dots, t$. Then $P_E(A)$ from (\[eq:ECProj\]) is $$\begin{aligned} &{\textstyle \cont(A \setminus E) + \bigcup_{i=1}^t} P(S_i) + \{ {\rm res}_{x_n}(f,g) \mid f \in F, g \in F, g \neq f \} \nonumber \\ &\qquad + \{ {\rm res}_{x_n}(f,g) \mid f \in F, g \in B \setminus F \}. \label{eq:ECImpProof}\end{aligned}$$ 1. We start by considering the first two terms in (\[eq:ECImpProof\]). 1. For each $P(S_i)$: the contents, leading coefficients and discriminants form a set $R_{i,1}$ with combined degree $2d^2$, and the other coefficients a set $R_{i,2}$ with combined degree $d^2$. 2. The remaining contents $R_3 = \cont(A) \setminus \cont(E)$ has the $(m-t,d^2)$-property. 3. Together, the set $R_1 = {\textstyle \bigcup_{i=1}^t} R_{1,i}$ has the $(t, 2d^2)$-property. 4. Together, $R_{23} = R_3 \cup {\textstyle \bigcup_{i=1}^t} R_{2,i}$ has the $(m,d^2)$-property. It can be further partitioned into $\lfloor \tfrac{1}{2}(m+1) \rfloor$ sets of combined degree $2d^2$. The first two terms of (\[eq:ECImpProof\]) may be partitioned into $R_1 \cup R_{23}$ and thus further into $t + \lfloor \tfrac{1}{2}(m+1) \rfloor$ sets of combined degree $2d^2$. 2. The first set of resultants in (\[eq:ECImpProof\]) has size $\tfrac{1}{2}(t-1)t$ and maximum degree $2d^2$. 3. The second set of resultants in (\[eq:ECImpProof\]) may be decomposed as $${\textstyle \bigcup_{i=1}^t} \{ {\rm res}_{x_n}(f,g) \mid f \in S_i, g \in B \setminus F \}.$$ Since $|S_i|=1$ and $|B \setminus F|$ has the $(m-t,d)$-property, each of these subsets has $(m-t, 2d^2)$-property (following Lemma \[L:SINew\] step 2). Thus together the set of them has the $(t(m-t), 2d^2)$-property. Hence $P_E(A)$ as given in (\[eq:ECImpProof\] may be partitioned into $$t + \lfloor \tfrac{1}{2}(m+1) \rfloor + \tfrac{1}{2}(t-1)t + t(m-t) = \tfrac{1}{2}(2m-t+1)t + \left\lfloor \tfrac{1}{2}(m+1) \right\rfloor$$ sets of combined degree $2d^2$. Thus the growth of projection polynomials in this case is given by Table \[tab:GeneralProjection\] with $M$ from (\[eq:ECImplicit-M\]). 
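Again taking the worked example values $m=4$ and $t=2$ (our arithmetic), equation (\[eq:ECImplicit-M\]) gives
$$M = \tfrac{1}{2}(2\cdot 4 - 2 + 1)\cdot 2 + \left\lfloor \tfrac{1}{2}(4+1) \right\rfloor = 7 + 2 = 9,$$
two more than the value $7$ obtained for the same example from (\[eq:TTI-M\]).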
The dominant term in the cell count bound is calculated from (\[bound:All\]) as $$\begin{aligned} &\qquad 2^{2^{n}-1}d^{2^{n}-1}( \tfrac{1}{2}(t(2m-t+1)+m+1) )^{2^{n-1}-1}m \nonumber \\ &= 2^{2^{n-1}}d^{2^{n}-1}(t(2m-t+1)+m+1)^{2^{n-1}-1}m.\end{aligned}$$ If we follow Section \[sec:ImprovedLifting\] to simplify the final lift this reduces to $$\label{bound:ImplicitEC2} 2^{2^{n-1}}d^{2^{n}-1}(t(2m-t+1)+m+1)^{2^{n-1}-1}t.$$

### Comparison

Observe that if $t=1$ then the value of $M$ in (\[eq:ECImplicit-M\]) becomes (\[eq:EC-M\]), while if $t=m$ it becomes (\[eq:SI-M\]), just like TTICAD. However, since the difference between (\[eq:ECImplicit-M\]) and (\[eq:TTI-M\]) is $$mt - t^2 - m + t = (t-1) (m-t),$$ we see that for all other possible values of $t$ the TTICAD projection operator has a superior $(m,d)$-property. This means fewer polynomials and a lower cell count, as noted earlier in Section \[subsec:Implicit\]. Comparing the bounds (\[bound:TTI1\]) and (\[bound:ImplicitEC2\]) we see the effect is a reduction in the base of the doubly exponential factor dependent on $m$.

A general sequence of QFFs {#subsec:CA-TTI2}
--------------------------

We again consider $t$ QFFs formed from a set of at most $m$ polynomials with maximum degree $d$; however, we no longer suppose that each QFF has an EC. Instead we denote by $\mathfrak{e}$ the number of QFFs with one; by $A_{\mathfrak{e}}$ the set of polynomials required to define those $\mathfrak{e}$ QFFs; and by $m_\mathfrak{e}$ the size of the set $A_{\mathfrak{e}}$. Then analogously we define $\mathfrak{n} = t-\mathfrak{e}$ as the number of QFFs without an EC; $A_{\mathfrak{n}} = A \setminus A_{\mathfrak{e}}$ as the additional polynomials required to define them; and $m_{\mathfrak{n}} = m - m_{\mathfrak{e}}$ as their number.

Let $\mathcal{A}$ be the sequence of sets of polynomials $A_i$ defining each formula. If QFF $i$ is one of the $\mathfrak{e}$ with an EC then set $E_i$ to be the set containing just that EC, and otherwise set $E_i = A_i$. As before, denote the irreducible bases of these by $B_i$ and $F_i$.

\[L:TTICAD2\] Under the assumptions above $P_{\mathcal{E}}(\mathcal{A})$ has the $(M,2d^2)$-property with $$\label{eq:TTIGeneral-M1}
%M = \tfrac{1}{2} \left( (m_{\mathfrak{n}}+1)^2 + (3m_{\mathfrak{e}}+1) + \mathfrak{e}(\mathfrak{e}-1) + 2\mathfrak{e}m_{\mathfrak{n}} \right).
M = \left\lfloor \tfrac{1}{2} (m_{\mathfrak{n}}+1)^2 \right\rfloor + \left \lfloor \tfrac{1}{2}(3m_{\mathfrak{e}}+1) \right\rfloor + \tfrac{1}{2}\mathfrak{e}(\mathfrak{e}-1 + 2m_{\mathfrak{n}} ).$$

Without loss of generality suppose the QFFs are labelled so the $\mathfrak{e}$ QFFs with an EC come first.
We will decompose the cross resultant set (\[eqn:RESX\]) as $R^{\times}_1 \cup R^{\times}_2 \cup R^{\times}_3$ where $$\begin{aligned} R^{\times}_1 &= \{ {\rm res}_{x_n}(f,\hat{f}) \mid \exists i,j : \, f \in F_i, \hat{f} \in F_j, i<j\leq \mathfrak{e}, f \neq \hat{f} \}, \\ R^{\times}_2 &= \{ {\rm res}_{x_n}(f,\hat{f}) \mid \exists i,j : \, f \in F_i, \hat{f} \in F_j, i\leq \mathfrak{e}<j, f \neq \hat{f} \}, \\ R^{\times}_3 &= \{ {\rm res}_{x_n}(f,\hat{f}) \mid \exists i,j : \, f \in F_i, \hat{f} \in F_j, \mathfrak{e}<i<j, f \neq \hat{f} \}.\end{aligned}$$ Then the projection set (\[eq:TTIProj\]) may be decomposed as $$\begin{aligned} P_{\mathcal{E}}(\mathcal{A}) &= \cont(\mathcal{A}) \cup \textstyle{\bigcup_{i=1}^t} P_{F_i}(B_i) \cup {\rm Res}^{\times} (\mathcal{F}) \nonumber \\ &= \left( \textstyle{ \bigcup_{i=1}^{\mathfrak{e}} } \cont(A_i) \cup P_{F_i}(B_i) \right) \nonumber \\ &\qquad \cup \left( R^{\times}_3 \cup \textstyle{ \bigcup_{i=\mathfrak{e}+1}^{t} } \cont(A_i) \cup P_{F_i}(B_i) \right) \cup R^{\times}_1 \cup R^{\times}_2. \label{eq:TTIGenProof}\end{aligned}$$

1. The first collection of sets in (\[eq:TTIGenProof\]) has the $\left( \left\lfloor \tfrac{1}{2}(3m_{\mathfrak{e}}+1) \right\rfloor, 2d^2\right)$-property. The argument is identical to the proof of Lemma \[L:TTICAD1\], except that here $\mathfrak{e}$ plays the role of $t$, and $m_{\mathfrak{e}}$ the role of $m$.

2. The second collection of sets in (\[eq:TTIGenProof\]) refers to those with $E_i=A_i$. Since $P_{B_i}(B_i) = P(B_i)$ we see that the union of $\cont(A_i) \cup P(B_i)$ for $i=\mathfrak{e}+1, \dots, t$ contains all the polynomials in $P(A_{\mathfrak{n}})$ except for the cross-resultants of polynomials from different $B_i$. These are exactly given by $R^{\times}_3$, and thus we can follow the proof of Lemma \[L:SINew\] to partition the second collection into $\left\lfloor \tfrac{1}{2}(m_{\mathfrak{n}}+1)^2 \right\rfloor$ sets of combined degree $2d^2$.

3. Next let us consider $R^{\times}_1$. This concerns those subsets $E_i$ with only one polynomial, and hence their square free bases $F_i$ each have the $(1,d)$-property. Following the proof of Lemma \[L:SINew\] step 2 this set of resultants may be partitioned into $\tfrac{1}{2}\mathfrak{e}(\mathfrak{e}-1)$ sets of combined degree at most $2d^2$.

4. Finally we consider $R^{\times}_2$. This concerns resultants of the $\mathfrak{e}$ polynomials forming the $\mathfrak{e}$ single polynomial subsets $E_i$, taken with polynomials from the other subsets (together giving the set $A_{\mathfrak{n}}$ of $m_{\mathfrak{n}}$ polynomials). There are at most $\mathfrak{e}m_{\mathfrak{n}}$ of these. Of course, as before, we are actually dealing with square free bases (moving from polynomials of degree $d$ to sets with the $(1,d)$-property) and then consider the coprime subsets (as in Lemma \[L:SINew\]), to conclude $R^{\times}_2$ has the $(\mathfrak{e}m_{\mathfrak{n}}, 2d^2)$-property.

Summing up then gives the desired result.

\[cor:TTIGen\] The bound in (\[eq:TTIGeneral-M1\]) may be improved to $$\label{eq:TTIGeneral-M} M = \left\lfloor \tfrac{1}{2} \left( (m_{\mathfrak{n}}+1)^2 + 3m_{\mathfrak{e}} \right) \right\rfloor + \tfrac{1}{2} \left( \mathfrak{e}(\mathfrak{e}-1 + 2m_{\mathfrak{n}})\right).$$

We have asserted that the sum of the two floors is equal to the floor of the sum minus a half. In both steps 1 and 2 of the proof of Lemma \[L:TTICAD2\] we pair up sets of maximum combined degree $d^2$ to get half as many with maximum combined degree $2d^2$.
We introduce the floor of the polynomial one greater to cover the case with an odd number of sets to begin with. However, in the case that both step 1 and step 2 had an odd number of starting sets the left over couple could themselves be paired. Instead, if we considering combining these sets and then pairing we have the floor as stated in (\[eq:TTIGeneral-M\]). We analyse Algorithm \[alg:TTICAD\] by applying Lemma \[L:TTICAD2\] once and then Lemma \[L:SINew\] repeatedly. As usual, the growth is given by Table \[tab:GeneralProjection\], this time with $M$ as in (\[eq:TTIGeneral-M\]). The dominant term in the bound on cell count is then calculated from (\[bound:All\]) as $$\begin{aligned} &\qquad 2^{2^{n}-1}d^{2^{n}-1}( \tfrac{1}{2} \left( (m_{\mathfrak{n}}+1)^2 + (3m_{\mathfrak{e}}+1) + \mathfrak{e}(\mathfrak{e}-1) + 2\mathfrak{e}m_{\mathfrak{n}} - 1 \right) )^{2^{n-1}-1}m \nonumber \\ &= 2^{2^{n-1}}d^{2^{n}-1}((m_{\mathfrak{n}}+1)^2 + (3m_{\mathfrak{e}}+1) + \mathfrak{e}(\mathfrak{e}-1) + 2\mathfrak{e}m_{\mathfrak{n}} -1 )^{2^{n-1}-1}m.\end{aligned}$$ Once again, we can improve this by noting the reduction at the final lift, which will involve $m_{\mathfrak{n}} + \mathfrak{e} \leq m$ polynomials instead of $m$. Thus the bound becomes $$\label{bound:TTI2} 2^{2^{n-1}}d^{2^{n}-1}((m_{\mathfrak{n}}+1)^2 + (3m_{\mathfrak{e}}+1) + \mathfrak{e}(\mathfrak{e}-1) + 2\mathfrak{e}m_{\mathfrak{n}} - 1)^{2^{n-1}-1}(m_{\mathfrak{n}} + \mathfrak{e}).$$ ### Comparison {#comparison-1 .unnumbered} First we consider three extreme cases for the TTICAD algorithm: 1. If no QFF has an EC then $\mathfrak{e}=0, m_{\mathfrak{e}}=0, m_{\mathfrak{n}}=m$ and (\[eq:TTIGeneral-M\]) becomes $\lfloor \tfrac{1}{2}(m+1)^2 \rfloor$. The latter is (\[eq:SI-M\]) for sign-invariant CAD. 2. The other unfortunate case is when $\mathfrak{e} = m_{\mathfrak{e}}$, i.e. all those QFFs with an EC contain no other constraints. In this case (\[eq:TTIGeneral-M\]) becomes $\left\lfloor \tfrac{1}{2}(\mathfrak{e}+m_{\mathfrak{n}}+1)^2 \right\rfloor$, which will be equal to (\[eq:SI-M\]) for sign-invariant CAD. 3. The third extreme case is where all QFFs have an EC. Then $\mathfrak{e}=t, m_{\mathfrak{e}}=m, m_{\mathfrak{n}}=0$ and (\[eq:TTIGeneral-M\]) becomes $\left\lfloor\tfrac{1}{2}(3m+1)\right\rfloor + \tfrac{1}{2}(t-1)t$. This is the same as (\[eq:TTI-M\]) for the restricted case of TTICAD studied in Section \[subsec:CA-TTI1\]. In all three cases the general TTICAD algorithm behaves identically to those previous approaches. In the first two extreme cases the general TTICAD algorithm performs the same as [@McCallum1998] which produces a sign-invariant CAD. Let us demonstrate that it is superior otherwise. Assume $0 < \mathfrak{e} < m_{\mathfrak{e}}$ (meaning at least one QFF has an EC and at least one such QFF has additional constraints). Then comparing the values of $M$ in (\[eq:SI-M\]) and (\[eq:TTIGeneral-M\]) we have: $$\begin{aligned} M_{SI} - M_{TTI} &= %\left( (m_{\mathfrak{e}} + m_{\mathfrak{n}}+1)^2 \right) %%\\ &\qquad %- \left( (m_{\mathfrak{n}}+1)^2 + (3m_{\mathfrak{e}}+1) + \mathfrak{e}(\mathfrak{e}-1) + 2\mathfrak{e}m_{\mathfrak{n}} - 1 \right) \\ %&= \left\lfloor \tfrac{1}{2} (m_{\mathfrak{e}}-\mathfrak{e})( \mathfrak{e}+m_{\mathfrak{e}} + 2m_{\mathfrak{n}}-1) \right\rfloor.\end{aligned}$$ The first factor is positive by assumption, and the second is $\geq 2$. Thus the bound on the cell count for TTICAD is better than for sign-invariant CAD by at least a doubly exponential factor: $2^{2^{n-1}-1}$. 
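As a sanity check (our arithmetic) consider the worked example $\Psi$ of Section \[subsec:WE1\], where $t=2$, $\mathfrak{e}=1$, $m_{\mathfrak{e}}=2$ and $m_{\mathfrak{n}}=2$. Equations (\[eq:TTIGeneral-M\]) and (\[eq:SI-M\]) give
$$M_{\rm TTI} = \left\lfloor \tfrac{1}{2}\left(3^2 + 3\cdot 2\right) \right\rfloor + \tfrac{1}{2}\cdot 1 \cdot (0 + 4) = 9, \qquad M_{\rm SI} = \left\lfloor \tfrac{1}{2}(4+1)^2 \right\rfloor = 12,$$
and the difference $12 - 9 = 3$ agrees with $\left\lfloor \tfrac{1}{2}(m_{\mathfrak{e}}-\mathfrak{e})( \mathfrak{e}+m_{\mathfrak{e}} + 2m_{\mathfrak{n}}-1) \right\rfloor = 3$.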
There is no need to compare the complexity for TTICAD in this general case to any use of [@McCallum1999]. The latter can only be applied to a parent formula with an overall (possibly implicit) EC and the construction from the previous subsection would only be possible when $\mathfrak{e}=t$: the case of the previous subsection for which we have already concluded the superiority of TTICAD. It is now clear that the extension to general QFFs provided by this paper is a more important contribution than the restricted case of [@BDEMW13], even though the former has a lower complexity bound: - In the restricted case TTICAD was an improvement on the best available alternative projection operator, $P_E(A)$ from [@McCallum1999], but its improvements were to the base of a double exponential factor. - Outside of this restricted case (and the two other extreme cases) TTICAD offers a complexity improvement to a double exponent when compared with the best available alternative projection operator, $P(A)$ from [@McCallum1998]. Our implementation in Maple {#sec:Implementation} =========================== There are various implementations of CAD already available including: [Mathematica]{} [@Strzebonski2006; @Strzebonski2010]; [Qepcad]{} [@Brown2003a]; the `Redlog` package for <span style="font-variant:small-caps;">Reduce</span> [@SS03]; the `RegularChains` Library [@CMXY09] for [Maple]{}, and `SyNRAC` [@YA06] (another package for [Maple]{}). None of these can (currently) be used to build CADs which guarantee order-invariance, a property required for proving the correctness of our TTICAD algorithm. Hence we have built our own CAD implementation in order to obtain experimental results for our ideas. ProjectionCAD {#subsec:PCAD} ------------- Our implementation is a third party [Maple]{} package which we call `ProjectionCAD`. It gathers together algorithms for producing CADs via projection and lifting to complement the CAD commands which ship with <span style="font-variant:small-caps;">Maple</span> and use the alternative approach based on the theory of regular chains and triangular decomposition. All the projection operators discussed in Sections \[sec:ExistingProj\] and \[sec:TTIProj\] have been implemented and so `ProjectionCAD` can produce CADs which are sign-invariant, order-invariant, invariant with respect to a declared EC, and truth table invariant. Stack generation (step \[step:lifting2\] in Algorithm \[alg:TTICAD\]) is achieved using an existing command from the `RegularChains` package, described fully in Section 5.2 of [@CMXY09]. To use this we must first process the input to satisfy the assumptions of that algorithm: that polynomials are co-prime and square-free when evaluated on the cell (*separate above the cell* in the language of regular chains). This is achieved using other commands from the `RegularChains` library. Utilising the `RegularChains` code like this means that `ProjectionCAD` can represent and present CADs in the same way. In particular this allows for easy comparison of CADs from the different implementations; the use of existing tools for studying the CADs; and the ability to display CADs to the user in the easy to understand `piecewise` representation [@CDMMXXX09]. Figure \[fig:Maple\] shows an example of the package in use. 
> f := x^2+y^2-1: > cad := CADFull([f], vars, method=McCallum, output=piecewise); $$\begin{cases} SP & \quad x<-1 \\ \begin{cases} SP & \quad y<0 \\ SP & \quad y=0 \\ SP & \quad 0<y \end{cases} & \quad x=-1 \\ \begin{cases} SP & \qquad\qquad y<-\sqrt{-{x}^{2}+1} \\ SP & \qquad\qquad y=-\sqrt {-{x}^{2}+1} \\ SP & {\it And} \left( -\sqrt {-{x}^{2}+1}<y,y<\sqrt {-{x}^{2}+1} \right) \\ SP & \qquad\qquad y=+\sqrt {-{x}^{2}+1} \\ SP & \qquad\qquad \sqrt {-{x}^{2}+1}<y \end{cases} &{\it And} \left( -1<x,x<1 \right) \\ \begin{cases} SP & \quad y<0 \\ SP & \quad y=0 \\ SP & \quad 0<y \end{cases} & \quad x=1 \\ SP & \quad 1<x \end{cases}$$ > CADNumCellsInPiecewise(cad); $$13$$ Unlike [Qepcad]{}, `ProjectionCAD` has an implementation of delineating polynomials (actually the minimal delineating polynomials of [@Brown2005a]) and so it can solve certain problems without unnecessary warnings. It is also the only CAD implementation that can reproduce the theoretical algorithm [CADW]{}. Other notable features of `ProjectionCAD` include commands to present the different formulations of problems for the algorithms and heuristics to help choose between these. For more details on `ProjectionCAD` and the algorithms implemented within see [@EWBD14], while the package itself is freely available from the authors along with documentation and examples demonstrating the functionality. To run the code users need a version of [Maple]{} and the `RegularChains` Library. Minimising failure of TTICAD {#subsec:Excl} ---------------------------- Algorithm \[alg:TTICAD\] was kept simple to aid readability and understanding. Our implementation does make some extra refinements. Most of these are trivial, such as removing constants from the set of projection polynomials or when taking coefficients in order of degree, stopping if the ones already included can be shown not to vanish simultaneously. The well-orientedness conditions can often be overly cautious. [@Brown2005a] discussed cases where non-well oriented input can still lead to an order-invariant CAD. Similarly here, we can sometimes allow the nullification of an EC on a positive dimensional cell. Define the [*excluded projection polynomials*]{} for each $i$ as: $$\begin{aligned} {\rm ExclP}_{E_i}(A_i) &:= P(A_i) \setminus P_{E_i}(A_i) \label{eq:Excl} \\ &= \{ {\rm coeffs}(g), {\rm disc}_{x_n}(g), {\rm res}_{x_n}(g,\hat{g}) \mid g, \hat{g} \in A_i \setminus E_i, g \neq \hat{g} \}.\nonumber\end{aligned}$$ Note that the total set of excluded polynomials from $P(A)$ will include all the entries of the ${\rm ExclP}_{E_i}(A_i)$ as well as missing cross resultants of polynomials in $A_i \setminus E_i$ with polynomials from $A_j \neq A_i$. \[lem:ConstPolys\] Let $f_i$ be an EC which vanishes identically on a cell $c \in \mathcal{D}'$ constructed during Algorithm \[alg:TTICAD\]. If all polynomials in ${\rm ExclP}_{E_i}(A_i)$ are constant on $c$ then any $g \in A_i \setminus E_i$ will be delineable over $c$. Suppose first that $A_i$ and $E_i$ satisfy the simplifying conditions from Section \[subsec:TTIProjOp\]. Rearranging we see $P(A_i) = P_{E_i}(A_i) \cup {\rm ExclP}_{E_i}(A_i)$. However, given the conditions of the lemma, this is equivalent (after the removal of constants which do not affect CAD construction) to $P_{E_i}(A_i)$ on $c$. So here $P(A_i)$ is a subset of $P_{\mathcal{E}}(\mathcal{A})$ and we can conclude by Theorem \[thm:McC1\] that all elements of $A_i$ vanish identically on $c$ or are delineable over $c$. 
We can draw the same conclusion in the more general case of $A_i$ and $E_i$ because $P(A_i) = C_i \cup P_{F_i}(B_i) \cup {\rm ExclP}_{F_i}(B_i) \subseteq \mathfrak{P}$. Hence Lemma \[lem:ConstPolys\] allows us to extend Algorithm \[alg:TTICAD\] to deal safely with such cases. Although we cannot conclude sign-invariance we can conclude delineability and so instead of returning failure we can proceed by extending the lifting set $L_c$ to the full set of polynomials (similar to the case of nullification on a cell of dimension zero dealt with in step \[step:addthebi\] of Algorithm \[alg:TTICAD\]). In particular, this allows for ECs $f_i$ which do not have main variable $x_n$. Our implementation makes use of this. Note that the widening of the lifting step here (and also in the case of the zero dimensional cell) is for the generation of the stack over a single cell. The extension is only performed for the necessary cells thus minimising the cell count while maximising the success of the algorithm, as shown in Example \[ex:ExtendingLiftingSet\]. Since a polynomial cannot be nullified everywhere such case distinction will certainly decrease the amount of lifting. \[ex:ExtendingLiftingSet\] Consider the polynomials $$f = z+yw, \quad g = yx+1, \quad h = w(z+1)+1,$$ the single formula $f=0 \wedge g<0 \wedge h<0$ and assume the variable ordering $x \prec y \prec z \prec w$. Using the `ProjectionCAD` package we can build a TTICAD with 467 cells for this formula. The induced CAD of $\R^3$, $D$, has 169 cells and on five of these cells the polynomial $f$ is nullified. On these five cells both $y$ and $z$ are zero, with $x$ being either fixed to $0,4$ or belonging to the three intervals splitting at these points. In this example ExclP$_E(A) = \{z+1\}$ arising from the coefficient of $h$. This is a constant value of 1 on all five of those cells. Thus the algorithm is allowed to proceed without error, lifting with respect to all the projection polynomials on these cells. The lifting set varies from cell to cell in $D$. For example, the stack over the cell $c_1 \in D$ where $x=y=z=0$ uses three cells, splitting when $w=-1$. This is required for a CAD invariant with respect to $f$ since $f=0$ on $c$ but $h$ changes sign when $w=-1$. Compare this with, for example, the cell $c_2 \in D$ where $x=y=0$ and $z<-1$. The stack over $c_2$ has only one cell, with $w$ free. The polynomial $h$ will change sign over this cell, but this is not relevant since $f$ will never be zero. This occurs because $h$ is included in the lifting set only for the five cells of $D$ where $f$ was nullified. In theory, we could go further and allow this extension to apply when the polynomials in ${\rm ExclP}_{E_i}(A_i)$ are not necessarily all constant, but have no real roots within the cell $c$. However, identifying such cases would, in general, require answering a separate quantifier elimination question, which may not be trivial, and so this has yet to be implemented. Formulating problems for TTICAD {#sec:Formulation} ------------------------------- When using Algorithm \[alg:TTICAD\] various choices may be required which can have significant effects on the output. We briefly discuss some of these possibilities here. ### Variable ordering {#subsec:VarOrd} Algorithm \[alg:TTICAD\] runs with an ordering on the variables. As with all CAD algorithms this ordering can have a large effect, even determining whether a computation is feasible. 
[@BD07] presented problem classes where one ordering gives a constant cell count, and another a cell count doubly exponential in the number of variables. Some of the ordering may already be determined. For example, when using a CAD for quantifier elimination the quantified variables must be eliminated first. However, even then we are free to change the ordering of the free variables, or those in quantifier blocks. Various heuristics have been developed to help with this choice: [@Brown2004]: : Choose the next variable to eliminate according to the following criteria on the input, starting with the first and breaking ties with successive ones: (1) lowest overall degree in the input with respect to the variable; (2) lowest (maximum) total degree of those terms in the input in which it occurs; (3) smallest number of terms in the input which contain the variable. sotd [@DSS04]: : Construct the full set of projection polynomials for each ordering and select the ordering whose set has the lowest *sum of total degree* for each of the monomials in each of the polynomials. ndrr [@BDEW13]: : Construct the full projection set and select the one with the lowest *number of distinct real roots* of the univariate polynomials. fdc [@WEBD14]: : Construct all full-dimensional cells for different orderings (requires no algebraic number computations) and select the smallest. The Brown heuristic perform well despite being low cost. A machine learning experiment by [@HEWDPB14] showed that each heuristic had classes of examples where it was superior, and that a machine learned choice of heuristic can perform better than any one. \[ex:Kahan\] [@Kahan87] gives a classic example for algebraic simplification in the presence of branch cuts. He considers a fluid mechanics problem leading to the relation $$\label{eq:Kahan} 2\rm{arccosh}\left(\frac{3+2z}3\right)-\rm{arccosh}\left(\frac{5z+12}{3(z+4)}\right)= 2\rm{arccosh}\left(2(z+3)\sqrt{\frac{z+3}{27(z+4)}}\right).$$ This is true over all $\mathbb{C}$ except for the small teardrop region shown on the left of Figure \[fig:Kahan\]: a plot of the imaginary part of the difference between the two sides of (\[eq:Kahan\]). Recent work described in [@EBDW13] allows for the systematic identification of semi-algebraic formula to describe branch cuts. This, along with visualisation techniques, now forms part of <span style="font-variant:small-caps;">Maple</span>’s `FunctionAdvisor` [@EC-TBDW14]. For this example the technology produces the plot on the right of Figure \[fig:Kahan\] and describes the branch cuts using 7 pairs of equations and inequalities. With `ProjectionCAD`, a sign-invariant CAD for these polynomials has 409 cells using $x \prec y$ and 1143 with $y \prec x$, while a TTICAD has 55 cells using $x \prec y$ and 39 with $y \prec x$. So the best choice of variable ordering differs depending on the CAD algorithm used. For the sign-invariant CAD, all three heuristics described above identify the correct ordering, so it would have been best to use the cheapest, `Brown`. However, for the TTICAD only the more expensive `ndrr` heuristic selects the correct ordering. 
![Plots relating to equation (\[eq:Kahan\]) from Example \[ex:Kahan\].[]{data-label="fig:Kahan"}](KahanIm "fig:"){width="4.5cm"} ![Plots relating to equation (\[eq:Kahan\]) from Example \[ex:Kahan\].[]{data-label="fig:Kahan"}](KahanBC "fig:"){width="4.5cm"}

### Equational constraint designation and logical formulation {#subsec:LogicalForm}

If any QFF has more than one EC present then we must choose which to designate for special use in Algorithm \[alg:TTICAD\]. As with the variable ordering choice, this leads to two different projection sets which could be compared using the $\texttt{sotd}$ and $\texttt{ndrr}$ measures. However, note that this situation actually offers more choice than just the designation. If $\phi_i$ had two ECs then it would be admissible to split it into two QFFs $\phi_{i,1}, \phi_{i,2}$ with one EC assigned to each and the other constraints partitioned between them in any manner. This is admissible because any TTICAD for $\phi_{i,1}, \phi_{i,2}$ is also a TTICAD for $\phi_i$.

This is a generalisation of the following observation: given a formula $\phi$ with two ECs a CAD could be constructed using either the original theory of [@McCallum1999] or the TTICAD algorithm applied to two QFFs. The latter option would certainly lead to more projection polynomials. However, a specific EC may have a comparatively large number of intersections with another constraint, in which case separating them into different QFFs could still offer benefits (with the increase in projection polynomials offset by them having fewer real roots). The following is an example of such a situation.

\[ex:TTIorNot\] Assume $x \prec y$ and consider again $\Phi := (f_1 = 0 \wedge g_1>0) \vee (f_2 = 0 \wedge g_2<0)$ but this time with the polynomials below. These are plotted in Figure \[fig:FormEx\] where the solid curve is $f_1$, the solid line $g_1$, the dashed curve $f_2$ and the dashed line $g_2$. $$\begin{aligned} f_1 &:= (y-1) - x^3+x^2+x, \qquad \quad g_1 := y - \tfrac{x}{4}+\tfrac{1}{2}, \\ f_2 &:= (-y-1) - x^3+x^2+x, \qquad \, g_2 := -y - \tfrac{x}{4}+\tfrac{1}{2}.\end{aligned}$$

If we use the algorithm by [@McCallum1999] with the implicit EC $f_1f_2=0$ designated then a CAD is constructed which identifies all the intersections except for $g_1$ with $g_2$. This is visualised by the plot on the left while the plot on the right relates to a TTICAD with two QFFs. In this case only three 0-cells are identified, with the intersections of $g_2$ with $f_1$ and $g_1$ with $f_2$ ignored. The TTICAD has 31 cells, compared to 39 cells for the other two. Both `sotd` and `ndrr` identify the smaller CAD, while `Brown` would not discriminate.

![Plots visualising the CADs described for Example \[ex:TTIorNot\].[]{data-label="fig:FormEx"}](FormEx1 "fig:"){width="6.0cm"} ![Plots visualising the CADs described for Example \[ex:TTIorNot\].[]{data-label="fig:FormEx"}](FormEx2 "fig:"){width="6.0cm"}

More details on the issues around the logical formulation of problems for TTICAD are given by [@BDEW13].

### Preconditioning input QFFs {#subsec:Grobner}

Another option available before using Algorithm \[alg:TTICAD\] is to precondition the input. [@BH91] conducted experiments to see if Gröbner basis techniques could help CAD. They considered replacing any input polynomials which came from equations by a purely lexicographical Gröbner basis for them. In [@WBD12_GB] this idea was investigated further with a larger base of problems tested and the idea extended to include Gröbner reduction on the other polynomials.
The preconditioning was shown to be highly beneficial in some cases, but detrimental in others. A simple metric was posited and shown to be a good indicator of when preconditioning was useful.

[@BDEW13] consider using Gröbner preconditioning for TTICAD by constructing bases for each QFF. This can produce significant reductions in the TTICAD cell counts and timings. The benefits are not universal, but measuring the `sotd` and `ndrr` of the projection polynomials gives suitable heuristics.

### Summary {#subsec:FormulationSummary}

We have highlighted choices we may need to make before using Algorithm \[alg:TTICAD\] and its implementation in `ProjectionCAD`. The heuristics discussed are also available in that package.

One issue of problem formulation not described here is the mathematical derivation of the problem itself. We note that this can have a great effect on the tractability of using CAD (see [@WDEB13] for example).

For the experimental results in Section \[sec:Experiment\] we use the specified variable ordering for a problem if it has one and otherwise test all possible orderings. If there are questions of logical formulation or EC designation we use the heuristics discussed here. No Gröbner preconditioning was used as the aim is to analyse the TTICAD theory itself.

It is important to note that the heuristics are just that, and as such can be misled by certain examples. Also, while we have considered these issues individually they of course intersect. For example, the TTICAD formulation with two QFFs was the best choice in Example \[ex:TTIorNot\] but if we had assumed the other variable ordering then a single QFF is superior. Taken together, all these choices of formulation can become combinatorially overwhelming and so methods to reduce this, such as the greedy algorithm in [@DSS04] or the suggestion in Section 4 of [@BDEW13], are important.

Experimental Results {#sec:Experiment}
====================

Description of experiments {#subsec:ERDescription}
--------------------------

Our timings were obtained on a Linux desktop (3.1GHz Intel processor, 8.0Gb total memory) with [Maple]{} 16 (command line interface), [Mathematica]{} 9 (graphical interface) and [Qepcad-B]{} 1.69. For each experiment we produce a CAD and give the time taken and cell count. The first is an obvious metric while the second is crucial for applications performing operations on each cell.

For [Qepcad]{} the options [+N500000000]{} and [+L200000]{} were provided, the initialization included in the timings and ECs declared when possible (when they are explicit or formed by the product of ECs for the individual QFFs). In [Mathematica]{} the output is not a CAD but a formula constructed from one [@Strzebonski2010], with the actual CAD not available to the user. Cell counts for the algorithms were provided by the author of the [Mathematica]{} code. TTICADs are calculated using our `ProjectionCAD` implementation described in Section \[sec:Implementation\].

The results in this section are not presented to claim that our implementation is state of the art, but to demonstrate the power of the TTICAD theory over the conventional theory, and how it can allow even a simple implementation to compete. Hence the cell counts are of most interest. The time is measured to the nearest tenth of a second, with a time out ([**T**]{}) set at $5000$ seconds. When [**F**]{} occurs it indicates failure due to a theoretical reason such as not well-oriented (in either sense).
The occurrence of Err indicates an error in an internal subroutine of [Maple]{}’s `RegularChains` package, used by `ProjectionCAD`. This error is not theoretical but a bug, which will be fixed shortly. We started by considering examples originating from [@BH91]. However these problems (and most others in the literature) involve conjunctions of conditions, chosen as such to make them amenable to existing technologies. These problems can be tackled using TTICAD, but they do not demonstrate its full strength. Hence we introduce some new examples. The first set, those denoted with a $\dagger$, are adapted from [@BH91] by turning certain conjunctions into disjunctions. The second set were generated randomly as examples with two QFFs, only one of which has an EC (using random polynomials in 3 variables of degree at most 2). Two further examples came from the application of branch cut analysis for simplification. We included Example \[ex:Kahan\] along with the problem induced by considering the validity of the double angle formulae for arcsin. Finally we considered the worked examples from Section \[subsec:WE1\] and the generalisation to three dimensions presented in Example \[ex:3dPhi\]. Note that A and B following the problem name indicate different variable orderings. Full details for all examples can be found in the CAD repository [@WBD12_EX] available freely at `http://dx.doi.org/10.15125/BATH-00069`. Results {#subsec:ERResults} ------- We present our results in Table \[table:Results\]. For each problem we give the name used in the repository, $n$ the number of variables, $d$ the maximum degree of polynomials involved and $t$ the number of QFFs used for TTICAD. We then give the time taken (T) and number of cells of $\mathbb{R}^n$ produced (C) by each algorithm. 
----------------- ----- ------- ----------- ------- ----------- ------- -------- ------- -------- ------ -------------------------------------------- Name ndt T C T C T C T C T C IntA 321 360 3707 1.7 269 4.5 825 — Err 0.0 3 IntB 321 332 2985 1.5 303 4.5 803 50.2 2795 0.0 3 RanA 331 269 2093 4.5 435 4.6 1667 23.0 1267 0.1 657 RanB 331 443 4097 8.1 711 5.4 2857 48.1 1517 0.0 191 Int$\dagger$A 322 360 3707 68.7 575 4.8 3723 — Err 0.1 601 Int$\dagger$A 322 332 2985 70.0 601 4.7 3001 50.2 2795 0.1 549 Ran$\dagger$A 332 269 2093 223 663 4.6 2101 23.0 1267 0.2 808 Ran$\dagger$B 332 443 4097 268 1075 142 4105 48.1 1517 0.2 1156 Ell$\dagger$A 542 — [**F**]{} — [**F**]{} 292 500609 1940 81193 11.2 80111 Ell$\dagger$B 542 **T** — **T** — **T** — **T** — 2911 $\genfrac{}{}{0pt}{1}{16,603,}{\quad 131}$ Solo$\dagger$A 432 678 54037 46.1 [**F**]{} 4.9 20307 1014 54037 0.1 260 Solo$\dagger$B 432 2009 154527 123 [**F**]{} 6.3 87469 2952 154527 0.1 762 Coll$\dagger$A 442 265 8387 267 8387 5.0 7813 376 7895 3.6 7171 Coll$\dagger$B 442 — Err — Err **T** — **T** — 592 $\genfrac{}{}{0pt}{1}{1,234,}{\quad 601}$ Ex\[ex:Kahan\]A 247 10.7 409 0.3 55 4.8 261 15.2 409 0.0 72 Ex\[ex:Kahan\]B 247 87.9 1143 0.3 39 4.8 1143 154 1143 0.1 278 AsinA 244 2.5 225 0.3 57 4.6 225 3.3 225 0.0 175 AsinB 244 6.5 393 0.2 25 4.5 393 7.8 393 0.0 79 Ex$\Phi$A 222 5.7 317 1.2 105 4.7 249 6.3 317 0.0 24 Ex$\Phi$B 222 6.1 377 1.5 153 4.5 329 7.2 377 0.0 175 Ex$\Psi$A 222 5.7 317 1.6 183 4.9 317 6.3 317 0.1 372 Ex$\Psi$B 222 6.1 377 1.9 233 4.8 377 7.2 377 0.1 596 Ex\[ex:3dPhi\]A 332 3796 5453 5.0 109 5.3 739 — Err 0.1 44 Ex\[ex:3dPhi\]B 332 3405 6413 5.8 153 5.7 1009 — Err 0.1 135 Rand1 322 16.4 1533 76.8 1533 4.9 1535 25.7 1535 0.2 579 Rand2 322 838 7991 132 2911 5.2 8023 173 8023 0.8 2551 Rand3 322 259 8889 98.1 4005 5.3 8913 77.9 5061 0.7 3815 Rand4 322 1442 11979 167 4035 5.4 12031 258 12031 1.3 4339 Rand5 322 310 11869 110 4905 5.5 11893 104 6241 0.9 5041 ----------------- ----- ------- ----------- ------- ----------- ------- -------- ------- -------- ------ -------------------------------------------- : Comparing TTICAD to other CAD types and other CAD implementations.[]{data-label="table:Results"} We first compare our TTICAD implementation with the sign-invariant CAD generated using `ProjectionCAD` with McCallum’s projection operator. Since these use the same architecture the comparison makes clear the benefits of the TTICAD theory. The experiments confirm the fact that, since each cell of a TTICAD is a superset of cells from a sign-invariant CAD, the cell count for TTICAD will always be less than or equal to that of a sign-invariant CAD produced using the same implementation. Ellipse$\dagger$ A is not well-oriented in the sense of [@McCallum1998], and so both methods return [**FAIL**]{}. Solotareff$\dagger$ A and B are well-oriented in this sense but not in the stronger sense of Definition \[def:WO\] and hence TTICAD fails while the sign-invariant CADs can be produced. The only example with equal cell counts is Collision$\dagger$ A in which the non-ECs were so simple that the projection polynomials were unchanged. Examining the results for the worked examples and the 3d generalisation we start to see the true power of TTICAD. In 3D Example A we see a 759-fold reduction in time and a 50-fold reduction in cell count. We next compare our implementation of TTICAD with the state of the art in CAD: [Qepcad]{} [@Brown2003a], [Maple]{} [@CMXY09] and [Mathematica]{} [@Strzebonski2006; @Strzebonski2010]. 
[Mathematica]{} is the quickest; however, TTICAD often produces fewer cells. We note that [Mathematica]{}’s algorithm uses powerful heuristics and so actually used Gröbner bases on the first two problems, causing the cell counts to be so low. When all implementations succeed TTICAD usually produces far fewer cells than [Qepcad]{} or [Maple]{}, especially impressive given [Qepcad]{} is producing partial CADs for the quantified problems, while TTICAD is only working with the polynomials involved.

Reasons for the TTICAD implementation struggling to compete on speed may be that the [Mathematica]{} and [Qepcad]{} algorithms are implemented directly in [C]{}, have had more optimization, and in the case of [Mathematica]{} use validated numerics for lifting [@Strzebonski2006]. However, the strong performance in cell counts is very encouraging, both due to its importance for applications where CAD is part of a wider algorithm (such as branch cut analysis) and for the potential if TTICAD theory were implemented elsewhere.

The increased benefit of TTICAD {#subsec:IncreasedBenefit}
-------------------------------

We finish by demonstrating that the benefit of TTICAD over the existing theory should increase with the number of QFFs and that this benefit is much more pronounced if at least one of these does not have an EC.

\[ex:IncreasedBenefit\] We consider a family of examples (to which our worked examples belong). Assume $x \prec y$ and for $j$ a non-negative integer define $$\begin{aligned} f_{j+1} &:= (x-4j)^2 + (y-j)^2 - 1, \qquad g_{j+1} := (x-4j)(y-j) - \tfrac{1}{4}, \\ F_{j+1} &:= \{f_k, g_k\}_{k=1 \dots j+1}, \hspace*{0.7in} \textstyle \Phi_{j+1} := \bigvee_{k=1}^{j+1} ( f_{k}=0 \land g_{k}<0 ), \\ \Psi_{j+1} &:= \textstyle \left( \bigvee_{k=1}^{j} ( f_{k}=0 \land g_{k}<0 ) \right) \lor (f_{j+1}<0 \land g_{j+1}<0).\end{aligned}$$ Then $\Phi_2$ is $\Phi$ from equation (\[eqn:ExPhi\]) and $\Psi_2$ is $\Psi$ from equation (\[eqn:ExPsi\]). Table \[tab:IncreasedBenefit\] shows the cell counts for various CADs produced for studying the truth of the formulae, and Figure \[fig:IB\] plots these values.

Both $\Phi_i$ and $\Psi_i$ may be studied by a sign-invariant CAD for the polynomials $F_i$, shown in the column marked `CADFull`. The remaining CADs are specific to one formula. For each formula a TTICAD has been constructed using Algorithm \[alg:TTICAD\] on the natural sub-formulae created by the disjunctions, while the $\Phi_i$ have also had a CAD constructed using the theory of ECs alone. This was simulated by running Algorithm \[alg:TTICAD\] on the single formula declaring the product of the $f_i$s as an EC (column marked `ECCAD`). All the preceding CADs were constructed with `ProjectionCAD`. For each formula a CAD has also been created with [Qepcad]{}, with the product of $f_i$ declared as an EC for $\Phi_i$.

We see that the size of a sign-invariant CAD grows much faster than the size of a TTICAD. For a problem with fixed variable ordering the TTICAD theory seems to allow for linear growth in the number of formulae. Considering the `ECCAD` and <span style="font-variant:small-caps;">Qepcad</span> results shows that when all QFFs have an EC (the $\Phi_i$) using the implicit EC also makes significant savings. However, it is only when using the improved lifting discussed in Section \[sec:ImprovedLifting\] that these savings restrict the output to linear growth. In the case where at least one QFF does not have an EC (the $\Psi_i$) the existing theory of ECs cannot be used.
So while the comparative benefit of TTICAD over sign-invariant CAD is slightly less, the benefit when comparing with the best available previous theory is far greater. ------- --------- ----------- ------------------------------------------------------ ----------- ---------- ------------------------------------------------------ **j** **$F_j$** `ECCAD` `TTICAD` <span style="font-variant:small-caps;">Qepcad</span> `CADFull` `TTICAD` <span style="font-variant:small-caps;">Qepcad</span> 2 145 105 249 317 183 317 3 237 157 509 695 259 695 4 329 209 849 1241 335 1241 5 421 261 1269 1979 411 1979 6 513 313 1769 2933 487 2933 ------- --------- ----------- ------------------------------------------------------ ----------- ---------- ------------------------------------------------------ : Table detailing the number of cells in CADs constructed to analyse the truth of the formulae from Example \[ex:IncreasedBenefit\].[]{data-label="tab:IncreasedBenefit"} ![Plots of the results from Table \[tab:IncreasedBenefit\]. The $x$-axis measures $j$ and the $y$-axis the number of cells. On the left are the algorithms relating to $\Phi_j$ which from top to bottom are: `CADFull`, <span style="font-variant:small-caps;">Qepcad</span>, `ECCAD`, `TTICAD`. On the right are the algorithms relating to $\Psi_j$ which from top to bottom are: `CADFull` and `TTICAD`. []{data-label="fig:IB"}](Table3Phi "fig:"){width="2.5in"} ![Plots of the results from Table \[tab:IncreasedBenefit\]. The $x$-axis measures $j$ and the $y$-axis the number of cells. On the left are the algorithms relating to $\Phi_j$ which from top to bottom are: `CADFull`, <span style="font-variant:small-caps;">Qepcad</span>, `ECCAD`, `TTICAD`. On the right are the algorithms relating to $\Psi_j$ which from top to bottom are: `CADFull` and `TTICAD`. []{data-label="fig:IB"}](Table3Psi "fig:"){width="2.5in"} Conclusions {#sec:Conclusion} =========== We have defined truth table invariant CADs and by building on the theory of equational constrains have provided an algorithm to construct these efficiently. We have extended the our initial work in ISSAC 2013 so that it applies to a general sequence of formulae. The new complexity analyses show that the benefit over previously applicable CAD projection operators is even greater for the new problems now covered. The algorithm has been implemented in [Maple]{} giving promising experimental results. TTICADs in general have much fewer cells than sign-invariant CADs using the same implementation and we showed that this allows even a simple implementation of TTICAD to compete with the state of the art CAD implementations. For many problems the TTICAD theory offers the smallest truth-invariant CAD for a parent formula, and there are also classes of problems for which TTICAD is exactly the desired structure. The benefits of TTICAD increase with the number of QFFs in a problem and is magnified if there is a QFF with no EC (as then the previous theory is not applicable). Future Work {#subsec:FutureWork} ----------- There is scope for optimizing the algorithm and extending it to allow less restrictive input. Lemma \[lem:ConstPolys\] gives one extension that is included in our implementation while other possibilities include removing some of the caution implied by well-orientedness, analogous to [@Brown2005a]. Of course, the implementation of TTICAD used here could be optimised in many ways, but more desirable would be for TTICAD to be incorporated into existing state of the art CAD implementations. 
In fact, since the ISSAC 2013 publication, [@BCDEMW14] have presented an algorithm to build TTICADs using the `RegularChains` technology in [Maple]{}, and work continues on issues of problem formulation for this approach [@EBCDMW14; @EBDW14]. We see several possibilities for the theoretical development of TTICAD:

- Can we apply the theory recursively instead of only at the top level to make use of bi-equational constraints? For example, by widening the projection operator to allow enough information to conclude order-invariance, as in [@McCallum2001]. When doing this we may also consider further improvements to the lifting phase as recently discussed in [@EBD15].

- Can we make use of the ideas behind partial CAD to avoid unnecessary lifting once the truth value of a QFF on a cell is determined?

- Can we implement the lifting algorithm in parallel?

- Can we modify the lifting algorithm to only return those cells required for the application? Approaches which restrict the output to cells of a certain dimension, or cells on a certain variety, are given by [@WBDE14].

- Can anything be done when the input is not well oriented?

We are grateful to A. Strzeboński for assistance in performing the Mathematica tests and to the anonymous referees of both this and our ISSAC 2013 paper for their useful comments. We also thank the rest of the Triangular Sets seminar at Bath (A. Locatelli, G. Sankaran and N. Vorobjov) for their input, and the team at Western University (C. Chen, M. Moreno Maza, R. Xiao and Y. Xie) for access to their [Maple]{} code and helpful discussions.

[^1]: This work was supported by EPSRC grant EP/J003247/1.

[^2]: Actually order-invariant CADs (see Definition \[def:OI\]).

[^3]: A decomposition into irreducibles. This avoids various technical problems.
Your task - if you accept it - is to write a program which helps in understanding my proposal on meta by calculating the winner of a code-golf-reversed competition. Of course, answers to this question will be treated as proposed, so your program (if correct) can calculate whether your answer will become the accepted answer.

Rules

- The program reads a file with multiple lines of the following format (see example below): [Language] TAB [NumberOfCharacters] TAB [LinkToAnswer]
- The file name is passed as an argument to your program or the file is redirected to standard input of your program. It's your choice, please mention the method when giving the answer.
- It's expected that the input format is correct. There's no need for error handling.
- The number of characters is positive. Your program must handle lengths up to 65535. 64k should be enough for everybody :-)
- The program outputs those lines on standard output which meet the idea of the meta proposal, that is:
  - the shortest code of a particular programming language wins (reduction phase)
  - the longest code among all programming languages wins (sorting phase)
  - in case of a draw, all answers with the same length shall be printed
- The order of the output is not important.
- Although the longest code wins, this is not code-bowling. Your code must be as short as possible for your programming language.
- Answers in rarely used programming languages which do not attempt to shorten the code deserve a downvote, because they try to bypass the intention of this kind of question. If there's only one answer for a specific programming language, it would be considered a winner candidate, so you could start blowing up its code.

Example input file (separated by single tabs if there should be an issue with formatting):

GolfScript 34 http://short.url/answer/ags
GolfScript 42 http://short.url/answer/gsq
C# 210 http://short.url/answer/cs2
Java 208 http://short.url/answer/jav
C# 208 http://short.url/answer/poi
J 23 http://short.url/answer/jsh
Ruby 67 http://short.url/answer/rub
C# 208 http://short.url/answer/yac
GolfScript 210 http://short.url/answer/210

Expected output (order is not important):

C# 208 http://short.url/answer/poi
C# 208 http://short.url/answer/yac
Java 208 http://short.url/answer/jav

Update

Some programs rely on the fact that there is a single maximum (like the C# 210 character program). As could well happen in reality, someone might also write a GolfScript program with 210 characters. The output would remain the same. I have added such a GolfScript to the input.

Update 2

As suggested I have retagged (still code-golf as well) and the deadline is 2014-03-06 (which looks like an arbitrary date, but I'll be back in Germany from travelling then).

Final results

I decided to vote as follows:

- Answers where the number of characters cannot be confirmed get a comment to explain the count.
- Answers which can easily be reduced get a comment, an edit suggestion and go into the result with the lower count value. (Hopefully I have seen that in advance).
- Answers which do not compile get a downvote. (Quite a hard task as it turns out).
- Answers which are not golfed get a downvote (as described in the rules already).
- Answers which produce expected output get an upvote.

Due to some answers which do not work as expected, I use 4 different input files and check against the expected result. Finally, the winner is determined by providing the qualifying answers table as input to my reference program (plus double checking the result manually).
If my own answer were the winning one, I'd exclude it from the list. In case of several winners, I would have to pick only one. Therefore, some bonuses can be earned:

- answers which accept more input than expected (e.g. outside the defined ranges)
- answers which use a clever idea for making it short

I have taken a snapshot of the answers on 6th of March 2014, 19:45 UTC+1. The analysis is ongoing. Checking all the answers is harder than expected...
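For reference, the following is a minimal, deliberately ungolfed sketch of the selection logic the rules describe: a reduction phase that keeps only the shortest answer(s) per language, followed by a sorting phase that keeps the longest of those survivors. It is offered only as an illustration of the scoring idea (it assumes well-formed, tab-separated records on standard input), not as a competing answer.

```python
import sys
from collections import defaultdict

def winners(lines):
    """Return the input lines that survive both phases of the proposal."""
    entries = []
    for raw in lines:
        raw = raw.rstrip("\n")
        if not raw:
            continue
        lang, chars, link = raw.split("\t")
        entries.append((lang, int(chars), link))

    # Reduction phase: only the shortest answer(s) of each language survive.
    shortest = defaultdict(lambda: 65536)  # lengths are at most 65535
    for lang, chars, _ in entries:
        shortest[lang] = min(shortest[lang], chars)
    survivors = [e for e in entries if e[1] == shortest[e[0]]]

    # Sorting phase: among the survivors, the longest answer(s) win.
    top = max(chars for _, chars, _ in survivors)
    return ["%s\t%d\t%s" % e for e in survivors if e[1] == top]

if __name__ == "__main__":
    print("\n".join(winners(sys.stdin)))
```

Fed the example input above, this prints the two C# 208 lines and the Java 208 line, matching the expected output.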
https://codegolf.stackexchange.com/questions/21718/kind-of-meta-get-the-longest-of-the-shortest-answers/21936
Our Dental Practice Remains Open - National Lockdown Announcement

31st October 2020

This evening Prime Minister Boris Johnson announced a second national lockdown due to the rising number of cases of Covid-19. Old Town Dental Practice will still be open. There are no restrictions with regard to medical treatment and you are actively encouraged to keep your appointments. At our practice we have all the necessary Personal Protective Equipment (PPE) and will continue to follow our high standards in cross infection control.

What next? Please continue to attend your appointments. We have excellent infection control procedures along with a robust triaging policy to ensure the safety of both patients and staff. If you are worried about your attendance please contact us to discuss your concerns and we can always postpone your treatment until a later date. We continue to recommend that patients in the high-risk category delay non-essential treatment until further guidance has been given by the relevant authorities.

When you arrive: It is important that you wear a mask at all times while inside the dental practice and only remove your mask at the request of the dental professional. We will continue to check your temperature and if this is above 37.8C, you will be advised to rebook your treatment and to follow government guidelines. Please do not arrive more than 5 minutes early for an appointment due to social distancing guidelines. As you enter the building you will need to use the alcohol disinfectant rub to disinfect your hands. The treatment rooms will be prepared prior to your appointment with all surfaces disinfected after each patient. The air exchange units will continue to be used throughout the day in each surgery. We have sourced PPE and, unlike the start of the pandemic (March 2020 in the UK), there is no shortage. Please do not be concerned as we continue to maintain high standards of cross infection control.

Stay safe and please contact us if you have any questions. Kind regards, Team @ Old Town Dental Practice

01/07/2020

Dear Patients, After a few months of not being able to see patients face to face due to Covid-19 restrictions, we are glad to announce that our practice will be re-opened to see patients on a phased return to work arrangement. Due to the virus still being around we need to take the necessary precautions to ensure the safety of our patients and staff and follow government guidelines. We have modified our practice and our standard operating procedures to accommodate the current situation. With safety being paramount for us all, we are installing mechanical extractors in each surgery and we will be seeing a reduced number of patients due to extra cross infection control measures to ensure the highest standards for the safety of our patients. The following are highlights of what we have put in place that we believe you should be aware of:

Avoid exposing our patients to risks of Covid-19 infection by:
- Taking a comprehensive history of your complaint to assess if a face to face consultation is required.
If this is not required, we will still give you advice, analgesics or antibiotics remotely where necessary
- Carrying out a risk assessment over the telephone to identify patients who have been exposed to coronavirus infection and all those who are on the shielded patients list or are clinically vulnerable
- Staff will be monitored on a daily basis for Covid-19 symptoms, including a temperature check, and a verbal medical assessment will be taken
- Checking patients' body temperature on arrival
- Closing our toilet facilities as we respectfully ask that you use the toilet before coming into the practice
- Maintaining social distancing as far as practically possible

Reducing time spent and contact with other patients in the reception by:
- Reducing the number of patients in the practice at any one time. This means patients will have to wait outside or in the car awaiting our call/SMS. Only one person will be allowed in the practice (the patient) except where patients cannot consent to treatment by themselves or there is a special need
- We will only be seeing patients who have appointments booked, no walk-in enquiries or appointments. All enquiries will be dealt with by way of telephone or e-mail
- Staggering appointments
- We will ask patients to complete their medical history and consent forms prior to arrival at the practice
- We will endeavour to complete treatment needed on the day to avoid repeat visits to the practice, however this is not always possible. If patients need to return for treatment, we will ask them to sign treatment plan consent remotely and send it back to us prior to their next appointment. Failure to send back a signed treatment plan will mean the treatment will not be done on the day.

Changing our cross-infection control process as follows:
- We will be asking patients to wear face masks before coming into the practice and to keep them on until the dental staff ask them to take them off just before the procedure starts. If you are not able to get a face mask, we will provide one for you at the entrance.
- Patients are to use hand disinfectant as soon as they step into the practice
- Patient coats and bags are to be kept in the box provided just outside the surgery but within the practice. Where possible, we would advise that you leave all unnecessary belongings in the car.
- Wearing of FFP2 or FFP3 face masks during aerosol generating procedures (AGP) by both dentist and nurse
- Wearing full surgical gowns/overalls during AGP by both dentist and dental nurse
- All the above to be done in addition to what we already do routinely, such as wearing face shields/visors, safety goggles or loupes, gloves, and bibs for patients
- We will be asking patients to use mouthwash to rinse prior to the dentist examining their mouth. This mouthwash has shown evidence of viricidal activity
- The surgeries are being cleaned and disinfected after each patient as we have always done; each surgery is left with the extractor fan switched on for 1 hour after the appointment. In addition, the box holding patients' belongings gets wiped down
- Wiping down door handles, the reception desk and surfaces regularly

With our best wishes, The Team at Old Town Dental Practice

ABOUT US

What Can You Expect from Us?
- We will listen to you and try to address your dental needs
- Your safety is our primary concern.
We will follow the latest guidelines in infection control
- We will keep you informed of the latest techniques and use the best materials available
- We will give you a written estimate of any treatment
- We will try not to keep you waiting but if we do, it is because we are giving someone else the care you would expect
- We will continue our active clinical governance & audit program to enhance quality assurance

We are a welcoming team treating patients in a warm and friendly environment. We are dedicated to maintaining the highest standards and accommodating individual needs. All of our Clinicians and Dental Care Professionals have been enhanced CRB checked and are registered with the General Dental Council.

Spread the cost of dental treatment with interest free credit. Our financing plan is a popular option for patients considering dental treatments. It allows you to spread the cost of your treatment over an agreed period that fits within your budget. Call today for a consultation.

A complete dental service

Routine Exams
During a dental exam, the dentist will:
- Evaluate your overall health and oral hygiene
- Evaluate your risk of tooth decay, root decay, and gum or bone disease
- Evaluate your need for tooth restoration or tooth replacement
- Check your bite and jaw for problems
- Demonstrate proper cleaning techniques for your teeth or dentures
- Assess your need for fluoride
- Take dental X-rays or, if necessary, do other diagnostic procedures
- Provide a treatment plan

Dental Hygiene
Dental hygienists play an important part in helping people care for their teeth and are responsible for a lot of preventative dental treatment, cleaning teeth to minimise the risk of gum disease developing. Polish and scale treatments should not hurt, but if there is some discomfort or pain then the hygienists will have anaesthetic gels that can be used to make sure that the treatment is comfortable.

Dental Fillings
A filling is a way to restore a tooth damaged by decay back to its normal function and shape. When a dentist gives you a filling, he or she first removes the decayed tooth material, cleans the affected area, and then fills the cleaned out cavity with a filling material. By closing off spaces where bacteria can enter, a filling also helps prevent further decay. Materials used for fillings include gold, porcelain and a composite resin (tooth-colored fillings).

Teeth Whitening
Teeth can discolour for various reasons. The dentist will recommend the most suitable method based on your oral condition after an in-office examination to establish the cause and nature of your tooth discolouration, as well as provide you with more information on the various types of whitening procedures available, and the duration & frequency of treatment.

Dental Implants
At Old Town Dental Practice we have years of experience in the placing of quality dental implants. Implants are one of the most aesthetic and cost-efficient ways to replace poor or missing teeth. Used to support a single crown, a bridge containing two or more teeth, or a denture covering your entire upper or lower jaw, implants offer a long lasting, effective solution and, most importantly, help to bring back that confident smile.

Dentures & Veneers
Veneers
Veneers are thin shells of tooth coloured material (ceramic or composite) that are "cemented" to the front surface of teeth to improve their cosmetic appearance, and are used to mask stained, misaligned, or chipped and broken teeth.
Veneers are usually placed on front teeth and are generally not suitable in load-bearing areas like back teeth.

Dentures
Dentures are used to replace missing teeth and are removable. There are two types of dentures – complete or 'full' (where all your teeth are missing) or partial dentures (used to replace one or more missing teeth). These can be made of plastic or metal and anchor onto the remaining teeth for stability. They also provide structure to your smile and prevent surrounding facial muscles from sagging. Your dentist will also be able to offer you the 'Flexible Denture', which is based on a new flexible material, making it much easier to wear, more comfortable and able to adapt to slight changes in the shape of your mouth, for example, when you are eating or drinking.

Emergency Dentistry
We provide emergency dental care throughout the year. The emergency dental service is provided to local and national patients and is not restricted. If you require emergency dental care please contact us. Our goal is immediate dental treatment and pain relief, so don't suffer in pain; help is only a call away. Emergency appointments and walk-in appointments are available. We will always endeavour to see you on the same day. We cannot always be flexible with the emergency appointments so please call as soon as possible and we will allocate you an appointment. We can also offer you a sit and wait service.

OUR DENTISTS
Dr. Carl Agyeman GDC: 229332
Dr. Marta Zdrodowska GDC: 250095
Dr. Ashmi Parekh GDC: 271738
Dr. Rakhee Raichoora GDC: 264521

HAPPY PATIENTS
Went to the dentist for the first time in many years here today, after developing a slight fear after a previous dentist was a tad rough in another place I used to go to. I've left asking myself why I was put off so badly. If you have a fear I can certainly recommend Susan as a hygienist, no pain and I've booked to return here. Thanks again Susan for returning a clean smile just before my wedding. P. Ottaway

Fantastic service. I was so scared, not having been to a dentist for a number of years, but all the staff made me feel so welcome. They are all very friendly and understanding. S. James

Gave great, clear advice and undertook the treatment thoroughly. My treatment was quite a challenge and Dr. Marta made me feel at ease. Lorry P.

Best dental clinic ever. Professional dentists and staff. No pain, recommend them. Petko S.

More reviews can be found on Google...
https://www.oldtowndentalpractice.com/home
Ay Carmela! Spain, 1989 (98 minutes). Director: Carlos Saura. Cast: Andrés Pajares, Carmen Maura, Gabino Diego. After two controversial films, El Dorado and The Dark Night, Carlos Saura set about preparing a magnificent adaptation of the play by Sanchis Sinisterra, co-written with Rafael Azcona. Ay Carmela! is a skilful combination of tragedy and comedy in which Saura displays his narrative mastery, approaching the Civil War with a gaze somewhere between bitter and moving. A well-rounded film that swept several Goya awards.

19.20 / All channels

The television networks go all out for the general elections

La 1 will begin its special programme 10N: It's up to you at 19.20, presented by Ana Blanco and Carlos Franganillo, who will follow everything that happens in the elections until 1.30. For its part, Antena 3 will broadcast Public mirror: Special Elections from 19.30, fronted by Susanna Griso and accompanied by Vicente Vallés and Matías Prats. On Cuatro, Pedro Piqueras will coordinate General Elections 10N from 19.45, a programme in which Javier Ruiz and a group of political analysts will also take part. On La Sexta, starting at 19.25, Al Rojo Vivo will follow all the electoral events in detail with García Ferreras as host.

21.25 / La 2

'Essentials', with Ignacio Aldecoa

On the 50th anniversary of the death of Ignacio Aldecoa, Essentials broadcasts the documentary Aldecoa: the escape to paradise, which is based on the writer's book about the Canary Islands. For Aldecoa, his stay in the islands meant the discovery of a territory and a people who helped him ease the gloom of the dictatorship and transported him to another Spain that was still almost untouched.

21.30 / Cinema Ñ

Atraco a las tres. Spain, 1962 (90 minutes). Director: José María Forqué. Cast: Cassen, José Luis López Vázquez, Gracita Morales, Manuel Alexandre. Almost 60 years on, Atraco a las tres remains one of the best comedies of Spanish cinema. A staging as fluid as it is agile, always in search of the precise framing, gives lustre to a plot that parodies perfect-heist films in the mould of the Italian Rufufú, combined with a tender and ironic social gaze. An unrepeatable cast brings the heroes of the story to life, a group of office workers who devise a plan to get the money they believe they deserve.

21.30 / DMAX

A new instalment of 'Road Control'

The Road Control documentary series accompanies different Civil Guard teams in places such as Madrid, Galicia and Andalusia to show their preventive work and their campaigns against speeding and against alcohol and drug use behind the wheel. The series shows episodes such as a fire on the Costa del Sol whose flames come down the hill to the hard shoulder, or the care of a seriously injured motorist in Madrid, when the traffic jam caused by the accident hampers the arrival of the mobile ICU.

22.30 / La 2

Carnage. France, 2011 (80 minutes). Director: Roman Polanski. Cast: Kate Winslet, Jodie Foster, John C. Reilly, Christoph Waltz. After a fight between their children, two couples try to resolve the conflict in a friendly way. Yasmina Reza's play gives Roman Polanski the chance to lock his protagonists in an apartment and follow them with millimetric camera work, while digging into their prejudices, contradictions and cruelty. The filmmaker's staging fills the screen with density and scrutinizes his characters with absolute shamelessness.
https://spainsnews.com/what-to-watch-today-on-tv-sunday-november-10-2019-television/
What a Difference a Season Makes

Anybody recognize this photo? It's the same location as the one on my home page. Victoria Park, Truro's world famous jewel. No lolling around on rustic park benches today though. Couldn't even find the rustic park benches. They were buried under a couple of meters of snow. In fact the trails themselves were a meter or more above the ground. One sure found that out in a hurry if they stepped off the trail. Sank right to the knee. But it was a beautiful day, sunny and around 0C and no wind. Lovely day for a walk if you watched your footing carefully, and Marilyn and I really enjoyed it. We weren't the only ones either. Lots of families out with their kids, some couples and even a few old folks like us. Plenty of dogs too, from the large to the vertically challenged. No cats though. Cats don't seem to be great fanatics for snowy walks in the park. Victoria Falls was a bit of a disappointment but no doubt it will be back in all its glory when all this white stuff melts in the spring. I don't know if it will be possible to get near it though once the trails soften up. And how does one follow up all this fresh air and exercise? Why, an Angus burger of course. Not quite as exciting as the snow-packed trails and nature's beauty but almost as satisfying.
https://williamhgould.me/what-a-difference-a-season-makes/
A PC is a device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Contemporary computers have the ability to follow generalized sets of operations, called programs.

Programming. Computer and information research scientists design new programming languages that are used to write software. The new languages make software writing more efficient by improving an existing language, such as Java, or by making a specific aspect of programming, for example image processing, easier.

Computer systems analysts, sometimes called systems designers, study an organization's current computer systems and procedures, and design solutions to help the organization operate more efficiently and effectively. They bring business and information technology (IT) together by studying the needs and limitations of each.

Information science. Computer and information research scientists write algorithms that are used to detect and analyze patterns within very large datasets. They improve methods to sort, manage, and display information. Computer scientists build algorithms into software packages that make the data easier for analysts to use. For example, they may create an algorithm to analyze a very large set of medical data in order to find new ways to treat diseases. They may also search for patterns in traffic data to help clear accidents faster. Some computer scientists may work on teams with electrical engineers, computer hardware engineers, and other specialists on multidisciplinary projects. Computer and information research scientists create and improve computer software and hardware.

The Computer Technology Computer and Graphics Technology program is designed mainly for students seeking employment with organizations that use computers to process, design, manage, and communicate information.
http://cadastru-office.ro/history-of-computer-technology/
"Finding Top-k Dominance on Incomplete Big Data Using Map Reduce Framew" by Payam Ezatpoor, Justin Zhan et al. Incomplete data is one major kind of multi-dimensional dataset that has random-distributed missing nodes in its dimensions. It is very difficult to retrieve information from this type of dataset when it becomes large. Finding top-k dominant values in this type of dataset is a challenging procedure. Some algorithms are present to enhance this process, but most are efficient only when dealing with small incomplete data. One of the algorithms that make the application of top-k dominating (TKD) query possible is the Bitmap Index Guided (BIG) algorithm. This algorithm greatly improves the performance for incomplete data, but it is not designed to find top-k dominant values in incomplete big data. Several other algorithms have been proposed to find the TKD query, such as Skyband Based and Upper Bound Based algorithms, but their performance is also questionable. Algorithms developed previously were among the first attempts to apply TKD query on incomplete data; however, these algorithms suffered from weak performance. This paper proposes MapReduced Enhanced Bitmap Index Guided Algorithm (MRBIG) for dealing with the aforementioned issues. MRBIG uses the MapReduce framework to enhance the performance of applying top-k dominance queries on large incomplete datasets. The proposed approach uses the MapReduce parallel computing approach involving multiple computing nodes. The framework separates the tasks between several computing nodes to independently and simultaneously work to find the result. This method has achieved up to two times faster processing time in finding the TKD query result when compared to previously proposed algorithms. Ezatpoor, P., Zhan, J., Wu, J. M., Chiu, C. (2018). Finding Top-k Dominance on Incomplete Big Data Using Map Reduce Framework. IEEE Access, 6 7872-7887.
https://digitalscholarship.unlv.edu/compsci_fac_articles/70/
This article considers how ransomware is evolving globally and we call out what could and should be done about it.

Critical infrastructure assets are high value targets for state-based cyber espionage and asymmetric warfare, and increasingly, active ransomware criminal groups. Aided by rapid digitisation, 2020 was characterised by a significant increase in cyber-criminal activity, in particular ransomware attacks. Research indicates a seven-fold rise in ransomware attacks over the first half of 2020. Indeed, all our essential services are increasingly at risk, as a successful cyber attack on critical infrastructure can:
- disrupt operations and the supply of electricity, oil, gas, water, waste management, and transport
- further threaten the safety of workers and citizens as dependent services, including emergency services and health facilities, suffer shortages or are compromised as collateral damage
- impact revenue, result in reputational damage, and lead to litigation or regulatory consequences arising from the service outage
- bring an economy to a standstill in a serious and sustained scenario, due to the domino effects described earlier, and the possibility of public disturbance and civil unrest
- be leveraged to weaken a country's government and essential services in preparation for a conventional military attack by another nation-state.

The ransomware landscape

Numerous other incidents in the Asia Pacific region have increased both public and private awareness of the domino effect of a cyber attack on a critical industry, and the need for both preventative measures and robust recovery plans to avoid and mitigate local disasters. In 2020, a report from cybersecurity company Lumu found that more than half of all companies in the region were affected by ransomware [1]. According to Cybersecurity Ventures, there will be a ransomware attack on businesses across the world every 14 seconds in 2021 [2]. There is no escaping this threat, and it is becoming more and more potent.

Why are ransomware attacks so successful?

By denying access to core systems, ransomware can cause an organisation to run its operations in a highly degraded state. In addition to the growing sophistication of ransomware groups, changing expectations have increased the risk to critical infrastructure. To meet stakeholders' demands for simplicity, efficiency and value while meeting budget constraints, organisations increasingly embrace digitisation, including converging IT with Operational Technology (OT) and leveraging cloud and Industrial Internet of Things (IIoT) technologies. In addition, the pandemic forced many organisations to quickly enable remote access for their OT personnel. These changes result in OT environments being more exposed to increasingly sophisticated cyber threats.

Ten questions to move forward

Critical infrastructure organisations need to create transparency around key cyber risks such as ransomware, so that leadership, Boards and the C-suite can better monitor and address them, and maintain safety and reliability while modernising their operations. We've compiled ten key questions to help you kickstart or re-evaluate your efforts to protect critical operational processes and systems against the threat of ransomware. Read our full report to find out more.

1. R. Dallon Adams, "Ransomware attacks by industry, continent, and more," TechRepublic, October 12, 2020.
2.
Steve Morgan, “Global Ransomware Damage Costs Predicted To Reach $20 Billion (USD) By 2021,” Cybersecurity Ventures, October 21, 2019.

Published: March 2021
https://www2.deloitte.com/au/en/pages/risk/articles/ransomware-in-critical-infrastructure.html?icid=wn_ransomware-in-critical-infrastructure
While cross-site scripting (XSS) is a website vulnerability that’s existed since the 1990s, XSS is still prominent today. Cross-site scripting is one of the most commonly detected vulnerabilities in Verizon's 2020 Data Breach Investigations Report and has been listed as one of the Open Web Application Security Project's top 10 vulnerabilities since its first publication. Here's a closer look at the challenge of how to mitigate cross-site scripting. What is cross-site scripting? With cross-site scripting, an attacker injects their own code onto a legitimate website; the code then gets executed when the site is loaded onto the victim's browser. How does cross-site scripting work? XSS works because web browsers inherently trust that the code behind the websites they load will be "normal" and secure. In popular XSS attacks, malicious code is either added to the end of a URL or posted directly onto a page that displays user-generated content. These attacks succeed because vulnerabilities are widespread and can happen whenever a web application fails to validate or encode user input. In many cases, the unsuspecting user's browser will trust—and therefore execute—the malicious script. What is cross-site scripting's aim? An XSS attack's primary goal is to take over access to the user's resources or data. With the right access, the attacker can read data, impersonate the user, intercept confidential data or even make website changes. What are the risks of cross-site scripting? The risks of an XSS are dangerous yet straightforward. A successful attack allows the attacker to perform all the available actions of the target user in a web application—including sending messages, capturing keystrokes or conducting financial transactions. XSS scripts may access cookies or session tokens or other sensitive browser data. How to mitigate cross-site scripting risks Preventing cross-site scripting isn't a simple fix that you can turn on or off. Depending on the web application, protection strategies will differ. That said, it's important to consider the following strategies for how to mitigate cross-site scripting. - Whenever possible, prohibit HTML code in inputs. Preventing users from posting HTML code into form inputs is a straightforward and effective measure. - Validate inputs. If you're going to accept form inputs, validating the data to ensure it meets specific criteria will be helpful - Sanitize data. Similar to validation, sanitizing occurs after data has been posted but before it is executed. Look for online tools like HTMLSanitizer to sanitize HTML code online for XSS vulnerabilities. - Use a web application firewall (WAF). Rules can be created on a WAF to specifically address XSS by blocking abnormal server requests. A robust WAF should be a key component of your organization's security strategy, as it can also prevent SQL injection attacks, distributed denial-of-service attacks and other common threats. When it comes to how to mitigate cross-site scripting, a vulnerability assessment or penetration test (or preferably both) can be incredibly helpful to identify not only XSS but also any other vulnerabilities within your network. Learn how to mitigate cross-site scripting with Verizon's Web Application Firewall solution.
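To make the encoding idea above concrete, here is a small sketch (in Python, assuming the value ends up in an HTML body context) that escapes user-supplied text before it is rendered. Escaping is only one layer: attribute, URL, and JavaScript contexts need their own encoding rules, and it complements rather than replaces validation, a WAF, and a Content Security Policy. The render_comment helper is a made-up name used purely for illustration.

```python
import html

def render_comment(user_input: str) -> str:
    # Encode characters that are significant in HTML so injected markup
    # is displayed as text instead of being executed by the browser.
    safe = html.escape(user_input, quote=True)
    return '<p class="comment">{}</p>'.format(safe)

# An attacker-submitted script tag is rendered inert:
print(render_comment('<script>alert("xss")</script>'))
# <p class="comment">&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```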
https://www.verizon.com/business/resources/articles/s/how-to-mitigate-cross-site-scripting/
Clarke MJ, et al. Global Spine J. 2017.

OBJECTIVE: Surgical decompression and reconstruction of symptomatic spinal metastases have improved the quality of life in cancer patients. However, most data has been collected on cohorts of patients with mixed tumor histopathology. We systematically reviewed the literature for prognostic factors specific to the surgical treatment of prostate metastases to the spine.

METHODS: A systematic review of the literature was conducted to answer the following questions: Question 1. Describe the survival and functional outcomes of surgery or vertebral augmentation for prostate metastases to the spine. Question 2. Determine whether overall tumor burden, Gleason score, preoperative functional markers, and hormonal naivety favor operative intervention. Question 3. Establish whether clinical outcomes vary with the evolution of operative techniques.

RESULTS: A total of 16 studies met the preset inclusion criteria. All included studies were retrospective series with a level of evidence of IV. Included studies consistently showed a large effect of hormone-naivety on overall survival. Additionally, studies consistently demonstrated an improvement in motor function and the ability to maintain/regain ambulation following surgery, resulting in a moderate strength of recommendation. All other parameters were of insufficient or low strength.

CONCLUSIONS: There is a dearth of literature regarding the surgical treatment of prostate metastases to the spine, which represents an opportunity for future research. Based on existing evidence, it appears that the surgical treatment of prostate metastases to the spine has consistently favorable results. While no consistent preoperative indicators favor nonoperative treatment, hormone-naivety and high Karnofsky performance scores have positive effects on survival and clinical outcomes.
https://neurocirurgiabr.com/2017/11/26/systematic-review-of-the-outcomes-of-surgical-treatment-of-prostate-metastases-to-the-spine-2/
execute concurrently across nodes on different replicas. Replication, however, raises the question of how threads coordinate access to the replicas and maintain them in sync. For efficiency, NR uses different mechanisms to coordinate threads within nodes and across nodes. At the highest level, NR leverages the fact that coordination within a node is cheaper than across nodes.

Within each node, NR uses flat combining (a technique from concurrent computing [5]). Flat combining batches operations from multiple threads and then executes the batch using a single thread, called the combiner. The combiner is analogous to a leader in distributed systems. In NR, we batch operations from threads in the same node, using one combiner per node. The combiner of a node is responsible for checking if threads within the node have any outstanding update operations, and then it executes all such operations on behalf of the other threads. Which thread is the combiner? The choice is made dynamically among threads within a node that have outstanding operations. The combiner changes over time: it abdicates when it finishes executing the outstanding updates, up to a maximum number. Batching can gather many operations, because there are many threads per node (e.g., 28 in our machine). Batching in NR is advantageous because it localizes synchronization within a node.

Across nodes, threads coordinate through a shared log (a technique from distributed systems [1]). The combiner of each node reserves entries in the log, writes the outstanding update operations to the log, brings the local replica up-to-date by replaying the log if necessary, and executes the local outstanding update operations.

Node Replication applies an optimization to read-only operations (operations that do not change the state of the data structure). Such operations execute without going through the log, by reading directly the local replica. To ensure consistency (linearizability [8]), the operation must ensure that the local replica is fresh: the log must be replayed at least until the last operation that completed before the read started.

We have considered an additional optimization, which dedicates a thread to run the combiner for each node; this thread replays the log proactively. This optimization is sensible for systems that have many threads per node, which is an ongoing trend in processor architecture. However, we have not employed this optimization in the results we present here.

The techniques above provide a number of benefits:

• Reduce Cross-Node Synchronization and Contention: NR appends to the log without acquiring locks; instead, it uses the atomic Compare-And-Swap (CAS) instruction on the log tail to reserve new entries in the log. The CAS instruction incurs little cross-node synchronization
While NR can provide any concurrent data structures, it does not automatically convert entire single-threaded applications to multiple threads. Applications have a broad interface, unlike data structures, so they are less amenable to black-box methods. To work with an arbitrary data structure, NR expects a single-threaded implementation of the data structure provided as four generic methods: Create() → ptr Execute(ptr, op, args) → result IsReadOnly(ptr, op) → Boolean Destroy() The Create method creates an instance of the data structure, returning its pointer. The Execute method takes a data structure pointer, an operation, and its arguments; it executes the operation on the data structure, returning the result. The method must produce side effects only on the data structure and it must not block. Operation results must be deterministic, but we allow nondeterminism inside the operation execution and the data structure (e.g., levels of nodes in a skip list). Similarly, operations can use randomization internally, but results should not be random (results can be pseudorandom with a fixed initial seed). The IsReadOnly method indicates if an operation is read-only; we use this information for read-only optimizations in NR. The Destroy method deallocates the replicas and the log. NR provides a new method ExecuteConcurrent that can be called concurrently from different threads. For example, to implement a hash table, a developer provides a Create method that creates an empty hash table; an Execute method that recognizes three op parameters (insert, lookup, remove) with the args parameter being a key-value pair or a key; and a IsReadOnly method that returns true for op=lookup and false otherwise. The Execute method implements the three operations of a hash table in a single-threaded setting (not thread-safe). NR then provides a concurrent (thread-safe) implementation of the hash table via a new method ExecuteConcurrent. For convenience, the developer may subsequently write three simple wrappers (insert, lookup, remove) that invoke ExecuteConcurrent with the appropriate op parameter.
https://mags.acm.org/communications/december_2018/?pg=101
Our whole school e-safety policy can be viewed below. Staying safe when accessing the internet is becoming increasingly important. Each term we have a whole-school e-Safety objective which provides focus for development and discussion within classes, at appropriate levels for each individual's understanding. This is the whole school e-safety objective for the current term:

The list of e-safety objectives in full can be viewed by clicking on the document below.

Whole School E-Safety Objectives Overview

We regularly conduct individual and small group sessions with the students we identify as in need of more detailed input and guidance, to teach the importance of staying safe online and strategies to support this.

E-Safety at Home

To help both parents and our pupils when using the internet at home, our Assistant Head Cathie has put together a list of links to support information, which is particularly important as many pupils will be using new platforms and websites for online learning. Please use these links for information on keeping your child safe online including:

- Thinkuknow provides advice from the National Crime Agency (NCA) on staying safe online.
- Parent info is a collaboration between Parentzone and the NCA providing support and guidance for parents from leading experts and organisations.
- Childnet offers a toolkit to support parents and carers of children of any age to start discussions about their online life, to set boundaries around online behaviour and technology use, and to find out where to get more help and support.
- Internet Matters provides age-specific online safety checklists, guides on how to set parental controls on a range of devices, and a host of practical tips to help children get the most out of their digital world.
- London Grid for Learning has support for parents and carers to keep their children safe online, including tips to keep primary aged children safe online.
- Net-aware has support for parents and carers from the NSPCC and O2, including a guide to social networks, apps and games.
- Let's Talk About It has advice for parents and carers to keep children safe from online radicalisation.
- UK Safer Internet Centre has tips, advice, guides and other resources to help keep children safe online, including parental controls offered by home internet providers and safety tools on social networks and other online services.

If you require any further information or guidance then please do not hesitate to contact your child's class teacher using the usual channels.

Additional Links:

To help support safer internet use at home, 'The Safety Detectives' have analysed a range of antivirus software to assess the level of parental control support available. Their review can be found using the link below.

Safety Detectives - Antivirus Parental Controls Review

Vodafone also offer a monthly 'digital parenting magazine' which provides lots of useful updates regarding specific websites and general tips to help support your child's safety online. A link to their page can be found below.

Vodafone - Digital Parenting Information

Alternatively, please take a look at some of the links below which may provide further information or guidance regarding e-safety for discussion either at school or at home.
https://www.harfordmanor.norfolk.sch.uk/e-Safety/
SALT LAKE CITY — Salt Lake City is considering ranked-choice voting for this year’s city elections. At its meeting Tuesday night, the city council decided to wait until April 20 to figure out whether it will use ranked-choice voting. RELATED: Utah Legislature passes bill to allow more cities to experiment with ranked-choice voting In a traditional election, voters choose one candidate. But under ranked-choice voting, you list them in order of preference from first to least desirable. The council talked about costs, how the process works and public education efforts related to the alternative. RELATED: Herbert defends vote-by-mail system, endorses ranked-choice voting “Is there enough time to raise awareness and educate the public about changing the election method to ranked choice voting? And note that the time to educate voters significantly changes whether there is or is not a primary,” said Benjamin Luedtke, an economic and public policy analyst. Former Governor Gary Herbert has endorsed ranked-choice voting and several cities are testing it.
https://www.fox13now.com/news/local-news/slc-considers-ranked-choice-voting
---
author:
- 'I.Klebanov, P.Gritsay, N.Ginchitskii'
title: 'Exact solution of the Percus-Yevick integral equation for collapsing hard spheres'
---

> *Chelyabinsk State Pedagogical University, Department of Mathematics, 454080 Chelyabinsk, Russia*
>
> By the Wertheim method, the exact solution of the Percus-Yevick integral equation for a system of particles interacting through the repulsive step potential (collapsing hard spheres) is obtained. On the basis of this solution the equation of state for the repulsive step potential is constructed, and it is found that the Percus-Yevick equation does not show a phase transition for collapsing hard spheres.\
> PACS: 61.20.Ja,64.70.Ja

[2]{}

In 1963 Wertheim and Thiele independently obtained the exact analytical solution of the Percus-Yevick integral equation for hard spheres [@Wertheim:63]-[@Thiele:63]. This solution is so far the only rigorous analytical solution of a non-linear integral equation for the pair distribution function. The present work shows that the Wertheim method can be used to express the solution of the Percus-Yevick equation in closed analytical form for a more complicated interaction potential, the repulsive step potential [@Ryzhov:03]\
$$V(r)=
\begin{cases}
\infty, &r<a\\
V_{0},&a\leq r\leq b\\
0,&r>b.
\end{cases}$$ where $V_{0}$ is a positive constant and $r$ is the distance between particles (collapsing hard spheres). Such a potential has found wide application in modeling phase transitions in liquids under high pressure, isostructural phase transitions in crystals, transformations in colloid systems, etc., by means of molecular dynamics and within the framework of thermodynamic perturbation theory [@Ryzhov:03; @Mikheenkov:04].\
Let us consider the Percus-Yevick equation $$\begin{split}
n_{2}(r)=1-n&\!\int\left[e^{\beta V(\vec{s})}-1\right]n_{2}(\vec{s})\times\\
&\times\left[n_{2}(\vec{r}-\vec{s})-1\right]\vec{ds}
\end{split}$$ where $n_{2}(r)$ is the pair distribution function, $\beta\!\!=\!\!\dfrac{1}{kT}$, and $n$ is the particle density. Passing to bipolar coordinates and integrating over the angular variable for the repulsive step potential, we obtain $$\begin{split}
&h(r)=Ar-2\pi n\int\limits^{a}_{0}h(s)ds\int\limits^{r+s}_{\left|r-s\right|}h(t)e^{-\beta V(t)}dt\,-\\
&-2\pi n(1-e^{-\beta V_{0}})\int\limits^{b}_{a}h(s)ds\int\limits^{r+s}_{\left|r-s\right|}h(t)e^{-\beta V(t)}\,dt
\end{split}$$ where $$\begin{split}
h(r)=rn_{2}(r)&e^{\beta V(r)}=\\=&
\begin{cases}
-rC(r),&r<a\\
\dfrac{-rC(r)}{1-e^{-\beta V_{0}}},&a\leq r\leq b\\
n_{2}(r),&r>b
\end{cases}
\end{split}$$ $C(r)$ is the direct correlation function.
In the Percus-Yevick approximation $$C(r)=(1-e^{\beta V(r)})n_{2}(r)$$ $n_{2}(r)=0$ at $r<a$, $C(r)=0$ at $r>b$, and $e^{-\beta V(t)}=e^{-\beta V_{0}}\Theta(t-a)\Theta(b-t)+\Theta(t-b)$; $\Theta(x)$ is the Heaviside step function; $$\begin{split}
A=1+4\pi &n\int\limits^{a}_{0}\!\!s\,h(s)ds+\\+
&4\pi n(1-e^{-\beta V_{0}})\int\limits^{b}_{a}\!\!s\,h(s)ds
\end{split}$$ Taking the one-sided Laplace transform of (3), $\hat L(h(r))=\int\limits^{\infty}_{0}h(r)e^{-zr}dr$, and changing the order of integration over $r$ and $t$, we finally obtain: $$\psi(s)=\frac{\dfrac{A+\gamma z\delta(z)}{z^{2}}-L(z)}{1-\dfrac{2\pi n}{z}\left[L(z)-L(-z)\right]}$$ where $$\begin{split}
&\psi(z)=\hat L(rn_{2}(r))=G(z)+e^{-\beta V_{0}}K(z)\\
&L(z)=\hat L(-rC(r))=F(z)+(1-e^{-\beta V_{0}})K(z)\\
&F(z)=\int\limits^{a}_{0}h(s)e^{-zs}ds\\
&K(z)=\int\limits^{b}_{a}h(s)e^{-zs}ds\\
&G(z)=\int\limits^{\infty}_{b}h(s)e^{-zs}ds\\
&\delta(z)=\alpha(z)-\alpha(-z)\\
&\gamma=2\pi\,ne^{-\beta V_{0}}(1-e^{-\beta V_{0}})
\end{split}$$ For further investigation we introduce the following function $$H(z)=z^{4}\psi(z)\left[\frac{A+\gamma z\delta(z)}{z^{2}}-L(-z)\right]$$ Arguments similar to those in [@Wertheim:64] show that $$H(z)=\lambda_{1}+\lambda_{2}z^{2},$$ where $\lambda_{1},\lambda_{2}$ are constants. Eliminating $\psi(z)L(-z)$ from (5) and (6) and inverting the Laplace transform in the region $r\leq b$, we obtain the explicit expression for $h(r)$ $$h(r)=-(C_{0}+C_{1}r+C_{2}r^{2}+C_{3}r^{4})$$ where $$\begin{split}
&C_{0}=2\pi ne^{-\beta V_{0}}(\lambda_{1}k_{2}+k_{0}(\gamma\delta_{1}-l_{0}))\\
&C_{1}=\lambda_{1}(-1+2\pi ne^{-\beta V_{0}}k_{1})\\
&C_{2}=\pi n(-\lambda_{2}+\lambda_{1}k_{0}e^{-\beta V_{0}})\\
&C_{3}=-\frac{\pi n\lambda_{1}}{12}
\end{split}$$ and the constants $\lambda_{1},\lambda_{2},k_{0},k_{1},k_{2},l_{0}$ and $\delta_{1}$, as functions of the density, the temperature and the potential parameters $V_{0},a,b$, can be obtained from the system of equations $$\begin{split}
&\lambda_{1}=A=1+4\pi n\int\limits^{a}_{0}rh(r)dr-4\pi n(1-e^{-\beta V_{0}})k_{1}\\
&\lambda_{2}=2\gamma\delta_{1}-2l_{0}-\frac{2\pi n}{3}\times\\
&\times\left[\int\limits^{a}_{0}r^{3}h(r)dr+(1-e^{-\beta V_{0}})\int\limits^{b}_{a}r^{3}h(r)dr\right]\\
&k_{m}=\frac{(-1)^{m}}{m!}\int\limits^{b}_{a}r^{m}h(r)dr,\,\,\,m=0,1,2,\\
&l_{0}=\int\limits^{a}_{0}h(r)dr+(1-e^{-\beta V_{0}})k_{0}\\
&\delta_{1}=\int\limits^{b}_{a}\int\limits^{b}_{s}h(s)h(t)(t-s)dtds
\end{split}$$ Inserting (7) into (5), we obtain the Laplace image of $rn_{2}(r)$.\
The equation of state of the system of collapsing hard spheres can be written as $$\begin{split}
&\frac{P}{nkT}=1-\frac{n}{6kT}\int \!\!rn_{2}(r)\left(\frac{dV}{dr}\right)\vec{dr}=\\
&=1+\frac{2\pi n}{3}\left[e^{-\beta V_{0}}a^{3}\tau(a)+(1-e^{-\beta V_{0}})b^{3}\tau(b)\right]
\end{split}$$ where $\tau(r)=h(r)r^{-1}$, and the inverse isothermal compressibility is $$\begin{split}
&\left(\frac{\partial P}{\partial n}\right)_{T}\frac{1}{kT}=\\
&=1-n\int \!\!C(r)\vec{dr}=\lambda_{1}(n,T)
\end{split}$$ It can be seen from (10) and (8) that if $V_{0}>0$ $$\left(\frac{\partial P}{\partial n}\right)_{T}>0$$ i.e. the Van der Waals loop is absent from the isotherm. This result coincides with those obtained in the numerical analysis of the equation of state (9).\
Thus, the Percus-Yevick equation for the system of collapsing hard spheres admits a solution in closed analytical form, which goes over into the classical Wertheim-Thiele solution for hard spheres when $a=b$.
As in the case of hard spheres, the Percus-Yevick solution does not show a phase transition in the system of collapsing hard spheres.

[9]{} *M.S.Wertheim*, Phys. Rev. Lett. [**10**]{}, 321 (1963).\
*M.S.Wertheim*, J. Math. Phys. [**5**]{}, 643 (1964).\
*E.Thiele*, J. Chem. Phys. [**38**]{}, 1959 (1963).\
*Valentin N.Ryzhov and Sergei M.Stishov*, Phys. Rev. E [**67**]{}, 010201(R) (2003).\
*V.Mikheenkov, A.F.Barabanov, L.A.Maksimov*, JETP Lett. [**80**]{}, 766 (2004).\
Gingivitis is an often painful inflammation of the gums, or gingiva. It typically occurs due to plaque buildup on the teeth. Gingivitis is a common condition.

Key points about gingivitis include:
- Bacterial buildup around the teeth is the most common cause of gingivitis.
- The main symptom of gingivitis is red, puffy gums that may bleed when a person brushes their teeth.
- Gingivitis often resolves with good oral hygiene, such as longer and more frequent brushing, and regular flossing. In addition, an antiseptic mouthwash may help.

This article details the types, causes, and symptoms of gingivitis. It also discusses what a person can do to treat and prevent gingivitis.

Gingivitis is a non-destructive type of periodontal disease. There are two main types of gingivitis.

Dental plaque-induced gingivitis results from the buildup of bacterial plaque around the teeth and is the most common form. In contrast, non-plaque-induced gingival lesions can result from a bacterial, viral, or fungal infection. Allergic reactions, illnesses, and reactions to foreign bodies, such as dentures, can also cause these lesions.

Both types of gingivitis can progress to periodontitis if a person does not treat them adequately. Periodontitis is a more severe condition and can lead to further complications, such as loss of teeth.

The most common cause of gingivitis is the accumulation of bacterial plaque between and around the teeth. Dental plaque is a biofilm that accumulates naturally on the teeth. It occurs when bacteria attach to the smooth surface of a tooth. This plaque can harden into calculus, or tartar, near the gums at the base of the teeth. This has a yellow-white color. Only dental professionals can remove calculus.

Buildup of plaque and tartar can trigger immune responses that lead to gingival or gum tissue destruction. Eventually, it may lead to further complications, including the loss of teeth. Learn more about the differences between plaque and tartar here.

Other causes and risk factors

Several factors can increase the risk of developing gingivitis:
- Changes in hormones: This may occur during puberty, menopause, the menstrual cycle, and pregnancy. The gums might become more sensitive, raising the risk of inflammation.
- Some diseases: Cancer, diabetes, and HIV are linked to a higher risk of gingivitis.
- Drugs: Medications that reduce saliva production can impact a person's oral health. Dilantin, an epilepsy medication, and angina drugs can also cause abnormal growth of gum tissue, increasing the risk of inflammation.
- Smoking: Regular smokers more commonly develop gingivitis than non-smokers.
- Age: The risk of gingivitis increases with age.
- Family history: Those whose parent or parents have had gingivitis have a higher risk of developing it too.

The signs and symptoms of gingivitis might include:
- gum inflammation and discoloration
- tender gums that may be painful to the touch
- bleeding from the gums when brushing or flossing
- halitosis, or bad breath
- receding gums
- soft gums

However, in mild cases of gingivitis, there may be no discomfort or noticeable symptoms.

A dentist or oral hygienist will check for symptoms, such as plaque and tartar in the oral cavity. They may also order tests to check for signs of periodontitis. This can be done by X-ray or periodontal probing, using an instrument that measures pocket depths around a tooth.

If diagnosis happens early and treatment is prompt and proper, a person may be able to treat gingivitis at home with good oral hygiene. Learn more about home remedies for gingivitis here. However, if symptoms do not resolve, or the condition affects a person's quality of life, they may wish to seek professional help.
Treatment often involves care by a dental professional and follow-up procedures carried out by the patient at home. Professional dental care A dental professional may initially carry out scaling. This is so they can remove excess plaque and tartar. This can be uncomfortable, especially if the tartar buildup is extensive or the gums are sensitive. Once they have cleaned a person’s teeth, the dental professional will explain the importance of oral hygiene and how to brush and floss effectively. They may recommend follow-up appointments to monitor a person’s plaque and tartar. This will allow the dental professional to catch and treat any recurrences quickly. Fixing any damaged teeth also contributes to oral hygiene. Some dental problems, such as crooked teeth, badly fitted crowns, or bridges, may make it harder to remove plaque and tartar properly. They can also irritate the gums. A person may be able to prevent gingivitis at home by practicing regular good oral hygiene. This includes: - brushing teeth at least twice a day - using an electric toothbrush - flossing teeth at least once a day - regularly rinsing the mouth with an antiseptic mouthwash Treating gingivitis and following the dental health professional’s instructions can typically prevent complications. However, gum disease can spread and affect tissue, teeth, and bones if left untreated. Complications include: - abscess or infection in the gingiva or jaw bone - periodontitis — a more serious condition that can lead to loss of bone and teeth - recurrent gingivitis - trench mouth, where bacterial infection leads to ulceration of the gums Gingivitis is a common type of gum disease. It is the result of bacterial buildup on the teeth. This buildup irritates surrounding gum tissue and can cause the gums to become inflamed, discolored, and painful to the touch. Most people can treat gingivitis with regular good oral hygiene practices. Regular dental checkups can help to identify signs of gum disease and treat them in good time.
https://www.medicalnewstoday.com/articles/241721
The authors confirm that all data underlying the findings are fully available without restriction. All relevant data are within the paper and its Supporting Information files. Introduction {#s1} ============ The recent Global Burden of Disease Study (GBD 2010) is the largest systematic assessment of disease and injury-specific epidemiology undertaken since its 1990 predecessor [@pone.0110208-XX1]. The GBD methodology incorporates the years of life lost through premature mortality (YLL) and years lived with disability (YLD) into a single metric (the Disability Adjusted Life Year (DALY)). Calculating disease burden worldwide and for 21 regions for 1990, 2005, and 2010 with methods to enable meaningful comparisons over time, it has delivered much needed information and key global health messages. The most notable of these is that the world has undergone a rapid health transition over the 1990 to 2010 period -- populations have aged, infectious and childhood diseases are making way for non-communicable and chronic disorders, and disease burden is increasingly defined by disability rather than premature death [@pone.0110208-Institute1]. The health transitions observed globally in GBD 2010 were not as apparent across Sub-Saharan Africa (SSA). Improvements in life-expectancies have trailed other regions, largely due to the HIV/AIDS epidemic, maternal deaths, and child mortality caused by infectious diseases and malnutrition. However these trends are at a turning point and the next 40 years are predicted to see significant reductions in child mortality and lowering mortality from HIV/AIDS and malaria, signalling the inevitable health transitions we have seen across the rest of the globe and a surge in the impact of chronic and non-communicable diseases, defined by long-term disability [@pone.0110208-Lozano1]--[@pone.0110208-Ortblad1]. Mental and substance use disorders are the leading cause of disability, accounting for 23% of all disability-associated burden (years lived with a disability, YLD) globally and 19% in Sub-Saharan Africa in 2010 [@pone.0110208-Institute2], [@pone.0110208-Whiteford1]. Major depressive disorder (MDD) makes the largest contribution, accounting for approximately 40% of YLDs in this group, yet the very limited mental health services in Sub-Saharan Africa are frequently restricted to tertiary psychiatric facilities treating those with acute psychoses, and to humanitarian responses to traumatic events such as gender-based violence and societal conflict [@pone.0110208-World1]. For most, the idea of a non-communicable disease epidemic in Sub-Saharan Africa may seem intangible, or at the very least, too far off to contemplate -- particularly given the fact that communicable, maternal, neonatal and nutritional diseases still account for approximately 68% of DALYs in the region [@pone.0110208-Institute2]. So, how imminent is this health transition, and what might it look like for Sub-Saharan Africa? Using UN population data [@pone.0110208-United1] and the findings of GBD 2010, we estimate the change in burden of mental and substance use disorders in context of the broader epidemiological transition which will take place in Sub-Saharan Africa from 2010 to 2050, and model the predicted increase in the mental health workforce required to meet future needs. Methods {#s2} ======= GBD 2010 {#s2a} -------- As part of the GBD 2010 study [@pone.0110208-Murray1], burden of disease estimates were produced for 20 mental and substance use disorders [@pone.0110208-Whiteford1]. 
In brief, this process consisted of systematic reviews for empirical epidemiological data for each disorder as described by the Diagnostic and Statistical Manual of Mental Disorders (DSM) and/or the International Classification of Diseases (ICD) [@pone.0110208-American1], [@pone.0110208-World2]. Internally consistent epidemiological models were created for each disorder using DisMod-MR, a Bayesian meta-regression tool. Adjustments to the data were made via various methods including the application of covariates and severity distributions, and comorbidity adjustments were applied to sequela-specific disability weights in the calculation of YLDs (more detailed information can be found in Murray et al 2012 [@pone.0110208-Murray1] and Whiteford et al 2013 [@pone.0110208-Whiteford1]). For the GBD 2010, YLDs per person from a sequela (e.g. severe major depression) are equal to the prevalence of the sequela multiplied by the disability weight for the health state associated with that sequela. YLDs for a disease or injury are the sum of the YLDs for each sequela (e.g., mild, moderate and severe depression) associated with the disease or injury [@pone.0110208-Vos1]. Disability (YLD) estimates 2010 to 2050 {#s2b} --------------------------------------- YLD predictions for each 10-year period from 2010 to 2050 were calculated for the East, West, Central and Southern Sub-Saharan African regions (as defined by the Global Burden of Disease Study; refer to [appendix S1](#pone.0110208.s001){ref-type="supplementary-material"} for a list of countries and regions). We applied UN population data forecasts (see [appendix S1](#pone.0110208.s001){ref-type="supplementary-material"} for UN population data) to 2010 age-specific YLD rates ascertained from the GBD 2010 to calculate changes in YLDs for mental and substance use disorders in the context of communicable (includes communicable, maternal, neonatal and nutritional diseases) and non-communicable diseases for each Sub-Saharan region and 10-year time period (see [appendix S1](#pone.0110208.s001){ref-type="supplementary-material"} for full list of mental and substance use disorders). By using 2010 YLD rates we assume that age-specific prevalence and disability weights will remain constant throughout the time period. It has been shown that prevalence and disability weights are unlikely to change significantly over time for the majority of mental disorders; however, there will likely be changes for substance use disorders and the implications of this assumption are addressed in the discussion [@pone.0110208-Whiteford1]. Service requirements for 2010 and 2050 {#s2c} -------------------------------------- Recommended packages of care and service requirements in low- and middle-income countries[@pone.0110208-Bruckner1]--[@pone.0110208-Chisholm1] formed the basis of estimation methods of full time equivalent (FTE) staffing requirements. Guided by the priority disorders identified in Bruckner et al [@pone.0110208-Bruckner1], 7 mental and substance use disorders which were included in GBD 2010 were selected for modelling. Selected disorders were schizophrenia, bipolar disorder, depression, alcohol dependence disorder, opioid dependence disorder, conduct disorder and attention deficit hyperactivity disorder (ADHD) (the latter two for 5--19 year olds only). The seven disorders selected are not intended to represent the only important conditions (dementia and epilepsy are examples of other important conditions that could be added to future modelling). 
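To make the YLD arithmetic and the projection step above concrete, here is a minimal sketch in Python. It computes a disorder's YLD rate as the sum over sequelae of prevalence multiplied by disability weight, then applies fixed age-specific rates to two population age structures, mirroring the paper's approach of holding 2010 rates constant and applying them to UN population forecasts. Every number in the example (sequela prevalences, disability weights, age groups and population counts) is a hypothetical placeholder, not a GBD 2010 or UN figure.

```python
# Sketch of the projection approach described above: age-specific YLD rates
# are held constant and applied to forecast population age structures.
# All rates and population figures are hypothetical placeholders.

def yld_rate_per_person(sequelae):
    """YLD rate per person: sum over sequelae of prevalence x disability weight."""
    return sum(prevalence * weight for prevalence, weight in sequelae)

def projected_ylds(age_specific_rates, population_by_age):
    """Total YLDs from applying age-specific YLD rates to a (possibly future)
    population age structure."""
    return sum(age_specific_rates[age] * population_by_age[age]
               for age in age_specific_rates)

# Hypothetical severity split for one disorder in the 20-54 age group:
# (prevalence, disability weight) pairs for mild, moderate and severe sequelae.
sequelae_20_54 = [(0.020, 0.16), (0.015, 0.40), (0.005, 0.66)]

# Hypothetical age-specific YLD rates (per person) for the disorder.
rates = {
    "0-19": 0.004,
    "20-54": yld_rate_per_person(sequelae_20_54),
    "55+": 0.006,
}

# Hypothetical population age structures (a 2010 baseline and a 2050 forecast).
pop_2010 = {"0-19": 450_000_000, "20-54": 350_000_000, "55+": 60_000_000}
pop_2050 = {"0-19": 700_000_000, "20-54": 850_000_000, "55+": 250_000_000}

print(f"2010 YLDs: {projected_ylds(rates, pop_2010):,.0f}")
print(f"2050 YLDs: {projected_ylds(rates, pop_2050):,.0f}")
```

Because the rates are fixed, any increase in the printed totals comes purely from population growth and ageing, which is exactly the effect the paper isolates.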
Prevalence estimates for 2010 and 2050 {#s2d} -------------------------------------- Disorder prevalence estimates are central to the estimation of FTE staff requirements. The more recent prevalence estimates from GBD 2010 were used over the prevalence estimates reported in Bruckner et al. Pooled prevalence estimates for each disorder and each of four Sub-Saharan Africa regions (East, West, Southern and Central) were calculated using 2010 age and sex-standardised prevalence estimates derived from DisMod-MR modelling. Predicted pooled prevalence for 2050 was also calculated using 2010 age and sex-standardised prevalence estimates and UN population data for 2050 (see [appendix S1](#pone.0110208.s001){ref-type="supplementary-material"} for UN population figures). Pooled prevalence for the paediatric disorders (ADHD and conduct disorder) were based on only prevalence estimates from the 5--19 year age groups. Using similar methods, pooled prevalence estimates for the entire Sub-Saharan Africa (4 regions) were also calculated. Prevalence estimates at the regional level were applied to each country within that region. Adjustments to prevalence estimates were necessary to account for comorbidity between disorders. To allow for comorbidity between depression, alcohol and opioid use disorders, the separate and combined prevalence of MDD, alcohol use, alcohol dependence, drug use and drug dependence were sourced from South African survey data published by Williams et al [@pone.0110208-Williams1]. The GBD prevalence estimates for depression, alcohol and opioid use disorders were reduced in proportion to their weighting relative to the prevalence of any depressive or substance use disorder. This was translated into the formula: (disorder prevalence divided by sum of prevalences of individual depressive and substance use disorders) multiplied by (prevalence of any depressive or substance use disorder). Across the board, rates were adjusted down to 87.6% of the original total. This method was based on Lund et al [@pone.0110208-Lund1]. It was assumed that conduct disorder and ADHD held no comorbidity with other disorders. However it was considered implausible to assume no comorbidity in the model between conduct disorder and ADHD. Data from the US National Comorbidity Survey Replication -- Adolescent [@pone.0110208-Merikangas1] were therefore used to estimate comorbidity between these childhood disorders, as no African data were available. As per the method for depressive and substance use disorders above, the raw prevalence of ADHD, conduct and oppositional defiant disorders, as well as the prevalence of any of these behavioural disorders, was used to calculate a comorbidity weighting (prevalence/sum of individual behavioural disorders) x (prevalence of any behavioural disorder). The GBD prevalence rates for ADHD and conduct disorder were adjusted down to 69.8% of the originals. A table summarising the pooled prevalence of selected mental and substance use disorders in Sub-Saharan Africa from GBD 2010, by region and adjusted for comorbidity can be found in the [appendix S1](#pone.0110208.s001){ref-type="supplementary-material"}. For the purpose of the modelling (and due to a lack of data to estimate otherwise) it was assumed that bipolar disorder and schizophrenia held no significant comorbidity with any other mental disorders. 
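The comorbidity weighting described above reduces to a one-line formula: adjusted prevalence = (individual disorder prevalence / sum of individual prevalences) × prevalence of any disorder in the group. A small illustrative sketch is given below; the figures are invented for illustration and are not the South African or NCS-A survey estimates used in the paper.

```python
# Sketch of the comorbidity weighting described above:
#   adjusted_i = (p_i / sum_j p_j) * p_any
# where p_i are individual disorder prevalences and p_any is the prevalence
# of having any disorder in the group.
# The figures below are hypothetical, not the survey data cited in the paper.

def comorbidity_adjust(individual_prev, prev_any):
    """Scale each disorder's prevalence so the adjusted total equals prev_any."""
    total = sum(individual_prev.values())
    return {d: (p / total) * prev_any for d, p in individual_prev.items()}

raw = {                      # hypothetical raw prevalences
    "major_depression": 0.045,
    "alcohol_use": 0.030,
    "opioid_use": 0.005,
}
prev_any = 0.070             # hypothetical prevalence of any of the three

adjusted = comorbidity_adjust(raw, prev_any)
scaling = prev_any / sum(raw.values())   # overall downward adjustment factor

for disorder, p in adjusted.items():
    print(f"{disorder}: raw={raw[disorder]:.3f} adjusted={p:.3f}")
print(f"Rates scaled to {scaling:.1%} of their original total")
```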
Although these two disorders experience high comorbidity with substance use disorders, it was considered that the treatment patients would require for their substance use disorder would be in addition to that for their bipolar disorder or schizophrenia and therefore comorbidity adjustments were not made. The implications of this are explored in the discussion. Treatment packages {#s2e} ------------------ Treatment coverage targets and care packages (service coverage and utilization rate for each type of service) for the priority disorders were taken from Bruckner et al [@pone.0110208-Bruckner1] (see [appendix S1](#pone.0110208.s001){ref-type="supplementary-material"} for more details). A slight modification to the Bruckner et al model was made where an equivalent psychosocial treatment package as that for alcohol use disorders was added for opioid use disorders in line with WHO opioid treatment guidelines [@pone.0110208-World4]. For each disorder, region or country and service type, the number of bed-days for inpatient or residential or number of sessions for outpatient care was equal to the *population X adjusted prevalence X treatment coverage target X service coverage X utilisation rate,* where: *treatment coverage target* is the percentage of prevalent cases requiring or presenting for any treatment; *service coverage* is the percentage of treated cases needing care in this specific service setting in a year; and *utilisation rate* is the average number of bed days used per case per year for cases treated in inpatient services, or the average number of consultations or sessions per case per year for cases treated in outpatient services. Staffing ratios {#s2f} --------------- Staffing ratios recommended by the mhGAP costing tool [@pone.0110208-Chisholm1] allowed estimation of staffing by health professional type -- psychiatrist, other physician/doctor, nurse, psychologist, other psychosocial worker, and other provider/worker. The overall 'nurse' category was further split into psychiatric nurses and general nurses using a ratio of 1∶4, based on consultation with African mental health experts in four countries (further field testing is needed to validate this in future, as the figure will vary by country and setting). FTE staffing estimates by service type {#s2g} -------------------------------------- FTE modelling was conducted for both 2010 and 2050, for each of the four Sub-Saharan African regions and a small selection of countries within each region. The selection of countries was done without prejudice and merely as examples to highlight the stark differences that can be observed across countries within the same region. The countries selected for modelling FTE were: Sub-Saharan Africa East -- Zambia, Ethiopia, Burundi; Sub-Saharan Africa West -- Ghana, Nigeria, Chad; Sub-Saharan Africa Central -- Democratic Republic of Congo (DRC), Angola; Sub-Saharan Africa Southern -- South Africa, Zimbabwe. For each region/country, the number of bed-days for inpatient/residential services and sessions for outpatient services were converted into FTE mental health staff requirements using the following formulae. The number of FTE staff required to deliver inpatient/residential mental health care was calculated as: number of bed-days/0.85 (85% bed occupancy rate as per Bruckner et al)/365 service days per annum/25 beds in unit x number of staff per unit (7.5 per 25 beds as per mhGAP) x staffing ratio (proportion of nurses, doctors, and other professionals in that setting from mhGAP). 
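A rough sketch of the service-demand and staffing arithmetic described above is given below. The function structure follows the stated formulas: service volume = population × adjusted prevalence × treatment coverage target × service coverage × utilisation rate; and, for inpatient care, FTE = bed-days / 0.85 occupancy / 365 days / 25 beds per unit × 7.5 staff per unit × staffing ratio. The outpatient conversion, described immediately after this sketch, follows the same pattern using consultations per day and 240 working days per year. All parameter values in the worked example are hypothetical placeholders rather than the mhGAP or Bruckner et al. figures.

```python
# Sketch of the service-demand and FTE staffing formulas described above.
# Parameter values are hypothetical placeholders, not mhGAP/Bruckner figures.

def service_volume(population, adj_prevalence, coverage_target,
                   service_coverage, utilisation_rate):
    """Bed-days (inpatient) or sessions (outpatient) needed per year."""
    return (population * adj_prevalence * coverage_target
            * service_coverage * utilisation_rate)

def inpatient_fte(bed_days, staffing_ratio,
                  occupancy=0.85, days_per_year=365,
                  beds_per_unit=25, staff_per_unit=7.5):
    """FTE staff of one professional type needed to run inpatient care."""
    units_needed = bed_days / occupancy / days_per_year / beds_per_unit
    return units_needed * staff_per_unit * staffing_ratio

def outpatient_fte(sessions, staffing_ratio,
                   consultations_per_day=8, working_days=240):
    """FTE staff of one professional type needed to deliver outpatient sessions."""
    return sessions / consultations_per_day / working_days * staffing_ratio

if __name__ == "__main__":
    # Hypothetical example: inpatient care for one disorder in one country.
    bed_days = service_volume(population=15_000_000,
                              adj_prevalence=0.004,
                              coverage_target=0.80,
                              service_coverage=0.10,
                              utilisation_rate=14)   # bed-days per treated case
    nurse_fte = inpatient_fte(bed_days, staffing_ratio=0.5)  # nurses' share of unit staff
    print(f"Bed-days per year: {bed_days:,.0f}")
    print(f"Inpatient nurse FTE required: {nurse_fte:,.0f}")
```

Summing such per-disorder, per-service estimates across the priority disorders and provider types is what yields the regional and country FTE targets reported in the Results.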
The number of FTE staff required to deliver outpatient mental health care was calculated as: number of sessions/number of consultations per day/240 working days per year (from mhGAP) X staffing ratio (from mhGAP) [@pone.0110208-Bruckner1],[@pone.0110208-Chisholm2]. Consultations per day were derived from mhGAP as a weighted average across providers in each category of outpatient care: outpatient care, 8 consultations per day; primary care, 9 consultations per day; psychosocial (ancillary) care, 7 consultations per day. Group day programs were assumed to be delivered to an average of 10 users per day. Finally, gaps between the estimated target FTE staff for 2010 and the current mental health service FTE staffing levels in the selected countries, as reported by the WHO Mental Health Atlas [@pone.0110208-World1], were estimated.

Results {#s3}
=======

Burden of disease {#s3a}
-----------------

The population distribution in Sub-Saharan Africa is expected to change dramatically over the next 40 years ([Figure 1](#pone-0110208-g001){ref-type="fig"}). An expected doubling of population size from approximately 0.9 billion to 1.8 billion will be accompanied by significant ageing. As a result, a population dominated by under-25s in 2010 (63%) will be dominated by over-25s in 2050 (53%).

![Population age distribution in Sub-Saharan Africa, 2010 and 2050.\ Source: United Nations Population Data [@pone.0110208-United2].](pone.0110208.g001){#pone-0110208-g001}

This change in demographics will result in a rapidly growing disparity between the disability burden associated with communicable compared to non-communicable diseases, with a one-and-a-half-fold increase in non-communicable compared to communicable diseases ([Figure 2](#pone-0110208-g002){ref-type="fig"}). Interestingly, although considering only population ageing and growth would see communicable diseases experience a doubling in burden, the burden in under-5s -- traditionally the most important group for communicable diseases -- could see only a relatively modest increase over time of around 24% ([appendix S1](#pone.0110208.s001){ref-type="supplementary-material"}).

![Change in disability burden distribution of communicable and non-communicable diseases in Sub-Saharan Africa.\ Note: CDs include all communicable, maternal, neonatal and nutritional diseases; NCDs include all non-communicable diseases (including mental and substance use disorders).](pone.0110208.g002){#pone-0110208-g002}

In terms of individual disorders, the largest contributor to burden of disease is MDD ([Figure 3](#pone-0110208-g003){ref-type="fig"}). This is followed by schizophrenia, which is considered to be one of the most disabling conditions across all diseases in GBD. The childhood disorders of conduct disorder and ADHD will see relatively little increase in burden as the population of SSA ages significantly over the next 40 years.

![Disability burden for individual mental and substance use disorders, 2010 to 2050.](pone.0110208.g003){#pone-0110208-g003}

In terms of mental and substance use disorders, all Sub-Saharan African regions would experience an increase in burden of around 130% in the absence of a change in disorder prevalence rates ([Figure 4](#pone-0110208-g004){ref-type="fig"}). This increase would vary across regions, with the largest (196%) seen in Sub-Saharan Africa Central and the lowest (28%) in Sub-Saharan Africa Southern (East = 139% and West = 129%).
![Disability burden of mental and substance use disorders in Sub-Saharan Africa over time, all ages.](pone.0110208.g004){#pone-0110208-g004}

Mental and substance use disorders were the leading cause of YLDs in Sub-Saharan Africa in 2010 (18.94% of total), and an estimated rise from approximately 20 million YLDs to 45 million YLDs could be experienced by 2050. Based on the individual effects of demographic changes alone, by 2050 mental and substance use disorders may be equivalent to approximately two thirds of the YLDs of the entire communicable diseases group (67 million YLDs, [appendix S1](#pone.0110208.s001){ref-type="supplementary-material"}). Importantly, the shifting population distribution of Sub-Saharan Africa seen in [Figure 1](#pone-0110208-g001){ref-type="fig"} translates directly to the mental and substance use burden changes seen in [Figure 5](#pone-0110208-g005){ref-type="fig"}, which shows the increase in burden by age group. This is important for this group of disorders, where the 20--54 age group is typically most affected.

![Change in disability burden of mental and substance use disorders over time, 20--54 years by age group, all Sub-Saharan Africa.](pone.0110208.g005){#pone-0110208-g005}

Health service requirements {#s3b}
---------------------------

The overall predicted increase in target FTE per 100,000 population in each region from 2010 to 2050 is modest, for example from 24.0 to 24.3 in Sub-Saharan Africa East, due to demographic changes influencing overall prevalence ([Table 1](#pone-0110208-t001){ref-type="table"}). This seemingly modest increase can be misleading, though; only when converted to absolute FTE do the estimates become meaningful in terms of health system planning. Target FTE for Sub-Saharan Africa Southern in 2050 will increase by around 3,700 from the 2010 target, whilst the target FTE for Sub-Saharan Africa East, a far more populous region, will need to increase by 95,400 over the 2010 to 2050 time period to meet the growth in burden of disease.

10.1371/journal.pone.0110208.t001

###### Mental health workforce requirements for Sub-Saharan Africa, year 2010 and 2050.^1,2^

| Provider or service type | East (2010 / 2050 / increase) | West (2010 / 2050 / increase) | Central (2010 / 2050 / increase) | Southern (2010 / 2050 / increase) |
|---|---|---|---|---|
| *Provider type* | | | | |
| Psychiatrist | 1.8 / 1.9 / 7,400 | 1.7 / 1.8 / 6,200 | 1.9 / 1.9 / 2,900 | 1.8 / 1.9 / 300 |
| Medical officer/clinical officer | 2.5 / 2.5 / 9,900 | 2.3 / 2.4 / 8,200 | 2.5 / 2.6 / 4,000 | 2.5 / 2.5 / 400 |
| Psychiatric nurse | 2.1 / 2.2 / 8,500 | 2.0 / 2.0 / 7,100 | 2.2 / 2.2 / 3,400 | 2.1 / 2.2 / 300 |
| General nurse | 8.5 / 8.6 / 33,800 | 8.1 / 8.2 / 28,200 | 8.6 / 8.8 / 13,500 | 8.6 / 8.7 / 1,300 |
| Psychologist | 2.0 / 2.1 / 8,100 | 1.9 / 2.0 / 6,800 | 2.1 / 2.1 / 3,200 | 2.0 / 2.1 / 300 |
| Psychosocial provider | 3.5 / 3.6 / 14,100 | 3.3 / 3.4 / 11,700 | 3.6 / 3.7 / 5,600 | 3.5 / 3.6 / 500 |
| Other primary care | 3.5 / 3.5 / 13,800 | 3.2 / 3.3 / 11,200 | 3.5 / 3.6 / 5,500 | 3.5 / 3.5 / 500 |
| *Service type* | | | | |
| Outpatient care | 10.9 / 11.0 / 42,800 | 10.2 / 10.2 / 34,900 | 11.1 / 11.3 / 17,200 | 11.0 / 11.1 / 1,600 |
| Psychosocial care | 2.5 / 2.6 / 10,400 | 2.2 / 2.3 / 8,100 | 2.6 / 2.7 / 4,200 | 2.6 / 2.6 / 400 |
| Inpatient care | 10.5 / 10.7 / 42,300 | 10.2 / 10.4 / 36,400 | 10.6 / 10.9 / 16,700 | 10.6 / 10.9 / 1,700 |
| Total | 24.0 / 24.3 / 95,400 | 22.6 / 22.9 / 79,400 | 24.4 / 24.8 / 38,100 | 24.2 / 24.6 / 3,700 |

^1^FTE (the 2010 and 2050 figures) are expressed as a rate per 100,000 population; the "increase" figures are absolute numbers.
^2^The suggested increase to 2050 was calculated by subtracting the absolute target FTE for 2010 from the absolute target FTE for 2050. ^3^Rounded to the nearest hundred.

Not only are there significant differences in target FTE increases across the 4 regions, but also within the countries of each region ([Figure 6](#pone-0110208-g006){ref-type="fig"}). The highly populous countries of Ethiopia, Nigeria and DRC highlight well the impact of population growth and ageing on health system requirements when compared against their neighbouring countries. Nigeria is predicted to require an additional 65,000 FTE staff to provide mental health care by 2050. In contrast, increases required in South Africa and Zimbabwe are expected to be much smaller.

![Predicted increase in FTE staff requirements for mental health care for selected Sub-Saharan African countries, 2010 to 2050.\ Note: SSA East -- Zambia, Ethiopia, Burundi; SSA West -- Ghana, Nigeria, Chad; SSA Central -- DRC, Angola; SSA Southern -- South Africa, Zimbabwe.](pone.0110208.g006){#pone-0110208-g006}

Perhaps more important than predicted increases in target FTE is the gap between the estimated minimum FTE staffing requirements for 2010 and the actual FTE staffing numbers that currently exist within countries, as reported by the WHO Mental Health Atlas. The health challenges in Sub-Saharan Africa are clearly demonstrated here, with some countries reportedly having only a fraction of current target staffing levels. Although current FTE estimates from the WHO Mental Health Atlas and the FTE targets proposed in this paper are not strictly comparable across the board, two groups of mental health workers, psychiatrists and psychologists, can provide a reasonable indication of any shortfall in mental health workers within a country ([Table 2](#pone-0110208-t002){ref-type="table"}). Our estimates highlight a very significant shortfall of mental health workers across all SSA countries, with large variations within the same geographical regions. Even South Africa, with clearly the best mental health staffing capacity, appears to be lacking in crucial areas such as numbers of psychiatrists. Particularly notable is Nigeria, a country with low current capacity and one of the highest predicted increases in requirements.

10.1371/journal.pone.0110208.t002

###### Current and target 2010 FTE psychiatrists and psychologists per 100,000 population for selected countries in Sub-Saharan Africa.

| Country | Psychiatrists (current) | Psychiatrists (2010 target) | Current as % of target | Psychologists (current) | Psychologists (2010 target) | Current as % of target |
|---|---|---|---|---|---|---|
| *Sub-Saharan Africa East* | | | | | | |
| Zambia | 0.03 | 0.8 | 4% | 0.02 | 0.3 | 7% |
| Ethiopia | 0.04 | 0.8 | 5% | 0.02 | 0.3 | 7% |
| Burundi | 0.01 | 0.8 | 1% | 0.01 | 0.3 | 3% |
| *Sub-Saharan Africa West* | | | | | | |
| Ghana | 0.07 | 0.8 | 9% | 0.04 | 0.3 | 13% |
| Nigeria | 0.06 | 0.8 | 8% | 0.02 | 0.3 | 7% |
| Chad | 0.01 | 0.8 | 1% | 0.01 | 0.3 | 3% |
| *Sub-Saharan Africa Central* | | | | | | |
| DRC | 0.07 | 0.8 | 8% | 0.015 | 0.3 | 5% |
| Angola | 0.002 | 0.8 | 0.3% | 0.003 | 0.3 | 1% |
| *Sub-Saharan Africa Southern* | | | | | | |
| South Africa | 0.27 | 0.8 | 34% | 0.31 | 0.3 | 103% |
| Zimbabwe | 0.06 | 0.8 | 8% | 0.04 | 0.3 | 13% |

Discussion {#s4}
==========

We used UN population data to forecast the change in disability-associated burden of seven of the major mental and substance use disorders in Sub-Saharan Africa from 2010 to 2050. It is estimated that during this 40 year period the population will double in size and age significantly, making way for the dramatic health transition seen in other regions of the world.
Predictions clearly highlight the anticipated shift from communicable diseases characterised by high mortality to non-communicable diseases with a chronic and disabling course. Naturally, modelling burden estimates holding YLD rates constant makes several assumptions, including that disorder prevalence and disability weights remain constant. Whilst this is entirely plausible for many non-communicable diseases -- indeed, the prevalence of many mental disorders has been shown not to change significantly over time [@pone.0110208-Whiteford1] -- this may not be true for others. Drug use disorder prevalence, for example, is expected to increase in the Sub-Saharan Africa regions in the coming years, largely driven by changing routes of drug trafficking [@pone.0110208-Dewing1], [@pone.0110208-Cockayne1]. Modelling changes in communicable diseases is also not straightforward. Different factors are at play and will influence the estimates. Firstly, the successes of public health campaigns such as expanded programs on immunisation and insecticide-treated nets to combat malaria should continue to reduce the prevalence of infectious diseases. Secondly, improvements in health systems and treatments may not reduce the prevalence of some communicable diseases but shift the burden from premature mortality to a more chronic health loss; the push towards universal access to anti-retroviral therapies makes HIV the definitive example of this and will shape the future burden of disease in Sub-Saharan Africa substantially. In terms of disability, it is possible that the gains made by declines in prevalence of communicable, maternal, neonatal and nutritional diseases over the next 40 years will be partially offset by the increase in burden of chronic HIV. Whatever the scenario for communicable disease may be, it is clear that the balance in the contribution of communicable and non-communicable diseases to disease burden is set to change dramatically. The consequences of a rising burden of non-communicable disease (NCD) are far-reaching. For the mental and substance use disorders, the largest group of NCDs, the impacts are long-lasting at the level of the individual, family and community [@pone.0110208-World5]. Quality of life is impacted and economic costs are significant. A recent study estimates that the cumulative global impact of mental disorders may amount to US\$16 trillion of lost economic output over the next 20 years, equivalent to 25% of global GDP in 2010 [@pone.0110208-Bloom1]. The secondary health outcomes of mental disorders also need to be considered. For example, major depression has been shown to be an independent risk factor for other important non-communicable disorders such as ischaemic heart disease [@pone.0110208-Charlson1], and a consistent association between HIV and poor mental health has been reported [@pone.0110208-Prince1]. The strong and often bidirectional relationships that exist between mental and substance use disorders and communicable disease, non-communicable disease and injuries emphasise the critical role of mental health [@pone.0110208-Prince1]. Due to the pressures of communicable disease and malnutrition, mental health in Africa has been low on the priority list to date. In terms of policy, a recent WHO report has revealed that only 42.2% of countries within the WHO Africa region report having a dedicated mental health policy, 67% possess a mental health plan, and 44% report having dedicated mental health legislation [@pone.0110208-World1].
African countries reportedly spend less than 1% of their health budgets on mental health [@pone.0110208-Saxena1]. As a result mental health services are poorly resourced and generally accessible to only the most severely ill and then as inpatients in urban facilities. Mental health care provided outside the hospital by health and social workers based in the community is only available in half of African countries [@pone.0110208-Saxena1]. Globally, treatment rates for people with mental and substance use disorders remain low [@pone.0110208-Wang1], [@pone.0110208-Mathers1], with treatment gaps over 90% in low and middle income countries [@pone.0110208-Wang1]. Perhaps an exception to this is in traumatised populations where non-government organisations are present, providing temporary mental health services during emergency or post-conflict response efforts. However, these services are not sustainable and it is important to prepare for chronic disease and disability services beyond acute care. In keeping with the recommendations of WHO, reaching FTE targets set in this paper requires a shift from current practice in most African countries, where psychiatric hospitals are the main site of service delivery, and consume the vast majority of the country's mental health budget [@pone.0110208-World1]. Instead a new model is required which involves substantial investment in the training of primary care practitioners, supported by district based mental health specialist teams, using a task sharing model that mobilises local community resources [@pone.0110208-Kakuma1], [@pone.0110208-Patel1]. The new model also requires substantial investment in smaller inpatient psychiatric units, based in district and regional general hospitals as reflected in the inpatient services targets which demand the biggest increase in resources in this model. Health transitions in other regions of the world have been rapid [@pone.0110208-Murray1]. The health system reforms required to deal with dramatic changes in burden of disease are intimidating and will take long term careful planning. However the human and economic costs of not preparing for this will be substantial. Sub-Saharan Africa is in a particularly challenging position as it will soon be caught in the rising tide of non-communicable disease, yet it still has a considerable way to go in the fight against communicable, maternal, neonatal and nutritional diseases. A logical starting point in the preparation might be to implement packages of care for key mental disorders which have been designed for low- and middle-income countries and provide realistic goals given challenging environments such as those in Sub-Saharan Africa [@pone.0110208-Chisholm2], [@pone.0110208-Patel2]. Fundamental in scaling up services is increasing the mental health workforce [@pone.0110208-Bruckner1] and integrating mental health into primary health care [@pone.0110208-Collins1], [@pone.0110208-Eaton1]. A continuing commitment to addressing key research priorities is also vital [@pone.0110208-Collins1]. Limitations {#s4a} ----------- As highlighted previously, this modelling is limited to disability-associated burden only and makes the assumption of unchanged disease prevalence and disability weights over time. Whilst this may be a reasonable representation for the future of many non-communicable diseases, changes in disease profiles of communicable diseases will be likely. 
As mental and substance use disorders are highly disabling conditions that carry relatively little directly associated mortality, and as the predicted shift is towards chronic, disabling diseases, it was considered appropriate to present disability-associated burden forecasts selectively. One obvious omission from this paper, which is particularly relevant to the Sub-Saharan African context, is that of neurological disorders. Neurological disorders were modelled separately from mental and substance use disorders in GBD 2010 and were not included in our modelling. Given the epidemiological transition and the ageing population, the inclusion of neurological disorders (dementia in particular) would add substantially to the predicted changes in burden and FTE requirements and have significant implications for concomitant service requirements. Our estimates also assume that the predicted demographic changes are correct. The estimation of mental disorder YLD rates in GBD 2010 additionally carried some limitations, which have been discussed elsewhere [@pone.0110208-Whiteford1]. In particular, the lack of data from the Sub-Saharan region was problematic and more epidemiological data is required from these regions. Comorbidity adjustments were not made between bipolar disorder and schizophrenia and substance use disorders, even though the known comorbidity is high. There was no clear way to adjust prevalence between these disorders that would reflect how service requirements would actually fall (that is, a reduction in substance use services). Clinically, any reduction in service requirements for a patient experiencing bipolar disorder or schizophrenia together with a substance use disorder would be minimal, given the very different packages of care, the intensity of treatment required and the low prevalence of these two mental disorders. Although there is potential that treatment services for substance use disorders have been overestimated to a small degree, it should be noted that estimated staffing requirements for the treatment of opioid and alcohol use disorders will likely be conservative due to using dependence (lower) prevalence estimates, as provided by GBD 2010, with treatment targets and care packages from Bruckner et al designed for the broader category of abuse and dependence. It is important to recognise that the model used for estimating service requirements in this paper is one developed for both low- and middle-income countries and may need substantial modification to suit the Sub-Saharan African context. Furthermore, it is recognised that each country will have its own health service and workforce configuration and unique challenges, and therefore targets should ideally be tailored to the country context. Finally, the Mental Health Atlas only records mental health labelled staff working in dedicated facilities/clinics, whereas the care packages presented by Bruckner and colleagues represent the total package of care that a person needs, including care that might be provided by non-mental health specialists working in general primary care clinics. These key differences make general comparisons against the staffing targets modelled above difficult, yet the data are nonetheless informative.

Conclusion {#s5}
==========

With a rapid epidemiological transition predicted for Sub-Saharan Africa over the coming decades, it is now time to begin planning for health services able to address a dramatic increase in the burden of non-communicable disease.
Mental and substance use disorders are already the leading cause of disability in Sub-Saharan Africa yet services are drastically under-resourced. Packages of care and service models advocating for a task shifting approach and the embedding of mental health services within the primary health care system are available for LMICs. Investment and health system reform should be a priority for policy makers in Sub-Saharan Africa. Supporting Information {#s6} ====================== ###### (PDF) ###### Click here for additional data file. We would like to thank Dan Chisholm (World Health Organisation) for his provision of mhGAP costing tool and his guidance in modelling treatment packages. We would also like to thank Oye Gureje (University of Ibadan, Nigeria), Abebaw Fekadu (Addis Ababa University, Ethiopia) and Fred Kigozi (Makerere University, Uganda) for their advice in preparing this paper. [^1]: **Competing Interests:**The authors have declared that no competing interests exist. [^2]: Conceived and designed the experiments: FJC HAW SD CL LD. Analyzed the data: FJC SD. Contributed to the writing of the manuscript: FJC SD CL LD HAW.
The Case for ASEAN Military Integration While Southeast Asia considers closer military integration, China continues its encroachment on the sovereignty of the region. The Association of Southeast Asian Nations has the potential to become a fully integrated mutual defense alliance which stands to ward off aggressive acts from an increasingly southward expanding PRC. This process of military and security integration has already begun in the form of the ASEAN Chiefs of Defense Forces Informal Meetings and ASEAN Defense Ministers Meetings. These meetings provide a forum for the nations to discuss their pressing security issues in a collaborative setting. External threats provide the perfect environment for internal cooperation and this region collectively faces a unilateral threat from an aggressive China. Why would the states favor military integration? Military cooperation in the form of a collective security organization would allow the region to assert their territorial claims to the South China Sea in lieu of China’s. This goal itself poses fundamental problems to the very idea of cooperation by ASEAN members. China only represents one of the countless territorial claimants over the South China Sea. Most claimants are ASEAN members and would represent competing interests within the alliance. These competing interests do pose challenges to the prospect of cooperation but are possible to overcome through analyzing the ranking of each states preferable outcomes. The South China Sea currently entertains claims or partial claims from Indonesia, China, Taiwan, Vietnam, the Philippines, Malaysia, Cambodia, Thailand, and Singapore. Of these states only one, China, is a great power capable of truly global power projection. Every other state with competing claims actually lacks the military capacity to dispute any of China’s claims. This single truism means that the only hope for keeping China out of the region will require balancing and collaboration by the smaller powers. Now certainly not all claims and disputes over borders are with China, meaning that some states in the region have no direct quarrel with the PRC, but that does not mean their interests are not threatened by Chinese presence in the region. Since the days of the Srivijaya (ancient kingdom based on Sumatra) and the Spice Islands, Southeast Asia has been a pillar of international trade, whether consensual or otherwise. This international trade finally stands to benefit the population of the region in place of the frequently exploitative system that has been forced upon the region for centuries. Though this trade system is threatened if the People’s Republic of China continues to force its presence into strategic waterways. Would ASEAN military cooperation be sufficient to deter further Chinese encroachment? According to the Stockholm International Peace Research Institute Military Expenditure Database, the total military expenditures of ASEAN in 2013 equal around $38 billion USD. This is miniscule compared to China’s 2013 expenditures of $188 billion USD but represents a unified threat that could be substantial in deterring China from aggressive action. (SIPRI) Currently the cost of China taking aggressive or hostile postures towards individual countries with which it has disputes is effectively zero. No single nation in the region even poses a threat of military resistance due to the imbalance of power. 
But a unified front with shared military resources of approximately a fifth of China's expenditures would pose a serious threat to uncontested Chinese hegemony in the region. Unilaterally attacking such an alliance would surely make China a pariah in the international system. Military cooperation between ASEAN states would not be for the purpose of fighting China, but would allow the region to negotiate from a position of strength rather than from the current climate of weakness. Such an alliance of Southeast Asian states was tried before, and with spectacular failure. The Southeast Asian Treaty Organization previously existed to prevent the spread of communism in the region. SEATO was hardly a collective security organization but a tool for American policy interests. It was not intended for sovereignty and security but was an effort by the U.S. to contain states that did not recognize American hegemony. The failures of the past can become the roadmap for future cooperation. While SEATO stood directly opposed to communist states, ASEAN military cooperation should be based on common regional threats and not the domestic composition of states. This will be critical in maintaining the cooperation of member-states, as Southeast Asia is host to a number of different regime types, ranging from single-party communist states and military dictatorships to functioning democracies and theocratic monarchies. How would China respond to such an aggressive alignment against its interests? With China increasingly looking towards the Middle East, as postulated in the New Silk Road theory, coercion will have to be replaced with cooperation. The modern or maritime Silk Road theory argues that China's foreign policy is wholeheartedly devoted to opening ports in the Indian Ocean to connect the oil-rich Near East with China, requiring shipping lanes through the South China Sea and a compliant, passive, and cooperative Southeast Asia. At this point in time, and for the foreseeable future, China cannot afford to risk truly aggressive relations with ASEAN nations due to the international costs of acting outside of the international order. Currently, bullying the individual-minded Southeast Asian nations has no cost and will not escalate to war due to the sheer scale of asymmetry within each China – Southeast Asia dyad. By raising the costs China would have to pay if war broke out above their current negligible levels, Southeast Asia can actually assert itself and negotiate solutions that are not excessively one-sided and that reflect the true power of the region. Other instances that could benefit from collective security alliances to ward off aggressive great powers include the former eastern bloc countries. Had NATO not so callously and incredibly committed to the defense of former Warsaw Pact countries, the region could have used the opportunity to realign into an anti-Russian collective security agreement. This commitment would have been much more credible because an attack on one former Warsaw Pact member does serve as a threat to the other members. This same logic is true in Southeast Asia: China's aggression towards one ASEAN member serves as a threat to the prosperity of all ASEAN members. Surely military cooperation among ASEAN states could also provide a platform and opportunity for outside assistance that does not currently exist under the fragmented status quo. If India wished to push back and maintain its influence in the Indian Ocean, one method would be to pledge its support for the ASEAN alliance.
This would provide the ASEAN alliance with much-needed naval power, a resource surprisingly lacking in a region so tied to maritime control. India has incentives to check China's Silk Road initiative, which is heavily dependent on Chinese ties to Pakistan. The prospect of a rising state and future great power like India declaring for a quickly assembled alliance seems unlikely, but it represents a potential wedge for India to rise in power relative to China. India has already made advances towards relations with ASEAN as a whole, engaging in free trade with the bloc. Security and defense ties between the two partners have also increased, a sign of further cooperation to come. One must also not ignore the political difficulty of cooperation between ASEAN states. Each state in the region has competing internal goals that make accomplishing external goals difficult. In the South China Sea alone, the competing claims are often not against China but against other ASEAN members. This issue can be resolved with the simple maxim that if ASEAN members do not cooperate, they will lose all current claims to China. It is a simple case of ordered preferences: all states appear to prefer keeping China out of the region to having China in it, and would even prefer a world in which their own claims go unmet but China stays out. This ranking of preferences forces cooperation and prevents cheaters from attempting to break the alliance in order to strike a deal with China. Southeast Asia has historically found itself at the crossroads of southern and eastern Asian identities. The 21st century provides an opportunity for the region to define itself anew and assert its role in the international system. This opportunity will not be possible if China is not stopped, an achievement only conceivable through cooperation and outside resources. China is currently trying to bring ASEAN states to the table to settle their differences, but any solution reached at this time will favor China. In order to defend the region and its interests, military cooperation is a necessity.
https://intpolicydigest.org/the-case-for-asean-military-integration/
flashback: when the author interrupts the usual sequence to tell the readers about something that happened in the past
inference: a guess you make based on information you read
main idea: who or what the article, essay, or story is mostly about
plot: the sequence of events in a story
rising action: events that happen after the conflict but before the climax in a story
setting: the time and place where a story takes place
conflict: the problem facing the main character or characters in a story
analogy: two sets of words that are related in the same way
cause: the reason something happens
context clues: information from the words and sentences around an unfamiliar word that helps you figure out its meaning
effect: the result of the cause
mood: the way the story makes the reader feel
resolution: how the conflict of the story is ultimately solved
supporting details: the facts, examples, or descriptions that explain or back up the main idea of a text
tone: the author's attitude toward a subject as well as his or her attitude toward the reader
prefix: a group of letters added to the beginning of a word that changes the word's meaning
author's purpose: an author's reason for writing: to entertain, inform, explain, or persuade
author's point of view: an author's opinion on a subject
contrast: to show how things are different
dialogue: words the characters say to each other
fact: a statement that can be proved to be true
idiom: an expression that means something different from what its individual words mean
opinion: what someone believes or feels; cannot be proven to be true
personification: giving human qualities to an object or idea
root: the main part of a word and a clue to the word's meaning
sequence: the order in which information is arranged, from start to finish
simile: a comparison of two things using like or as
suffix: a group of letters added to the end of a word that changes the word's meaning
summary: a shortened version of a longer work, containing the main idea/theme of a text
symbol: an object, living thing, or situation that stands for or represents an idea or feeling
analyze: to break it down
infer: read between the lines
evaluate: to judge
support: back it up
predict: think about the future
cite: to mention
articulate: to speak it out
demonstrate: to clearly show
integrate: to unite
distinguish: to separate
identify: to single it out
synthesize: to combine
paraphrase: to restate in your own words
interpret: to explain
act: a major unit of a drama or play
alliteration: the repetition of consonant sounds at the beginning of words or syllables, such as "towering, trembling trees"
antagonist: a person or force in society or nature that opposes the protagonist, or main character
connotation: the suggested or implied meaning associated with a word, beyond its dictionary meaning
drama: a story written to be performed on a stage in front of an audience
extended metaphor: a metaphor that compares two unlike things in various ways throughout a paragraph, stanza, or an entire work
fiction: literature in which situations and characters are invented by the writer
free verse: poetry that has no fixed pattern of meter, rhyme, or rhythm
hyperbole: a figure of speech that uses exaggeration to express strong emotion, make a point, or evoke humor
imagery: descriptive language that appeals to one or more of the five senses: sight, hearing, touch, taste, and smell
irony: a contrast between the way things seem and the way they really are, or between what is expected and what actually happens
line: the basic unit of poetry; a line consists of a word or row of words
lyric poetry: poetry that expresses strong personal feelings about an object, person, or event; usually short and musical
metaphor: a figure of speech that compares two seemingly unlike things directly, without using like or as
meter: a regular pattern of stressed and unstressed syllables in poetry; gives a rhythm or beat
monologue: a long speech by a single character, normally in a drama
myth: a traditional story of unknown authorship, often involving goddesses, gods, heroes, and supernatural forces, that attempts to explain why or how something came to be
narrative poetry: poetry that tells a story and has a plot
narrator: the person who tells a story
nonfiction: writing that is about real people, places, and events
onomatopoeia: a word or phrase that imitates or suggests the sound of what it describes, such as hiss or crack
persuasion: a type of speech or writing, usually nonfiction, that attempts to convince audience members to think or act in a particular way
exposition: part of the plot that introduces the setting, conflict, and characters
poetry: a form of literature that differs from traditional literature in that it is written in lines and stanzas
point of view: the standpoint from which a story is told
1st person point of view: the narrator is a main character in the story; uses pronouns such as I, me, and we
3rd person point of view: the narrator is outside of the action of the story; uses pronouns such as he, she, it, and they; can be limited or omniscient
protagonist: the central or main character in a story; the action revolves around the protagonist
repetition: the recurrence of sounds, words, phrases, lines, or stanzas in a speech or literary work
rhyme scheme: the pattern of rhyme formed by the end rhyme in a stanza or a poem
scene: a subdivision of an act in a play; each scene takes place in a specific setting and time
speaker: the voice speaking in a poem; similar to a narrator in a story
speech: a public address or talk; in most cases the speaker tries to influence the audience's behavior, beliefs, and attitudes
stanza: a group of lines forming a unit in a poem
stage directions: instructions written by a playwright to describe the appearance and actions of characters, as well as the sets, props, costumes, sound effects, and lighting
theme: the central message of a work of literature, often expressed as a general statement about life
allusion: a reference to an important piece of literature
cause and effect organization: the author describes an event's cause and the events that follow (effect)
chronological organization: the author presents the material in time order
compare and contrast organization: the author discusses similarities and differences between people, things, concepts, or ideas
problem and solution organization: the author gives information about a problem and explains one or more solutions
rhyme: when words have the same sound at the end (example: height, site, bite)
flat character: a character who shows only one side of his/her personality
round character: a character that shows different sides to his/her personality
dynamic character: a character who changes in personality or attitude by the end of the story
static character: a character that shows little to no change throughout the story
assertion: an accepted and respected belief (dogs are man's best friend)
prose: everyday writing (not drama or poetry)
bandwagon: a persuasive device that makes you believe that everybody else has or likes the product (example: "Everybody loves Freddie's Burgers")
loaded language: using words that cannot be proven but sound important (example: the hottest new sneakers)
rhetorical questions: asking questions with obvious answers to lead readers to agree with an argument
https://quizlet.com/125487322/8th-grade-reading-staar-review-flash-cards/
Issues In Political Economy

The meeting of the World Social Forum in Porto Alegre, Brazil at the same time was in some ways a conference of a worldwide political party in opposition, which is now looking for a common program with which to oppose the investors' agenda. The distinction between these two "events" is not, as the media would have it, the distinction between globalizers and anti-globalizers. Globalization – in the sense of people exchanging goods and ideas with one another – has been occurring for several thousand years and will continue. A former chief economist of the IMF has openly acknowledged that the staff of the IMF make no important decision without checking with the U.S.

The MSc in International Political Economy (IPE) offers a multidisciplinary perspective on international economic and energy relations, essential to understanding an increasingly globalised world. Furthermore, through direct tax evasion, or the use of regulatory loopholes, large firms may acquire a decisive advantage over local suppliers operating in the same market sector and providing comparable services. In that sense, the core contention between Davos/New York and Porto Alegre is over the rules of the global marketplace – and who will set them. The International Monetary Fund (IMF), for instance, is not a central bank for nurturing global growth and stability. It is rather a lender with an ideological agenda, conditioning its loans to troubled nations on austerity and anti-labor policies aimed at giving priority to debt repayment through exports rather than domestic growth. Like all cartels, the IMF uses its oligopolistic power to pursue political objectives. It therefore has a decided bias toward countries whose leaders are in sync with the institution's major supporters.

Presidential Signing Statements And Separation Of Powers Politics
- Our goal is to broaden and promote the study of the connection between economics and security by fostering a network of academics and supporting policy-relevant academic research and teaching.
- This economic climate has had severe and sometimes counterintuitive ramifications for national and international security.
- The study of the intersection of economics and security requires an interdisciplinary approach, involving insights and tools from the fields of Political Science, Economics, History, Sociology, and Anthropology, amongst others.
- The previous decade has been rife with financial crises, austerity measures, and increased financial globalization.

Nor is it a matter of "social" versus "economic" issues. The meeting in Porto Alegre was also about economics – an economics that serves society, rather than one that is served by society. Moreover, a nationalist politics undercuts the cross-border cooperation needed to balance the cross-border political reach of business and finance. Nationalism perpetuates the myth that national identity is the only factor in determining whether one wins or loses in the global economy. It obscures the common interests of workers in all countries when faced with the alliances of investors in rich and poor nations that now dominate the global market.
https://www.deslivrespourtous.org/issues-in-political-economy.html
Our Indigenous Relations team empowers youth and adults alike to embark on a journey of social change by creating a holistic learning and action-based experience. Through WE Schools, we provide programming for Indigenous youth that empowers and inspires them to continue being leaders in their communities. To help raise Indigenous issues as a priority, we also focus on educating non-Indigenous youth living in Canada about Indigenous cultures, histories and traditions. Through campaigns, leadership programs, inspirational WE Day events, lesson plans, social media and so much more, we're helping to make Turtle Island a more inclusive place. If you have any questions, please contact [email protected]

- It's created in collaboration with subject experts and Indigenous representatives.
- It's leadership focused, designed to create empathy and understanding.
- It provides tools for young people to actively support each other in creating a better world.
- It empowers Indigenous youth to continue to use their voices to create meaningful change.
- It helps educators meet curriculum requirements and encourages students to explore Indigenous cultures and histories.
- It partners with leading Indigenous organizations to ensure Canadians are aware of and equipped to take action.

Sacred Circle is a three-day leadership program designed to empower Indigenous youth. Focusing on the capacity and strengths of each individual and community, this program assists youth in exploring their unique identity, developing leadership skills, and building increased self-esteem and confidence. It also works to develop a sense of belonging and to explore and celebrate Indigenous cultures and perspectives, including the issues facing Indigenous communities. The goal: to help youth design action plans that support diversity and positive Indigenous cultural awareness.

Reconciliation in Action is a youth leadership program designed to build relationships between Indigenous and non-Indigenous youth. This program explores history and current events, and examines privilege and perception. The program engages with Elders and community leaders in both traditional practices and in the development of crucial leadership skills, such as effective communication and consensus-building. Following the findings of the Truth and Reconciliation Commission, this program will guide participants to a deeper understanding of themselves as leaders, and the whole group will emerge with a clear action plan and next steps for creating sustainable change within our schools, communities and the country.

First Voices is a three-year leadership program created with community Leaders and Elders that examines the needs and wants of the community to create a sustainable action plan. With a focus on the capacity and strengths of each individual and community, this program helps youth explore their identity, develop leadership skills and a sense of belonging, and build self-esteem and confidence. Every program is customized and includes digital touch points to ensure added support. This program also explores and celebrates Indigenous cultures and perspectives, including issues facing Indigenous communities today, with the goal of designing action plans that support diversity and positive Indigenous cultural awareness.

WE's Indigenous Relations team is passionate about providing a platform for Indigenous voices to be heard in classrooms across the country.
Our team works closely with Indigenous youth, activists, educators and leaders to inform the stories being shared in our Indigenous action campaign and resources. Learn more about Indigenous perspectives and experiences in our WE Stand Together Campaign. Our resource guide helps educators bring Indigenous history, culture and experiences into the classroom. As Canadians begin to seek reconciliation in light of our country’s dark history and mistreatment of Indigenous Peoples, educators are poised to help create the biggest shift in national and generational mindset. Help bring your students into the conversation all people should be having!
https://www.we.org/en-us/our-work/we-schools/indigenous-relations/
NEW YORK—The Metropolitan Museum of Art has received a major gift of Diane Arbus archival material from the photographer’s estate, it was announced on Dec. 18. Additionally, the Met has purchased 20 of the artist’s most important images, including Woman with a Veil on Fifth Avenue, N.Y.C., 1968, and Russian Midget Friends in a Living Room on 100th Street, N.Y.C., 1963, for approximately $5 million. The photos, dated from 1956 to 1971, the year that Arbus committed suicide, were acquired from the Fraenkel Gallery, San Francisco, which represents the Arbus estate. The Arbus archival material includes 7,500 rolls of film, hundreds of photographs and negatives, correspondence, diaries, appointment books and family pictures. These were donated by the artist’s two daughters, Amy and Doon, both as gifts and as promised gifts. Jeff L. Rosenheim, curator of photography at the Met, told ARTnewsletter he first encountered this archival material in 2003, through an exhibition of the artist’s work titled “Diane Arbus Revelations”; assembled by the San Francisco Museum of Modern Art, it traveled to the Met in 2005. “This was wonderful material that gave great meaning to the objects,” Rosenheim recalls. He notes that his questions to Amy and Doon about the future of the trove prompted a dialogue leading to the recently announced gift. The archives will be catalogued and made available to researchers in the same manner that scholars of Walker Evans have been able to look through his archives at the Met since 1994. The museum also plans to collaborate with the Arbus estate in holding future exhibitions. Although the Met simultaneously announced the donation of the archives and its own Arbus acquisitions, both gallery owner Jeffrey Fraenkel and Rosenheim call the timing coincidental. “We’ve looked at our photography collection, trying to identify places where we were weak,” Rosenheim explains. Works by Arbus, as well as by Robert Frank, Lee Friedlander and Garry Winogrand, among others, are areas of collecting interest, he says, noting that “it took about a year to raise the money” to purchase the Arbus photographs.
https://www.artnews.com/art-news/news/met-acquires-major-trove-of-images-by-diane-arbus-1492/
NEW DELHI: Economic activity in the country lost some pace amid GST related disruptions but underlying growth momentum remains strong and the country may clock 6.7 per cent growth this fiscal, says a Morgan Stanley report. India's economic growth slipped to a three-year low of 5.7 per cent in April-June, underscoring the disruptions caused by uncertainty related to the GST rollout amid slowdown in manufacturing activities. Commenting on the GDP numbers, Morgan Stanley said, "We are inclined not to read this as a sign of general slowdown in aggregate demand". "Indeed, we remain skeptical that the GDP statistics are fully reflecting the underlying growth trends in the economy," Morgan Stanley said in a research note. It further said that a number of high frequency growth indicators are indicating that end demand is holding up well and is running counter to the slowdown exhibited in the national accounts. However, on account of the weak GDP print in June 2017 quarter, Morgan Stanley has made some mark-to-market adjustments to its full year GDP growth estimates. "We believe that June 2017 likely marked the trough in growth in this cycle and we expect GDP growth to accelerate by almost 200 bps to 7.5 per cent year-on-year in March 2018 quarter," it said. On a calendar year basis, Morgan Stanley now projects growth of 6.4 per cent and 7.4 per cent in 2017 and 2018, respectively, as against 7.6 per cent and 8.0 per cent previously. The revised fiscal 2018 and fiscal 2019 growth estimates are at 6.7 per cent and 7.5 per cent, respectively. According to Morgan Stanley, the currency replacement programme and GST had led to a deceleration in growth momentum. "However, considering that these events are already in the rear view mirror, we expect the underlying economic growth momentum to reassert themselves, leading to a re-acceleration in growth," it said. "In our view, India is moving on to the next phase of the business cycle of productive growth - a phase marked by further improvement in growth while macro stability remains in check. This will also set the stage for a sustained growth cycle," it added.
Intellectual Reasoning vs. Instinct It has been said from Plato onward that man's reasoning is his highest faculty and makes him superior to animals. In the short story "To Build a Fire," by Jack London, man’s intellectual reasoning ability is regarded as “second class” to that of the survival mechanism that is embedded within humans and animals alike. This survival mechanism is sometimes referred to as instinct. If solely depended on, man’s intellectual reasoning may be clouded, imprudent and even detrimental, leading him to the wrong decision. Instinct, on the other hand, is a natural reaction pre-programmed into man for survival and cannot be altered by reasoning, making it superior to reason. As the story opens, the man clearly understands that the “day had broken cold and gray, exceedingly cold and gray,” and still he insists on continuing his journey (650). The fact that the temperature is below freezing did not seem to bother him. He is ignorant of the cold. As he stands surveying the snow covered Yukon trail, “the mysterious, far-reaching hair-line trail, the absence of sun from the sky, the tremendous cold, and the strangeness and weirdness of it all—made no impression on him” (651). He is determined to join the boys at camp to enjoy the warmth, food, and companionship regardless of the weather. The man is very observant about his surroundings, however, “he was without imagination” (651). The temperature is about seventy-five degrees below zero, which means that it is about one hundred and seven degrees below freezing. To him, the air is cold and uncomfortable, and nothing more. He ignores the fact that he is a warm blooded creature and as such only able to survive at certain temperatures. Anything beyond that range requires not only intellectual reasoning ability but also instinct. The big native husky that accompanies him on his journey is his only companion. The animal can adapt to the cold weather, but on this occasion it is...
https://www.studymode.com/essays/Intellectual-Reasoning-Vs-Instinct-509417.html
Do you have employees who have earned qualifications, who have the technical skills to perform their roles but who need “soft skills” to help them manage their professional lives more effectively? Most schools and universities do not teach learners the nuances that can dictate their successes or failures at work. Technical skills are learned while professional behaviours and attitudes are overlooked. The result is that you may have technically brilliant employees who may never reach their full potential in your business, who cause conflict and poor productivity within themselves and amongst colleagues or who cannot conduct effective business with your customers. This programme develops those essential personal attributes, attitudes and behaviours that enhance an individual’s interactions, job performance and career prospects. It can be customised to dove-tail with your induction programme, promoting greater employee performance from the very beginning of an individual’s tenure. The ESSENTIAL WORKPLACE SKILLS programme is offered as a complete 6-module package or you may prefer to choose specific modules only, based on your needs. Each of the following modules is usually delivered over the course of a training day: - Business Communication - Business Etiquette - Ethics in Business - Personal Finance - Personal Effectiveness - Personal Development Target audience This programme is most effective when presented to graduates and new hires, as it establishes the foundation on which to build a successful career within your organisation. It is also applicable to experienced hires that need to develop effectiveness in these areas. Content Business Communication How individuals interact in the working environment and beyond is fundamental to their success and that of the businesses they work in. This module aims to develop an awareness of the power of verbal and non-verbal communication and provides guidance on how best to communicate in a business environment. Content of this module includes: - Modes of communication - Personal awareness: preferred communication channels - Prejudices, preconceptions and presumptions - Importance of effective communication - Applying different modes of communication - Personal brand Business Etiquette By understanding corporate culture and applying behaviours that support company objectives, participants should more effectively work towards achieving the broader goals of your business. This module includes the following content: - Knowing the business you’re in: what does it do and how does it do it? - Corporate culture: structure, image and ethics - Applying business ethics in the workplace - Behaving appropriately in a business environment - Assessing one’s own behaviour in a business setting Ethics The subject of business ethics is complex. Influenced by their personal values, people sometimes have significant differences of opinion regarding what constitutes ethical behaviour and how ethical decisions should be made. This module explores the concepts and principles relating to personal ethics and how these interface in a work environment. - How personal value systems are formed and adapted - How personal value systems influence behaviour (values vs. 
ethics) - Understanding diversity: including race, gender, ethnic group, age, personality, cognitive style, tenure, organizational function, education and background - Frameworks for ethical conduct in the workplace: regulations, procedures and compliance - The nature and functioning of professional values in the workplace - Professional accountability - Professional values in an organisational context - Why value conflicts occur in the workplace - Handling value conflicts Personal Finance Our personal finance module discusses the need for and tools to prepare a personal budget. The focus is on managing financial resources so that this does not become a negative influence in the workplace. Content includes: - Understanding your payslip - Recording and analysing current spending - Investigating ways of controlling own finances - Recognising the need to save as part of own financial plan - Compiling a personal budget - Understanding the implications of credit contracts - Understanding the costs of accessing credit - Investigating the options for the financing of household items and - Understanding differing options provided by banks and the associated costs Personal Effectiveness Participants will learn how to operate more effectively in the workplace by being organised and developing positive working relationships. Course content includes: - Administrative procedures: company and legal frameworks - Information management: maintaining files and records - Plan and organise their own work - Time management - Team working - Relationship management - Conflict management - Communication Personal Development Our personal development module focuses on how to set short- and long-term goals for participants’ professional and personal development. This module aims to instil in participants a sense of ownership and accountability for their development. The content includes - Understanding how to set and prioritise tasks - Creating a personal vision, utilising a vision board and converting this into a personal development plan - Monitoring and adjusting your personal development plan - Operating as a member of a team Outcomes By the end of this programme, delegates will - Have bridged the gaps in attitude and behaviour between places of learning and the real workplace - Understand the factors beyond their technical training that will integrate them into your organisation and help them succeed throughout their careers - Have practical experience in applying key learning points - Take ownership for their conduct and performance at work - Behave in ways that enhance their effectiveness and productivity Method of delivery Carefully chosen theory is presented to provide context and learning takes place largely through activities and group discussions. Care is taken to ensure that all learners actively participate in the training sessions. Participants are given a training manual to help guide their thinking and behaviour in the workplace. Programme details Content and length is tailored according to your business context and the experience level(s) of the delegates. Our facilitators will run this programme at a venue convenient to you: either on- or off-site. The training day usually runs from 8:30am – 4:30pm.Find something interesting? Please SHARE it!
http://www.theofficecoach.co.za/?page_id=13
This new study, with input from the universities of York and Manchester, is said to fill a long-standing gap in the periodic table; astatine, atomic number 85, is the last element present in nature for which this fundamental property remained unknown. The element is of particular interest because isotopes of astatine could be used to create radiopharmaceuticals for cancer treatment by targeted alpha therapy. This research, co-funded by the Science and Technology Facilities Council (STFC) and published in Nature Communications, could help chemists to develop applications for astatine in radiotherapy, as well as developing theories that predict the structure of super-heavy elements. By looking at the ionization potential of astatine - namely, the energy needed to remove one electron from the atom, and thereby turning it into a positive ion - the scientists have been able to understand more about the chemical reactivity of astatine and the stability of its chemical bonds in compounds. Astatine is a naturally occurring trace element and less than 28 grams exist on Earth at any time. Physicists at ISOLDE can make artificial isotopes of astatine by bombarding uranium targets with high-energy protons. By shining a series of precisely wavelength-tuned lasers at the astatine atoms, the team that operates the resonance ionization laser ion source (RILIS) at ISOLDE measured the ionization potential of astatine to be 9.31751 electron volts. ‘None of the many short-lived isotopes used in medicine exist in nature; they have to be artificially produced by nuclear reactions,’ said Dr Bruce Marsh from CERN and Manchester University. ‘The possible medical isotopes of astatine are not so different in this respect. What is different about astatine is that its scarcity in nature makes it difficult to study by experiment, which is why this measurement of one of the fundamental properties is a significant achievement.’ ‘The experimental value for astatine also serves for benchmarking theories that predict the atomic and chemical properties of super-heavy elements, in particular a recently discovered element 117, which shares very similar characteristics to astatine,’ added York University’s Prof Andrei Andreyev in a statement.
https://www.theengineer.co.uk/content/news/rarest-natural-element-study-could-enable-new-radiotherapy
The Diversity Committee of the Minnesota Defense Lawyers Association seeks to promote diversity within its membership and the law firms in which its members work. We appreciate and embrace that our legal community and clientele come from a rich variety of diverse cultures, beliefs, perspectives and backgrounds. Through an open and inclusive membership, we hope to achieve a better understanding of the broader issues of diversity, as well as the cultural similarities and differences within our society, so that we may better serve the legal community and the people we represent. The Minnesota Defense Lawyers Association Diversity Committee will address issues of importance to the many different segments of our society, including issues of racial/ethnic diversity, gender diversity, age, sexual orientation, disability, religious and cultural beliefs. The Diversity Committee will also examine how its members’ firms may achieve success in recruiting, employing and maintaining a diverse legal staff. The goals of the Committee will be accomplished, not only through meetings and publications to identify and consider issues of diversity, but most importantly through proactively seeking out members of the defense bar from diverse and minority backgrounds to join the Minnesota Defense Lawyers Association and the Committee in this cause.
https://www.mdla.org/page/Cmte_Diversity
The number and complexity of control systems in wind turbines is expanding rapidly, and their design can be the difference between an immensely profitable system and a dormant or damaged system. Designing a robust control system requires an accurate model of the plant, and tools that enable rapid iteration to find the best design, not simply the first design that works. The control system must be as optimized as possible, while meeting multiple (and sometimes conflicting) system requirements. Pitch and yaw controllers must also interact with supervisory logic controllers in order to operate and protect the turbine under a wide range of operating conditions. The model of a complete wind turbine (including mechanical, electrical and hydraulic systems) will be used to show: • How to easily apply linear control theory to rapidly design controllers for nonlinear systems, and to verify their performance on the nonlinear system • How to use optimization algorithms to optimize system performance with respect to multiple design requirements • How to define supervisory logic using state machines • How to integrate and test all of these models in a single environment to test for integration issues and test overall system performance These points will be illustrated with demonstrations using the model and the simulation software. Experience with MATLAB and Simulink is helpful, but not required to learn from this webinar. You can download the model used in this webinar from MATLAB Central.
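The webinar itself works in MATLAB and Simulink, but the basic workflow it describes (linearize a nonlinear plant around an operating point, design a controller on the linear model, then verify it back on the nonlinear model) can be sketched in a few lines of Python. Everything below, including the one-state rotor model, the constants and the pole-placement target, is an illustrative assumption and not the turbine model used in the webinar:

```python
# Illustrative sketch only: a toy one-state rotor model, not the Simulink model from the webinar.
# All constants (inertia, torque curve, operating point, pole location) are assumed values.
import numpy as np

J = 4.0e6            # rotor inertia, kg*m^2 (assumed)
OMEGA_REF = 1.6      # desired rotor speed, rad/s (assumed)

def aero_torque(omega, pitch):
    """Crude nonlinear aerodynamic torque: falls off with rotor speed and blade pitch."""
    return 4.2e6 * (1.0 - 0.25 * omega) * np.cos(pitch) ** 3

def f(omega, pitch):
    """Nonlinear plant: d(omega)/dt with a constant generator load torque."""
    return (aero_torque(omega, pitch) - 2.0e6) / J

# 1) Linearize numerically around an operating point (central finite differences).
omega0, pitch0, eps = 1.6, 0.15, 1e-5
A = (f(omega0 + eps, pitch0) - f(omega0 - eps, pitch0)) / (2 * eps)   # df/domega
B = (f(omega0, pitch0 + eps) - f(omega0, pitch0 - eps)) / (2 * eps)   # df/dpitch

# 2) Design a proportional pitch gain on the linear model so that A + B*K = desired pole.
desired_pole = -0.5                      # rad/s, a design choice for illustration
K = (desired_pole - A) / B

# 3) Verify the gain back on the nonlinear plant with a simple Euler simulation.
dt, omega = 0.05, 1.4
for _ in range(int(60.0 / dt)):
    pitch = float(np.clip(pitch0 + K * (omega - OMEGA_REF), 0.0, 0.5))  # actuator limits
    omega += dt * f(omega, pitch)

print(f"A={A:.3f}, B={B:.3f}, K={K:.3f}, speed after 60 s = {omega:.2f} rad/s (target {OMEGA_REF})")
```

A proportional-only loop like this leaves a steady-state speed offset; the linearize, design and verify cycle is the point being illustrated, with optimization, supervisory logic and integration testing layered on top in the full workflow the webinar covers.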
https://au.mathworks.com/videos/designing-control-systems-for-wind-turbines-81629.html
Programme shifts focus to Muğla region to promote sustainable tourism 19 April 2022, İstanbul – After 15 years of pioneering work and US$2 million invested in promoting sustainable tourism in Turkey, the “Future is in Tourism” programme announced today that its focus for 2022 would be Muğla province, one of the Turkish regions hardest hit by last year’s forest fires. The venerable programme, the first in Turkey to focus on sustainable tourism, is implemented by the Ministry of Culture and Tourism, the United Nations Development Programme (UNDP) and Anadolu Efes. As in past iterations of the programme, the new activities in Muğla will promote community-based initiatives, “sustainable tourism” models, women’s entrepreneurship and local job creation. The project will develop five new alternative tourism routes; provide mentoring and other support to at least 50 women-led tourist businesses; and train 500 people in Muğla region in sustainable tourism. Local tourism companies that complete the training will receive a recommendation certificate. Recognizing the heightened threat to tourism posed by climate-driven disasters, the project will also conduct awareness raising activities on forest-fire risks for local residents and install water tanks at high-risk locations in the region to support quick reaction to forest fires. “We started this programme in 2007 to diversify tourism in Turkey away from simple ‘sea-sand-sun’ models to an environmentally sustainable approach that benefits local communities over 12 months and four seasons,” said Can Çaka, Group Head and CEO of Anadolu Efes Beer. “Over 15 years, we have developed successful alternatives together with UNDP and the Ministry of Culture and Tourism. Starting from this year, we’ll be in Muğla, which is extremely important in terms of domestic and foreign tourism. As Anadolu Efes, we stood by the people of the region during the forest fires last year. Now we take responsibility for the social and economic rehabilitation process of Muğla. In the new period, our efforts will contribute to the development of people, particularly women, and protection of natural and cultural heritage.” “Tourism is a crucial economic sector for Turkey,” said UNDP Resident Representative Louisa Vinton, “but it also puts a heavy burden on our delicate natural environment. Our efforts aim to develop models of tourism that generate income and jobs while protecting nature. In this respect, we see our partnership with Anadolu Efes and the Ministry of Culture and Tourism as a model of the public-private partnerships that will be vital for future green growth in Turkey.” “One of the Ministry’s responsibilities is to identify ways to diversify tourism activities extend tourism to 12 months and increase tourism income,” said Şennur Aldemir Doğan, Deputy General Manager of Directorate General of Investments and Enterprises at the Ministry of Culture and Tourism. “In this context, the ‘Future is in Tourism’ reflects a strong partnership structure through which we have be able to support sustainable tourism initiatives in different regions of Turkey for 15 years. In the new period, based on the experiences we aim to support local people and small businesses to generate income from tourism by improving the existing potential in Muğla, one of the important tourism destinations in Turkey.” Since 2007, the “Future is in Tourism” programme has engaged 200,000 people in 19 different projects and cooperated with 600 NGOs and 23 universities. 
The results of the programme include new employment opportunities for 500 women. Grants awarded under the programme – for example, for the “Lavender Scented Village” in Kuyucak or the “Kars Cheese Route” – have shown remarkable success in attracting new visitors and improving local livelihoods by highlighting distinctive local products, scenic landmarks and indigenous flora and fauna. For more information:
https://www.undp.org/turkey/press-releases/%E2%80%9Cfuture-tourism%E2%80%9D-has-invested-us2-million-turkey-over-15-years
Sports Medicine II Sports Medicine II builds on the knowledge and skills gained through Sports Medicine I and prepares the student for entry-level employment or further post-secondary training in the field of Sports Medicine. Students acquire advanced practical concepts of training room development, risk management, administrative and legal issues, and prevention, care, and treatment of athletic injuries. Further knowledge and skills related to body conditioning, nutrition, use of protective equipment, and awareness of environmental issues are incorporated. On-the-field and off-the-field assessment, prevention, and treatment of acute and non-acute injuries following standard precautions build students’ experience during practical application. Integrated throughout the course are standards for Career Ready Practice and Academic Content Standards which include: appropriate technical skills and academic knowledge; communication skills; career planning; applied technology; critical thinking and problem solving; personal health and financial literacy; citizenship, integrity, ethical leadership and effective management; work productively while integrating cultural and global competence; creativity and innovation; reliable research strategies, and environment, social and economic impacts of decisions.
https://www.tricitiesrop.org/high-school/classes/available-high-schools/item/sports-medicine-2
Filing For Bankruptcy On A Joint Account: Filed under: Bankruptcy When filing for bankruptcy protection there are many factors that can influence the outcome of your case. There are different types of debts that may or may not be eligible for discharge through bankruptcy. The type of account the debt is tied to may also play a role in the outcome of your bankruptcy case. An account that is shared with another individual, or joint account, can produce problems in a bankruptcy proceeding.
Debts In A Joint Account
People on a joint account are sharing liability for that account and any debts accumulated on those accounts. In the case of a bankruptcy filing, all persons listed as owners of an account can be affected by the bankruptcy. If one owner of an account files for bankruptcy and receives a discharge of the debt, the creditor can still pursue the remaining account owners. When a creditor is granted a seizure and liquidation of a joint account, all owners of that account are affected by the liquidation. Bankruptcy requires that all assets and debts be listed on the bankruptcy petition. Joint checking or savings accounts may be at risk in bankruptcy. However, this does not mean that all assets of joint accounts will be liquidated in order to satisfy the debt owed. Many times, one account owner may be able to prove they were solely responsible for the debt, and alleviate the remaining account owners from responsibility. Also, if one account owner can prove they own a certain amount of the assets, the remaining assets may be exempt from the risk of liquidation.
Joint Account Considerations
If you are one owner of a joint account, make sure you do not attempt to transfer assets to another party, including remaining owners. The bankruptcy court may view this action as your attempt to conceal an asset and consider it fraud. It is also a good idea to limit joint ownership of accounts. Generally, spouses should have only two or three joint accounts. To prevent any trouble for minors or dependent children, list them as authorized users on your account and not as account owners. It is also a good idea to review your state's bankruptcy exemptions to determine if there are any further laws regarding bankruptcy protection for certain types of joint accounts.
https://leebankruptcy.com/bankruptcy_blog/bankruptcy/filing-for-bankruptcy-on-a-joint-account/
So, do you like basketball and math? At least one of the two? I've got a fun exercise for you if you do. Rocky Top Talk reader kidbourbon sent the following question, which I am happy to answer: How do you use kenpom's pythag rating to get a predicted margin of victory? This is an excellent question! But first, let's talk briefly about what "pythag" is, then I'll show you how to use it to calculate score predictions. (I had a little discussion of pythag over at The BruceBall Blog awhile back if you're interested.) "pythag" is short for "Pythagorean Expected Winning Percentage." You can see why we like to abbreviate it, and "pythag" sounds a lot more appropriate than "PEWP." Anyway, the concept comes from baseball, which has a long and rich history of intense, complex statistical analysis. The basic idea was to calculate how many games a team should win, based on how many runs it scored and how many runs it allowed. It measures how well a team plays overall factoring out wins and losses due to luck or timing. Simple enough. The basic baseball formula is E(W%) = runs scored^2 / (runs scored^2+runs allowed^2). At their root, baseball and basketball work on the same principle-- you win by scoring more than you allow-- so it's natural to try and use the same formula to assess the strength of a team, particularly as it relates to predicting future outcomes. The major difference comes in two places: 1) many more points are scored in basketball, and 2) good teams win a far higher percentage of their games in basketball. Consequently, the exponents have to be changed to make it a realistic measure for basketball. Pomeroy, using the log5 formula and a whole slew of games, calculated that an exponent of 11.5 would be the most accurate for college basketball. Fine by me. So for NCAA basketball, the formula becomes E(W%) = points scored^11.5 / (points scored^11.5+points allowed^11.5). Of course, in NCAA basketball, no two teams play the same schedule. So Pomeroy adjusts the points scored and allowed to account for schedule strength. Each team has a number for scoring rate and defensive scoring rate, adjusted to an average schedule. Those are plugged in to the above formula, and voila. So there you go. A quick and dirty pythag introduction and a brief look at how it's calculated. Now, back to kidbourbon's question . . . so how do we use this number to predict margin of victory? I'll show you the series of required calculations, using the upcoming Kentucky@Tennessee game as an example. Now, Pomeroy reports points scored and points allowed on a per 100 possession basis, calling them offensive and defensive efficiency. Adjusted for schedule, Tennessee's numbers are 117.7 (offense) and 89.5 (defense). Kentucky's are 108.0 and 92.8. However, homecourt advantage is not accounted for in this-- so for predictive purposes, kenpom gives the home team a 1.4% bonus to offense and defense, and gives the visiting team the same bump in the other direction. After this adjustment, we have Vols: 119.3 offense, 88.2 defense Wildcats: 106.5 offense, 94.1 defense From these adjusted numbers, we could recalculate pythag for each team using the E(W%) formula above. When we do, we see that Tennessee's pythag is now 0.970 and Kentucky's is 0.806. This will allow us to look at Tennessee's probability of winning. I mentioned the log5 formula above, and that is what all of this is based on. The log5 formula gives the chance of a team winning a game, given their pythag and their opponent's pythag. For team A vs. 
team B, the log5 prediction for A's chance of winning is P(W) = (A - A * B) / (A + B - 2*A*B). For Tennessee, this becomes P(W) = (0.970 - 0.970 * 0.806) / (0.970 + 0.806 - 2 * 0.970 * 0.806) = 88.6%. If you look at kenpom's prediction for the UT-UK game, he lists the Vols' chances of winning at 89%. Bingo. But what about the margin? That's what kidbourbon is looking for, after all. First we have to know how many possessions to expect. Adjusted for schedule, Tennessee averages 72.3 possessions per game while Kentucky averages 64.9. Across all NCAA games, the average tempo is 67.3 possessions. So, Tennessee gets 107.4% of the average, and Kentucky gets 96.4%. To predict the tempo, we simply take UT's 107.4%, multiply it by Kentucky's 96.4%, and then multiply by the average number to get expected tempo: E(tempo) = 107.4% * 96.4% * 67.3 = 69.7, or 70 possessions. If you look at kenpom's prediction, you'll see this expected tempo in brackets next to the score prediction. With me so far? Good. As it turns out, the calculation for the predicted score is very similar to the calculation for the expected tempo, which is why I introduced that one first. For each team we'll need to compare their offense and defense to the average, just as we did with tempo. The average points per 100 possessions nationwide is 101.5. That means that Tennessee's offense is 117.6% of the average. Its defense is 86.9% of the average. For Kentucky, these numbers are 104.9% and 92.7%. Remember, we already added in the homecourt advantage, so these really represent how we expect UT to do at home and how we expect UK to do on the road. Now getting each team's expected output is easy. We simply multiply their offense by the opposing defense and then by the average output: Tennessee expected output = 117.6% * 92.7% * 101.5 = 110.6 Kentucky expected output = 104.9% * 86.9% * 101.5 = 92.6 Seem a little high? That's because these numbers are for 100 possessions. Recall that for this game, we expect 69.7 possessions. To adjust for that, multiply each output by (69.7/100). The result? Tennessee 77, Kentucky 65. If this seems like a long and complicated way to get a prediction for who wins and by how much, that's because it is. Thankfully for real games kenpom has already done the calculating for us. But it's nice to know how to do this just in case you want to look at some hypotheticals. Does that answer your question, kidbourbon? Filed under: kenpom, pythag, and expected scoring margin: a reader's question Share this story So, do you like basketball and math? At least one of the two? I've got a fun exercise for you if you do.
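For readers who would rather see the arithmetic in one place, here is a small Python sketch that reproduces the numbers above. The constants, exponent and team efficiencies are taken straight from the post; the variable names and the code structure are mine:

```python
# A sketch of the calculations walked through above, using the numbers from the post.
HOME_EDGE = 1.014          # the kenpom-style 1.4% home bump described in the article
NCAA_AVG_EFF = 101.5       # average points per 100 possessions
NCAA_AVG_TEMPO = 67.3      # average possessions per game
EXP = 11.5                 # pythag exponent for college basketball

def pythag(off_eff, def_eff, exp=EXP):
    """Pythagorean expected winning percentage from scoring rates."""
    return off_eff**exp / (off_eff**exp + def_eff**exp)

def log5(a, b):
    """Probability that team A beats team B given their pythag ratings."""
    return (a - a * b) / (a + b - 2 * a * b)

# Schedule-adjusted efficiencies from the article (Tennessee hosting Kentucky).
ut_off, ut_def = 117.7 * HOME_EDGE, 89.5 / HOME_EDGE
uk_off, uk_def = 108.0 / HOME_EDGE, 92.8 * HOME_EDGE

ut_pythag = pythag(ut_off, ut_def)          # ~0.970
uk_pythag = pythag(uk_off, uk_def)          # ~0.806
p_ut_wins = log5(ut_pythag, uk_pythag)      # ~0.886

# Expected tempo: each team's pace relative to average, multiplied together.
tempo = (72.3 / NCAA_AVG_TEMPO) * (64.9 / NCAA_AVG_TEMPO) * NCAA_AVG_TEMPO   # ~69.7

# Expected score: offense vs. opposing defense, relative to average, scaled to the tempo.
ut_pts = (ut_off / NCAA_AVG_EFF) * (uk_def / NCAA_AVG_EFF) * NCAA_AVG_EFF * tempo / 100
uk_pts = (uk_off / NCAA_AVG_EFF) * (ut_def / NCAA_AVG_EFF) * NCAA_AVG_EFF * tempo / 100

print(f"P(Tennessee wins) = {p_ut_wins:.1%}, expected tempo = {tempo:.1f} possessions")
print(f"Predicted score: Tennessee {ut_pts:.0f}, Kentucky {uk_pts:.0f}")
```

Running it gives roughly an 89% win probability, a 70-possession tempo and a 77-65 score, matching the walkthrough above.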
https://www.rockytoptalk.com/2008/2/27/165520/578
Increasing Deemed to Satisfy Height Limits for Timber Construction Cost Benefit Analysis
Organization: The Centre for International Economics | Publisher: Forest & Wood Products Australia | Year of Publication: 2015 | Format: Report | Application: Wood Building Systems | Topic: Cost | Keywords: NCC Mid-Rise | Research Status: Complete
Summary: This study undertook an analysis of net benefits obtained from increasing the height allowances for the deemed to satisfy (DTS) provisions in the National Construction Code (NCC) for timber construction. The analysis considered DTS provisions for up to 25 metres for building Classes 2 and 3 (multi-residential construction) and Class 5 (office construction). The most valuable benefit of using timber construction would be shorter construction times compared to traditional steel and concrete construction, with reduced foundation requirements; reduced need for additional services such as fixed cranes; and an increased ability for other trades to work concurrently through the construction process, thereby reducing final time to completion. The analysis estimated that using timber construction would result in cost savings of around $1.1 million for a 4 storey apartment building ($10.8 million traditional build cost reducing to $9.7 million), $1.3 million for a 5 storey apartment building and $1.6 million for a 6 storey apartment building, assuming ten apartments per floor. For a 6 storey commercial building, cost savings of $1.92 million might be expected. The report estimates that for the overall Australian economy, increased height allowances for timber construction in the NCC would bring approximately $103 million in net benefits over 10 years. This is made up of $98.2 million in direct construction cost savings; $3.8 million in reduced compliance costs; and $1 million in environmental benefits.
https://research.thinkwood.com/en/permalink/catalogue1929
The small vestibule which precedes the hall contains two elements, extremely common to synagogue interiors: an elegant basin for ritual hand washing and a receptacle for alms with an inscription paying tribute to the value of discrete charity. The interior is a jewel of Piedmont baroque architecture; functional and decorative elements have been designed in full harmony with the interior. In accordance with the region’s most widespread tradition, the hall features a central layout with a fulcrum in the splendid canopied and lacquered wood tevah, dated 1766. An aron hakodesh is located on the eastern wall, coordinated in style and equally rich in embellishments, in particular on the inner part of its doors featuring symbolic depictions of the Sanctuary of Jerusalem and its furnishings. A single bench for the public runs along the entire perimeter; the matroneum, in an elevated position, is located above the vestibule and faces the left side of the hall. There are numerous scrolls painted on plaster of intrinsic, commemorative and ornamental value, where poetic verses allude to dates of events and the names of donors who contributed towards the synagogue’s completion. via Bertini, 8 First floor without elevator. Guided tours only. For info and reservations:
https://www.visitjewishitaly.it/en/listing/synagogue-of-carmagnola/
FIELD OF THE INVENTION BACKGROUND OF THE INVENTION BRIEF DESCRIPTION OF THE INVENTION 0001 The invention relates to exercise and sports, particularly to applications in which recovery of a person from a fitness exercise performed by him/her is controlled. 0002 Recovery after exercising is important both to metabolism and muscle care. Stress pain resulting from exercising can be reduced considerably by a well-performed recovery exercise. In that case recovery is achieved in shorter time and the capability of the muscles and the system to perform the next exercise improves considerably. The most important function of the recovery exercise is to remove any lactic acid, i.e. lactate, accumulated in the body quickly and efficiently so that the lactate does not cause pain and post-exercise stress in the muscles. For this reason, the recovery exercise has to be performed at a stress level which prevents build-up of additional lactate, but enables effective removal of lactate from the body. Thus the recovery exercise is performed below the anaerobic threshold. 0003 Nowadays various instructions and rules are used in sports coaching and training to keep recovery exercise at a certain adequate level for a predetermined time. For example, the exercising person may be told to recover from exercising by walking for 10 minutes or by keeping the heart rate at 120 beats/minute for 10 minutes. 0004 The prior art method of recovering from a fitness exercise has considerable disadvantages. It is clear that the above-mentioned instructions are very general and by no means optimal for achieving as efficient recovery as possible. The above-mentioned instructions take the characteristics of an exerciser into account only indirectly, e.g. a coach may give different instructions for performing recovery exercise to athletes with different fitness levels. 0005 The object of the invention is to provide an improved method of controlling a fitness exercise. This is achieved with the method to be described in the following. The method concerns controlling recovery of a person from a fitness exercised performed by him/her. The method comprises controlling a recovery exercise following the fitness exercise so that it is performed at a heart rate level below the threshold value of heart rate, heart rate variation being higher than a preset threshold value of heart rate variation at heart rate levels lower than the-threshold value of heart rate. 0006 The invention also relates to a heart rate measuring arrangement. The heart rate measuring arrangement comprises measuring means for obtaining heart rate information, forming means for forming control information from the heart rate information obtained by measuring to control the recovery exercise, display means for presenting the formed control information. 0007 The preferred embodiments of the invention are disclosed in the dependent claims. 0008 The invention relates to a method and apparatus for controlling recovery of a person from a fitness exercise performed by him/her. In this description the fitness exercise refers to a physical exercise which is at least partly performed at a workload level exceeding the anaerobic level, in which case lactate is accumulated in the muscles of the person's body. The recovery exercise means the exercise phase that follows the actual fitness exercise or competitive exercise which is mainly performed at a workload level below the anaerobic level. Controlling means control information provided e.g. 
by a heart rate monitor, such as the heart rate level, the heart rate limits within which the recovery exercise should be performed, and the time preferably used for the recovery exercise. 0009 In a preferred embodiment of the invention, an anaerobic threshold value, i.e. the threshold value of heart rate, is found on the basis of changes in heart rate variation. Here heart rate variation means temporal variations in heart beats around the expected moments at which the heart should beat. In a preferred embodiment, the variation is calculated as a moving standard deviation, but it can also be calculated by another prior art mathematical method, e.g. by a method which utilizes the distribution function between the heart rate and the heart rate variation. As a function of heart rate, the heart rate variation naturally decreases as the heart rate, i.e. the heart beat frequency, increases. FIG. 1 illustrates variation as a function of heart rate, i.e. the x axis shows the heart rate as per cent of the maximum heart rate and the y axis shows standard deviation as milliseconds around the expected moment at which the heart should beat. FIG. 1 illustrates the dependency between the heart rate variation and the heart rate, which applies to the majority of people. When the heart rate level is e.g. 40% of the maximum heart rate, the heart rate variation is between 15 and 25 milliseconds. The maximum heart rate means the heart rate value that can be calculated e.g. by the formula 220-(age), in which case the maximum heart rate of a 40-year-old person is 180. The maximum heart rate can also be measured at the maximal workload or determined from the person's physiological properties using a neural network, for instance. It can be seen from FIG. 1 that as the heart rate level approaches the maximum heart rate, the heart rate variation decreases considerably. The angular point of heart rate variation, i.e. the change point, is reached at a heart rate level which is usually about 62 to 65% of the maximum heart rate, but may also vary in a wider range, e.g. 55 to 70% of the maximum heart rate. The change point of heart rate variation is connected to the anaerobic limit point of energy metabolism. It can be seen from FIG. 1 that the anaerobic limit point is at a slightly higher heart rate level, i.e. 15 to 25 beats higher, than the change point of heart rate variation. At heart rate levels above the anaerobic limit point exercise is anaerobic, whereas at heart rate levels below the limit point exercise is aerobic. The intersection point of the change point is about 4 milliseconds on the y axis, but may vary e.g. from 3 to 5 milliseconds. 0010 In this description the fitness exercise refers to a physical exercise which is at least partly performed at a workload level exceeding the anaerobic limit, in which case lactate accumulates in the muscles of the person's body. Lactate concentration can be estimated for a given period, e.g. a few hours before and after the fitness exercise, and thus the invention is not limited to the actual performance of the fitness exercise. A fitness exercise can be divided e.g. into the following phases: warm-up, active phase, recovery phase, in which case the fitness exercise is preceded and followed by a rest. Different phases can be defined and distinguished from one another e.g. on the basis of heart rate levels and/or workload levels.
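As a rough illustration of the measurement described above, heart rate variation computed as a moving standard deviation of beat-to-beat (R-R) intervals and compared against a threshold of roughly 4 ms, the following Python sketch shows one possible calculation. The window length, the synthetic data and the thresholding details are assumptions made only for illustration, not the patented method:

```python
# Illustrative sketch, not the patented implementation: compute heart rate and a moving
# standard deviation of R-R intervals, then flag windows where variation has fallen
# below ~4 ms (the change-point level cited above). Window length is an assumption.
from statistics import mean, stdev

def moving_hrv(rr_intervals_ms, window=30):
    """Return (heart_rate_bpm, rr_std_ms) for each sliding window of R-R intervals."""
    out = []
    for i in range(len(rr_intervals_ms) - window + 1):
        chunk = rr_intervals_ms[i:i + window]
        hr = 60_000.0 / mean(chunk)        # ms per beat -> beats per minute
        out.append((hr, stdev(chunk)))
    return out

def below_change_point(rr_intervals_ms, threshold_ms=4.0, window=30):
    """True for windows where variation is below the threshold (anaerobic side of the curve)."""
    return [(hr, sd, sd < threshold_ms) for hr, sd in moving_hrv(rr_intervals_ms, window)]

# Toy data: easy effort (long, variable intervals) followed by hard effort (short, steady ones).
easy = [850 + (i % 5) * 12 for i in range(60)]     # ~70 bpm, R-R spread well above 4 ms
hard = [430 + (i % 3) for i in range(60)]          # ~140 bpm, R-R spread around 1 ms
for hr, sd, anaerobic in below_change_point(easy + hard)[::30]:
    print(f"HR {hr:5.1f} bpm  RR-SD {sd:5.2f} ms  below 4 ms threshold: {anaerobic}")
```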
Then the recovery phase, for example, can be defined as an exercise level where the heart rate level drops from 130 beats/minute to a rest level of 70 beats/minute. The recovery phase is considered to begin when the heart rate level is below the limit of the active phase, i.e. 130 beats/minute, for two minutes, for instance. 0011 In a preferred embodiment of the invention the exercising person monitors his/her heart rate at least at the end of the fitness exercise. At the beginning of the recovery exercise, the exerciser starts to walk, for example, so that the heart rate drops to a heart rate value below the change point of heart rate variation. For the recovery to be maximally efficient, it should be performed as close to the change point as possible, i.e. at a heart rate which is about 55 to 60% of the maximum heart rate. 100 106 106 106 106 b b b. 0012 In another preferred embodiment of the invention, the physical condition of the exercising person is also taken into account in the calculation of the change point of heart rate variation. Physical condition can be defined e.g. as the maximal oxygen uptake, which can be determined e.g. by measuring the maximal oxygen uptake at the maximal workload or by forming an estimate by means of a neural network, into which one or more physiological parameters are fed as input parameters and/or several stress parameters that describe the workload. The physical condition affects curve shown in FIG. 1 so that the change point of heart rate variation of a fit person is at a higher heart rate level than that of an unfit person. However, the proportional share of heart rate variation of the maximum heart rate is the same for both these persons. Thus a fit person can exercise at a higher workload without the exercise being anaerobic. The distance between the points and depends on the person's condition and lactate properties. In the case of a person with a very good condition, for example, the distance between the points is larger, which is taken into account in an embodiment by considering the person's condition in calculation and determination of controlling. In determination of point a prior art lactate tolerance test, for example, is used. In the test, blood tests are used to locate the threshold of the angular coefficient of the lactate curve under stress, which corresponds to the heart rate level at point 202 200 204 206 204 206 200 208 210 0013 In a preferred embodiment recovery from a fitness exercise is controlled by means of the vanishing point of heart rate variation and the lactate that has accumulated in the body during exercise. According to an embodiment, the amount of lactate in the body is estimated by a two-part mathematical model which is described in greater detail in FIG. 2. In this specification the mathematical model refers to a set of mathematical operations and rules for determining output parameter values from the input parameter values. Mathematical operations include arithmetic operations, such as addition, subtraction and multiplication. The mathematical model can naturally be implemented as a table or a database, in which case the output parameter value corresponding to a given input parameter is read directly from the database. It is clear that the model may consist of only one part or of more than two parts. One or more parameters representing the person's heart rate, such as the average heart rate, standard deviation of the heart rate or the like, are fed into the first part of the model as input parameters. 
The input data of the model also comprise one or more stress parameters describing the exercise workload, such as running speed or pedalling speed of the exercise bike. The third set of input parameters for the model consists of one or more physiological parameters of the person, such as height, weight or gender. The above-mentioned input parameter sets to are optional, i.e. they may be included in the model separately, simultaneously, or be omitted from the model. In an embodiment of the invention the first part of the model is implemented as a neural network which has been trained with user data comprising information on hundreds or even thousands of users. In an embodiment of the invention, the first part of the model provides the person's stress level during exercise as the output. The output parameter set provided by the model represents one or more fitness parameters which describe the person's physical condition, such as the maximal oxygen uptake or the fitness index. 212 208 210 212 214 214 216 218 0014 The input parameters to be fed into the second part of the model include the above-mentioned information representing the exercise stress level and optionally one or two fitness parameters describing the user's condition. In a preferred embodiment of the invention, the second sub-model is a mathematically formed physiological model which gives the amount of lactate in the person's body as the output parameter on the basis of the input parameters. The amount of lactate is used as the input parameter of control routines which control removal of lactate from the body, using the control output for monitoring that the duration and efficiency of the recovery exercise are sufficient. 0015 In the solution of the invention for controlling a recovery exercise, the person whose recovery is to be monitored, preferably uses a heart rate monitor. The heart rate monitor is a device employed in sports and medicine, which measures human heart rate information either from an electrical impulse transmitted by the heart or from the pressure produced by the heart beat on an artery. Generally, the heart rate monitors comprise an electrode belt to be fitted around the user's chest to measure the heart rate by means of two or more electrodes. The electrode belt transmits the measured heart rate information inductively as one or more magnetic pulses per heart beat, for instance, to a wrist-worn receiver unit. On the basis of the received magnetic pulses, the receiver unit calculates the heart rate and, when needed, other heart rate variables, such as moving standard deviation of the heart rate. Often, the receiver unit, i.e. the wrist monitor, also comprises a display for showing the heart rate information to the exerciser and a user interface for the use of other facilities of the heart rate monitor. In the above-described situation, the heart rate monitor refers to the entity consisting of the electrode belt and the receiver unit. The heart rate monitor can also be a one-piece device in which the display means are located on the chest, and thus there is no need to transmit the information to a separate receiver unit. Further, the structure of the heart rate monitor can be such that it only comprises a wrist-worn monitor which operates without the electrode belt to be fitted around the chest, measuring the heart rate information from the vessel pressure. In the description of the invention, the heart rate measuring arrangement refers to the above-described heart rate monitor solutions. 
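The two-part model described above can be sketched as a simple pipeline: one function stands in for the first part (heart rate plus optional physiological and workload inputs mapped to a stress level), and a second accumulates an indirect lactate estimate from the stress history. The mappings and constants below are placeholders chosen only to make the sketch runnable; the text describes the first part as a neural network trained on data from hundreds or thousands of users, which is not reproduced here:

```python
# A structural sketch of the two-part model; the mappings themselves are placeholders.
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class ExerciseSample:
    heart_rate: float                        # beats/minute
    hr_stddev_ms: Optional[float] = None     # moving standard deviation of R-R intervals
    workload: Optional[float] = None         # e.g. running speed or pedalling rate

@dataclass
class UserProfile:
    max_heart_rate: float
    height_cm: Optional[float] = None
    weight_kg: Optional[float] = None

def stress_level(sample: ExerciseSample, user: UserProfile) -> float:
    """Part one: heart-rate (plus optional workload/physiology) -> stress level.
    Placeholder: fraction of maximum heart rate; the real model is a trained classifier."""
    return sample.heart_rate / user.max_heart_rate

def lactate_estimate(stress_history: Sequence[float], dt_s: float = 1.0) -> float:
    """Part two: accumulate an indirect lactate estimate from the stress history.
    Placeholder: production grows above ~0.65 of max HR, removal scales with the level."""
    la = 1.0                                   # resting level, mmol/l (assumed)
    for s in stress_history:
        production = max(0.0, s - 0.65) * 0.05
        removal = 0.01 * (la - 1.0)
        la += dt_s * (production - removal)
    return la

profile = UserProfile(max_heart_rate=180)
history = [stress_level(ExerciseSample(heart_rate=165), profile)] * 600   # 10 min hard effort
print(f"Estimated lactate after effort: {lactate_estimate(history):.1f} mmol/l")
```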
The heart rate measuring arrangement also comprises solutions in which heart rate information is transmitted to an external computer or to a data network, which has display means, such as a computer screen, for presenting the information measured or generated by the heart rate monitor. 0016 In a preferred embodiment of the invention, the functions required by the method of the invention are performed in the receiving unit if the heart rate monitor consists of two pieces. One or more mathematical models of the invention and other functions required by the model are preferably implemented by software for a general-purpose processor. The models and functions can also be implemented as ASIC, with separate logic components or in a corresponding manner. In a preferred embodiment of the invention, the heart rate monitor comprises means for feeding user-specific physiological information, stress information and information on a fitness exercise. The feeding means can be, for instance, a keypad of the heart rate monitor, display equipment that supports control, a speech controller, a telecommunications port for external control or the like. The heart rate monitor also preferably comprises means for controlling the exerciser during a recovery exercise. The controlling means can be e.g. the display of a heart rate monitor, a speech controller, a telecommunications port for transmitting information to external means, such as a computer, or the like. 0017 An advantage of the invention is that it provides more accurate controlling of recovery from a fitness exercise than the prior art methods. BRIEF DESCRIPTION OF THE DRAWINGS 0018 In the following, the invention will be described in greater detail with reference to the accompanying drawings, in which 0019 FIG. 1 illustrates change of heart rate variation as a function of heart rate, 0020 FIG. 2A is a block diagram illustrating a model structure according to an embodiment of the invention, 0021 FIG. 2B is a block diagram illustrating a model structure according to an embodiment of the invention, 0022 FIG. 3A is a method chart of an embodiment according to the invention, 0023 FIG. 3B is a method chart of an embodiment according to the invention, 0024 FIG. 3C is a method chart of an embodiment according to the invention, 0025 FIG. 4A illustrates an embodiment of the heart rate measuring arrangement according to the invention, 0026 FIG. 4B illustrates the electrode belt shown in FIG. 4A from the side to be fitted against the body of the person to be measured, 0027 FIG. 4C illustrates an embodiment of a two-piece heart rate monitor. DETAILED DESCRIPTION OF THE INVENTION 0028 In the following, the invention will be described by means of preferred embodiments with reference to FIGS. 1 to 4C. In an embodiment, the change point of heart rate variation, which can be determined from the maximum heart rate of a person, is used for controlling recovery. At heart rate values exceeding the change point the heart rate variation drops below the threshold value of heart rate variation. The majority of people have a threshold value of 4 ms. When the fitness exercise is finished or the workload is reduced, the heart rate starts to decrease and thus the heart rate variation begins to increase. When the variation exceeds the threshold value, recovery can be deemed to have begun. Heart rate variation can be utilized for controlling the recovery exercise e.g. so that a 10-minute recovery exercise is performed at a workload which keeps the heart rate variation between 6 and 8 ms.
Naturally the heart rate can also be employed for controlling recovery because the heart rate and the heart rate variation correlate with each other. 3 200 302 304 302 304 0029FIGS. 3A to C illustrate a preferred embodiment of the method according to the invention. A neural network , which is shown in FIG. 2, is formed in steps to of FIG. 3A. The invention is not limited to the use of a neural network as the model for determining the stress level on the basis of the heart rate information, but some other prior art classifier can also be used. In step parameters are selected for the model. The parameters that describe the person's heart rate information are compulsory, whereas physiological parameters and stress parameters are optional. Parameters that represent the heart rate information include the heart rate, standard deviation of the heart rate, change rate of the heart rate and similar parameters that can be calculated from the heart rate. According to an embodiment, only the heart rate is used in the model, but the above-mentioned heart rate parameters can be combined in various ways as the input parameters of the model. Physiological parameters are optional input parameters of the model. One or more physiological parameters, such as the person's age, gender, height, weight and the like can be inserted into the model. Furthermore, the input parameters of the model may comprise one or more stress parameters that describe the exercise workload. The stress parameters typically comprise e.g. running speed, pedalling speed of the exercise bike or a similar parameter. In step the neural network model is trained, i.e. the weighting coefficients of the model parameters are matched. The model is preferably trained by means of a large set of users, which includes e.g. over 1000 users. The larger the number of users taken into account in training of the model and the more heterogeneous the group used in respect of its physiological properties and physical condition, the better the model parameters can be matched. In an embodiment of the invention, the model yields the person's stress level during exercise, which can be used for controlling recovery. The stress level is expressed e.g. as workload/time unit. In an embodiment the model yields one or more fitness parameters describing the person's physical condition as output parameters. The fitness parameter may be e.g. the maximal oxygen uptake or the fitness index. In a preferred embodiment the model yields the amount of lactate in the body, which is used in the controlling of recovery. The model is preferably calibrated by real user data before the actual use. In respect of lactate this means that the real amount of lactate in blood is measured a few times during an exercise, and the real measurement result is fed into the model, which calibrates the model parameters by means of feedback so that the real measured value is obtainable by the model. As a result of calibration the model yields better and more accurate estimates of the amount of lactate in blood during the actual use. 306 0030 In the following, the generation and principles of a physiological model related to step will be described on the basis of physiological properties of a person. The efficiency of a fitness exercise can be described as exercise intensity in relation to time. The intensity can be examined as heart beat frequency in relation to time. 
0030 In the following, the generation and principles of a physiological model related to step 306 will be described on the basis of the physiological properties of a person. The efficiency of a fitness exercise can be described as exercise intensity in relation to time. The intensity can be examined as heart beat frequency in relation to time. However, if momentary exercise intensity is examined this way, the result will be only a momentary rough estimate of the stress level, which gives hardly any information on the exercise performed. The effect of long-term exercise stress depends on the individual, i.e. a fit person sustains stress better than an unfit person. For example, both persons may be able to perform exactly the same exercise at the same intensity, but the exercise affects the two persons differently: the fit person does not become significantly exhausted whereas the unfit person performs the same exercise at the extreme limits of his/her capacity. The influence of momentary stress on an individual and on the stress level experienced by the individual during exercise depends on previous stress.
0031 In training, it is important to know the amount of cumulative stress, which increases under hard stress and decreases at rest. The concentration of lactate, i.e. lactic acid, in blood represents the cumulative stress well. The amount of lactate is the only indicator by which the cumulative stress can be measured in practice. The amount of lactate can be measured by taking a blood sample, which is analysed. This is, however, slow and requires a complex measuring arrangement. The present invention provides a non-invasive and indirect method of measuring lactate and utilizing the information on the amount of lactate in the body for controlling recovery from a fitness exercise. Referring to FIG. 2, the amount of lactate is formed as a function of heart rate information during the fitness exercise, and in the following recovery exercise the removal of lactate from the body is monitored on the basis of the heart rate information.
0032 The physiological basis for the models illustrated in FIG. 2 is obtained from human energy metabolism. Muscles receive the energy needed for exercise from ATP (adenosine triphosphate). The ATP deficiency resulting from exercise should be replenished by producing new ATP from the energy reserves. For the first 10 to 15 seconds after the exercise stress starts, creatine reserves are sufficient for producing the ATP needed by the muscles. After this, the energy obtained from the glucose in the body can be used. Fatty acids cannot be utilized until about 15 minutes after the onset of exercise stress. In short maximal stress lasting only for tens of seconds, energy production is always chiefly anaerobic. In exercise stress of a few seconds energy is produced mainly by alactic processes by means of creatine phosphate. However, the creatine phosphate reserves are small, and after ten seconds of exercise stress energy is produced mainly by lactic processes. In longer maximal stress lasting for dozens of minutes the proportion of aerobic energy production increases. However, long-term stress employs nearly the same energy production mechanisms as short-term stress. Carbohydrates from food provide glucose, which is stored in muscles as glycogen. In glycolysis glucose degrades and releases energy. The reaction may take place either aerobically or anaerobically.
0033 Aerobic case:
0034 glucose + O2 → CO2 + H2O + energy.
0035 Anaerobic case:
0036 glucose → CO2 + H2O + lactate + energy.
0037 Lactate is thus produced in anaerobic glycolysis. The amount of lactate, i.e. lactic acid, in the body is a balance reaction. A small but equal amount of lactate is produced and removed at rest. If the workload increases, more lactate is produced, but the lactate removal mechanisms also start to function at a higher rate.
If the stress level does not rise to a very high level, the lactate removal mechanisms can remove lactate at the same rate as it is produced. In hard exercise stress, more lactate is produced than removed. This leads to a rapid increase of the lactate level in the body and to exhaustion. The lactate removal mechanisms of a fit person are efficient and quick to react. A lactate surplus is generated when the production rate of lactate is higher than the removal rate. A lactate curve, which presents the amount of lactate as a function of heart rate, represents in a way the person's condition. The curve of an unfit person grows more evenly than that of a fit person. The lactate level of a fit person is relatively low up to a rather high stress level. At a certain stress level, known as the lactate threshold, the curve rises steeply. This curve form can be explained by the fact that the lactate removal mechanisms of a fit person are efficient and react rapidly to the elevated production rate of lactate. The amount of lactate does not increase significantly until the maximal lactate removal rate is reached. Correspondingly, the lactate removal mechanisms of an unfit person are weaker and follow the elevated lactate production rate with a minor delay. If the lactate curve is known, it is easy to plan exercise and recovery. Exercise should take place within the lactate curve area in which development is desired because exercise generally improves blood circulation, and consequently the efficiency of the removal mechanisms. The recovery exercise, on the other hand, should be performed at a level at which the lactate removal mechanisms function maximally, i.e. preferably at a stress level just below the anaerobic limit.
0038 An embodiment of the physiological model is shown in FIG. 2B. The person's stress level 208 during exercise is fed into the model as input data. A delay unit 212A enables presentation of the model as a neural network according to an embodiment. Since the physiological model is preferably in the form of a differential equation, a discrete form of the model can be implemented by a delay unit which provides feedback. The physiological model with units 212B to 212D can be expressed simply by formula (1):
d la(t)/dt = (1/V) [Ra(t) - Rd(t)]   (1)
0039 where la(t) is the lactate concentration, Ra(t) is the lactate production rate, Rd(t) is the lactate removal rate and V is the lactate breakdown rate. Parameter k in the model represents the dependency of Ra on the stress level 208, a second parameter represents the dependency of Rd on the stress level, n represents the dependency of Rd on la, and s the dependency of Rd on Rdmax. The above-mentioned parameters of the model may be adapted to their user-specific optimal values on the basis of a reference exercise performed by the user. In the reference exercise the stress parameters are accurately defined.
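The discrete, delay-unit form mentioned above is essentially a one-step feedback update of formula (1). As an illustration only, the following Python sketch integrates the equation with a forward Euler step; the concrete shapes of Ra and Rd and every parameter value are placeholders invented for the example, since the text only names the dependencies (Ra on the stress level, Rd on la and on Rdmax).

```python
def simulate_lactate(stress_levels, dt=1.0, V=1.0, k=0.05, n=0.1, rd_max=2.0, la0=1.0):
    """Forward-Euler (delay-unit) discretisation of formula (1):
        la[t+1] = la[t] + dt * (Ra(t) - Rd(t)) / V
    Ra and Rd below are placeholder forms chosen only to mirror the stated
    dependencies; the text does not give their exact shape, and all parameter
    values here are invented for illustration."""
    la = la0
    trace = [la]
    for s in stress_levels:                 # one stress-level sample per time step
        ra = k * s                          # production grows with the stress level
        rd = min(rd_max, n * la)            # removal grows with la, saturates at Rdmax
        la = max(0.0, la + dt * (ra - rd) / V)
        trace.append(la)
    return trace

# Example: 30 minutes of hard exercise followed by 30 minutes of recovery exercise.
profile = [150] * 30 + [60] * 30            # stress level in arbitrary workload units
lactate_curve = simulate_lactate(profile)
```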
0040 As output the model provides the lactate amount 214 in the body, which can be fed into a controller 216 monitoring recovery according to FIG. 2A. Instead of the controller 216, a person, i.e. the exerciser or his/her coach, can also control the recovery exercise. The controller preferably monitors that the amount of lactate remains below the threshold value, in which case the lactate removal rate is higher than the lactate production rate. Furthermore, the controller preferably monitors the duration of the recovery exercise, i.e. the fact that a sufficient amount of lactate is removed from the body to avoid stress pain caused by lactate. In a preferred embodiment of the invention the controller also receives information on the heart rate, and, in addition to lactate values, the controller uses the change point of heart rate variation in controlling recovery from a fitness exercise. The control function provided by the controller can be supplied e.g. to the display of the heart rate monitor or provided for the user as a voice message.
0041 FIG. 3B illustrates feeding of user-specific information into the model and adaptation of the model to a certain user. In step 322 user-specific physiological parameters are fed into the mathematical model. The mathematical model, such as a neural network model, and the physiological parameters are used for forming a rough estimate of the user's physical condition. In step 324 stress parameters corresponding to a reference exercise are fed into the model. A reference exercise, e.g. a 12-minute running exercise in accordance with the Cooper test, is performed in step 326. On the basis of the running speed and heart rate, a specified estimate of the user's condition can be formed using the fitness parameters, such as the maximal oxygen uptake.
0042 FIG. 3C illustrates use of a solution implemented by the method of the invention. In step 342 the user's maximum heart rate is calculated by the formula 220 - age, and thus the maximum heart rate of a 30-year-old person, for example, is 190. In step 344 the person's threshold heart rate is calculated, which is 63% of the maximum heart rate, i.e. 120 in this case. The threshold heart rate value is thus a rough estimate of the person's anaerobic limit. According to an embodiment of the invention, the person's condition is used for specifying the threshold heart rate. For example, if 10 beats are added to the threshold heart rate value of a fit person, the person's threshold heart rate will be 130. In step 346 the person carries out the actual fitness exercise, the recovery from which is to be controlled according to the invention. The models described in connection with FIGS. 2A to 2B are used for estimating the lactate concentration in the person's body as a function of heart rate. In step 348 the fitness exercise is monitored and finished, after which the person starts a recovery exercise, which, according to the example, should also be performed at a relatively heavy workload, at a heart rate slightly below 130. According to a preferred embodiment, the controller shown in FIG. 2B or the exerciser or another person monitors the amount of lactate during recovery, i.e. the fact that the lactate removal rate is optimal and at least exceeds the production rate. A sufficient duration can be determined for the recovery exercise from the threshold value formed for the amount of lactate in accordance with step 350.
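The arithmetic in steps 342 and 344 is simple enough to show directly. The small sketch below reproduces the 220 - age rule, the 63% threshold and the example adjustment of +10 beats for a person in good condition; the function name and the rounding are choices made only for this illustration.

```python
def threshold_heart_rate(age, fit=False):
    """Rough estimate of the anaerobic limit as described above:
    maximum heart rate by the 220 - age formula, threshold at 63% of it,
    plus the example adjustment of +10 beats for a fit person."""
    max_hr = 220 - age
    threshold = round(0.63 * max_hr)
    if fit:
        threshold += 10
    return max_hr, threshold

# Example from the text: a 30-year-old person.
max_hr, thr = threshold_heart_rate(30)           # -> (190, 120)
_, thr_fit = threshold_heart_rate(30, fit=True)  # -> threshold 130 for a fit person
```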
0043 FIG. 4A shows a person 400 who performs an exercise on a treadmill 406. The heart rate of the person is measured by means of a transmitter electrode belt 402 fitted around the chest. The heart rate is measured by two or more electrodes 410A to 410B of the transmitter electrode belt 402, between which a difference of potential is created as the heart beats. The transmitter electrode belt 402 is fitted around the person's chest by means of an elastic band made of elastic material, for instance. The measured heart rate is preferably transmitted inductively to a wrist-worn receiver 404, which preferably also comprises a display for showing the measured heart rate. The invention is also applicable to heart rate monitors in which the electrode belt on the chest, in addition to measuring, takes care of storing, processing and displaying the heart rate information, and thus the wrist-worn receiver unit is unnecessary. The heart rate monitor can also be a single wrist-worn device, in which the transmitter part and the receiver part are integrated into one single device, and thus transmitter and receiver electronics are unnecessary. The heart beat can be measured from the wrist, either from an ECG signal, from the arterial pressure pulse or by optically observing changes in the absorption or reflection of blood circulation.
0044 FIG. 4B shows the electrode belt 402 of FIG. 4A in greater detail. In FIG. 4B, the electrode belt 402 is illustrated from the side of the electrodes 410A to 410B, i.e. from the side facing the body. The figure also shows securing means 416A to 416B, by which the electrode belt 402 can be secured to an elastic band to be fitted around the chest. In FIG. 4B, an electronic unit 412 for processing the heart rate information obtained from the electrodes 410A to 410B is illustrated by a broken line. The electrodes 410A and 410B are connected to the electronic unit 412 with conductors 414A and 414B, respectively.
0045 FIG. 4C illustrates the structures of the transmitter electrode belt 402 and the receiver 404 by means of an embodiment. The topmost part of the figure shows the transmitter electrode belt 402, the middle part shows a sample of the heart rate information to be transmitted and the bottom part shows the essential parts of the receiver unit 404. The electronic unit 412 of the transmitter electrode belt 402 receives heart rate information from the means 410A to 410B for measuring one or more heart rate parameters. The measuring means 410A to 410B are preferably electrodes. The heart rate monitor comprises at least two electrodes, but more can also be used. From the electrodes the signal is applied to an ECG preamplifier 420, from which the signal is transferred via an AGC amplifier 422 and a power amplifier 424 to a transmitter 426. The transmitter 426 is preferably implemented as a coil which sends the heart rate information inductively to a receiver, such as a wrist-worn receiver unit 404, or to an external computer, for instance.
0046 One 5 kHz burst 432A corresponds to one heart beat, for instance, or a cluster of a plurality of bursts 432A to 432C may correspond to one beat. The intervals 430A to 430B of the bursts 432A to 432C can be equal or different in duration, as appears from FIG. 4C. Information can be transmitted inductively, or alternatively, optically or via a conductor, for instance.
In an embodiment, the receiver 404, such as the wrist-worn receiver, comprises a receiver coil 440, from which the received signal is applied through a signal receiver 442 to a central processor 444, which coordinates the operation of the different parts. The receiver 404 preferably also comprises a memory 448 for storing the heart rate information and display means 450 for presenting the heart rate or the heart rate parameters derived from it, such as the standard deviation. In a preferred embodiment of the invention the display means 450 also show information needed to control recovery, such as the heart rate level at which the recovery exercise is optimal and the recovery exercise duration. The display means 450 can also show the amount of lactate in the person's body or the person's stress level, for instance. The display means 450 are, for example, the display of a heart rate monitor or a speech controller. In preferred embodiments the display means 450 may also comprise means for transmitting the heart rate and/or feedback information to an external computer or data network. The transmitting means can be implemented e.g. as an induction coil, an optical transmitter, or a connector for transmission via a connecting line. A heart rate measuring arrangement is in question if the information measured or generated by the heart rate monitor is transmitted to equipment outside the heart rate monitor, such as a computer. According to a preferred embodiment, the display means are then located in the computer, by which the information measured in real time or stored in the memory 448 of the heart rate monitor can be displayed.
0047 The heart rate monitor further comprises forming means 452 for forming control information from the measured heart rate information for controlling the recovery exercise. The forming means 452 are preferably implemented as the heart rate monitor's calculating unit. The calculating unit 452 preferably implements the functions required by the method of the invention for forming control information, such as location of the change point of heart rate variation in one embodiment or calculation of a person's maximum heart rate value on the basis of age. In a preferred embodiment the calculating unit 452 comprises a mathematical model 200, which by means of input parameters provides e.g. the amount of lactate in the body and/or the person's stress level as output parameters. In that case, the calculating unit 452 forms the control information using the output parameters provided by the model 200. The output parameters provided by the model can naturally be presented as such by the display means 450 of the heart rate monitor. It is clear that the calculating unit need not be implemented as a separate device unit but the calculating unit and the mathematical model 200 included therein can be part of the central processor 444. Further, it is clear that the heart rate monitor need not necessarily comprise a separate calculating unit but the model can be implemented in the central processor 444, for instance. The heart rate monitor, i.e. the receiver 404 in the solution of FIG. 4C, preferably comprises feeding means 446, such as a keypad or speech controller means. The feeding means 446 can be used e.g. for feeding the physiological parameters and stress parameters required by the model 200.
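Not part of the original text, the following is a minimal, self-contained sketch of how a calculating unit could derive the heart rate and a simple heart rate variation figure (standard deviation of the beat-to-beat intervals) from received beat timestamps. The timestamps, the units and the function name are assumptions made for the example.

```python
from statistics import mean, stdev

def heart_rate_parameters(beat_times_s):
    """Derive basic heart rate parameters from received beat timestamps (seconds).
    Returns average heart rate (beats per minute) and heart rate variation,
    here expressed as the standard deviation of the beat-to-beat intervals in ms."""
    intervals_ms = [(b - a) * 1000.0 for a, b in zip(beat_times_s, beat_times_s[1:])]
    if len(intervals_ms) < 2:
        raise ValueError("need at least three beats")
    hr_bpm = 60000.0 / mean(intervals_ms)
    hrv_ms = stdev(intervals_ms)
    return hr_bpm, hrv_ms

# Example: beats roughly every 0.5 s (~120 bpm) with a few milliseconds of jitter.
beats = [0.000, 0.498, 1.001, 1.497, 2.002, 2.499]
hr, hrv = heart_rate_parameters(beats)
```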
0048 In a preferred embodiment of the invention the functions, means and one or more models implementing the method steps of the invention are implemented by means of software in a general-purpose processor. Said means can also be implemented as ASIC, by separate logic components or by any corresponding known method.
0049 In the embodiment of FIG. 4C the heart rate monitor refers to an entity consisting of the transmitter electrode belt 402 and the receiver 404. In an embodiment, the heart rate monitor can also be implemented so that the above-described functions included in the transmitter electrode belt 402 and the receiver 404 are located in one device. This one-piece device can be either fitted on the chest for heart rate measurement, or alternatively, worn on the wrist. It is obvious to a person skilled in the art that the electrode belt 402 and the receiver 404 may also comprise other parts than those shown in FIGS. 4B and 4C, but it is not relevant to describe them here.
0050 Even though the invention has been described by means of examples according to the attached drawings, it is clear that the invention is not limited to them, but may be modified in various ways within the inventive concept defined in the appended claims.
You've probably heard about mitochondria back in your Biology 101 class. Allow us to refresh your memory if you can't recall a darn thing about it. Mitochondria are cell organelles (tiny structures that have specific actions within a cell), animating every living cell inside your body. Mitochondria are found in almost all living cells and produce adenosine triphosphate (ATP), the main energy molecule used by all cells. Due to this critical, energy-generating role, the mitochondrion (singular of mitochondria) is often referred to as "the powerhouse of the cell"7. Mitochondria are found in all eukaryotes, a term that covers every living organism except bacteria and archaea7 (forms of microorganisms that thrive in acidic or oxygen-free environments such as hot springs and marshes)16. Inside the cell, the mitochondria conduct the process of cellular (aerobic) respiration. Cellular respiration requires oxygen, gives off carbon dioxide, and produces ATP7. A specific part of cellular respiration is the citric acid cycle (i.e., the Krebs cycle) inside the mitochondria. The Krebs cycle breaks down organic (carbon-based) fuel molecules from food in the presence of oxygen in order to harvest the energy stored in the food we eat, giving our cells the fuel they need to grow and divide21; the byproduct of this cycle is that we exhale our food in the form of CO2. In other words, if it weren't for mitochondria, the cells in your body would be unable to grow, multiply and form new cells and tissues, or, as per a series of studies, maintain a healthy body14,19,4. Inevitably, mitochondria have a pivotal role in almost every single disease you might develop, from the seasonal flu to viral infections to life-threatening conditions such as stroke and cancer19,12,4. The Different Functions of Mitochondria Besides the cellular process of food-to-energy conversion, mitochondria are engaged in a number of other processes at the cellular level. Only recently have we begun to understand how these processes are key to arming our body's immune system. Mitochondria maintain heart health. This refers to the process of calcium flow through the mitochondria inside living cells. One recent study observed what happens when the normal calcium-exchanging function of mitochondria is disrupted in adult laboratory mice. Eliminating the normal mitochondrial calcium exchange resulted in sudden death of the mice, notably from heart dysfunction and inflammation of the heart muscle. Fewer than 13% of the affected mice survived two weeks after this lethal mitochondrial intervention13. The mitochondria are thus very important in heart health and calcium signaling. Mitochondria strengthen the body's immune system. Our immune system is a shield which does a remarkable job of defending against disease-causing pathogens9. That is possible thanks to the mitochondrial antiviral signaling protein (MAVS), which is crucial for helping the body adapt against incoming viral infections. MAVS has a major role in the response against all viruses3, as well as in shielding us from chronic and inflammatory diseases such as colitis (inflammation of the colon)11,6. During infection, the tiredness you feel owes to the fact that mitochondria have suddenly shifted attention to boosting your immune system function at the cost of producing less ATP (energy)14. Mitochondria are also important to stem cells.
Stem cells are the body's raw material: cells from which all other cells with specialized functions are generated8. Stem cells can be found in embryos and in almost all adult tissues8 and are known for their regenerative properties. That is why researchers are working on how best to utilize them in clinical practice for treating various diseases. The mitochondria produce reactive oxygen species (ROS), and too many ROS can cause oxidative damage in many diseases15. Scientific research has revealed that ROS can determine the fate of adult stem cells1. Increased ROS correlates with a decline in the stem cells' regeneration capability but can also promote stem cells' progenitor commitment and differentiation (their ability to give rise to cells with specialized functions)1. In this context, Light Therapy can have a meaningful role. It can help alleviate damage caused by excessive ROS, and it can promote stem cell production without causing adverse effects on the human body. Mitochondria also correlate to programmed cell death. Different names describe programmed cell death: apoptosis, autophagy, and necrosis4. And the mitochondria have a key role. Changes or disruptions in mitochondrial processes (like the normal production of ROS or ATP) have been linked with the different kinds of cell death4. For example, in cases of stroke, there is an abrupt blockage in the blood supply to the brain. During a stroke event, cells begin to die. Depending on where the dead tissue forms in the brain, the affected person can experience speech, sight or movement impairments19. The mitochondria are directly affected by the stroke because suddenly, without oxygen, they cannot perform their most vital function of being the cells' powerhouses. Subsequently and gradually, the mitochondria are forced to change properties and release an apoptogenic protein, promoting apoptotic and necrotic cell death19, which is also why an immediate medical response to stroke is critical to prevent heavy brain damage and to restore regular blood supply to the brain and the mitochondria as soon as possible. Cell death isn't always bad, however. In fact, it's essential in treating cancer10,12. Cancer cells are notorious for dividing relentlessly, forming tumors, flooding the blood with abnormal cells, and requiring rigorous therapies to combat and eliminate them from the body5. Light Therapy can help by alleviating the side-effects of cancer treatments. How to Keep Your Mitochondria Healthy? Wherever you look, the evidence is clear: your body relies on mitochondria for every physiological process. Poorly performing mitochondria can increase the risk of disease developing in your body and manifest in various chronic illness symptoms14. Below, see how you can support the health of your mitochondria: - Engage in regular physical activity. Exercise (especially aerobic exercise such as running, swimming or cycling) is generally good for your health and can be especially beneficial for your mitochondrial health, as these types of activities promote mitochondrial balance and help the mitochondria's restoration process14. - Pay attention to your diet. You want to avoid consuming sugary and processed food, traditionally regarded as a risk factor for various diseases. Include fresh fruits and vegetables in your diet, and food that is high in antioxidants such as dark chocolate, nuts, berries, artichokes, beans, cabbage or spinach18, 14.
If you are struggling with weight loss, you can also utilize Light Therapy to help decrease the number of fat cells, naturally reduce cellulite or decrease overactive hunger hormones. - Reduce stress, take time to relax, get enough sleep. Stress is a natural physical and mental reaction to life experience2, and there's no easy way around it. Stress can manifest itself in a number of ways, including insomnia, diarrhea, frequent urination, rapid breathing, and tachycardia (rapid heart rate)20. Scientific evidence further supports the notion that acute and chronic stressors influence various aspects of mitochondrial biology17. Sport can help you combat stress, as will cutting down on caffeine, alcohol, and nicotine if these are your vices. Products containing the above-mentioned substances may help you feel better in the moment, but overconsumption in the long run will certainly have a negative impact on your health and well-being. Ease your stress with some meditation, enough hours of sleep or by talking to someone. Lastly, you can always enhance mitochondrial performance with Light Therapy as well. When red, near-infrared, and infrared light is correctly applied to an area of the body that requires support and healing, it essentially creates a more friendly environment for mitochondria. Light Therapy supports health by reducing excessive oxidative stress at the cellular level, which in turn allows the mitochondria to balance performance and functions. If you want to experience the benefits of Light Therapy first-hand, book an appointment here. Even if you simply want to talk and arrange a consultation regarding your health, drop us a message on Facebook or Twitter. Our team would be happy to respond. Follow our Light Lounge™ blog to get the latest health updates and tips to improve your health and lifestyle.
https://blog.lightlounge.life/what-is-mitochondria-and-how-it-relates-to-your-overall-health/
Australia's Tourism and Transport Forum has announced its intentions to roll out a contactless ticketing system in Sydney modeled after Southeast Queensland's Go Card system, according to The Sydney Morning Herald. "Our research has found that when it comes to integrated smart card ticketing in Australia, the Go Card is the best there is, and other states are taking notice," said the Forum's managing director Christopher Brown. According to Mr. Brown, who claims smart card ticketing is "the way forward" for public transit, the NSW government has recently contracted Cubic Transportation Systems, the company behind Go Card, to design and implement a similar system for Sydney. Go Card is currently in use on southeast Queensland's rail, bus and ferry networks.
https://www.secureidnews.com/news-item/sydney-to-adopt-se-queenslands-go-card-system/
My narrative collage-paintings are an evolving, deeply personal reflection on my life and my family’s colorful history. I create two-dimensional vignettes made up of many layers. My paintings are created in oils on wood panel or canvas, but I often use a variety of processes and materials to achieve the finished work. I layer paint, cut-paper collage, different transparent tissues, vellums, fabrics, tin milagros and religious medals, even candy wrappers to create a dimensional and cohesive whole. Layering colors and textures, my aim is to create a shifting, dream-like atmosphere, a window into a different world. My paintings often deal with the construction of identity; the way we gather items to ourselves-both sentimental ephemera, and the patchwork of memories, perceptions and vanities that we use to cobble together a sense of self. I often use scraps of ribbon and lace and broken bits of sentimental trinkets to embellish my paintings, or religious medals and tin milagros that speak to my Roman Catholic upbringing. My work frequently explores issues of beauty, captivity, confinement and discomfort. The female figures that I paint are central to my work. Archetypal stand-ins, I wrap my characters in layers of tulle and lace. Frequently, I portray them entangled or framed in some kind of organic tangle of vegetative matter. There is an undercurrent of anxiety, and also purpose, to their actions. They are en route to mysterious destinations or engaged in inexplicable tasks. The viewer wonders about the role that these women play. The question of female sexual availability has always been one that interests me, and in my work I deliberately portray my characters in an undefined role, and as participants in a visual reality where they are the lone subjects. The carnival is another predominant theme in my work. My great-grandfather owned a travelling carnival called Prudent’s Amusement Shows, which toured through the 1950s and 1960s. I have often heard my mother and grandmother tell vivid stories about the many seasons they worked there, and this has given me a life-long enthusiasm for carnival culture. There is a sense of transience and grit which undergirds the brightness of this world, and that sense of unease draws me to try to capture that dreamlike feeling, the sense of something half-remembered. I’m inspired by the dizzying whirl of the carnival- the lights at night, the glitter, the bright faded metal structures looming against the sky, promising danger and a giddy rush of adrenaline. These themes are intertwined with many other inspirations- the changing colors of sea and sky through different lights and weather that I observe in my tiny coastal hometown. Old-fashioned children’s illustrations, and the way that they tell a straightforward narrative but inform the telling with such a grace and elegance; among my favorites are Tenniel, Arthur Rackham and Hilary Knight. Victorian stage sets, and the atmosphere of willful, beautiful artificiality that they create; the stacked two-dimensional layers that together, form a living environment of depth and harmony. All these disparate elements serve to inform a world of my own making that I seek to make visible one small window at a time.
http://lealandeve.com/artist-statement.php
We are exceptionally glad to offer professional grooming services at the Companion Animal Hospital of Milford. Bridget is uniquely trained and certified as a groomer from the NY School of Grooming in 1995 and has been grooming for many years. She has previously worked in both a veterinary hospital as well as specialized grooming facilities. Bridget offers a full and complete range of grooming services. Her patience and skill in handling animals will provide your pet with a stress-free grooming experience. Grooming appointments can be scheduled Monday through Friday. Pets are typically dropped off between 8:00 a.m. - 9:00 a.m. and may be picked up around 5:30 p.m. If you require special drop off or pick up arrangements, please let Bridget know and she will make every effort to accommodate you. For additional information about our grooming services, or to schedule an appointment:
http://companionvetmilford.com/dog_grooming_milford_ct.htm
Buratai debunks allegations of lack of equipment, others
CHIEF of Army Staff (COAS) Lt. Gen Tukur Buratai yesterday said allegations of lack of equipment and non-payment of allowances to troops in the war fronts were ill-conceived and misleading. He said the allegations were part of the campaigns of calumny against the army to dampen the morale of troops. The Army chief urged the nation's media to see through the psychological warfare of the insurgents and their international and local collaborators. Lt.-Gen Buratai spoke yesterday at the opening ceremony of the Nigerian Army Conference for Editors at the Nigerian Army Resource Centre, Abuja. He said the theme of the conference, "Enhancing military media relationship for effective fight against terrorism and insurgency in Nigeria", was chosen to address the country's security challenges. "The allegations of human rights abuses against the military by Amnesty International, allegations of lack of equipment and non-payment of allowances to troops are ill-conceived and misleading. "The war against terrorism should be a collective responsibility for all, and not just a war between the army and the terrorist groups. The war needs to be reported as it is, and therefore, the media need to enlighten the people to understand the true situation and support the military. The impact of the terrorists' propaganda was one of the major challenges facing the military by discouraging and dampening morale of the troops in the frontline," the Army chief said. He said the army would remain apolitical as the nation prepares for the general elections, noting that Exercise Python Dance III was launched across the country recently to ensure the maintenance of peace and security. Buratai appealed to the nation's editors to support the military in the war against Boko Haram terrorists, asserting that the war should not be seen as that of the military alone. He said: "I am highly delighted to speak to you, the leading media practitioners in Nigeria, who make major contributions that shape public perception on daily basis through national dailies, electronic media as well as online media. "There is no doubt that by your distinct position, you can stabilise or destabilise situations in any given security environment. The reasons are not far from the fact that the pen is mightier than the sword. This media conference organised by the Nigerian Army came at the right time when we are faced with challenges of reporting military operations vis-a-vis the concern to preserve our national security. "The effort of the military to ensure peace and security in our country is a constitutional responsibility, which requires support from all sectors to make Nigeria safe from all forms of criminalities. "Therefore, your roles as leading media practitioners in this fight is key in shaping public opinion by reporting what is right as at when due to avoid putting the lives of security agents in danger."
ECOTERRA Intl. declared: After the gathering of parts and products of the natural fauna, the hunting of species of the natural fauna is one of humankind's oldest cultural forms of safeguarding survival and sustenance as well as of providing for livelihoods and development. As such, hunting and gathering still has its rightful place in the aboriginal cultures of genuinely indigenous peoples the world over, who live such lives in free self-determination within their traditional culture and want to continue doing so. However, most mainstream societies of the states represented in the General Assembly of the United Nations (UN) today have abandoned genuine hunting of wild animals - with the exception of the traditional fisheries sector - as well as the purposeful and free-ranging gathering of natural plants and plant products - with the exemption of the traditional forestry sector - and have engaged in later-developed forms of economic activity and productivity to sustain their livelihoods. Modern societies developed not only alternative forms of primary production by engaging in domesticated livestock husbandry, altered plant agriculture and plantation forestry, but today work - in some parts already mainly - in the secondary and tertiary sectors of the economy to make their living. The killing of wild animals therefore can only be termed hunting if it is carried out by members of traditional strata of aboriginal hunter and gatherer cultures, or by individuals and groups who have fully reverted to that culture and lifestyle - to live it consequently as their main economic activity and in their own, unalienable rights. In southern Africa, groups of the First Nation of the aboriginal San (bushmen) are an example of a genuinely lived and unbroken hunter-gatherer culture, though genocidal actions and other atrocities directed against these indigenous peoples have been and still are committed since 1702 by those strata which make up the mainstream society in South Africa (SA) today. If the killing of wildlife is carried out by non-aboriginal cultures, the legal or illegal off-take of wild animals from natural ecosystems - and even more so from ranched populations of non-domesticated and/or non-domestic species under human stewardship - can never be termed "hunting" and consequently must be called by its true name: the KILLING of wild animals, the CULLING of whole segments of a wildlife population, or even ECOCIDE if a whole population is wiped out. All such must end. ECOTERRA Intl. demands an immediate stop to any canned Lion killing in South Africa and elsewhere, and calls on all United Nations member states that are signatories to CITES to immediately raise the status of Lions (Panthera leo) to endangered.
http://www.cannedlion.org/blog/ecoterras-position-on-hunting
We are looking for an enthusiastic candidate for the NHS England Band 4 Admin Assistant post, with excellent communication skills, word processing skills and the ability to work independently, who will provide support to the team with a variety of duties. In particular the post holder will provide and coordinate administrative and secretarial services, including, for example, the preparation of agendas and minutes and taking appropriate follow-up action as required; supporting the team with the management of projects; and gathering information and undertaking enquiries as and when necessary for the head of department, teams and the department.
Key job specifics and responsibilities
Provide specialist secretarial/administration support and advice to the teams and/or Sector by:
- Taking telephone calls for the department and using initiative to deal with phone calls and messages.
- Sorting and prioritising all incoming mail and e-mail, distributing as appropriate.
- Managing the electronic diary for the department, including arranging and changing appointments, prioritising these as appropriate.
- Undertaking administrative duties such as photocopying, faxing and mail-out distributions.
- Ensuring all urgent and/or confidential communications are received and distributed from/to relevant parties in a timely manner.
- Organising and planning events as necessary, including supporting information material.
- Supporting teams in project management and participating in department events.
- Inputting, monitoring and checking data required for finite and ongoing projects within the teams.
- Acting as a point of contact for teams, dealing with and responding effectively to complex queries from stakeholders and passing on relevant information to appropriate team members sensitively and autonomously.
- Supplying the relevant information required for financial management, supporting the head of department and teams by checking and sending invoices for payment. Ordering stationery as required.
- Running and collating reports which may include reports to the Board and senior management as required.
- Preparing agendas, taking minutes and distributing notes of meetings, including typing up of group discussions and interviews as necessary.
- Providing administrative support, when appropriate, to all team members to support a range of department initiatives.
- Working together with other administrators/PAs to provide an effective network of communication, including dealing with visitors to the base and being flexible to cover other administrators' general duties on the base.
- Providing guidance and advice on relevant policies and procedures.
https://www.brookstreet.co.uk/job/administration-assistant-band-4-1/
The importance of understanding biotic patterns in managed tropical landscapes is increasingly recognised. Bangladesh is a country with a long human land-use history and constitutes almost a blind spot in vegetation science on the landscape scale. Here, we analyse patterns and drivers of plant species richness and community composition along a land-use intensity gradient in a forest landscape including tea gardens, tree plantations and nature reserves (Satchari Reserved Forest) based on multivariate approaches and variation partitioning. We find richness as well as composition of tree and understory species to directly relate to a disturbance gradient that reflects protection status and elevation. This is astonishing, as the range in elevation (70 m) is small. Topography and protection remain significant drivers of biodiversity after correcting for human disturbances. While tree and non-tree species richness were positively correlated, they differ considerably in their relation to other environmental or disturbance variables as well as in the spatial richness pattern. The disturbance regime particularly structures tree species richness and composition in protected areas. We conclude by highlighting the importance of explicitly integrating human–biosphere interactions in any nature protection strategy for the study region. Steinbauer, Manuel, et al. "Drivers for plant species diversity in a characteristic tropical forest landscape in Bangladesh." Landscape Research 42.1 (2017): 89-105.
https://cris.fau.de/converis/portal/publication/118166444
AboutWho We Are A group of multi-disciplinary professionals, who perceived the global reaction to Covid, and lockdown in particular, as overwrought and damaging to the point of causing a great tear in the fabric of society, established PANDA (Pandemics Data & Analytics) in April 2020. As a politically and economically independent organisation, PANDA seeks to develop science-based explanations and test them against international data. Policy recommendations for governments and other institutions can be developed from these. PANDA stands for open science and rational debate, for replacing flawed science with good science and for retrieving liberty and prosperity from the clutches of a dystopian “new normal”. KNOWLEDGE FACTORY Consisting of leading scientists from diverse fields, doctors, data scientists, social scientists, actuaries, economists and legal experts from all over the globe. This team of experts was established to analyse problems holistically. PRODUCTION FACTORY Consists of writers, editors, graphic designers, video editors, social media specialists and translators. Our communication team transforms scientific findings into content that the public can access and understand. ENGINE FACTORY Ensures that all the functions required to keep the organization operating are put in place and initiated. Our Operations team consists of strategists, administrators, accountants and IT experts. Our mission is simple: the science is clear on what key policy responses should be. Fuelling fear and removing agency from people’s lives across the world is not sound public health policy. It is critically important that societies be reopened and those vulnerable to serious illness from SARS-CoV-2 be protected in a sensible, dignified manner. Human agency must be upheld and individuals should be informed about risks and mitigation so that they can make personal choices. Our multidisciplinary team has developed a framework for helping communities do just that. PANDA’s Protocol for Reopening Society builds upon the widely supported Great Barrington Declaration and pre-Covid pandemic guidelines to provide a roadmap out of the damaging cycle of lockdowns and other coercive policies.
https://www.pandata.org/about/
Composer-Performer Jana De Troyer positions herself at the dynamic borders between styles and disciplines. Concurrently, she deftly switches between her roles of composer, free improviser, instrumentalist, human, sound artist and programmer. Coming from a background as a contemporary saxophonist, Jana has always been curious about further exploring and fusing various modes of expression. This “Experimentierfreude” has brought about a myriad of creative collaborations with composers, musicians and artists from other disciplines such as visual arts, dance and coding. An experienced interpreter of scores, Jana strives to translate artist’s ideas as accurately as possible, whether it be conventionally notated or in more open forms such as text scores, video scores or graphical scores. She has studied the Western saxophone canon and greatly relishes assisting and collaborating with living composers on the realisation of new work. Led by the joy of performing, she enters the stage with or without instruments, using whatever is necessary to bring her collaborator’s ideas to life. Jana’s open mindset can also be seen in her improvisational practice, where she actively combines her classical and contemporary saxophonist training with other sound sources, movement and electronics. She has had the pleasure to play with great improvisers in a range of constellations, varying from duo performances to improvisation orchestras and on stages from Stockholm to Palermo. Her own compositional output consists of both instrumental and electronic music, as well as interactive installations, web art and audiovisual works. She has developed works for knitting guitar quartet (Fashionista’s), kissers (DU-O), a window cleaner (Putzzwang), Tiktok-loving improvisers (FMO_1), meditating audience and cactus (Intimate Space Study 1a), and more. At the core of her work is a deep sonic exploration of non-musical concepts, which often leads to the integration of interdisciplinary means.
https://www.janadetroyer.com/bio
In exploring the association between disease and genes, especially in GWAS (genome-wide association studies), data collection has yielded a large number of variants. These common variants (CVs) have a MAF (minor allele frequency) larger than 5%. We want to detect the causal factors for a given disease in GWAS using either population-based or family-based designs. For this, the common disease/common variant hypothesis must hold true. However, recent studies have shown that CVs explain only a fraction of the genetic risk for most diseases; disease is not induced by a minority of CVs alone. Disease is also affected by functional rare variants (RVs). The RVs can be obtained from resequencing data, and their MAFs are less than 0.1%. Individuals carrying a large number of functional RVs (or multiple RVs) have a higher risk of a complex disease. In our previous studies, we found that methods designed for CVs are not suitable for searching for associations between disease and genes with multiple RVs: under these methods the type I error rate rises and departs from the specified significance level. We have already proposed a new method for RV data in a family-based, case-parental design. An advantage is that population stratification need not be considered within families. Further, we will discuss RVs related to disease when one of the parents is missing. The idea of the method comes from comparing the frequency of transmitted risk alleles with that of non-transmitted risk alleles. If the test controls the given type I error rate and has higher power, we will apply it to genome-wide data and resequencing data in practice.
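The abstract does not give the test statistic itself. Purely as an illustrative sketch of the transmitted-versus-non-transmitted comparison it alludes to, the following computes the classical TDT-style (McNemar) statistic from counts of heterozygous parental transmissions; the function name, input format and use of the one-degree-of-freedom chi-square approximation are assumptions for the example, not the proposed method.

```python
from math import erf, sqrt

def tdt_statistic(transmitted, not_transmitted):
    """Classical transmission disequilibrium style comparison:
    chi2 = (b - c)^2 / (b + c), where b and c are the numbers of times
    heterozygous parents transmit / do not transmit the risk allele."""
    b, c = transmitted, not_transmitted
    if b + c == 0:
        raise ValueError("no informative transmissions")
    chi2 = (b - c) ** 2 / (b + c)
    # One-degree-of-freedom chi-square p-value via the standard normal distribution.
    z = sqrt(chi2)
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))
    return chi2, p_value

# Example: the risk allele is transmitted 60 times and not transmitted 40 times.
chi2, p = tdt_statistic(60, 40)   # chi2 = 4.0, p ~= 0.046
```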
https://tmu.pure.elsevier.com/en/projects/association-study-of-rare-variants-and-disease-in-case-parental-d
A music brief is an essential first step in gaining a grasp of the objectives and goals of the customer. You need to have the ability to write them and know what to include and how to make them simple to read and comprehend. Simply put, a music brief is a document that describes the essential aspects of the project, from timelines and responsibilities to what the music should communicate to listeners. The creative director can better grasp the client's needs and address those needs by using music briefs. In addition, specialists in the music industry and the business world approach it differently. The following observations will take you through it: How does one prepare a music brief? Preparing a music brief is often the responsibility of a producer, composer, or music supervisor. It relies on the creators' capacity, honed over many years, to comprehend and fulfill the requirements of their customers in terms of the music they anticipate hearing. The briefing process can be a real challenge because customers might be unable to articulate the specific music they are looking for. That is understandable, given how hard music is to describe in words. On the surface, these briefs may appear to be a frivolous use of time, but the fact that they provide an outline contributes to the effectiveness of the production process. This is comparable to how you would outline the primary arguments of an essay before beginning to construct it. You'll know where you're going and what points you need to emphasize; the only thing left to do is develop those points further. Additionally, it shortens the amount of time required to do a project. The revisions and course corrections that are the direct result of insufficient planning can be effectively avoided using the brief if the expectations are communicated in a clear and comprehensive way at the beginning of the music production process. It instils a sense of responsibility. It can be challenging to write an effective music brief. It's possible that you already have a clear picture in your head of how the music for your project should sound, but explaining that picture to a music supervisor won't be as simple as it sounds. Each undertaking is one of a kind and possesses its own distinct set of characteristics and constraints, both of which play a role in determining the style of music that will work best. Some of the most obvious ones include your narrative, who your audience is, and how much money you have to spend on music. It is essential to consider musical elements early to ensure the smoothest possible execution of the music briefing process. Also, when it comes time to send a brief to a music supplier or music supervisor, make sure to include all of the pertinent information they need to present you with the best assortment of tracks for the project. Because your brand identity should constantly stand out from the rest of the notes and chords in the music, the most important component of the brief is to explain what your brand accomplishes. Setting an emotional scene for the musician and igniting the initial flame of inspiration in their creative brains is what you accomplish when you introduce the musician to your industry, objective, and brand traits. What's more significant is that you're describing the foundation of your audio identity, which is something you ought to incorporate into all of your upcoming audio creations. Even at its most straightforward, musical expression can be a tricky business.
We recommend that you always be clear on the purpose of the music from the very beginning of the process. For example, the question should be asked whether the music will be used for a film score and, if so, what the budget for the film's production will be. Will the score be used for a show on either Netflix or a traditional television network? It could be a television commercial for a large brand that will air on screens worldwide. It may be only seeking a slot on the local radio station. What are we hoping to accomplish with this project? What specific goals does your customer hope to accomplish by creating a music brief, and how do they intend to evaluate the success of each objective? This is very important because once the project is over, only by knowing this can we keep track of how effective the work is and how it can be applied in your portfolio for future reference. Without this knowledge, we cannot do either of those things. Whether it's music, voice-over, sound effects, or anything in between, specifying the type of audio you're searching for will go hand in hand with the touchpoints and your target audience. This is true regardless of whether you're looking for music, voice-over, or sound effects. The composer needs to clearly understand the purpose of the music or voice-over before beginning work on it. It is the equivalent of handing a painter the actual frame in which the picture will be placed. By establishing such limits, the composer can better see the entire creative process, from the beginning to the end, and direct their creative energy toward accomplishing the goals necessary to arrive at their desired destination. A brand's purpose is to make itself appealing to a particular demographic. If you don't keep them in mind when you're developing a branding strategy, it is likely that the end product won't be as successful as it could have been. Because of this, it is necessary to identify the target audience's demographics and keep them in mind while working on the brand's design. We are aware of that here at HolaBrief, which is why we have developed an interactive target audience exercise that you can include in your branding briefs and carry out with your client. In HolaBrief we have an interactive Customer Persona exercise that you and your client can use for free. The composer or production firm may ask you to provide detailed information regarding the deliverables that will be provided. It is in everyone's best interest to have this matter clarified and the points listed out. Do you need an anthem or something similar? Or is there a point system? It's possible that you need a toolbox full of musical cues. You should also consider the locations from which it will be transmitted. For instance, do you require the final mix to have a specific loudness level when played back? (EBUR128 in Europe). If you were to explain your concept to the composers, they would get a more solid picture thanks to the addition of reference tracks. Reference tracks can also be of great use to composers searching for new sounds. Regarding BAM, we have a reference of tags that will sync up with the atmosphere of the music brief. To tell you the truth, music already functions as its own language. It takes time to get the hang of, much like learning any other language. However, an open and honest exchange of ideas with customers will greatly assist. The attitude conveyed by historical references and musical jargon is what we mean when we talk about the "music brief." You can use our interactive Moodboards to take care of this section!
Feel free to try it out in HolaBrief without cost! We are aware that not everyone can articulate their emotions using musical terminology. Because of this, we have begun putting together compelling audio briefings with the assistance of HolaBrief. It is our responsibility to translate the sentiments and ideas behind our client's brand into sound and music that accurately portray our client's narratives and enable them to achieve their goal of successfully evoking powerful sensations in their target audience. That calls for a very sensitive approach. And every piece of information that we may glean about our customer's brand has the potential to alter the results! Suppose you are a freelancer or work for an agency. In that case, you may use HolaBrief to create briefs that look professionally prepared, simply collect information from customers, and organize everything in a single location.
https://www.holabrief.com/creative-brief/music-template
“It is not easy living as a girl or young woman in low- and middle-income countries today. Adolescents are at higher risk of illness and even death from pregnancy related complications, and girls and young women shoulder a disproportionate amount of the world’s global disease burden. In addition, between 8 percent and 25 per cent of girls in sub-Saharan Africa drop out of school early because of unintended pregnancy and it is estimated that one woman dies every two minutes due to complications in childbirth — and many of these are adolescents. Protection through contraception What are the factors prolonging this situation? While the underlying causes are complex and varied, issues that play a significant role include: gender inequity; a lack of services and access to affordable, appropriate and effective methods of contraception; access to and knowledge of their sexual and reproductive health and rights (SRHR); and a lack of high-quality comprehensive sexuality education for young adolescents. There are steps that we can take to support these young women to empower themselves and take control of their own sexual and reproductive health. What we have learned, in our many years of working with young people in Eastern Africa, is that the best communicators for young people are young people themselves. We have found that peer-to-peer education and counselling is a hugely effective way to talk to young men and women about their contraceptive options and their sexual health. Working with health professionals and community health volunteers to support them in providing the necessary knowledge and access to contraception and family planning advice is also crucially important, as is engaging the whole community in an ongoing conversation about sexual and reproductive health. Youth Champions for contraception and SRHR Together with the support of Bayer HealthCare and as part of World Contraception Day, DSW has been working with its Youth-to-Youth club network in Uganda to do just this through the GeNext project. The goals of GeNext are to empower young people to champion World Contraception Day and its messages, and to support young people while providing them with the knowledge to make informed choices on family planning and SRHR. Through the project we have trained 60 Youth Champions to spread the message in their communities. They in turn – through peer-to-peer education – have trained 60 more young people in their local communities and in DSW youth clubs. We have also worked with local health workers to train them on the provision of youth-friendly SRHR services. And I am happy to report that since its inception, the project has reached more than 10,000 young people in Uganda through a number of outreach activities, with a plan to reach 18,000 young people and 1,000 community members with SRHR information before the end of the project period. All these youth champions bring with them the same message: that, with enough information and access to youth friendly services, young people have the potential within themselves to take control of their health and their own futures.” Renate Baehr, DSW Executive Director Subscribe to our Newsletter Join the conversation and receive special reports, newsletters, and invitations to special events. Sign up today!
https://www.dsw.org/en/2014/09/wcd2014-empowering-youth-contraception/
New Zealand unprepared for AI

AI could increase GDP by up to NZ$54 billion by 2035, but a report on the state of Artificial Intelligence in New Zealand shows that the country is generally unprepared for this disruptive technology. The AI Forum, which has a membership of 43 organisations, has released a 108-page report analysing AI adoption in New Zealand. It is based on two streams of research: an online survey, with respondents heavily weighted towards organisations in the information, media, telecommunications, professional, scientific and financial sectors, and interviews conducted by IDC with 46 active participants in AI in New Zealand. It also provides a summary of international research by the Sapere Research Group.

The report found a lack of expertise and an absence of a national AI strategy, especially when compared to countries that are ploughing millions of dollars into AI, such as Canada, France, Singapore, South Korea, the UK, the UAE and China. “Through initial market discussions it became apparent that very few people or organisations are considering AI, or its potential implications,” the report says. Of the organisations surveyed, over 20% have adopted some form of AI system, but these were mostly large enterprises that have made significant investments in IT. Yet even among the early adopters, AI understanding remains primarily with the tech staff, with only 36% indicating that AI discussions are occurring at board level.

A major concern highlighted in the report is the lack of expertise, with research showing that worldwide there are around 22,000 PhD-qualified AI experts – of which only 85 are located in New Zealand. “Although a PhD is not technically required to be considered an AI expert, PhD-level attainment is a useful proxy for assessing the technical ability of talent pools across different nations,” the report says.

On a positive note, the report predicts that AI won’t lead to mass unemployment, pointing out that many jobs in the past have disappeared as new technology rendered them irrelevant. “In New Zealand in 1966, 21,000 people worked as typists or stenographers. Almost all these jobs no longer exist, however this didn’t result in the mass unemployment of 21,000.” If AI follows the same adoption rate as personal mobile phones, household broadband and business ICT use, it could be 13-15 years before AI is widespread. In the meantime, the report recommends that consideration be given to the impact that AI will have on society, as well as the economy. It notes that 68% of respondents to its survey were concerned about the potential for AI to make biased, unfair or inaccurate decisions. “However, the research also found that only 15% of firms planning to use AI intend to establish some form of AI ethics committee.”

While advanced AI systems such as Nanny Robots and Personal Robotic Assistants are, according to the report, not expected to become mainstream, so-called ‘robo-advice’ for providing automated financial advice is on its way. But what if it goes a step further and AI systems determine the outcome of loan applications? “Currently, there are no laws requiring the developer of an AI system to design the system so that it can explain its decisions. In fact, in many cases, the algorithms are proprietary and the company that created them won’t have to explain them unless legally compelled to,” says the report. “Consider an AI system that makes a recommendation to a bank on whether your loan application should be approved. In a traditional banking environment, if you weren’t successful you could ask why you weren’t approved for a loan and receive a reply. But if a loan is turned down by an AI system, it won’t necessarily be able to explain why, even with extensive auditing features.”

The report notes that the Government’s adoption of AI is “disconnected and sparsely deployed”, although New Zealand is ranked 9th in the OECD for government AI readiness. Minister for Government Digital Services and Broadcasting, Communications and Digital Media Clare Curran officially launched the report yesterday evening. She says an action plan and ethical framework are urgently needed to educate and upskill people on Artificial Intelligence (AI) technologies. “An ethical framework will give people the tools to participate in conversations about Artificial Intelligence (AI) and its implications in our society and economy,” Clare Curran said. “As a first step and because of the importance of ethics and governance issues around AI, I will be formalising the government’s relationship with Otago University’s NZ Law Foundation Centre for Law and Policy in Emerging Technologies.” The AI Forum was launched in June 2017, chaired by Stuart Christie, investment manager at the New Zealand Venture Investment Fund, and with support from NZTech. It is a member of the NZ Technology Alliance, which is managed and supported by NZTech.
Q: Computing the trigonometric integral $\int \cos^6x \, dx$

My calculus skills are a bit rusty and I am working on trig integrals. I checked here, Help with $\int \cos^6 x dx$, but the way I go about solving this is a bit different. So computing $\int \cos^6x \, dx$ what I do is,
$$\int \cos^6 x \, dx \, = \, \int \cos^4 x \cos^2 x \, dx$$
$$\int \cos^4 x \cos^2 x \, dx \, = \, \int \left(\frac {1+\cos 2x}2\right)^2\left(\frac {1+\cos 2x}2\right)dx$$
$$=\, \frac 18 \int \left(1+3\cos 2x + 3\cos^2 2x+ \cos^3 2x\right)dx$$
I split this into 3 integrals and simplify them,
$$\frac 18 \int 1+3\cos 2x \, dx\;+\; \frac 3{8}\int \cos^2 2x\,dx \;+\; \frac 1{8} \int \cos^3 2x\, dx$$
$$= \, \frac 18 \int 1+3\cos 2x \, dx\;+\; \frac 3{16}\int 1+\cos 4x\,dx \;+\; \frac 1{8}\int \cos^2 2x \cos 2x \, dx$$
$$= \, \frac 18 \int 1+3\cos 2x \, dx\;+\; \frac 3{16}\int 1+\cos 4x\,dx \;+\; \frac 1{16}\int \left(1+\cos 4x\right) \cos 2x \, dx$$
Edit (correct answer found)
$$= \, \frac 18 \int 1+3\cos 2x \, dx\;+\; \frac 3{16}\int 1+\cos 4x\,dx \;+\; \frac 1{16}\int \cos 2x \, dx + \frac 1{16} \int \cos 4x \cos 2x \, dx$$
$$= \, \frac 18 \int 1+3\cos 2x \, dx\;+\; \frac 3{16}\int 1+\cos 4x\,dx \;+ \frac 1{16}\int \cos 2x \, dx +\; \frac 1{32}\int \cos 2x + \cos 6x \, dx$$
Solving one at a time I get,
$$\frac x8 + \frac 3{16}\sin 2x \;+\; \frac 3{16}\int 1+\cos 4x\,dx \;+ \frac 1{16}\int \cos 2x \, dx +\; \frac 1{32}\int \cos 2x + \cos 6x \, dx$$
$$= \, \frac x8 + \frac 3{16}\sin 2x \;+\; \frac 3{16}x + \frac 3{64} \sin 4x \;+\frac 1{16}\int \cos 2x \, dx +\; \frac 1{32}\int \cos 2x + \cos 6x \, dx$$
$$= \, \frac x8 + \frac 3{16}\sin 2x \;+\; \frac 3{16}x + \frac 3{64} \sin 4x \;+\frac 1{32}\sin 2x +\; \frac 1{32}\int \cos 2x + \cos 6x \, dx$$
$$= \, \frac x8 + \frac 3{16}\sin 2x \;+\; \frac 3{16}x + \frac 3{64} \sin 4x \;+\frac 1{32}\sin 2x \, +\; \frac 1{64} \sin 2x + \frac 1{192} \sin 6x$$
Finally,
$$= \, \frac 5{16}x + \frac {15}{64}\sin 2x + \frac 3{64} \sin 4x + \frac 1{192} \sin 6x + C$$

A: When simplifying the split integral, $\cos^2 2x$ became $\frac{\cos{4x}}{2}$. I think you missed out something here.
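Not part of the original post, but for anyone who wants to double-check the final result, a quick sanity check (assuming Python with SymPy is available) confirms that differentiating the answer gives back the integrand $\cos^6 x$:

```python
import sympy as sp

x = sp.symbols('x')

# Antiderivative obtained at the end of the post
F = (sp.Rational(5, 16) * x
     + sp.Rational(15, 64) * sp.sin(2 * x)
     + sp.Rational(3, 64) * sp.sin(4 * x)
     + sp.Rational(1, 192) * sp.sin(6 * x))

# Differentiating the answer should give back the integrand cos(x)**6
residual = sp.simplify(sp.diff(F, x) - sp.cos(x) ** 6)
print(residual)  # expected: 0

# Independent numeric spot check at a few sample points
for t in (0.3, 1.1, 2.5):
    print(abs(float(sp.diff(F, x).subs(x, t)) - float(sp.cos(t) ** 6)) < 1e-9)
```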
McConnell Center lecture series considers future of Kentucky health

Ryan Quarles, Commissioner of the Kentucky Department of Agriculture, will give a free, public talk on “Unbridled Hunger: Food Insecurity Challenges and Solutions for Kentucky” Jan. 24, 6-7 p.m., at Ekstrom Library’s Chao Auditorium. The commissioner’s talk will cover a range of public policy impacts, including the role of health care and education in rural and urban parts of Kentucky. Quarles formed the Kentucky Hunger Initiative, a first-of-its-kind effort to bring together farmers, charitable organizations, faith groups, community leaders and government entities to help reduce hunger in the state. According to a 2016 study done by Feeding America, 17 percent of Kentuckians are food insecure. The January lecture is part of the McConnell Center’s 2018 spring public lecture series, “Taking Kentucky’s Temperature: Future of Health Policy in the Commonwealth.” No RSVP required.
https://louisville.edu/mcconnellcenter/news/archived/mcconnell-center-lecture-series-considers-future-of-kentucky-health
We are seeking a high performing Instructional Systems Designer (ISD) to support a client program. Our ISDs are highly motivated individuals whose primary responsibilities are to analyze, design, develop, and maintain instructional content for web-based and/or instructor-led courses and related training products; and to nurture, build and sustain strategic client partnerships. Carney’s ideal candidate is passionate about instructional design, industrial security training, and developing our solutions. This is a contingent position based on award notification. The position is REMOTE!

Responsibilities

The ideal candidate will support the client’s Training Division in order to meet the continuously changing security environment, which drives the need to develop and update training content. The ISD shall provide technical expertise to apply the ADDIE model. New development and maintenance may consist of asynchronous eLearning courses, virtual instructor-led courses, short-format eLearning, instructor-led training and performance support tools.

· Assist and provide guidance on the application of the ADDIE methodology
· Collaborate with client personnel to conduct comprehensive Training Needs Analyses (TNA) that identify security competencies, training requirements, goals, learning objectives, topics, instructional strategies, media selection, and training recommendations for a specified audience
· Support new development, due to immediate requirements, through front-end analysis, training design plans, storyboards, assessments and beta tests
· Support maintenance of existing products by identifying changes based on policy, events or the Department of Defense Security Skill Standards (DS3). Changes shall be documented accordingly on storyboards. The contractor shall ensure changes are accurately applied after a client programmer completes the changes
· Develop detailed course design documents that include the course content outline, media treatments, technical specifications, assessment strategy, instructional materials and recommended design approach for the course
· Develop and update course storyboards that specify the flow of audio and text for all content, including graphic design, for inclusion in the final course product
· Conduct and participate in beta tests for products in development and evaluate them for accuracy and instructional effectiveness
· Develop and update assessments for courses
· Evaluate products for instructional effectiveness

Minimum Requirements

Bachelor’s degree
Experience developing and implementing instructional development processes applying the ADDIE methodology
Experience developing and updating online instructor-led or instructor-facilitated distance-delivered courses in a Collaborative Learning Environment (CLE), such as Sakai, related to DoD

Preferred to have:

Either a bachelor's or master's degree in Instructional Systems Design or Adult Education from an accredited university
At least eight (8) years of experience designing and developing online, instructor-led, or instructor-facilitated distance-delivered courses related to DoD
Experience leading security instructional development projects

Equal Opportunity Employer/Disabled/Veteran
https://www.career.com/company/team-carney/job/instructional-systems-designer/-in-mogantown,wv?jid=b94567b8-c403-4194-b3fe-70786634ba94
The convergence of tectonic plates causes earthquakes like those in Granada, but at the other end of the Peninsula it is fractures and fluids in the earth’s crust that generate seismic movements.

The earthquakes that have recently occurred in the south of Spain and in the east of Japan have their origin in the collision of tectonic plates: Eurasia and Africa in the first case, and the Philippine and Pacific plates in the second. However, in the northwest of the Iberian Peninsula, which is far from the plate boundaries, seismic movements also occur from time to time; they have exceeded magnitude 5 in towns such as Puebla de Sanabria (Zamora) in 1961 and Lugo in 1997, causing damage and alarm among the population. Behind these earthquakes are the fractures that were formed and reactivated in the earth’s crust during the Alpine orogeny (the same one that raised the Pyrenean, Cantabrian and Betic mountain ranges), but the exact mechanisms that trigger them are not at all clear.

To solve the problem, geologists from the universities of León, Rey Juan Carlos and Complutense of Madrid have devised an original model that simulates what has happened under that territory with the help of a ‘box’ in which they deposit three materials: silicone to represent the lower and middle crust (20 km = 2 cm), white sand dyed in layers to represent the upper crust (15 km = 1.5 cm), and coffee particles sprinkled on top to track the movements. One of the walls of this sandbox is mobile. “As we generate a deformation in the box like the one that occurred during the Alpine orogeny, we record everything that happens using a laser scanner, and thus obtain a digital model to analyze the topography and the changes that occur in the relief”, explains one of the authors, Javier Fernández Lozano, from the University of León. The team, which published its study in the journal Tectonics, uses a mathematical algorithm to measure the displacements generated by the fractures on the surface of the model. In this way it is possible to observe where the deformations are concentrated and, therefore, where earthquakes are most likely to occur.

According to the study, the origin of these earthquakes lies in the presence of fluids that circulate at great depth under high thermal gradients, which facilitates rupture along fractures in the crust and thus increases seismic activity in certain areas of the peninsular northwest. “This phenomenon would explain the significant variations in seismicity observed at the western end of the Cantabrian Mountains and the Galician-Leonese Mountains, in the so-called brittle-ductile transition zone (where the rocks of the earth’s crust go from being harder and more brittle to more ductile as the temperature rises with depth),” explains Fernández Lozano, “and in that area the increase in pore pressure facilitates the opening of fractures and the circulation of hot fluids, reducing the strength of the crust and the depth at which earthquakes occur.”

Gold deposits

In addition, the geologist points out that this study has important implications for the formation of gold deposits in northwestern Spain: “Fractures act as escape valves for hot fluids, and when their pressure exceeds a certain threshold, the rock breaks and the fluids circulate towards shallow areas where mineral elements of great interest, such as gold, tin and tungsten, precipitate.”

“Therefore,” he concludes, “the new earthquakes open up the possibility that a new deposit may be developing under those areas of Castilla y León and Galicia; that is, primary deposits (where the ore forms in the fractures of the parent rock) as important as those that later, through transport and sedimentation, gave rise to the secondary deposits of Las Médulas (the ancient Roman gold mines) could be forming today throughout the northwest of the Peninsula.”
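One detail worth making explicit (this calculation is mine, not the researchers'): the two scalings quoted above, 20 km = 2 cm and 15 km = 1.5 cm, correspond to the same geometric scale factor of 1:1,000,000, so 1 cm in the sandbox represents 10 km of crust. A minimal check, assuming purely geometric length scaling (real analogue models also involve rheological and dynamic scaling, which is not addressed here):

```python
# Toy check of the analogue-model length scaling quoted in the article.
CM_PER_KM = 1_000 * 100  # centimetres per kilometre

pairs_km_cm = [(20, 2.0), (15, 1.5)]  # (real thickness in km, model thickness in cm)

for real_km, model_cm in pairs_km_cm:
    scale = (real_km * CM_PER_KM) / model_cm  # dimensionless length-scale factor
    print(f"{real_km} km -> {model_cm} cm : scale 1:{scale:,.0f}")
# Both pairs print "scale 1:1,000,000", i.e. 1 cm in the model ~ 10 km in nature.
```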
https://shisot.info/this-is-the-surprising-experiment-with-coffee-and-sand-that-explains-the-earthquakes-in-spain/
OpenStax Anatomy and Physiology

Most cells in the body make use of charged particles, ions, to build up a charge across the cell membrane. Previously, this was shown to be a part of how muscle cells work. For skeletal muscles to contract, based on excitation–contraction coupling, requires input from a neuron. Both of the cells make use of the cell membrane to regulate ion movement between the extracellular fluid and cytosol. The cell membrane is primarily responsible for regulating what can cross the membrane and what stays on only one side. The cell membrane is a phospholipid bilayer, so only substances that can pass directly through the hydrophobic core can diffuse through unaided. Charged particles, which are hydrophilic by definition, cannot pass through the cell membrane without assistance. Transmembrane proteins, specifically channel proteins, make this possible. Several passive transport channels, as well as active transport pumps, are necessary to generate a transmembrane potential and an action potential.

Of special interest is the carrier protein referred to as the sodium/potassium pump that moves sodium ions (Na+) out of a cell and potassium ions (K+) into a cell, thus regulating ion concentration on both sides of the cell membrane. The sodium/potassium pump requires energy in the form of adenosine triphosphate (ATP), so it is also referred to as an ATPase. As was explained in the cell chapter, the concentration of Na+ is higher outside the cell than inside, and the concentration of K+ is higher inside the cell than outside. That means that this pump is moving the ions against the concentration gradients for sodium and potassium, which is why it requires energy. In fact, the pump basically maintains those concentration gradients.

Ion channels are pores that allow specific charged particles to cross the membrane in response to an existing concentration gradient. Proteins are capable of spanning the cell membrane, including its hydrophobic core, and can interact with the charge of ions because of the varied properties of amino acids found within specific domains or regions of the protein channel. Hydrophobic amino acids are found in the domains that are apposed to the hydrocarbon tails of the phospholipids. Hydrophilic amino acids are exposed to the fluid environments of the extracellular fluid and cytosol. Additionally, the ions will interact with the hydrophilic amino acids, which will be selective for the charge of the ion. Channels for cations (positive ions) will have negatively charged side chains in the pore. Channels for anions (negative ions) will have positively charged side chains in the pore. This is called electrochemical exclusion, meaning that the channel pore is charge-specific.

Ion channels can also be specified by the diameter of the pore. The distance between the amino acids will be specific for the diameter of the ion when it dissociates from the water molecules surrounding it. Because of the surrounding water molecules, larger pores are not ideal for smaller ions because the water molecules will interact, by hydrogen bonds, more readily than the amino acid side chains. This is called size exclusion. Some ion channels are selective for charge but not necessarily for size, and thus are called nonspecific channels. These nonspecific channels allow cations—particularly Na+, K+, and Ca2+—to cross the membrane, but exclude anions. Ion channels do not always freely allow ions to diffuse across the membrane.
Some are opened by certain events, meaning the channels are gated. So another way that channels can be categorized is on the basis of how they are gated. Although these classes of ion channels are found primarily in the cells of nervous or muscular tissue, they also can be found in the cells of epithelial and connective tissues. A ligand-gated channel opens because a signaling molecule, a ligand, binds to the extracellular region of the channel. This type of channel is also known as an ionotropic receptor because when the ligand, known as a neurotransmitter in the nervous system, binds to the protein, ions cross the membrane changing its charge. A mechanically gated channel opens because of a physical distortion of the cell membrane. Many channels associated with the sense of touch (somatosensation) are mechanically gated. For example, as pressure is applied to the skin, these channels open and allow ions to enter the cell. Similar to this type of channel would be the channel that opens on the basis of temperature changes, as in testing the water in the shower. A voltage-gated channel is a channel that responds to changes in the electrical properties of the membrane in which it is embedded. Normally, the inner portion of the membrane is at a negative voltage. When that voltage becomes less negative, the channel begins to allow ions to cross the membrane. A leakage channel is randomly gated, meaning that it opens and closes at random, hence the reference to leaking. There is no actual event that opens the channel; instead, it has an intrinsic rate of switching between the open and closed states. Leakage channels contribute to the resting transmembrane voltage of the excitable membrane. Source:
https://chromoscience.com/electrically-active-cell-membranes/
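The OpenStax passage above explains that the sodium/potassium pump maintains the Na+ and K+ concentration gradients and that leakage channels contribute to the resting transmembrane voltage, but it does not give the quantitative link between a gradient and a voltage. The standard way to express that link is the Nernst equation, which is not part of the excerpt; the sketch below uses typical textbook ion concentrations (assumed here for illustration, not taken from the passage) to show the equilibrium potential each gradient would produce on its own:

```python
import math

# Nernst equation: E_ion = (R*T / (z*F)) * ln([ion]_outside / [ion]_inside)
R = 8.314    # gas constant, J/(mol*K)
T = 310.0    # approximate body temperature, K
F = 96485.0  # Faraday constant, C/mol

def nernst_potential_mv(conc_out_mM, conc_in_mM, z=1):
    """Equilibrium potential in millivolts for an ion of valence z."""
    return 1000 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# Typical mammalian concentrations in mM (illustrative values only)
print(f"E_K  ~ {nernst_potential_mv(5, 140):.0f} mV")   # roughly -89 mV
print(f"E_Na ~ {nernst_potential_mv(145, 12):.0f} mV")  # roughly +67 mV
```

Because a resting membrane has far more open K+ leakage channels than Na+ channels, the resting transmembrane voltage sits much closer to the K+ equilibrium potential than to the Na+ one, which is consistent with the passage's statement that leakage channels set the resting voltage of excitable membranes.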