BACKGROUND

Field of the Invention

The present invention relates to a six-piece optical lens system, and more particularly to a miniaturized six-piece optical lens system applicable to electronic products.

Description of the Prior Art

In recent years, with the rapid development of portable electronic products such as smartphones and tablet computers, small optical lens systems for portable electronic products have become indispensable. In addition, as advanced semiconductor manufacturing technologies have enabled image sensors of smaller size and higher pixel count, small optical lens systems offer increasingly high pixel counts, and there is a growing demand for optical lens systems with better image quality. Conventional miniaturized optical lens systems used in portable electronic products mostly consist of five lens elements; however, as high-end portable electronic products such as smartphones, wearable devices and tablet personal computers become prevalent, the demand on the resolution and imaging quality of miniaturized optical lens systems also increases, and the conventional five-piece lens system cannot satisfy these higher demands. Conventional six-piece optical lens systems have been developed to provide imaging lens systems with a large aperture stop and high image quality; however, the total track length of these optical lens systems is too long, and it is difficult to combine a large aperture stop, high image quality and miniaturization, so they are not applicable to portable electronic products. The present invention mitigates and/or obviates the aforementioned disadvantages.

SUMMARY OF THE INVENTION

The primary objective of the present invention is to provide a miniaturized six-piece optical lens system having a large aperture stop and high image quality.
Therefore, a six-piece optical lens system in accordance with the present invention comprises, in order from an object side to an image side: a stop; a first lens element with a positive refractive power, having an object-side surface being convex near an optical axis and an image-side surface being concave near the optical axis, at least one of the object-side surface and the image-side surface of the first lens element being aspheric; a second lens element with a negative refractive power, having an object-side surface being convex near the optical axis and an image-side surface being concave near the optical axis, at least one of the object-side surface and the image-side surface of the second lens element being aspheric; a third lens element with a positive refractive power, having an object-side surface being convex near the optical axis and an image-side surface being convex near the optical axis, at least one of the object-side surface and the image-side surface of the third lens element being aspheric; a fourth lens element with a negative refractive power, having an object-side surface being concave near the optical axis and an image-side surface being convex near the optical axis, at least one of the object-side surface and the image-side surface of the fourth lens element being aspheric; a fifth lens element with a positive refractive power, having an object-side surface being convex near the optical axis and an image-side surface being concave near the optical axis, at least one of the object-side surface and the image-side surface of the fifth lens element being aspheric; and a sixth lens element with a negative refractive power, having an object-side surface being concave near the optical axis and an image-side surface being convex near the optical axis, at least one of the object-side surface and the image-side surface of the sixth lens element being aspheric and provided with at least one inflection point.

Preferably, a focal length of the first lens element is f1, a focal length of the second lens element is f2, and they satisfy the relation: −0.7<f1/f2<−0.3, so that the refractive powers of the first lens element and the second lens element are suitably balanced, which is favorable for obtaining a wide field of view while avoiding an excessive increase of the aberration of the system.

Preferably, the focal length of the second lens element is f2, a focal length of the third lens element is f3, and they satisfy the relation: −1.0<f2/f3<−0.6, so that the refractive power of the third lens element can be distributed effectively and will not be too large, which is favorable for reducing the sensitivity of the system and reducing the aberration.

Preferably, the focal length of the third lens element is f3, a focal length of the fourth lens element is f4, and they satisfy the relation: −1.3<f3/f4<−0.7, so that the refractive power of the fourth lens element can be distributed effectively and will not be too large, which is favorable for reducing the sensitivity of the system and reducing the aberration.

Preferably, the focal length of the fourth lens element is f4, a focal length of the fifth lens element is f5, and they satisfy the relation: −1.9<f4/f5<−1.3, so that the refractive power of the fifth lens element can be distributed effectively and will not be too large, which is favorable for reducing the sensitivity of the system and reducing the aberration.
Preferably, the focal length of the fifth lens element is f5, a focal length of the sixth lens element is f6, and they satisfy the relation: −1.4<f5/f6<−0.7, so that the refractive power of the sixth lens element can be distributed effectively and will not be too large, which is favorable for reducing the sensitivity of the system and reducing the aberration.

Preferably, the focal length of the first lens element is f1, the focal length of the third lens element is f3, and they satisfy the relation: 0.2<f1/f3<0.6, which can balance the refractive power of the six-piece optical lens system, consequently achieving the optimum imaging effect.

Preferably, the focal length of the second lens element is f2, the focal length of the fourth lens element is f4, and they satisfy the relation: 0.6<f2/f4<1.0, which is favorable for increasing the field of view and enlarging the stop of the six-piece optical lens system; meanwhile, the assembling tolerance can be reduced, improving the yield rate.

Preferably, the focal length of the third lens element is f3, the focal length of the fifth lens element is f5, and they satisfy the relation: 1.3<f3/f5<2.0, which is favorable for increasing the field of view and enlarging the stop of the six-piece optical lens system; meanwhile, the assembling tolerance can be reduced, improving the yield rate.

Preferably, the focal length of the fourth lens element is f4, the focal length of the sixth lens element is f6, and they satisfy the relation: 1.3<f4/f6<2.0, which is favorable for increasing the field of view and enlarging the stop of the six-piece optical lens system; meanwhile, the assembling tolerance can be reduced, improving the yield rate.

Preferably, the focal length of the first lens element is f1, a focal length of the second lens element and the third lens element combined is f23, and they satisfy the relation: −0.1<f1/f23<−0.03, which can balance the refractive power of the six-piece optical lens system, consequently achieving the optimum imaging effect.

Preferably, the focal length of the second lens element and the third lens element combined is f23, the focal length of the fourth lens element is f4, and they satisfy the relation: 3.0<f23/f4<7.0, which is favorable for increasing the field of view and enlarging the stop of the six-piece optical lens system; meanwhile, the assembling tolerance can be reduced, improving the yield rate.

Preferably, the focal length of the second lens element and the third lens element combined is f23, a focal length of the fourth lens element and the fifth lens element combined is f45, and they satisfy the relation: −5.0<f23/f45<−3.8. If f23/f45 satisfies the above relation, a wide field of view, a high pixel count and a low profile can be provided, and the resolution is improved markedly. Conversely, if f23/f45 falls outside the above range, the performance and resolution of the optical lens system are reduced, and the yield rate is low.

Preferably, a focal length of the first lens element and the second lens element combined is f12, a focal length of the third lens element and the fourth lens element combined is f34, and they satisfy the relation: −0.1<f12/f34<−0.03. If f12/f34 satisfies the above relation, a wide field of view, a high pixel count and a low profile can be provided, and the resolution is improved markedly. Conversely, if f12/f34 falls outside the above range, the performance and resolution of the optical lens system are reduced, and the yield rate is low.
Preferably, the focal length of the third lens element and the fourth lens element combined is f34, a focal length of the fifth lens element and the sixth lens element combined is f56, and they satisfy the relation: −3.5<f34/f56<−2.3. If f34/f56 satisfies the above relation, a wide field of view, a high pixel count and a low profile can be provided, and the resolution is improved markedly. Conversely, if f34/f56 falls outside the above range, the performance and resolution of the optical lens system are reduced, and the yield rate is low.

Preferably, the focal length of the fourth lens element and the fifth lens element combined is f45, the focal length of the sixth lens element is f6, and they satisfy the relation: −2.6<f45/f6<−1.3. If f45/f6 satisfies the above relation, a wide field of view, a high pixel count and a low profile can be provided, and the resolution is improved markedly. Conversely, if f45/f6 falls outside the above range, the performance and resolution of the optical lens system are reduced, and the yield rate is low.

Preferably, the focal length of the first lens element is f1, a focal length of the second lens element, the third lens element and the fourth lens element combined is f234, and they satisfy the relation: −0.7<f1/f234<−0.3. Appropriate refractive power is favorable for effectively reducing the spherical aberration and astigmatism of the optical lens system.

Preferably, the focal length of the second lens element, the third lens element and the fourth lens element combined is f234, the focal length of the fifth lens element is f5, and they satisfy the relation: −1.6<f234/f5<−1.0. Appropriate refractive power is favorable for effectively reducing the spherical aberration and astigmatism of the optical lens system.

Preferably, the focal length of the second lens element, the third lens element and the fourth lens element combined is f234, the focal length of the fifth lens element and the sixth lens element combined is f56, and they satisfy the relation: −0.35<f234/f56<−0.05. Appropriate refractive power is favorable for effectively reducing the spherical aberration and astigmatism of the optical lens system.

Preferably, the focal length of the first lens element, the second lens element and the third lens element combined is f123, the focal length of the fourth lens element is f4, and they satisfy the relation: −0.6<f123/f4<−0.2.

Preferably, the focal length of the first lens element, the second lens element and the third lens element combined is f123, the focal length of the fourth lens element and the fifth lens element combined is f45, and they satisfy the relation: 0.15<f123/f45<0.5. If f123/f45 satisfies the above relation, a wide field of view, a high pixel count and a low profile can be provided, and the resolution is improved markedly. Conversely, if f123/f45 falls outside the above range, the performance and resolution of the optical lens system are reduced, and the yield rate is low.

Preferably, the focal length of the first lens element, the second lens element and the third lens element combined is f123, the focal length of the fourth lens element, the fifth lens element and the sixth lens element combined is f456, and they satisfy the relation: −0.6<f123/f456<−0.2. If f123/f456 satisfies the above relation, a wide field of view, a high pixel count and a low profile can be provided, and the resolution is improved markedly.
Conversely, if f123/f456 falls outside the above range, the performance and resolution of the optical lens system are reduced, and the yield rate is low.

Preferably, an Abbe number of the first lens element is V1, an Abbe number of the second lens element is V2, and they satisfy the relation: 30<V1−V2<42, so that the chromatic aberration of the six-piece optical lens system can be corrected effectively.

Preferably, an Abbe number of the third lens element is V3, an Abbe number of the fourth lens element is V4, and they satisfy the relation: 30<V3−V4<42, so that the chromatic aberration of the six-piece optical lens system can be corrected effectively.

Preferably, the focal length of the six-piece optical lens system is f, a distance from the object-side surface of the first lens element to the image plane along the optical axis is TL, and they satisfy the relation: 0.6<f/TL<0.95, which is favorable for obtaining a wide field of view while maintaining the miniaturization of the six-piece optical lens system, so that it can be used in thin electronic products.

The present invention will be presented in further detail in the following descriptions with the accompanying drawings, which show, for purposes of illustration only, the preferred embodiments in accordance with the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Referring to FIGS. 1A and 1B, FIG. 1A shows a six-piece optical lens system in accordance with a first embodiment of the present invention, and FIG. 1B shows, in order from left to right, the image plane curve and the distortion curve of the first embodiment of the present invention. A six-piece optical lens system in accordance with the first embodiment of the present invention comprises a stop 100 and a lens group. The lens group comprises, in order from an object side to an image side: a first lens element 110, a second lens element 120, a third lens element 130, a fourth lens element 140, a fifth lens element 150, a sixth lens element 160, an IR cut filter 170, and an image plane 180, wherein the six-piece optical lens system has a total of six lens elements with refractive power. The stop 100 is disposed between an image-side surface 112 of the first lens element 110 and an object to be imaged.

The first lens element 110 with a positive refractive power has an object-side surface 111 being convex near an optical axis 190 and the image-side surface 112 being concave near the optical axis 190; the object-side surface 111 and the image-side surface 112 are aspheric, and the first lens element 110 is made of plastic material.

The second lens element 120 with a negative refractive power has an object-side surface 121 being convex near the optical axis 190 and an image-side surface 122 being concave near the optical axis 190; the object-side surface 121 and the image-side surface 122 are aspheric, and the second lens element 120 is made of plastic material.

The third lens element 130 with a positive refractive power has an object-side surface 131 being convex near the optical axis 190 and an image-side surface 132 being convex near the optical axis 190; the object-side surface 131 and the image-side surface 132 are aspheric, and the third lens element 130 is made of plastic material.
The fourth lens element 140 with a negative refractive power has an object-side surface 141 being concave near the optical axis 190 and an image-side surface 142 being convex near the optical axis 190; the object-side surface 141 and the image-side surface 142 are aspheric, and the fourth lens element 140 is made of plastic material.

The fifth lens element 150 with a positive refractive power has an object-side surface 151 being convex near the optical axis 190 and an image-side surface 152 being concave near the optical axis 190; the object-side surface 151 and the image-side surface 152 are aspheric, and the fifth lens element 150 is made of plastic material.

The sixth lens element 160 with a negative refractive power has an object-side surface 161 being concave near the optical axis 190 and an image-side surface 162 being convex near the optical axis 190; the object-side surface 161 and the image-side surface 162 are aspheric, the sixth lens element 160 is made of plastic material, and at least one of the object-side surface 161 and the image-side surface 162 is provided with at least one inflection point.

The IR cut filter 170 made of glass is located between the sixth lens element 160 and the image plane 180 and has no influence on the focal length of the six-piece optical lens system.

The equation for the aspheric surface profiles of the respective lens elements of the first embodiment is expressed as follows:

z = ch^2 / (1 + [1 − (k + 1)c^2h^2]^(1/2)) + Ah^4 + Bh^6 + Ch^8 + Dh^10 + Eh^12 + Fh^14 + Gh^16 + . . .

wherein:
z represents the distance of a point on the aspheric surface at height h from the plane tangent to the surface vertex, measured along the optical axis 190;
c represents a paraxial curvature equal to 1/R (R: a paraxial radius of curvature);
h represents a vertical distance from the point on the curve of the aspheric surface to the optical axis 190;
k represents the conic constant; and
A, B, C, D, E, F, G, . . . represent the high-order aspheric coefficients.

In the first embodiment of the present six-piece optical lens system, a focal length of the six-piece optical lens system is f, an f-number of the six-piece optical lens system is Fno, the six-piece optical lens system has a maximum view angle (field of view) FOV, and they satisfy the relations: f=3.89 mm; Fno=2.0; and FOV=78 degrees.

In the first embodiment of the present six-piece optical lens system, a focal length of the first lens element 110 is f1, a focal length of the second lens element 120 is f2, and they satisfy the relation: f1/f2=−0.47.

In the first embodiment of the present six-piece optical lens system, the focal length of the second lens element 120 is f2, a focal length of the third lens element 130 is f3, and they satisfy the relation: f2/f3=−0.80.
In the first embodiment of the present six-piece optical lens system, the focal length of the third lens element 130 is f3, a focal length of the fourth lens element 140 is f4, and they satisfy the relation: f3/f4=−1.03.

In the first embodiment of the present six-piece optical lens system, the focal length of the fourth lens element 140 is f4, a focal length of the fifth lens element 150 is f5, and they satisfy the relation: f4/f5=−1.59.

In the first embodiment of the present six-piece optical lens system, the focal length of the fifth lens element 150 is f5, a focal length of the sixth lens element 160 is f6, and they satisfy the relation: f5/f6=−1.08.

In the first embodiment of the present six-piece optical lens system, the focal length of the first lens element 110 is f1, the focal length of the third lens element 130 is f3, and they satisfy the relation: f1/f3=0.37.

In the first embodiment of the present six-piece optical lens system, the focal length of the second lens element 120 is f2, the focal length of the fourth lens element 140 is f4, and they satisfy the relation: f2/f4=0.83.

In the first embodiment of the present six-piece optical lens system, the focal length of the third lens element 130 is f3, the focal length of the fifth lens element 150 is f5, and they satisfy the relation: f3/f5=1.64.

In the first embodiment of the present six-piece optical lens system, the focal length of the fourth lens element 140 is f4, the focal length of the sixth lens element 160 is f6, and they satisfy the relation: f4/f6=1.71.

In the first embodiment of the present six-piece optical lens system, the focal length of the first lens element 110 is f1, a focal length of the second lens element 120 and the third lens element 130 combined is f23, and they satisfy the relation: f1/f23=−0.06.

In the first embodiment of the present six-piece optical lens system, the focal length of the second lens element 120 and the third lens element 130 combined is f23, the focal length of the fourth lens element 140 is f4, and they satisfy the relation: f23/f4=6.06.

In the first embodiment of the present six-piece optical lens system, the focal length of the second lens element 120 and the third lens element 130 combined is f23, a focal length of the fourth lens element 140 and the fifth lens element 150 combined is f45, and they satisfy the relation: f23/f45=−4.58.

In the first embodiment of the present six-piece optical lens system, a focal length of the first lens element 110 and the second lens element 120 combined is f12, a focal length of the third lens element 130 and the fourth lens element 140 combined is f34, and they satisfy the relation: f12/f34=−0.05.

In the first embodiment of the present six-piece optical lens system, the focal length of the third lens element 130 and the fourth lens element 140 combined is f34, a focal length of the fifth lens element 150 and the sixth lens element 160 combined is f56, and they satisfy the relation: f34/f56=−2.69.

In the first embodiment of the present six-piece optical lens system, the focal length of the fourth lens element 140 and the fifth lens element 150 combined is f45, the focal length of the sixth lens element 160 is f6, and they satisfy the relation: f45/f6=−2.26.

In the first embodiment of the present six-piece optical lens system, the focal length of the first lens element 110 is f1, a focal length of the second lens element 120, the third lens element 130 and the fourth lens element 140 combined is f234, and they satisfy the relation: f1/f234=−0.48.
In the first embodiment of the present six-piece optical lens system, the focal length of the second lens element 120, the third lens element 130 and the fourth lens element 140 combined is f234, the focal length of the fifth lens element 150 is f5, and they satisfy the relation: f234/f5=−1.29.

In the first embodiment of the present six-piece optical lens system, the focal length of the second lens element 120, the third lens element 130 and the fourth lens element 140 combined is f234, the focal length of the fifth lens element 150 and the sixth lens element 160 combined is f56, and they satisfy the relation: f234/f56=−0.17.

In the first embodiment of the present six-piece optical lens system, a focal length of the first lens element 110, the second lens element 120 and the third lens element 130 combined is f123, the focal length of the fourth lens element 140 is f4, and they satisfy the relation: f123/f4=−0.43.

In the first embodiment of the present six-piece optical lens system, the focal length of the first lens element 110, the second lens element 120 and the third lens element 130 combined is f123, the focal length of the fourth lens element 140 and the fifth lens element 150 combined is f45, and they satisfy the relation: f123/f45=0.33.

In the first embodiment of the present six-piece optical lens system, the focal length of the first lens element 110, the second lens element 120 and the third lens element 130 combined is f123, a focal length of the fourth lens element 140, the fifth lens element 150 and the sixth lens element 160 combined is f456, and they satisfy the relation: f123/f456=−0.42.

In the first embodiment of the present six-piece optical lens system, an Abbe number of the first lens element 110 is V1, an Abbe number of the second lens element 120 is V2, and they satisfy the relation: V1−V2=34.5.

In the first embodiment of the present six-piece optical lens system, an Abbe number of the third lens element 130 is V3, an Abbe number of the fourth lens element 140 is V4, and they satisfy the relation: V3−V4=34.5.

In the first embodiment of the present six-piece optical lens system, the focal length of the six-piece optical lens system is f, a distance from the object-side surface 111 of the first lens element 110 to the image plane 180 along the optical axis 190 is TL, and they satisfy the relation: f/TL=0.85.

The detailed optical data of the first embodiment is shown in table 1, and the aspheric surface data is shown in table 2, presented after the brief aside below.
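A note on the combined focal lengths (f12, f23, f123 and so on) used in the relations above: these are effective focal lengths of the corresponding lens subgroups, obtained from the full thick-lens prescription of each embodiment. Purely as an illustration of the concept, here is a thin-lens sketch in Python; the function name is ours, and the thin-lens formula 1/f = 1/fa + 1/fb − d/(fa·fb) ignores element thicknesses and principal-plane positions, so it only roughly approximates the tabulated combined values.

```python
def combined_focal_length(fa: float, fb: float, d: float = 0.0) -> float:
    """Thin-lens combined focal length of two elements separated by d (same units).
    First-order approximation only: real combined focal lengths such as f23 or
    f45 in the embodiments come from ray tracing the thick-lens system."""
    power = 1.0 / fa + 1.0 / fb - d / (fa * fb)
    return 1.0 / power

# Illustrative only: lens 2 and lens 3 of Embodiment 1 (f2 = -6.447 mm,
# f3 = 8.047 mm), crudely treating them as thin lenses in contact (d = 0).
print(combined_focal_length(-6.447, 8.047))  # rough thin-lens estimate of f23
```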
TABLE 1
Embodiment 1
f (focal length) = 3.89 mm, Fno = 2.0, FOV = 78 deg.

surface | Curvature Radius | Thickness | Material | Index | Abbe # | Focal length
0 object | infinity | infinity
1 | infinity | 0.300
2 stop | infinity | −0.300
3 Lens 1 | 1.421 (ASP) | 0.629 | plastic | 1.544 | 56.000 | 3.012
4 | 8.800 (ASP) | 0.105
5 Lens 2 | 13.326 (ASP) | 0.239 | plastic | 1.651 | 21.500 | −6.447
6 | 3.196 (ASP) | 0.324
7 Lens 3 | 56.461 (ASP) | 0.375 | plastic | 1.544 | 56.000 | 8.047
8 | −4.757 (ASP) | 0.342
9 Lens 4 | −0.933 (ASP) | 0.302 | plastic | 1.651 | 21.500 | −7.790
10 | −1.287 (ASP) | 0.054
11 Lens 5 | 1.594 (ASP) | 0.496 | plastic | 1.544 | 56.000 | 4.899
12 | 3.509 (ASP) | 0.402
13 Lens 6 | −2.437 (ASP) | 0.352 | plastic | 1.535 | 56.000 | −4.556
14 | −531.535 (ASP) | 0.307
15 IR-filter | infinity | 0.300 | glass | 1.517 | 64.167 | —
16 | infinity | 0.350
17 Image plane | infinity | infinity

TABLE 2
Aspheric Coefficients

surface | 3 | 4 | 5 | 6 | 7 | 8
K: | −2.1156E−01 | 1.9788E+01 | 1.9960E+02 | −2.3973E+01 | −2.0005E+02 | 9.1747E+00
A: | 1.8400E−02 | −1.3383E−01 | −1.8482E−01 | −1.0924E−02 | −1.6042E−01 | −8.4133E−02
B: | −1.0375E−01 | 3.3723E−01 | 3.5010E−01 | 3.4261E−01 | −8.2262E−02 | 5.0657E−02
C: | 4.5840E−01 | −1.4023E+00 | −2.9659E−01 | −9.5442E−01 | 1.1973E−01 | −4.8933E−01
D: | −1.1060E+00 | 4.3185E+00 | 3.6527E−01 | 3.0474E+00 | −4.6005E−01 | 1.2412E+00
E: | 1.4320E+00 | −7.9438E+00 | −1.1238E+00 | −5.9281E+00 | 1.1688E+00 | −1.5349E+00
F: | −9.5950E−01 | 7.5364E+00 | 1.7295E+00 | 6.0047E+00 | −1.8692E+00 | 9.4097E−01
G: | 2.3674E−01 | −2.8762E+00 | −9.0914E−01 | −2.2048E+00 | 1.3536E+00 | −1.9431E−01

surface | 9 | 10 | 11 | 12 | 13 | 14
K: | −3.8672E+00 | −8.2613E−01 | −1.0420E+01 | −9.6324E+00 | −3.9509E−02 | −1.2936E+03
A: | 5.3042E−02 | 6.7135E−02 | −8.5886E−02 | 7.3665E−03 | 1.0270E−01 | 2.7623E−02
B: | −3.1856E−01 | −7.0244E−02 | −6.4647E−02 | −1.9310E−01 | −2.2580E−01 | −1.1711E−01
C: | 5.5836E−01 | 1.2369E−01 | 9.1343E−02 | 1.9947E−01 | 1.6222E−01 | 8.4440E−02
D: | −2.9469E−01 | −3.6117E−02 | −6.8859E−02 | −1.3122E−01 | −5.4691E−02 | −2.9048E−02
E: | −6.1208E−02 | −1.0500E−02 | 2.1715E−02 | 5.1044E−02 | 9.9585E−03 | 5.2326E−03
F: | 1.1252E−01 | 3.8368E−03 | −1.3719E−03 | −1.0271E−02 | −9.5224E−04 | −4.7377E−04
G: | −3.2462E−02 | 4.5103E−06 | −2.5702E−04 | 8.2595E−04 | 3.7866E−05 | 1.7037E−05

The units of the radius of curvature, the thickness and the focal length in table 1 are expressed in mm; the surface numbers 0-17 represent the surfaces sequentially arranged from the object side to the image side along the optical axis. In table 2, k represents the conic coefficient of the equation of the aspheric surface profiles, and A, B, C, D, E, F, . . . represent the high-order aspheric coefficients. The tables presented for each embodiment list the corresponding schematic parameters, and the definitions of the tables are the same as those of Table 1 and Table 2 of the first embodiment; therefore, an explanation in this regard will not be provided again.
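To make Table 1 and Table 2 concrete, the following Python sketch evaluates the aspheric sag equation given earlier for surface 3 (the object-side surface 111 of the first lens element), with R = 1.421 mm from Table 1 and k and A through G from Table 2. The function name and the sample height of 0.5 mm are our own illustrative choices, not part of the patent.

```python
import numpy as np

def aspheric_sag(h, R, k, coeffs):
    """Sag z(h) of an even aspheric surface: conic base term plus the
    polynomial series A*h^4 + B*h^6 + ... (coeffs ordered A, B, C, ...)."""
    c = 1.0 / R  # paraxial curvature, c = 1/R
    z = c * h**2 / (1.0 + np.sqrt(1.0 - (k + 1.0) * c**2 * h**2))
    for i, a in enumerate(coeffs):
        z += a * h**(4 + 2 * i)  # exponents 4, 6, 8, ... for A, B, C, ...
    return z

# Surface 3 of Embodiment 1: R = 1.421 mm (Table 1); k and A..G from Table 2.
surf3 = [1.8400e-02, -1.0375e-01, 4.5840e-01, -1.1060e+00,
         1.4320e+00, -9.5950e-01, 2.3674e-01]
print(aspheric_sag(0.5, 1.421, -2.1156e-01, surf3))  # sag in mm at h = 0.5 mm
```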
Referring to FIGS. 2A and 2B, FIG. 2A shows a six-piece optical lens system in accordance with a second embodiment of the present invention, and FIG. 2B shows, in order from left to right, the image plane curve and the distortion curve of the second embodiment of the present invention. A six-piece optical lens system in accordance with the second embodiment of the present invention comprises a stop 200 and a lens group. The lens group comprises, in order from an object side to an image side: a first lens element 210, a second lens element 220, a third lens element 230, a fourth lens element 240, a fifth lens element 250, a sixth lens element 260, an IR cut filter 270, and an image plane 280, wherein the six-piece optical lens system has a total of six lens elements with refractive power. The stop 200 is disposed between an image-side surface 212 of the first lens element 210 and an object to be imaged.

The first lens element 210 with a positive refractive power has an object-side surface 211 being convex near an optical axis 290 and the image-side surface 212 being concave near the optical axis 290; the object-side surface 211 and the image-side surface 212 are aspheric, and the first lens element 210 is made of plastic material.

The second lens element 220 with a negative refractive power has an object-side surface 221 being convex near the optical axis 290 and an image-side surface 222 being concave near the optical axis 290; the object-side surface 221 and the image-side surface 222 are aspheric, and the second lens element 220 is made of plastic material.

The third lens element 230 with a positive refractive power has an object-side surface 231 being convex near the optical axis 290 and an image-side surface 232 being convex near the optical axis 290; the object-side surface 231 and the image-side surface 232 are aspheric, and the third lens element 230 is made of plastic material.

The fourth lens element 240 with a negative refractive power has an object-side surface 241 being concave near the optical axis 290 and an image-side surface 242 being convex near the optical axis 290; the object-side surface 241 and the image-side surface 242 are aspheric, and the fourth lens element 240 is made of plastic material.

The fifth lens element 250 with a positive refractive power has an object-side surface 251 being convex near the optical axis 290 and an image-side surface 252 being concave near the optical axis 290; the object-side surface 251 and the image-side surface 252 are aspheric, and the fifth lens element 250 is made of plastic material.

The sixth lens element 260 with a negative refractive power has an object-side surface 261 being concave near the optical axis 290 and an image-side surface 262 being convex near the optical axis 290; the object-side surface 261 and the image-side surface 262 are aspheric, the sixth lens element 260 is made of plastic material, and at least one of the object-side surface 261 and the image-side surface 262 is provided with at least one inflection point.

The IR cut filter 270 made of glass is located between the sixth lens element 260 and the image plane 280 and has no influence on the focal length of the six-piece optical lens system.

The detailed optical data of the second embodiment is shown in table 3, and the aspheric surface data is shown in table 4.

TABLE 3
Embodiment 2
f (focal length) = 3.88 mm, Fno = 2.0, FOV = 79 deg.
surface | Curvature Radius | Thickness | Material | Index | Abbe # | Focal length
0 object | infinity | infinity
1 | infinity | 0.327
2 stop | infinity | −0.327
3 Lens 1 | 1.423 (ASP) | 0.629 | plastic | 1.544 | 56.000 | 3.015
4 | 8.844 (ASP) | 0.105
5 Lens 2 | 13.342 (ASP) | 0.240 | plastic | 1.651 | 21.500 | −6.430
6 | 3.191 (ASP) | 0.321
7 Lens 3 | 76.456 (ASP) | 0.375 | plastic | 1.544 | 56.000 | 8.123
8 | −4.704 (ASP) | 0.340
9 Lens 4 | −0.935 (ASP) | 0.300 | plastic | 1.651 | 21.500 | −7.756
10 | −1.290 (ASP) | 0.054
11 Lens 5 | 1.598 (ASP) | 0.505 | plastic | 1.544 | 56.000 | 4.890
12 | 3.532 (ASP) | 0.397
13 Lens 6 | −2.441 (ASP) | 0.351 | plastic | 1.535 | 56.000 | −4.819
14 | −44.788 (ASP) | 0.350
15 IR-filter | infinity | 0.300 | glass | 1.517 | 64.167 | —
16 | infinity | 0.314
17 Image plane | infinity | infinity

TABLE 4
Aspheric Coefficients

surface | 3 | 4 | 5 | 6 | 7 | 8
K: | −2.1068E−01 | 2.0422E+01 | 2.0000E+02 | −2.3869E+01 | −2.0000E+02 | 9.2746E+00
A: | 1.8456E−02 | −1.3366E−01 | −1.8491E−01 | −1.1020E−02 | −1.6041E−01 | −8.4559E−02
B: | −1.0364E−01 | 3.3693E−01 | 3.5028E−01 | 3.4179E−01 | −8.1879E−02 | 5.0537E−02
C: | 4.5838E−01 | −1.4034E+00 | −2.9639E−01 | −9.5536E−01 | 1.1847E−01 | −4.8834E−01
D: | −1.1063E+00 | 4.3167E+00 | 3.6460E−01 | 3.0478E+00 | −4.6313E−01 | 1.2425E+00
E: | 1.4314E+00 | −7.9455E+00 | −1.1263E+00 | −5.9263E+00 | 1.1656E+00 | −1.5340E+00
F: | −9.6001E−01 | 7.5364E+00 | 1.7259E+00 | 6.0032E+00 | −1.8689E+00 | 9.4117E−01
G: | 2.3708E−01 | −2.8720E+00 | −9.0811E−01 | −2.2223E+00 | 1.3601E+00 | −1.9481E−01

surface | 9 | 10 | 11 | 12 | 13 | 14
K: | −3.8492E+00 | −8.1860E−01 | −1.0326E+01 | −9.6924E+00 | −3.8076E−02 | −8.2970E+04
A: | 5.3620E−02 | 6.6518E−02 | −8.5750E−02 | 6.5469E−03 | 1.0197E−01 | 2.9142E−02
B: | −3.1810E−01 | −7.0297E−02 | −6.4731E−02 | −1.9336E−01 | −2.2577E−01 | −1.1730E−01
C: | 5.5834E−01 | 1.2388E−01 | 9.1308E−02 | 1.9941E−01 | 1.6223E−01 | 8.4428E−02
D: | −2.9473E−01 | −3.6020E−02 | −6.8868E−02 | −1.3123E−01 | −5.4690E−02 | −2.9048E−02
E: | −6.1125E−02 | −1.0475E−02 | 2.1713E−02 | 5.1045E−02 | 9.9586E−03 | 5.2327E−03
F: | 1.1268E−01 | 3.8370E−03 | −1.3723E−03 | −1.0270E−02 | −9.5227E−04 | −4.7375E−04
G: | −3.2276E−02 | 4.5103E−06 | −2.5691E−04 | 8.2612E−04 | 3.7854E−05 | 1.7039E−05

In the second embodiment, the equation of the aspheric surface profiles of the aforementioned lens elements is the same as the equation of the first embodiment. Also, the definitions of the parameters shown in the following table are the same as those stated in the first embodiment, with corresponding values for the second embodiment, so an explanation in this regard will not be provided again. Moreover, these parameters can be calculated from Table 3 and Table 4 and satisfy the following conditions:

Embodiment 2
f [mm] = 3.88 | f23/f45 = −4.24
Fno = 2.0 | f12/f34 = −0.05
FOV [deg.] = 79 | f34/f56 = −3.15
f1/f2 = −0.47 | f45/f6 = −2.15
f2/f3 = −0.79 | f1/f234 = −0.48
f3/f4 = −1.05 | f234/f5 = −1.27
f4/f5 = −1.59 | f234/f56 = −0.23
f5/f6 = −1.01 | f123/f4 = −0.44
f1/f3 = 0.37 | f123/f45 = 0.33
f2/f4 = 0.83 | f123/f456 = −0.39
f3/f5 = 1.66 | V1−V2 = 34.5
f4/f6 = 1.61 | V3−V4 = 34.5
f1/f23 = −0.07 | f/TL = 0.85
f23/f4 = 5.66

In the present six-piece optical lens system, the lens elements can be made of plastic or glass. If the lens elements are made of plastic, the cost is effectively reduced. If the lens elements are made of glass, there is more freedom in distributing the refractive power of the six-piece optical lens system. Plastic lens elements can have aspheric surfaces, which allow more design freedom than spherical surfaces, so as to reduce the aberration and the number of the lens elements, as well as the total track length of the six-piece optical lens system.
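As a simple consistency check of the kind a designer might script, the sketch below (a hypothetical helper, not part of the patent) plugs the Embodiment 1 focal lengths from Table 1 into the single-element ratio conditions stated in the summary; each computed ratio indeed lies inside its preferred range.

```python
# Embodiment 1 focal lengths in mm (Table 1).
f = {1: 3.012, 2: -6.447, 3: 8.047, 4: -7.790, 5: 4.899, 6: -4.556}

# Single-element ratio conditions from the summary: (lower bound, upper bound).
conditions = {
    ("f1/f2", f[1] / f[2]): (-0.7, -0.3),
    ("f2/f3", f[2] / f[3]): (-1.0, -0.6),
    ("f3/f4", f[3] / f[4]): (-1.3, -0.7),
    ("f4/f5", f[4] / f[5]): (-1.9, -1.3),
    ("f5/f6", f[5] / f[6]): (-1.4, -0.7),
    ("f1/f3", f[1] / f[3]): (0.2, 0.6),
    ("f2/f4", f[2] / f[4]): (0.6, 1.0),
    ("f3/f5", f[3] / f[5]): (1.3, 2.0),
    ("f4/f6", f[4] / f[6]): (1.3, 2.0),
}

for (name, value), (lo, hi) in conditions.items():
    status = "ok" if lo < value < hi else "VIOLATED"
    print(f"{name} = {value:+.2f}  (range {lo} .. {hi}) {status}")
```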
In the present six-piece optical lens system, if the object-side or the image-side surface of a lens element with refractive power is stated to be convex and the location of the convex surface is not defined, the object-side or the image-side surface of that lens element is convex near the optical axis. Likewise, if the object-side or the image-side surface of a lens element is stated to be concave and the location of the concave surface is not defined, the object-side or the image-side surface of that lens element is concave near the optical axis.

The six-piece optical lens system of the present invention can be used in focusing optical systems and can obtain better image quality. The six-piece optical lens system of the present invention can also be used in electronic imaging systems, such as 3D image capturing, digital cameras, mobile devices, digital flat panels or vehicle cameras.

While we have shown and described various embodiments in accordance with the present invention, it should be clear to those skilled in the art that further embodiments may be made without departing from the scope of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A shows a six-piece optical lens system in accordance with a first embodiment of the present invention;

FIG. 1B shows the image plane curve and the distortion curve of the first embodiment of the present invention;

FIG. 2A shows a six-piece optical lens system in accordance with a second embodiment of the present invention; and

FIG. 2B shows the image plane curve and the distortion curve of the second embodiment of the present invention.
Myoelectric devices are controlled by electromyographic signals generated by contraction of residual muscles, which thus serve as biological amplifiers of neural control signals. Although nerves severed by amputation continue to carry motor control information intended for the missing limb, the loss of muscle effectors due to amputation prevents access to this important control information. Targeted muscle reinnervation (TMR) was developed as a novel strategy to improve control of myoelectric upper limb prostheses. Severed motor nerves are surgically transferred to the motor points of denervated target muscles, which, after reinnervation, contract in response to neural control signals for the missing limb. TMR creates additional control sites, eliminating the need to switch the prosthesis between different control modes. In addition, contraction of target muscles, and operation of the prosthesis, occurs in response to attempts to move the missing limb, making control easier and more intuitive. TMR has been performed extensively in individuals with high-level upper limb amputations and has been shown to improve functional prosthesis control. The benefits of TMR are being studied in individuals with transradial amputations and lower limb amputations. TMR is also being investigated in an ongoing clinical trial as a method to prevent or treat painful amputation neuromas.
https://www.scholars.northwestern.edu/en/publications/targeted-muscle-reinnervation-for-the-upper-and-lower-extremity
On completion of the programme the candidate will have the following learning outcomes:

Knowledge
- Give an account of the basic theories and fundamental physics relevant for medical physics
- Explain the foundation for medical diagnostics and modern radiotherapy
- Explain selected experimental methods and measurement techniques in medical physics
- Demonstrate a high level of knowledge in the field of medical physics, and expert knowledge within the field of the Master's thesis project

Skills
- Perform an independent research project under supervision, showing initiative and independence in accordance with established research norms
- Handle and present scientific data, evaluate precision and uncertainties, and use programming tools for the analysis of data
- Analyse relevant topics in medical physics and debate ways to explore these topics/questions using scientific methods
- Get acquainted with the research community/environment and acquire the tools and resources needed for performing the scientific work
- Analyse and evaluate scientific sources of information and use these in a structured manner to arrive at new ideas/hypotheses in the field of medical physics
- Analyse, evaluate and debate own results in a scientifically sound manner, in light of the current knowledge in the field

General competence
- Be able to analyse scientific problems in general and participate in discussions about different ways to address and solve problems
- Give good written and oral presentations of scientific topics and results
- Communicate scientific problems, analyses and conclusions within medical physics, both to specialists and to the general public
- Be able to reflect on central scientific problems in his/her own work and other people's work
- Demonstrate understanding and respect for scientific values like openness, precision and reliability

How to Apply

This programme is available for citizens from within the European Union/EEA/EFTA. Follow these links to find the general entry requirements and guidelines on how to apply:
- Citizens from within the European Union/EEA/EFTA (1 March)
- Nordic citizens and applicants residing in Norway (15 April)

Admission Requirements

A bachelor's degree (3 years) within physics or another relevant discipline. To qualify for admission to the master's programme, the average grade for the specialization in the bachelor's degree should be at least C.

Who may apply

The Master's Programme in Physics, specialization Medical Physics and Technology, is not offered to international students. For more information about the application procedure for students residing in Norway (with a Norwegian ID number), please see:
https://www.uib.no/en/studies/MAMN-PHYS/MAMN-FYMED
A massively parallel simulation code, called dHybrid, has been developed to perform global-scale studies of space plasma interactions. The code is based on an explicit hybrid model; the numerical stability and parallel scalability of the code are studied. A stabilization method for the explicit algorithm, for regions of near-zero density, is proposed. Three-dimensional hybrid simulations of the interaction of the solar wind with unmagnetized artificial objects are presented, with a focus on the expansion of a plasma cloud into the solar wind, which creates a diamagnetic cavity and drives the Interplanetary Magnetic Field out of the expansion region. The dynamics of this system can provide insights into other similar scenarios, such as the interaction of the solar wind with unmagnetized planets.

Keywords: hybrid codes, particle MHD codes, space plasmas, AMPTE, artificial atmospheres, solar wind
PACS: 52.65.Ww, 52.65.Kj, 96.50.Ek

1 Introduction

To understand many space plasma scenarios, such as the solar wind interaction with cometary atmospheres or with unmagnetized planets (e.g. Mars) [1, 2], it is usually necessary to invoke the dynamics of ions. On one hand, MHD codes cannot always capture all the physics (e.g. finite Larmor radius effects). On the other hand, full particle-in-cell (PIC) codes are computationally demanding, and it is not always possible to simulate large-scale space phenomena [3, 4, 5]. Hybrid codes are useful in these problems, where the ion time scale needs to be properly resolved and the high-frequency modes on the electron time/length scales do not play a significant role. To address problems where the hybrid approximation is necessary we have developed the code dHybrid, in which the ions are kinetic (PIC) and the electrons are assumed massless and treated as a fluid [6, 7, 8]. There is also the possibility of using it as a particle MHD code [9, 10], by neglecting the ion kinetics. The present version of dHybrid is fully parallelized, thus allowing for the use of massively parallel computers. Advanced visualization is performed by taking advantage of the close integration with the OSIRIS data analysis and visualization package.

A stability analysis of the algorithm is performed, and stabilization mechanisms (associated with numerical instabilities due to very low density regions) with minimum impact on performance are discussed. The parallel scalability and performance of the algorithm are also presented. The dHybrid framework allows the study of a wide class of problems, including global studies of space plasma shock structures. This is illustrated by a set of simulations of an artificial gas release in the solar wind, depicting the AMPTE release experiments. The relevance of the chosen example is due to its resemblance to the solar wind interaction with planetary/cometary exospheres (e.g. Mars and Venus) [12, 13], thus illustrating the possible scenarios to be tackled with dHybrid.

In the following section we describe the hybrid model used in dHybrid. Its numerical implementation, focusing on stability and parallel scalability, is presented in section 3. We then describe the key features of the shock structures formed by the solar wind interaction with an unmagnetized artificial atmosphere; comparisons between the present three-dimensional simulations and previous 2D simulations [13, 14] are also presented in section 4. Finally, we state the conclusions.
2 Hybrid model and dHybrid

Hybrid models are commonly used in many problems in plasma physics (see, for instance, existing reviews). When deriving the hybrid set of equations, the displacement current is neglected in Ampère's law and the kinetics of electrons is not considered. Various hybrid approximations can be considered, depending on whether the electron mass, resistivity and electron pressure are included in the model. Quasi-neutrality is also implicitly assumed. The appropriate approximation is chosen in accordance with the time scales and spatial scales relevant for the dynamics of the system. In dHybrid, the electron mass, the resistivity and the electron pressure are not considered, but due to the code structure such a generalization is straightforward. Shock jump conditions (i.e. Rankine-Hugoniot relations) are altered by implicitly neglecting the electron temperature. The differences are significant when the β of the plasma dominating the shock is high, and in this case care should be taken when analyzing results.

The electric field under these conditions is thus E = −Ve × B, in which Ve is the electron fluid velocity. The electric field is perpendicular to the local magnetic field, since the massless electrons short-circuit any parallel component of the electric field, and it can be determined from

E = −Vi × B + (1/ρ) (∇ × B) × B,   (1)

where Vi is the ion fluid velocity and ρ is the density. The magnetic field is advanced in time through Faraday's law, ∂B/∂t = −∇ × E, where the electric field is calculated from eq. (1). Ions in the hybrid model have their velocities determined by the usual Lorentz force; the Boris particle pusher is used, advancing the velocities in time with the electric and magnetic fields. In the particle MHD model one uses the ion momentum equation (2) to determine individual particle velocities, where the second term on its right-hand side is the pressure term for the ions, assuming an adiabatic equation of state, and where kB is the Boltzmann constant. The ion fluid velocity is then obtained in the usual manner, by integrating over the velocities.

The ion species in dHybrid are thus always represented by finite-sized particles to be pushed in a 3D simulation box. The fields and fluid quantities, such as the density ρ and the ion fluid velocity Vi, are interpolated from the particles using quadratic splines and defined on a 3D regular grid. These fields and fluid quantities are then interpolated back to push the ions, also using quadratic splines, in a self-consistent manner. The equations are solved explicitly, based on a Boris pusher scheme to advance the particles in the hybrid approach, and on a two-step Lax-Wendroff scheme to advance the magnetic field [4, 5]. Both schemes are second-order accurate in space and time, and are time and space centered.

The present version of dHybrid uses the MPI framework as the foundation of the communication methods between processes, and the HDF5 framework as the basis of all diagnostics. The three-dimensional simulation space is divided across processes, and 1D, 2D and 3D domain decompositions are possible. The code can simulate an arbitrary number of particle species and, for each of them, either the particle MHD or the hybrid model can be applied. Periodic boundary conditions are used for both the fields and the particles, and ion species are simulated with arbitrary charge-to-mass ratios, arbitrary initial thermal velocities and arbitrary spatial configurations. This flexibility allows for simulations where only the kinetic aspects of one of the ion species are followed. Normalized simulation units are considered for all the relevant quantities.
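Since the ion push in dHybrid uses the standard Boris scheme, a compact sketch may be helpful. The following Python fragment is a generic textbook Boris step in normalized units, not code from dHybrid itself; the names and test parameters are our own.

```python
import numpy as np

def boris_push(v, E, B, dt, qm=1.0):
    """One Boris step: advance velocity v by dt in fields E, B.
    Half electric kick, magnetic rotation, half electric kick."""
    v_minus = v + 0.5 * qm * dt * E          # first half acceleration
    t = 0.5 * qm * dt * B                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # rotated velocity
    return v_plus + 0.5 * qm * dt * E        # second half acceleration

# Example: gyration in a uniform Bz with no electric field.
v = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
for _ in range(100):
    v = boris_push(v, np.zeros(3), B, dt=0.1)
print(np.linalg.norm(v))  # |v| is conserved by the rotation, stays ~1.0
```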
Time and space are normalized to quantities built from the ion collisionless skin depth and the ion sound velocity, mass is normalized to the proton mass, and charge is normalized to the proton charge; the magnetic field and the electric field are normalized consistently. Henceforth all equations will be expressed in these normalized units. Using the described implementation of the hybrid model, dHybrid can model a wide range of problems, from unmagnetized to magnetized plasmas in different configurations.

3 Stability and scalability of dHybrid

The stability criterion on the time step for the whole algorithm is determined by the Lax-Wendroff method, as this is usually more stringent than the stability condition for the Boris algorithm, due to the rapid increase of the Alfvén velocity as the density goes to zero. The discretized equation (1) is

E^n_{i,j,k} = −V^n_{i,j,k} × B^n_{i,j,k} + (1/ρ^n_{i,j,k}) (∇ × B)^n_{i,j,k} × B^n_{i,j,k},   (3)

and the two-step space-centered and time-centered Lax-Wendroff scheme to solve Faraday's law is

B^{n+1/2}_{i+1/2,j+1/2,k+1/2} = ⟨B^n⟩_{i+1/2,j+1/2,k+1/2} − (Δt/2) (∇ × E^n)_{i+1/2,j+1/2,k+1/2},   (4)

B^{n+1}_{i,j,k} = B^n_{i,j,k} − Δt (∇ × E^{n+1/2})_{i,j,k},   (5)

where Δt represents the time step, the indexes i+1/2, j+1/2 and k+1/2 represent values displaced by half a cell size, i, j and k represent grid points along x, y and z, n represents the iteration step, and ⟨·⟩ denotes the average over the eight neighboring grid values. These equations thus require the use of staggered grids, where the displaced terms are calculated using an average of the eight neighbor values around a given point.

The general layout of the dHybrid algorithm is as follows. One starts off with particle positions at time step n, velocities at time step n−1/2 (and at n, interpolated from n−1/2), and the magnetic field at time step n on grid 1 (integer position indexes). In step (i) the density, the fluid velocity and ∇×B are calculated from the particle velocities and positions and from the values on grid 1; (ii) the electric field at n is calculated from eq. (3); (iii) the magnetic field at n+1/2 is calculated from eq. (4); (iv) particle velocities are advanced to n+1/2 using the Boris algorithm, positions are advanced to n+1/2 and n+1, and the density and fluid velocity are calculated on grid 2 (the half-cell-displaced grid). In step (v) the magnetic field at n+1/2 is calculated from grid 2 values, then (vi) the electric field at n+1/2 is calculated using eq. (3) displaced half a grid cell, (vii) the magnetic field is advanced in time to n+1 using eq. (5) and finally (viii) particle velocities are advanced via the Boris algorithm to n+1.

To obtain the Courant-Friedrichs-Lewy stability condition, linearized versions of eqs. (3) through (5) are used, considering constant density, arbitrary velocities and parallel propagating waves relative to the background magnetic field. The equations are then Fourier transformed to local grid modes (parallel plane waves), where Δx is the cell size along x. An amplification matrix relating the field values at iteration n+1 with those at iteration n is then obtained. Requiring that all the eigenvalues of the amplification matrix have absolute value not larger than one yields the stability criterion (6), where ρ0 is the background density, B0 is the constant magnetic field, V is the ion fluid velocity along x, and the two signs are due to the two different eigenvalues. The condition (6) sets a limit on the time step or, inversely, given a time step, a limit on the lowest allowable density which can be present in a grid cell. We stress that all quantities in eq. (6) are expressed in normalized units. Using the same calculation method, a stability criterion was previously found for similar field equations using a different implementation of the Lax-Wendroff algorithm. That stability criterion, however, is not the same, since the specifics of the numerical approach differ: our implementation makes use of staggered grids to improve accuracy and guarantee that the equations are always space centered.
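To make the two-step field advance concrete, here is a deliberately simplified Python sketch. It keeps the predictor-corrector structure of eqs. (3)-(5) but works on a single collocated periodic grid with central differences instead of the staggered dual-grid arrangement described above, and it holds the ion fluid quantities fixed during the step; it is an illustration of the scheme, not the dHybrid implementation, and all names are our own.

```python
import numpy as np

def curl(F, dx):
    """Curl of a vector field F with shape (3, nx, ny, nz),
    central differences on a periodic grid."""
    d = lambda A, ax: (np.roll(A, -1, axis=ax) - np.roll(A, 1, axis=ax)) / (2 * dx)
    return np.stack((d(F[2], 1) - d(F[1], 2),
                     d(F[0], 2) - d(F[2], 0),
                     d(F[1], 0) - d(F[0], 1)))

def electric_field(V, B, rho, dx):
    """Hybrid Ohm's law, eq. (1): E = -V x B + (curl B) x B / rho."""
    J = curl(B, dx)  # normalized units: current density J = curl B
    return -np.cross(V, B, axis=0) + np.cross(J, B, axis=0) / rho

def advance_B(B, V, rho, dx, dt):
    """Two-step (predictor-corrector) Lax-Wendroff advance of Faraday's law."""
    E = electric_field(V, B, rho, dx)
    B_half = B - 0.5 * dt * curl(E, dx)          # predictor: B at t + dt/2
    E_half = electric_field(V, B_half, rho, dx)  # E re-evaluated at half step
    return B - dt * curl(E_half, dx)             # corrector: B at t + dt

# Tiny smoke test: uniform fields should remain uniform.
shape = (3, 16, 16, 16)
B = np.zeros(shape); B[2] = 1.0
V = np.zeros(shape)
rho = np.ones(shape[1:])
print(np.allclose(advance_B(B, V, rho, dx=0.5, dt=0.01), B))  # True
```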
As can be seen from eq. (3), under the MHD and hybrid models the algorithm breaks down in regions where the density is close to zero. The problem is either physical, if particles are pushed out of a given region of space by a shock or other means, or it is a numerical artifact due to poor particle statistics. One method to tackle this problem is an implicit calculation of the electric field, which requires an iteration method to solve the electric field equation. Another method, discussed in [19, 20], involves the use of an artificial resistivity. If the problem is physical, it can be avoided by considering that if no particles exist in a given region of space, then both the charge density and the current density are zero, and the electric field is determined by the elliptic equation ∇²E = 0. This equation has to be solved only on volumes where the density goes below a given stability threshold. Such a region can span several processes and thus involves several communication steps. Fast elliptic solvers can be used to solve this equation, although the complex vacuum/plasma boundaries that arise complicate the problem.

Usually it is found that the problem is numerical in nature and is only due to the limited number of computational particles per mesh cell used in the simulation. Thus, if in any given mesh cell there are too few particles, the density drops and the Alfvén velocity increases, breaking the stability criterion for the field solver. Three methods are considered to maintain the stability of the algorithm: (i) the number of particles per cell can be increased throughout the simulation, (ii) the time step can be decreased, and (iii) a non-physical background density can be added as needed in the unstable zones, rendering them stable. The two former options are obvious, and yield meaningful physical results at the expense of computational time. The last solution is non-physical and thus can yield non-physical results. In short, the first two options are chosen, and the last one is implemented to be used only as a last resort. Each time the electric field is to be calculated, the density in each mesh cell is automatically checked to determine whether it is below the stability threshold, using eq. (6), and is set to the minimum value if it is. The minimum density value is thus calculated using the time step, the grid size, and the local values of the magnetic field and fluid velocity, minimizing the impact of using a non-physical solution. Testing showed that, as long as the number of cells treated with this method is kept low enough relative to the total number of cells, the results do not change significantly.

The approach followed here guarantees good parallel scalability, since the stabilization is local. The overall algorithm was also designed to be as local as possible and to minimize the number of communication steps between processes. This was accomplished by joining several quantities in the same communication step, resulting in the following parallel scheme: (i) after the first step in the main algorithm, guard cells of the grid quantities computed there are exchanged between neighboring processes; (ii) and (iii) guard cells of the fields are exchanged; (iv) particles that crossed spatial boundaries are exchanged between neighboring processes; and (v) through (vii) guard cells of the remaining grid quantities and fields are exchanged.
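The density floor of option (iii) is purely local, which is why it adds no communication steps. A minimal Python sketch of the idea follows; since eq. (6) is not reproduced here, we assume, for illustration only, a simple Courant-type limit on the Alfvén speed vA = B/√ρ in normalized units, with a safety factor of our own choosing.

```python
import numpy as np

def apply_density_floor(rho, B_mag, dx, dt, safety=0.5):
    """Clamp rho wherever the local Alfven speed vA = B/sqrt(rho) would
    violate a Courant-type limit vA * dt / dx <= safety (normalized units).
    Purely local: no inter-process communication is needed.
    Returns the clamped density and the fraction of cells touched."""
    rho_min = (B_mag * dt / (safety * dx)) ** 2  # from B/sqrt(rho)*dt/dx <= safety
    unstable = rho < rho_min
    rho_fixed = np.where(unstable, rho_min, rho)
    return rho_fixed, unstable.mean()

# Example: a density field with one spurious near-vacuum cell.
rho = np.full((8, 8, 8), 1.0)
rho[4, 4, 4] = 1e-6
rho, frac = apply_density_floor(rho, B_mag=1.0, dx=0.25, dt=0.01)
print(frac)  # fraction of clamped cells; should stay small for meaningful results
```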
Scalability of dHybrid was studied on a Macintosh dual G5 cluster interconnected with a Gigabit Ethernet network. The particle push time per particle and the fraction of the iteration time spent in the field solver were measured over 1000 iterations on a single process. One plasma species is initialized evenly across the simulation box, with a given drift velocity and thermal temperature and with the charge-to-mass ratio of protons. A perpendicular magnetic field of constant intensity is set across the box. The benchmark setup consists of two different "parallel" scenarios. In the first scenario a 96-cell cubic grid is used, with 8 particles per cell, all diagnostics off, and 1000 iterations are performed. The simulation space is then evenly split among the number of processes in each run, in a 1D partition. The average time per iteration is taken, relative to the time per iteration on a single processor. Fig. 1 compares the ideal speed-up against the achieved results. The minimum speed-up is obtained when using 32 processors. We observe that in this case the maximum problem size that can be set on one machine is limited by memory and, therefore, when the problem is split among 32 processors, the problem size per processor is much smaller and the communication time relative to the overall iteration time starts to penalize the code performance, reaching a noticeable fraction of the loop time for 32 processors.

In the second scenario, the problem size increases proportionally to the number of processors used. Fig. 2 shows the results for runs with 2, 4, 8, 16 and 32 processors. The efficiency in this case shows good parallel scaling, as expected, similar to that of other state-of-the-art massively parallel codes [11, 21]. The total communication time takes only a small fraction of the total iteration time in this case, thus showing that this problem is more indicative of the parallel efficiency of the algorithm; the penalty for scaling up to many processors is not significant. Other test runs with the same setup but with no magnetic field were also considered, and the efficiency in this case was higher, with a constant value for all the runs. This indicates that the drop in efficiency apparent in Fig. 2 is mainly due to particle load imbalance across processes, induced by the magnetic field, which gives the particles a Larmor radius on the order of the simulation box size in the x dimension.

4 3D simulations of unmagnetized objects

As a test problem for dHybrid, we have modeled the interaction of the solar wind with an unmagnetized object, mimicking the AMPTE release experiments and thus allowing the validation of the code against the AMPTE experimental results and other codes [18, 22, 23, 24, 25, 26]. The AMPTE experiments consisted of several gas (Lithium and Barium) releases in the upstream solar wind by a spacecraft orbiting the Earth [27, 28, 29]. After each release, the expanding cloud of atoms is photoionized by the solar ultraviolet radiation, thus producing an expanding plasma and forming an obstacle to the flowing solar wind. The solar wind drags the Sun's magnetic field which, at the release location, corresponds to a fairly uniform background field in the AMPTE experiments. The solar wind density, flow velocity and ion acoustic speed were measured, as was the cloud expansion velocity. A number of codes were developed to study the AMPTE release experiments in the solar wind, most of them 2D codes. These simulations showed that the MHD model lacked certain key physics [30, 31].
The correct modeling of the cloud dynamics can only be obtained in hybrid simulations, because in the AMPTE releases the ion Larmor radius of the solar wind particles is of the same order of magnitude as the cloud size itself. The problem is intrinsically kinetic and can only be fully assessed in a 3D simulation, as this yields realistic field decays over space and thus realistic dynamics for the ions. This is also of paramount importance for ion pick-up processes in planetary exospheres [32, 33]. In our simulations, the background magnetic field, the solar wind velocity, and the cloud expansion velocity were set to match the release conditions. The relative pressure of the solar wind plasma due to the embedded magnetic field (low plasma beta), and the relative pressure of the plasma cloud due to the expanding velocity of the cloud (high plasma beta), were kept fixed. These two pressures control the shock structure and determine the physical behavior of the system. The simulations were performed on a cubic computational grid, with 12 solar wind particles per cell, a fixed time step, and a cubic box of equal size in each dimension. A 2D parallel domain decomposition in x and y was used. All spatial dimensions, velocities, times, magnetic fields, and densities are expressed in normalized units. In the simulations, the solar wind flows from one side of the box to the opposite side, and the magnetic field is perpendicular to this flow. As the cloud expands, a magnetic field compression zone forms in front of the cloud. The solar wind ions are deflected around the cloud by the magnetic barrier, drift around it, pile up in the lower region of the cloud, and are accelerated in this process. The magnetic field is particularly important for testing the model, as it has a very clear signature characteristic of the AMPTE experiments. In Fig. 3 the magnetic field evolution is shown in the center plane of the simulation. As the plasma bubble expands, a diamagnetic cavity forms due to the outward flow of ions, which creates a diamagnetic current. A magnetic field compression zone is also evident: the kinetic pressure of the cloud ions drives the Interplanetary Magnetic Field outwards, creating the compressed magnetic field. These results reproduce the 2D results obtained in previous works [13, 14], and are in excellent agreement with the AMPTE experiments. In Fig. 4, taken at the same time steps, it is visible that the solar wind ions are being pushed out of the cloud area. The incoming solar wind is deflected around the magnetic field pile-up zone and drifts around the cloud. This is due to the electric field generated inside the cloud, dominated by the outflowing ions, which creates a counterclockwise electric field. This electric field, depicted in Fig. 5, is responsible for the solar wind ion drift. The same electric field also affects the cloud ions, which are pushed out on one side of the cloud and pushed back in on the other side. The ejection of ions on one side, known as the rocket effect, is one of the reasons for the reported bulk cloud drift (by momentum conservation) and is thoroughly examined, along with other effects, in the literature. One other interesting aspect is that, as the simulation evolves, there are regions of space in which the density drops, making this test problem a good choice to test the resolution of the low density stability problem.
It was found that the density dropped below the stability limit not in the center of the cloud, but behind it, in the downwind area to one side of the cloud (Fig. 4). In the center of the cloud, although the solar wind is pushed out, high density is observed due to the presence of the cloud ions. It was also found that this was an example of low density due only to poor particle statistics, as it happened only when 8 particles per cell were used and was eliminated by increasing the number of particles per cell to 12. The results, however, were very similar in the 8 particles per cell and 12 particles per cell runs, owing to the non-physical stabilization algorithm used. The total ion fluid velocity is shown in Fig. 6. The dark isosurface in the outer part of the cloud represents elevated fluid velocities. This is a clear indication of an acceleration mechanism acting on these particles, which is due to the same electric field generated by the outflowing cloud ions. The observations made at the time of the AMPTE releases match the results from the simulations. A shock-like structure, as in the simulations, was observed and reported, along with a large diamagnetic cavity coincident with the peak cloud density area [18, 29]. Both the instability in the cloud expansion and the rocket effect responsible for the cloud recoil, observed in the actual experiments and reported in several papers [13, 14, 26, 27], were also observed in dHybrid simulations. Other features, like ion acceleration on the downwind side of the cloud and the charge density pile-up on one side of the cloud, which were unobserved in previous simulations, were captured in our simulations due to the use of much higher resolutions. These effects are due to the cloud expansion, which creates an electric field capable of deflecting the solar wind around the cloud and accelerating the solar wind particles.

5 Conclusions

In this paper we have presented dHybrid, a three-dimensional massively parallel numerical implementation of the hybrid model. The stability of the algorithm has been discussed and a stabilization criterion with no impact on parallel scalability has been proposed and tested. The AMPTE release experiments were modeled, and the main physical features were recovered with dHybrid. Zero densities in hybrid and MHD models are a source of instabilities. In the hybrid case these are usually numerical artifacts due to poor particle statistics. A numerical stability analysis of dHybrid has been carried out and a constraint on both the time step and the minimum density has been found. This constraint makes it possible to determine at run time where instabilities will occur and to suppress them. The parallel scalability of the algorithm has been studied, yielding high parallel efficiency for scaled problem sizes in typical runs with 32 processes and showing excellent scalability, an indication that the parallelization method is efficient in tackling larger problems. The zero-density stabilized algorithm does not suffer from parallel performance degradation, thus avoiding the pitfalls of other solvers that require inter-process communication steps. dHybrid has been tested through comparison both with previous two-dimensional codes and with the experimental results from the AMPTE release experiments. The key features of the AMPTE release experiments are recovered by dHybrid. Parallel dHybrid allows the full-scale study of the solar wind interaction with unmagnetized objects.
Similar problems, such as planetary exosphere erosion on Mars and Venus, can be tackled, and will be the topic of future papers.
https://www.arxiv-vanity.com/papers/physics/0611174/
Revive Digital are a full-service digital marketing agency based in Southend, Essex. They work with companies in Essex and London to develop their digital marketing strategies and generate new business, using channels such as Paid Search (PPC), SEO (search engine optimisation), Social Media, E-mail Marketing and Content Marketing, and running measurable campaign strategies. Phil has been involved in website design and digital marketing for over 20 years, and has an excellent team that works out of their Southend office.
https://www.southendpeers.co.uk/our-members/phil-thomas/
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of priority of U.S. provisional application No. 63/186,893, filed 11 May 2021, the contents of which are herein incorporated by reference.

BACKGROUND OF THE INVENTION

The present invention relates to wearable training devices and, more particularly, to a wristband configured to provide immediate feedback, such as an audible sound, when the arm moves in a pitching motion deemed proper. Softball players risk injury if they are not moving their arm properly, in a natural body motion, when they pitch. Good softball pitchers typically hold their arm straight and rotate their pitching arm around their shoulder joints to pitch. Such a motion enables a straight pitch that does not risk injury to the pitcher's arm. Accordingly, to throw a softball straight and safely, it is important that the pitcher maintain the movement of their pitching arm in the vertical plane during the pitch. Currently, for fast-pitch softball, pitching instructors must take slow-motion videos to see if the pitcher is indeed using proper form, i.e., keeping their arm motion along the vertical plane. As a result, a significant amount of time is used up during a lesson by the instructor watching replays. Since many youth instructors are not employed full time to teach one player, the limited time an instructor dedicates to a player involves watching such replays, depriving the novice pitcher and instructor of time that could be spent on the other fundamentals of pitching, the sport generally, or related teachings. As a corollary, parents paying for instruction are paying a significant amount of money for the instructor to spend time watching replays. Moreover, to properly and accurately underhand pitch, the pitcher must be able to consistently control the direction of the ball prior to its release point. Furthermore, when underhand pitching, the inner forearm should graze, brush against, or otherwise tangentially contact a specific region of the pitcher's lower body. The specific region is adjacent the hip and lower rib cage area, or adjacent lateral portions of the body. Accordingly, to better train people to safely and accurately underhand pitch, a person must practice moving their arm only in the vertical plane prior to releasing the softball. As can be seen, there is a need for a wearable device for the arm of the pitcher, wherein the wearable device is configured to produce an audible sound or other immediate feedback when the arm engages in a proper form or pitching motion during at least a portion of the underhand pitching process. The wearable device can be embodied in a wristband, though other wearables that engage the pitching arm are contemplated by the present invention. As a result of this immediate feedback, when the pitching motion is correct, both the instructor and student know immediately that the pitching motion was performed the correct way. Thereby, the immediate feedback teaches the user how to move their arm in a safe and efficient way during the pitching process, as opposed to after the fact during film study. There are no other devices like this available, which unfortunately leads young pitchers to perform incorrectly and risk injury. By practicing the pitching motion, the audible indication/feedback will confirm and teach proper pitching arm motion, and the relative position of the arm with respect to the lateral midportion of the body, during the underhand softball pitching process.
SUMMARY OF THE INVENTION

In one aspect of the present invention, a method of teaching underhand pitching of a softball includes the following: providing a wearable adapted to receive an arm of a human wearer; placing the wearable below an elbow of the human wearer, wherein the wearable has an engagement point facing up and slightly inwards, and wherein the engagement point is operatively associated with a transducer configured to convert a force to an audible sound; and further attaching a visual indicium over the engagement point, wherein the visual indicium protrudes beyond an exterior surface of the wearable by up to approximately one-quarter of an inch, wherein the visual indicium is a patch, and wherein the wearable is a wristband.

In another aspect of the present invention, a method of teaching underhand pitching of a softball includes the following: providing a wearable adapted to receive an arm of a human wearer, wherein the wearable has an engagement point, and wherein the engagement point is operatively associated with a transducer configured to convert a force to an audible sound; and placing the wearable along the arm adjacent the associated elbow so that the engagement point is facing up and slightly inwards, so that when the arm moves in a vertical plane through an underhand pitching motion the audible sound is produced, wherein the production of the audible sound is caused by contact of the engagement point against a back side of a rib cage of the human wearer.

In yet another aspect of the present invention, a device for training underhand pitching includes the following: a wearable adapted to receive an arm of a human wearer; an engagement point along an exterior surface of the wearable, the engagement point being operatively associated with a transducer configured to convert a force to an audible sound; and a visual indicium covering the engagement point, the visual indicium protruding up to approximately a quarter inch therefrom, wherein the wearable is a wristband, wherein the visual indicium is a patch, and wherein the transducer is fully embedded in the wristband.

These and other features, aspects and advantages of the present invention will become better understood with reference to the following drawings, description and claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a perspective view of an exemplary embodiment of the present invention.
FIG. 2 is a perspective view of an exemplary embodiment of the present invention.
FIG. 3 is an elevation view of an exemplary embodiment of the present invention.
FIG. 4 is an exploded perspective view of an exemplary embodiment of the present invention.
FIG. 5 is a perspective view of an exemplary embodiment of the present invention, shown in use, illustrating proper placement of the wearable training device.
FIG. 6 is a perspective view of an exemplary embodiment of the present invention, shown in use, illustrating that as the full round arm swing of the pitch comes around, the engagement point of the wearable training device (in the represented embodiment, the engagement point is indicated by the patch) will contact a lateral portion of the pitcher's body, around their lower rib cage and/or hip area, if the pitcher's arm is traveling in the vertical plane.
At this moment of contact, the compressive force experienced by the engagement point will be converted into an audible sound by way of the transducer, thereby providing an audio indication that the pitcher has maintained their pitching arm in the vertical plane throughout the pitching motion.

DETAILED DESCRIPTION OF THE INVENTION

The following detailed description is of the best currently contemplated modes of carrying out exemplary embodiments of the invention. The description is not to be taken in a limiting sense but is made merely for the purpose of illustrating the general principles of the invention, since the scope of the invention is best defined by the appended claims.

Broadly, an embodiment of the present invention provides a movement assessment apparatus in the form of a wearable training apparatus for softball pitching. The apparatus has a wearable form embodying a transducer device operatively associated with an engagement point along the external surface of the wearable form, wherein the transducer converts a force into an audible sound wave. A patch may externally cover and thus identify this engagement point. In use, when the wearable form is worn on the forearm below the elbow of a human wearer, with the patch facing up and turned slightly inwards, the transducer device will produce a sound (or squeak) when compressed between the forearm and the back side of the rib cage of the wearer. This intersection of the forearm and the back side of the rib cage is indicative of proper pitching form.

Referring now to FIGS. 1 through 6, the present invention may include a wearable training device and system. The wearable training device may be dimensioned and adapted in a wearable form capable of being worn on an arm of a human user. In certain embodiments, the wearable form may be in the form of an endless loop or wristband.

A transducer may be operatively associated with the wearable form to enable the wearable training device and system. In certain embodiments, the transducer is embodied, embedded, or otherwise connected inside the wearable form. By inside, it is understood that the transducer is in the material or body of the wearable form, as opposed to the passageway it defines for receiving an arm. The transducer is configured to make an audible sound when subjected to a predetermined force, pressure, or impact energy while embodied, embedded, or otherwise connected inside the wearable form.

The transducer defines a lumen and is configured to produce a sound in response to air passing through the lumen. The transducer may have a body member that may be of any shape or size if it functions as disclosed herein (including but not limited to fitting inside the wearable form). For example, in the illustrated embodiment, the body member is in the form of a sphere or ball. In other embodiments, the body member may have a different shape, such as a cube or other shape.

The body member may be formed of a material that will return to its original shape or form once the squeezing or compression force is removed from the body member. An extension portion may extend from the body member, wherein the extension portion defines the lumen. The lumen is configured to produce a sound in response to air passing therethrough, wherein the air is urged through compression of the body member.
The wearable training device and system may include a visual indicium, or patch, which is removably attachable along an exterior surface of the wearable form. The visual indicium or patch is dimensioned and adapted to create a more solid surface when the transducer is placed in the wearable form, directly under the visual indicium. The visual indicium or patch protrudes from the exterior surface of the wristband by approximately one-quarter of an inch, further facilitating the force signal that the transducer converts to an audible sound wave when compressed between the forearm and the back side of the rib cage/hip area of the pitcher.

The proper placement of the wearable form is straight down below the inside of the elbow and between the inside of the arm, as illustrated in FIG. 5. When worn correctly, upon impact of the engagement point, the transducer will make a sound to help the pitcher, instructor, and anyone (e.g., a proud parent) in earshot realize that the pitcher used the proper throwing motion. Also, the visual indicium gives the pitcher a spot to target, making it easier for them to visualize their target. Proper pitching mechanics are so important because, when done the wrong way, underhand pitching can cause injury to the shoulder over time, especially as the pitcher gets older and adds more mass to their arm.

One method of manufacturing the present invention may include the following. First, a manufacturer may gather a wristband, a transducer, and a patch big enough to cover the transducer. Then they could heat-press the patch onto the wristband over the engagement point along an exterior of the wristband. Next, they may cut an opening in the inside of the wristband just big enough to insert the transducer, and set it with adhesive, such as hot glue. The final step is to sew the incision made in the wristband shut.

One could possibly add circuitry in the wearable form that works with a software application, wherein the circuitry includes sensors, accelerometers, and other electronic components to measure force, motion, acceleration, and velocity of the moving wearable form, and thus the arm it is associated with, as sketched below.

A method of using the present invention may include the following, employing the wearable training device and system disclosed herein. The wearable form must be placed on the forearm below the elbow, and the patch critically must be facing up and turned slightly inwards to be used to the best of its capabilities. Additionally, there are other sports where a thrower hurls an object underhand, such as horseshoes, bowling, curling and bocce. The present invention would be a boon for these activities, which share a common underhand arm motion. Also, the present invention could possibly be used in other sports that would benefit from an audible indicator being heard at a certain point during an arm motion, such as when throwing a football.

As used in this application, the term "about" or "approximately" refers to a range of values within plus or minus 10% of the specified number. And the term "substantially" refers to up to 90% or more of an entirety. Recitation of ranges of values herein are not intended to be limiting, referring instead individually to any and all values falling within the range, unless otherwise indicated, and each separate value within such a range is incorporated into the specification as if it were individually recited herein.
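As noted above, optional circuitry with accelerometers could augment the wearable. For illustration only, a minimal sketch of what such plane-checking logic might look like follows; the axis convention, threshold, and function name are all hypothetical assumptions and form no part of the claimed invention:

```python
# Illustrative sketch only: neither the sensor stack nor this logic is part of
# the claimed invention; the lateral axis and tolerance are assumptions.
def in_vertical_plane(samples, tol_g=0.25):
    """samples: iterable of (ax, ay, az) accelerometer readings in g.
    The lateral (out-of-plane) axis is assumed to be x; the arm is judged
    to have stayed in the vertical plane if lateral acceleration stays small."""
    return all(abs(ax) <= tol_g for ax, _, _ in samples)
```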
The words “about,” “approximately,” or the like, when accompanying a numerical value, are to be construed as indicating a deviation as would be appreciated by one of ordinary skill in the art to operate satisfactorily for an intended purpose. Ranges of values and/or numeric values are provided herein as examples only, and do not constitute a limitation on the scope of the described embodiments. The use of any and all examples, or exemplary language (“e.g.,” “such as,” or the like) provided herein, is intended merely to better illuminate the embodiments and does not pose a limitation on the scope of the embodiments or the claims. No language in the specification should be construed as indicating any unclaimed element as essential to the practice of the disclosed embodiments. In the following description, it is understood that terms such as “first,” “second,” “top,” “bottom,” “up,” “down,” and the like, are words of convenience and are not to be construed as limiting terms unless specifically stated to the contrary. It should be understood, of course, that the foregoing relates to exemplary embodiments of the invention and that modifications may be made without departing from the spirit and scope of the invention as set forth in the following claims.
How far can the arm of compassion reach? What do we need to do differently to start including acts of caring and sharing in our daily thoughts? To answer the first question, I believe that the arm of compassion has no boundaries. The potential is limitless and its power and strength are immeasurable. As for the second question, I believe it requires some soul searching on the part of each reader. For me, I see now that my soul searching prompted me to write this blog. I not only wanted to bring awareness to others but I also had the need to express my thoughts with the hope that somehow and in some way I could be a catalyst for others to reach out and help someone. Or at least muster up some curiosity, because curiosity is a precursor to exploration, and through exploring the depths of our soul, we gain wisdom. We hear daily about sexual abuse by educators, bullying in school and in the workplace, and neglect and injury being caused to others, and most of the time we stand on the sidelines and watch. I know that I could have stood on the sidelines and watched Betty deteriorate further, but I chose to show by example that I wanted to share a piece of me with Betty as I openly and willingly came to her aid. No money was exchanged - Betty's daughter and husband made that clear. It wasn't about the money; it was about compassion for another human being who needed help. When I first started this journey, I thought it was all about Betty, but I soon discovered that this experience was helping me define my purpose for living, for in truth it was about me. It is about having a strong sense of inner peace, knowing that I can reach out and touch the soul of someone else and at the same time experience the joy and peace within my own soul. It is interesting to discover that when I made Betty happy and made her laugh, I was happy and laughed too. And when she looked at me and said, "This is fun," I, too, was having fun. Isn't that what we all crave? It is the interconnectedness between individuals that helps feed our souls, heal our minds and nurture our bodies. We are social beings who thrive on relationships and a strong desire to be part of a community. It is that interconnectedness that helps us survive instead of decay. I experienced firsthand how an elderly human being responded to kindness and genuine thoughtfulness. It was like awakening a sleeping flower - when it felt the sunlight, it opened up and became alert and responsive. Inner peace and happiness are our birthright and, I believe, embedded within our very core. When we feel their energy and absorb those feelings, we are one with our Creator. Love and compassion go hand in hand with that sense of peace and happiness. Every time we become insensitive to others and ourselves, we rob ourselves of these gifts. There will always be conflict in the world because there are people in the world who thrive on discord and tormenting others. Sometimes it starts as a game and other times it is their mental state. This blog is not reaching out to those individuals, but rather it is being written for people like you and me who, I believe, choose to reach down into the depths of our souls to wonder, question, and seek meaning and purpose for being here in the first place. Whether we like it or not, we need encouragement and support in our lives. We notice it most when we are very young and when we are very old. But that strong desire to feel accepted, loved and respected is within each one of us.
This is our chance to participate with an open and sincere heart, to share messages of compassion, and it is certainly a chance for our voices to be heard.
http://compassionspeaksout.com/arm-compassion
A major susceptibility locus for atopic dermatitis maps to chromosome 3q21. Atopic dermatitis (eczema) is a chronic inflammatory skin disease with onset mainly in early childhood. It is commonly the initial clinical manifestation of allergic disease, often preceding the onset of respiratory allergies. Along with asthma and allergic rhinitis, atopic dermatitis is an important manifestation of atopy that is characterized by the formation of allergy antibodies (IgE) to environmental allergens. In the developed countries, the prevalence of atopic dermatitis is approximately 15%, with a steady increase over the past decades. Genetic and environmental factors interact to determine disease susceptibility and expression, and twin studies indicate that the genetic contribution is substantial. To identify susceptibility loci for atopic dermatitis, we ascertained 199 families with at least two affected siblings based on established diagnostic criteria. A genome-wide linkage study revealed highly significant evidence for linkage on chromosome 3q21 (Zall = 4.31, P = 8.42 × 10^-6). Moreover, this locus provided significant evidence for linkage of allergic sensitization under the assumption of paternal imprinting (hlod = 3.71, alpha = 44%), further supporting the presence of an atopy gene in this region. Our findings indicate that distinct genetic factors contribute to susceptibility to atopic dermatitis and that the study of this disease opens new avenues to dissect the genetics of atopy.
Eggplant, also known as aubergine, is one of my favourite vegetables, but you really need to know how to cook it correctly to achieve the best texture and taste. Here are 5 quick and easy tips on how to make eggplant taste great: - Trust – Follow your instincts and take the right eggplant home. Quality is key here. When selecting an eggplant, choose one that is vibrant in colour, heavy for its size, relatively firm and with little to no imperfections. - Taste – Focus on flavour, as eggplant thrives on seasoning. You will be rewarded with a delicious meal if you embrace the idea of seasoning with salt, pepper, herbs and spices. My favourite seasonings include ginger, garlic, lemon, salt, pepper, oregano, Italian herbs, paprika or soy sauce. If you have a favourite, give it a try! Score eggplant slices with a knife, making shallow indents before adding the seasoning. Some people prefer to season with salt to draw out any moisture, but I don't find it necessary. Scoring also helps in the cooking process. - Texture – Eggplant slices are like sponges, and if you cook them in too much oil, the taste is not pleasant. Without any oil, it can be dry and rubbery. You don't want it too mushy or to have a sponge-like texture either. To maintain control and reduce the amount of oil used, try drizzling or brushing eggplant slices with a small amount of oil before cooking. You could also add breadcrumbs to help reduce the amount of oil absorbed. - Technique – Embrace the grilling or barbecue (BBQ) technique. Grilling eggplant slices about 1cm thick on a hot BBQ also results in gorgeous grill lines that can add interest to any plate. Try roasting them sliced, whole or in wedges in the oven, or even sliced in a sandwich press for a quick alternative to frying. For those preferring to use the stove, set the temperature to low. It takes longer, but the end result is a healthier version that's sure to please. - Timing – Timing is everything, and to ensure your eggplant cooks well, don't overcrowd it in a dish when roasting. If cooking on the stove, be sure to toss part way through the cooking process, so it cooks evenly. A delicious winter dish that I have always enjoyed is the vegetarian version of this Traditional Egyptian eggplant moussaka*, which has been passed down through the generations of my family. Check it out and I hope you enjoy it too! For more great recipe ideas, type your vegetable or cuisine in the search bar or check out Salapedia.
https://www.lovemysalad.com/blog/5-tips-how-make-eggplants-taste-great
On March 2, 2021, Mikhail Gorbachev, the last leader of the Soviet Union, turned 90. His five years in power were tumultuous and multi-faceted, and saw his country opened up, the Cold War ended, and finally the collapse of the Soviet Union in December 1991. Fondly remembered in the West, he was for a while reviled in his homeland. How should we judge him as a political leader? What were his major achievements and failures? Gorbachev was born in the southern region of Stavropol, deep in the Caucasus, a mountainous area comprising many different nationality groups. Visitors there included the KGB chief Yuri Andropov, who took on the role of Gorbachev's patron, ensuring his rise through the local party ranks. The impression from these early days is of an energetic man with strict work habits and high ambitions. He met his future wife Raisa Titarenko at Moscow State University, where he studied law. In 1980, the ageing and ailing Leonid Brezhnev brought Gorbachev into the ruling Politburo, with the task of reviving Soviet agriculture, a task that appeared insuperable. At this time, Gorbachev, at 49, was by some margin the youngest member of the party's ruling body. After Brezhnev's death in November 1982, and an "interregnum" under two elderly successors, Andropov (1982-84) and Konstantin Chernenko (1984-March 1985), Gorbachev was the broad choice for party leader in March 1985. Few changes were anticipated. Gorbachev made long speeches and rarely listened to others (his wife excepted), but sought regular contact with the population. A year later, at the party's 27th Congress, Gorbachev set out his policies of Glasnost and Perestroika (Frankness and Reconstruction) to a body of deeply entrenched party hardliners. The following month, the catastrophe at the Chernobyl nuclear plant presented his first major crisis. Half-hearted concealment proved impossible as a radiation cloud moved northward and spread across Europe. Gradually, Gorbachev introduced political changes to undermine the authority of the Communist Party, removing much of the old political elite. The 19th Party Conference in 1988 inaugurated multi-party elections and a new Congress the following year that elected a new Supreme Soviet and, the year after that, a president (Gorbachev was elected to that post). But economic reforms proved harder to complete, eliciting widespread opposition. Glasnost resulted in a free media that uncovered and exposed many of the problems that had long riddled society: Stalin's crimes, excessive party privileges, and demands for more autonomy and even independence from some Soviet republics, led by the Baltic States. In foreign policy, he reached a new agreement with the United States after several summits, made unilateral cuts to medium-range Soviet nuclear weapons, visited the United States, and invited the archetypal Cold War president Ronald Reagan to Moscow, the two leaders wandering around Red Square to the bemusement of onlookers. Above all, he abandoned the Brezhnev Doctrine that had enforced military cooperation between the Soviet Union and the East European satellite states to prevent political change. In 1989, on the 40th anniversary of the German Democratic Republic (East Germany), he flew to East Berlin and warned leader Erich Honecker that he needed to make changes.
As the Communist regimes collapsed like dominoes later in the year, and the 28-year-old Berlin Wall was dismantled, Moscow simply watched it happen. A year later, the two German states reunited; more simply, West Germany swallowed up the East. Gorbachev acceded to the demands of the United States and German leader Helmut Kohl that the unified state should be a part of NATO. By 1990, the Soviet Union faced an economic crisis, precipitated by the failure of economic reforms and debilitating strikes by coal miners and steelworkers. In 1991, he initiated a new Union Treaty to preserve the USSR. To prevent its adoption in August, hardliners kidnapped him in Foros, Crimea and launched a putsch in Moscow, which failed to ignite. In the aftermath, a number of republics declared full independence, including Ukraine and Belarus (the Baltic States had preceded them). But it was in his native Russia that Gorbachev was upstaged. In June 1991, Russia elected its own president, the ebullient Siberian Boris Yeltsin, a former ally whom Gorbachev had fired as Moscow party chief in 1987, now gunning for revenge. Yeltsin banned the Communist Party, derided Gorbachev for appointing the putsch leaders, and declared Russian control over resources, including the army on Russian territory. A few months later, Gorbachev was unceremoniously evicted from the Kremlin, an emperor without an empire. The state founded by Lenin, which reached its peak under Stalin with the defeat of Nazism in 1945, ended quietly under Gorbachev, a man who sought peaceful change but lacked the means to preserve the Soviet state. We should acknowledge his role in the end of the Cold War and respect his efforts to change the rigid system he inherited. His task was probably impossible.
https://publicnews.in/world/opinion-mikhail-gorbachev-cold-war-hero-or-the-man-who-lost-the-empire/
"The oldest living Nantucket resident is given the Boston Post Cane, an award they hold until passing" - Nantucket Nectars Bottle-cap This sounds pretty cool. A nice little honor to hold while clawing your way through old age. Why leave it as just an honor? Let's convert this to a fantasy setting. and give it a little power. Oh, and a curse. No point to it if there isn't a curse. The Gnarly Post Cane is given to the oldest living human resident of Village of Gnarly Post. It is believed that the cane ensures bountiful harvests, and it may be truth, as even during the worst years of drought and famine, the crops in Gnarly Post thrive. The holder of the Gnarley Post Cane can expect to live a productive life well into his or her early 120's or longer, assuming they don't come to an unnatural end. The Village of Gnarly Post has more then it's share of heart attacks among its middle aged population. Wether that has something to do with the Gnarly Post Cane, coincidence, or some other factor is anyone's guess. Beneath the Temple of Edea - By Vance Atkins Leicester's Rambles B/X No fucking level stated This twelve page single-column adventure features a sixteen room linear dungeon with hobgob...
https://www.tenkarstavern.com/2011/03/stealing-ideas-from-bottle-caps.html
Writing by hand requires a child to correctly identify the sticks, curves and/or circles that make up a letter, then reproduce those shapes in a particular orientation, using a set sequence of pen strokes. Before the skill is automatized, the handwriting process can be quite mentally taxing. New writers are also struggling to develop the fine motor skills needed to grip a pen or pencil and the language encoding skills required for reading and spelling. Add to this the challenge of writing in a straight line and creating letters of the same height and width and you’ll find that reversing letters is a common mistake for beginners to make. This is particularly the case for symbols built from the same set of shapes, including b/d, p/q, f/t, i/j, m/w and n/u. Nonetheless, most children grow out of letter reversal by age 7 and it only becomes a cause for concern when errors occur beyond first and second grade. A guest post from the authors of ‘The Illustrated Guide to Dyslexia and Its Amazing People' Following a blog that is current can keep you informed of the latest research and help you stay abreast of dyslexia related events and dates for your diary. If you’re active or thinking of becoming active in a dyslexia campaign, it’s a great way to connect with other advocates, particularly those working outside of your area. Blogs are also an ideal way to go about researching, as they are typically full of can-do posts and avoid the dense format of reference material. You may discover authors who are themselves dyslexic and thus write in a more intuitive manner. For parents of children who have just received a diagnosis, it can be helpful to read about the experiences of families who have embarked on a similar journey. Dyslexia is a specific learning difference that can affect both children and adults and cause difficulties with reading, spelling and math. It’s important for parents and teachers to understand that dyslexia does not affect intellect. Rather, it is a different way of processing language in the brain. Often individuals who are dyslexic struggle to split words into their component sounds. For children who are learning how to read and write, this causes frustration and poor performance in activities involving literacy skills. Because reading is required across the curriculum, students may quickly fall behind their same-age peers and lose confidence in the classroom. That’s why it’s important to recognize the symptoms early on so children can gain access to appropriate coping strategies and accommodations that can help them achieve their full potential at school. Giftedness is often defined as an intellectual ability linked to an IQ score of 130 or over. However, not all gifted children excel in an academic area. Some may display high creative, artistic, musical and/or leadership abilities relative to their peers. Giftedness can be focused in one skill, or it may be more general. It's also important for parents and educators to understand that it can sometimes come with specific learning differences that impact on performance at school. In these situations it's important to help a child develop their talents while also overcoming any challenges posed by the SpLDs. In some cases, it may be appropriate for the child to attend a special program or a school specifically for gifted children, so they have ample opportunities for advancement in a classroom environment that is sensitive to their needs and provides adequate stimulation. 
With access to the right resources and emotional and academic support, every gifted child can achieve their full potential at school. In the UK, the definition of disability is covered under the Equality Act of 2010 and hinges on how “substantial” the effect of the disability is deemed to be. It includes provisions for people with dyslexia who implement coping strategies but also considers workplace contexts and situations in which said strategies cannot be used. In the US, the Americans with Disabilities Act (ADA) discusses how a disability affects the individual, specifically if it interferes with their “life activities.” Dyslexia is currently evaluated on a case-by-case basis and most dyslexic individuals are considered to have some impairment in learning, reading and/or writing. Many parents and teachers struggle to distinguish between specific learning disabilities that impact on literacy skills. This confusion is made even worse when they have such similar names. While dyslexia is traditionally associated with reading, dysgraphia affects writing. Both are language disorders that can cause a child to struggle in the classroom, but they are separate conditions with unique neurological and behavioral profiles (1). Children with dysgraphia may have trouble with letter formation and word spacing in handwriting. They can experience difficulty with written expression, from translating ideas into language, and organizing their thoughts, to using grammar, capital letters, and punctuation correctly. For students with dyslexia, it is often English spelling and sounding out words in reading that are problematic. Being able to spell correctly depends partially on phonemic awareness, which is the ability to hear the sounds that make up words. But as a lot of English vocabulary is pronounced differently from how it is spelled, there is also a bit of memorization involved. When spelling becomes a source of difficulty or stress at school, it’s important to remind students that it is just one aspect of knowing a word. Computers and mobile devices can help them increase their accuracy in writing and they may want to try a phonics-based program of strategy instruction to improve their skills. Additionally, learning how to touch-type is a useful intervention, particularly when specific learning difficulties like dyslexia are present. This is because typing harnesses muscle memory in the hands to encode spelling as a pattern of keystrokes. A font is a formal set of text characters, including letters, numbers and punctuation, which has been created by a graphic designer in a particular style. Not all fonts are created equal and some typefaces may be more or less accessible for readers with visual impairments, visual processing disorders and dyslexia. For example, Dyslexie font is a font designed specifically for dyslexic readers. OpenDyslexic was also designed for people with dyslexia. Additional factors such as letter spacing, the spacing between words and lines on a page, font size, text colour and background can all impact on readability and reading speed. A guest post by Cigdem Knebel. Dyslexia is “difficulty” in learning to read or interpret letters, words, and other symbols. Many dyslexic students experience challenges when it comes to language skills development, including reading, writing and spelling. The condition is often referred to as a “learning difficulty” because dyslexia makes it harder for them to reach their full potential in a traditional school environment. 
Nonetheless, phonics-based, multi-sensory and evidence-based reading instruction has been shown to improve the language skills of dyslexic children. So while dyslexia may still be classed as a learning difficulty, the terminology "learning difference" may be more appropriate. And fortunately, with the right classroom accommodations, most students can achieve academic success alongside their peers. A good quote can work in the same way as a supportive teacher or coach, providing us with the encouragement we need to strengthen our self-resolve. Quotes can make us feel better about what's going on in our lives. They often teach important life lessons and are a great way for an individual to share his or her wisdom about an experience. When it comes to quotes about dyslexia, you'll find a mix of anecdotes, advice and words of wisdom. You will also encounter dyslexic individuals discussing their experience at school pre-diagnosis, or in cases where they didn't receive the help they needed. That's why one of the most important steps in addressing dyslexia is ensuring that everyone in the child's life, from family to educators, is informed so the right accommodations can be put in place.
https://www.readandspell.com/blog/Dyslexia?page=2&field_blog_tags_tid%5B1%5D=49&field_blog_tags_tid%5B2%5D=52&field_blog_tags_tid%5B3%5D=40&field_blog_tags_tid%5B4%5D=41&field_blog_tags_tid%5B5%5D=48&field_blog_tags_tid%5B6%5D=55&field_blog_tags_tid%5B8%5D=46&field_blog_tags_tid%5B9%5D=18&field_blog_tags_tid%5B10%5D=50
Caribbean Women Healers: Decolonizing Knowledge Within Afro-Indigenous Traditions is a multi-year collaborative research project co-produced by faculty and by digital librarians and technical professionals from the University of Oregon Libraries' Digital Scholarship Services (DSS). This digital humanities project contributes to existing Black Digital Humanities by centering deep-listening and digital decolonization methodologies that prioritize human dignity, traditional ecological knowledge (TEK), and data stewardship. More specifically, Caribbean Women Healers highlights how Afro-Indigenous (Black and Black-Indigenous) women elders mobilize their intergenerational knowledge and roles as healers, teachers, and community leaders within Caribbean healing traditions to effect change well beyond the traditional centers of those communities.

Introduction
History of the Project
Contextualizing Caribbean Women Healers in Black DH

Openness is a concept that has come to characterize knowledge and communication systems, epistemologies, society and politics, institutions or organizations, and individual personalities. In essence, openness in all these dimensions refers to a kind of transparency which is the opposite of secrecy, and most often this transparency is seen in terms of access to information, especially within organizations, institutions or societies. [Peters 2015]

- social violence that produces early death, expressed in the saying, "la muerte es parte de la vida" (death is a part of life);
- rapid emigration from their communities to urban centers and to other countries, expressed in the statement "el campo se está vaciando" (the countryside is being emptied);
- fundamentalist Christian religious social values and norms that demonize Afro-Indigenous/Black traditions and practices.

Participatory Methodology & Decolonizing the Digital Healer
Community-Centric Methodology and Its Impact on the Project

- "Antes que todo, Dios. (Gracias a la Misericordia)": Humility before Creation; all beings – human, plant and animal – are a manifestation of Creation.
- "Todo Vive.": All of Earth is alive; all of Earth is life.
- "Moyumba/Ancestros": To honor ancestors and elders and all those who have come before us.
- "Obedi ka ka, obedi le le.": Knowledge was shared throughout the world.
- "Convivir.": To be in relation with each other across a long span of time.
- "Compartir.": To generate intimacy and authenticity in our relations through storytelling, laughter, sitting together, eating together, etc.
- "Cara a Cara.": To see each other's faces, to know the truth of our experiences in each other's gazes/eyes/faces.
- "Ser generosa.": To never arrive empty-handed, and to never let someone leave empty-handed, either.
- "Ser recíproca.": Enabling balance in the universe between all living beings, material and immaterial, even in the creation of knowledge.
- "Hay que fluir.": To be flexible and easy-going in the rhythms of life's chaos and unexpected events.

Understanding the Technology in Use
Open Digital Stewardship as Methodology Supporting Decolonization
Open Source and Sustainability
Conclusion
Successes
Challenges
Just Futures
Future Work
Notes
Works Cited

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
http://www.digitalhumanities.org/dhq/vol/16/3/000631/000631.html
A rich vocabulary is both a great asset and a great joy. When you have an extensive vocabulary, you can provide precise, vivid descriptions; you can speak more fluently and with more confidence; you can understand more of what you read; and you can read more sophisticated texts. A good vocabulary can enrich your personal life, help you achieve academic success, and give you an edge over others in the workplace. Whether you want to improve your vocabulary for a standardized test, learn more effective communication skills to use in the workplace, or be more articulate in social situations, the 501 questions in this book will help you achieve your goal.

How to Use This Book

Each chapter begins with a list of words and their definitions. These are words you can expect to find in newspapers and magazines, in business documents, in textbooks, and on standardized tests like the SAT. The 501 words are divided by theme into 25 chapters. Each chapter has 20 questions to test your knowledge of the words in that chapter. The questions may be multiple-choice, matching, fill in the blank, synonym/antonym, or analogy. In addition, the four "Word Pairs" chapters ask you to complete a crossword puzzle with the chapter's vocabulary words. Answers to each question are provided at the end of each chapter.

The questions increase slightly in difficulty towards the end of the book, but you can complete the chapters in any order you wish. If you prefer one theme over another, you can skip ahead to that chapter. Just be sure to come back and complete each section. When you are ready to begin, review the word list at the beginning of each chapter. Read each definition carefully. You may find that you do not know the exact meaning of words that you thought were familiar, even if you know the context in which the word is often used. For instance, the phrase moot point has come to mean a point not worth discussing because it has no value or relevance. This is a non-standard use of the word but one that has come to be accepted. Moot actually means debatable or undecided. You may also find that some words have secondary meanings that you do not know.

To help seal the words and their meanings in your memory, try these general vocabulary-building strategies:

1. Create flashcards. Use index cards to create an easy and effective study tool. Put the vocabulary word on one side and its meaning and a sample sentence on the other. You can copy the sample sentence from the word list, but you will learn the word faster and remember it better if you create a sentence of your own.

2. Use the words as you learn them. The best way to remember what a word means is to use it. Make it an active part of your vocabulary as soon as possible. Use the word in a letter to a friend, as you write in your journal, or in your next conversation with a coworker. Share your new words with your best friend, your siblings, or your spouse.

3. Keep it manageable. You can't learn 501 new words overnight, and you will only get frustrated if you try to memorize them all at once.

4. Review, review, review. After you learn a set of words, remember to review those words regularly.
If you simply keep moving forward with new words without stopping to review everything you have already learned, much of your effort will be in vain. Repetition is the key to mastery, especially with vocabulary. The more you review the words and their meanings, and the more you use them, the more quickly and permanently they will become part of your vocabulary.
https://pdflot.blogsterick.com/501-vocabulary-questions-pdf/
May 17, 2019

We are on the last stretch of the year in our Spanish classes. This is an important time of the year. Students have been doing great and their grades reflect it. Please encourage them to remain focused and to continue to study for quizzes and tests as the year ends. This will ensure that their grades don't slip during the last weeks. 6th grade continues to converse about activities they like to do (verbs) and will memorize adjectives to describe others. There will be plenty of reinforcement and class work to help them practice describing others in Spanish during class. Please provide a space and time for your child to study and memorize the adjectives and expressions for this unit. Homework and links to videos will be posted in Google Classroom.

++++++++++

23 de enero, 2019

Dear parents, January has been a busy month in our Spanish class! Sixth graders continue to reinforce use of vocabulary for body parts and vocabulary for school supplies. They've recently been introduced to the nouns for school subjects in Spanish. We are using the vocabulary in context as we add phrases to communicate. They'll learn about Spanish articles (feminine and masculine, singular and plural). Please encourage your child to read vocabulary and phrases out loud 3 times each day to assist memorization. A link to Quizlet has been added to Google Classroom. They should be using the website to study vocabulary, play games, and test themselves at least 4 times a week for 10 minutes. If you have any questions please do not hesitate to contact me. Blessings, Cecilia López, Spanish Teacher

++++++++++++++++++++++++++++

23 de octubre, 2018

Sexto grado: Today, sixth graders practiced counting by 3's and then by 5's to 60 in Spanish. They have been working on writing and recognizing the numbers from 1-100 out of sequence in Spanish. They've had some fun playing BINGO en español! They've added two questions and their answers to the dialogue they use to talk about themselves. A handout was given to them today and they practiced having the conversation with a partner several times during class. On Friday, October 26, 6th grade will have a test on the dialogue they have been rehearsing. They should study the dialogue until memorized (make sure you can ask and answer the questions without needing to refer to the handout). The questions are: ¿De dónde eres? - Where are you from? Soy de... - I am from... (California, Lafayette, Los Estados Unidos, Walnut Creek, etc.). ¿Cuántos años tienes? - How old are you? Tengo _______ años. They will have to fill in the responses to all the questions in the dialogue. They should practice writing the answers several times. Spelling always counts in Spanish! Have a wonderful week! Cecilia Lopez, Spanish Teacher, St. Perpetua School

17 de septiembre, 2018

Sexto grado: ¡Viva México! With that phrase yelled 3 times in the Mexican Palace of Government, Mexicans began the celebrations of their Independence Day in Mexico City yesterday. On Tuesday, students will hear the story of the night before the beginning of the independence war, which is called "el grito". This is usually celebrated on the 15th of September at midnight. In class, we continue to review and bring to memory the basic vocabulary for communication found in the Preliminary chapter. On Tuesday they will have an oral assessment on introducing themselves, and on Thursday they will have a written assessment on the same phrases.
I am trying to establish the expectation that students pay attention to detail when memorizing words and phrases, and to pronunciation, in order to develop a foundation for writing with proper grammar and spelling. Hopefully, as they learn to speak Spanish in class, the skill of writing is being developed simultaneously. This week they'll be introduced to responding to class directions in Spanish, and to numbers. Students should be visiting Quizlet every day for 10 minutes. Click on "Useful Links" to find the link. They should be repeating the Spanish words out loud (make sure the volume is up on the computer) and committing the vocabulary to memory. Students should try all modes of practice on Quizlet, including writing the vocabulary and testing themselves. Thank you for your continued support. Cecilia Lopez, Spanish Teacher, St. Perpetua School

+++++++++++++++

7 de septiembre, 2018

Séptimo grado: Please continue to memorize the vocabulary (and spelling) on page 22 in your textbook. Follow the link to Quizlet in Google Classroom. Read the phrases to introduce yourself out loud 5 times each day. Be ready to participate in our oral communication activity on Tuesday, September 11. Have a great weekend!

++++++++++++++

September 6, 2018

Students in 6th grade who scored low on their quiz last week will be given a second opportunity to take the examencito (quiz) on greetings, asking and telling how you are, and goodbyes in Spanish this Tuesday, September 11. Make sure to practice your spelling and have the phrases well memorized. We continue to review previous knowledge in the Para empezar chapter. I am assessing and reinforcing as I discover what areas need improvement. Objectives in this chapter are: - Greet people at different times of the day - Introduce yourself to others - Respond to classroom directions - Begin using numbers - Tell time - Identify parts of the body - Talk about things in the classroom - Use the Spanish alphabet to spell words - Describe weather and seasons. Culture: Mayan glyphs, Mexican holidays, Los sanfermines, the Aztecs and the Aztec calendar, and the reversed seasons in the Northern and Southern Hemispheres. PLEASE CONTINUE TO MEMORIZE VOCABULARY ON PAGE 22 IN YOUR TEXTBOOK. MEMORIZE MEANING AND PRONUNCIATION. FOLLOW THE LINKS TO QUIZLET FOUND IN GOOGLE CLASSROOM. SPELLING ALWAYS COUNTS IN SPANISH.

++++++++++++++++++++++++

August 28, 2018

6th Grade Spanish News: Students in 6th grade will have an examencito (quiz) on greetings, asking and telling how someone feels, and goodbyes. The examencito will be on Friday, August 31. Students in 6th grade have been practicing greeting, asking and telling how someone feels, and saying goodbye (formal and informal) in our Spanish class. They were introduced to Google Classroom. A class has been set up for them to access from home so they can complete assignments and projects at home. Please remind them that they have to log in using their St. Perpetua e-mail account. Last Tuesday, all students were able to log in and follow the link to their vocabulary and phrases flashcards on Quizlet. If they follow the link through Google Classroom, they do not have to create an account in their name.

+++++++++++++++

August 21, 2018

6th grade Spanish news: ¡Bienvenidos! We've started our Spanish 1A program! We have Spanish every Tuesday and Friday. Students will need the following materials for class: 1. Realidades 1A textbook 2. Spiral notebook (70 pages) 8 1/2 x 11 3. 1-inch binder, or a section and binder paper in an organizational binder. 4.
At least 2 sharpened pencils and 2 black or blue pens 5. On red or green pen to correct work 6. Glue stick, scissors, crayons/markers Our first unit is the Preliminary Chapter, Para empezar, will cover greetings, introductions, goodbyes; numbers, time; parts of the body. Grammar will cover lexical use of estar, ser, and plural commands. The cultural focus will be on appropriate behavior when greeting someone. Upon completion of this chapter, your child will be able to:
http://www.stperpetuaschool.org/teachers___administrators/mrs__cecilia_lopez/6th_grade_spanish
Melting Point: 1025
Properties: White or light gray powder, slightly soluble in water. It can react with water to release hydrogen fluoride at high temperature, and may be slowly dissolved in acid to release hydrogen fluoride. Its melting point is 560 degrees.

|Element|Lithium|Sodium|Potassium|Rubidium|Cesium|Francium|
|Atomic symbol|Li|Na|K|Rb|Cs|Fr|
|Atomic number|3|11|19|37|55|87|
|Atomic mass|6.94|22.99|39.10|85.47|132.91|223|
|Valence electron configuration|2s¹|3s¹|4s¹|5s¹|6s¹|7s¹|
|Melting point/boiling point (°C)|180.5|…|

30/10/2014 · Ionic compounds have very high melting and boiling points, are soluble in water, and are almost always conductive when dissolved in a solution or melted. Covalent compounds have low melting and boiling points (they melted almost immediately) and did not conduct electricity; although the citric acid did, it was not as bright as the other compounds.

Alkali and Alkaline Earth Metals. The elements in group one of the periodic table (with the exception of hydrogen - see below) are known as the alkali metals because they form alkaline solutions when they react with water. This group includes the elements lithium, sodium, potassium, rubidium, caesium and …

ESPI High Purity Metal Specialists since 1950, with a company mission to provide a reliable resource for researchers' high purity metals, metal compounds and metal alloys for all major universities, international and domestic manufacturing companies and …

7/8/2020 · A member of the alkali group of metals. It has the atomic symbol Na, atomic number 11, and atomic weight 23. | Review and cite SODIUM protocol, troubleshooting and other …

Brigham Young University, BYU ScholarsArchive, Theses and Dissertations, 1969-08-01: A thermodynamic approach to the study of phase equilibria in the sodium-potassium alloy system.

I. CuSO4(s) ⇔ Cu+2(aq) + SO4-2(aq). Like the sodium chloride, the copper sulphate was split into its individual ionic components. The positive Cu+2 ions were attracted to the oxygen (negative) end of the water molecules, and the negative SO4-2 …

Question: Bond Lab. Laboratory details: all labs will have pre-lab comments found within ANGEL. You will learn of any changes in the lab in the pre-lab comments. These changes take precedence over what is found in the lab. For you to get an idea of real lab techniques, you will find some video links to laboratory videos. All lab reports must be in MS-WORD.

Potassium and calcium play important roles in tree and plant metabolism, and as a result both are found in moderately significant quantities in wood. When that wood is burnt at high temperatures, alkaline compounds of potassium and calcium form.

Melting and Boiling Points, Densities and Solubility for Inorganic Compounds in Water: physical constants for more than 280 common inorganic compounds. Density is given for the actual state at 25 °C and for the liquid phase at the melting point temperature.

15/1/2012 · Potassium chloride, or KCl, is an ionic solid. It is white in colour. Its melting point is about 770 °C, and its boiling point is 1420 °C. Potassium chloride is mainly useful in making fertilizers, since plants need potassium for their growth and development.

Three different methods were used to measure the concentrations of sodium species at various locations in an oxygen-natural gas fired soda-lime-silica glass melting furnace. Measurements were made in the combustion space using the absorption of visible light by sodium atoms, and in the flue duct using laser-induced breakdown spectroscopy (LIBS).

Sodium on the Wikipedia for Schools.

Sodium-potassium alloys containing 40-90 percent potassium (by weight) at about 25 °C are silver-white liquids distinguished by their high chemical activity (they ignite upon exposure to air). The electrical conductivity and heat conductivity of liquid sodium-potassium alloys are both lower than the corresponding values for sodium and potassium.

(Original post by GoodStudent 1710) Hi, for this question I wrote that magnesium has a smaller ionic radius than sodium, so there is a stronger attraction between the nucleus and the outer shell electrons - but this point wasn't in the mark scheme, so does that mean I don't get … decreasing melting and boiling points.

In general, their densities increase when moving down the table, with the exception of potassium, which is less dense than sodium. Reactions of alkali metals: alkali metals react violently with water, halogens, and acids.

18/8/2020 · The alkali metals also have low densities. They are low enough for the first three (lithium, sodium and potassium) to float on water. The melting point of francium will be around 27 °C.

Sodium-potassium phase diagram, i.e. the melting point of sodium as a function of potassium content (in atomic percent). Molten sodium is used as a coolant in some types of fast neutron reactors. It has a low neutron absorption cross section, which is required to achieve a high enough neutron flux, and has excellent thermal conductivity.

Many sodium compounds are useful, such as sodium hydroxide for soap-making, and sodium chloride for use as a de-icing agent and a nutrient (edible salt). Sodium is an essential element for all animals and some plants. In animals, sodium ions are used against potassium ions to build up charges on cell membranes, allowing transmission of nerve impulses when the charge is dissipated.

Potassium in feldspar. Elemental potassium does not occur in nature because it reacts violently with water. As various compounds, potassium makes up about 1.5% of the weight of the Earth's crust and is the seventh most abundant element.

THERMODYNAMIC PROPERTIES OF MOLTEN NITRATE SALTS. Joseph G. Cordaro, Alan M. Kruizenga, Rachel Altmaier, Matthew Sampson, April Nissen. … melts between 38-44 °C to give a clear liquid [6, 7]. Water is lost above 60 °C and continues to …

Molten sodium oxide is a good conductor of electricity. State the half-equation for the reaction occurring at the positive electrode during the electrolysis of molten sodium oxide. This was expected to be a high-scoring question, but this was not found in practice.

The melting point of potassium is very sensibly lowered by small quantities of sodium (Kurnakow) (17), and the value and constancy of the melting points of the last two fractions show that any sodium present has been separated by distillation. In a preliminary …
https://rubytree.pl/1376854799+if-metals-calcuim-potassium-sodium-were-high-melting-point.html
Presenter: Christa Marshall, Ph.D.
Description: Drawing on evidence from various treatment court models, intentional family engagement leads to better outcomes for participants and the family unit. In addition, evidence from related systems such as schools, substance use treatment programs, and mental health treatment programs reveals that integrating family and friends into the recovery model has benefits all around.

Emerging Topics: Sleep Hygiene
Presenter: Brian L. Meyer, Ph.D., LCP, is a Clinical Psychologist and the Psychology Program Manager for Community-Based Outpatient Clinics at the Central Virginia VA Health Care System and an Assistant Professor in the Department of Psychiatry at Virginia Commonwealth University.
Description: Imagine a life in which you slept only 2-3 hours a night: how would that change you? Insomnia is a symptom of almost every significant mental health problem found in Veterans Treatment Courts, including Substance Use Disorders, PTSD, Depression, and chronic pain. Nonetheless, it often goes unaddressed until later in treatment, even though treating insomnia would improve every one of those diagnostic problems. This presentation addresses the links between insomnia and other mental health disorders. It then provides a systematic approach to sleep hygiene that can dramatically improve sleep and decrease insomnia by 30-50% in just two sessions. So take off your shoes, pull up your blankets, and learn how to help your clients sleep better – and maybe you will sleep better, too.

Strength at Home: A Trauma-Informed, Evidence-Based Intimate Partner Violence Intervention for Veterans and Civilians
Presenter: Casey Taft, Ph.D., staff psychologist at the National Center for PTSD, VA Boston Healthcare System
Description: The webinar discusses common risk factors for intimate partner violence, such as post-traumatic stress disorder, head injury, substance use, and other core themes that may underlie trauma and lead to abusive relationship behavior. The webinar also details the scientific evidence behind Strength At Home and how it enhances client motivation for change and facilitates personal responsibility.

Military Leadership Principles in Veterans Treatment Courts
Presenter: Major General Clyde "Butch" Tate (retired), chief counsel, National Association of Drug Court Professionals
Description: This webinar addresses the core principles of military leadership and how to apply them successfully to veterans treatment court teams and participants. Attendees will learn to identify and describe leadership do's and don'ts when working with justice-involved veterans and how to apply these principles in treatment courts.
https://justiceforvets.org/resources/training/academy/webinars/
Zelda: Breath of the Wild wins Game of the Year at 21st D.I.C.E. Awards, Nintendo big winner of the night

Winners of the 21st D.I.C.E. Awards were announced last night. The Academy of Interactive Arts & Sciences (AIAS) announced the D.I.C.E. Awards winners during a ceremony on Thursday, February 22 in Las Vegas. The Legend of Zelda: Breath of the Wild won Game of the Year and walked off with three other awards. Cuphead also took home three awards, and Playerunknown's Battlegrounds took home two, as did Horizon: Zero Dawn, which led the group with ten nominations, one of which was for Game of the Year. Nintendo's Genyo Takeda was honored with a lifetime achievement award during the ceremony. Takeda, who retired last year, was general manager of Nintendo's Integrated R&D division and designed the save system for The Legend of Zelda. He also produced various games for Nintendo, such as the arcade game EVR Race, Punch-Out!!, Super Punch-Out!! (arcade and SNES), StarTropics, Pokemon Puzzle League, and Dr. Mario 64. Takeda was also a director on various titles. The full list of winners and nominees is posted below. Winners are posted in bold.
https://www.vg247.com/zelda-breath-wild-wins-game-year-21st-d-c-e-awards-nintendo-big-winner-night
A lot of data is tabular in nature, and is efficiently encoded in text files. While such files are easy to produce and read, they bring with them several challenges when used in visualization tools and other programs that have to understand some of the data's properties. Examples include categorical data, special values in numerical columns (which are common in Census data), and information about the data like its producer. Here is a proposal for a simple data description format that provides that missing information. I call it qnch.

Goals

The goal of this language is to provide all the necessary information that is commonly needed when parsing a data file and trying to find more information about it. The goal is not to cover every use case that could possibly exist, nor to create elaborate taxonomies of data types. There are other efforts for that, and I have no desire to compete with them. qnch is meant to be simple, readable, easy to implement and use, yet versatile and useful.

Information about data files is usually given in accompanying text or PDF files, in a way that cannot be parsed automatically. This makes the processing of the data for visualization and other purposes much more of a challenge than necessary. While it may make sense to figure out a way to include all the additional information in the actual data file, it's not realistic to expect all data producers to switch to such a format, or all data processing and visualization programs to be able to read it. A separate file does not require the original data to be altered at all, and can be produced either by the data source directly or by a third party. I can even imagine setting up a registry that you can give the URL of a data file and that points to where you can find a third-party qnch file.

qnch is written using a structured format called YAML. This was chosen on purpose to make the file human-readable (and also -writable) without all the clutter and complexity of XML. An equivalent XML implementation would consist mostly of markup without any benefits. YAML is a very clean format that is very similar in design to JSON (and is, in fact, a true superset of JSON), and thus includes useful semantic features like lists and key-value pairs.

Below is a discussion of the different features and some examples that show the usage in actual qnch syntax. This is not a complete specification, but it contains enough information to get the discussion going about additional features and other ideas.

Basic information for the Parser, Format

The parser needs to know a few things about the file to parse it. In addition to the encoding, the type is specified here. For delimited files, most of the information for the parser is specified at the beginning of the file. Character numbers for fixed-column formats are given as part of the dimension definitions.

- encoding (optional). What encoding is used for the data file? This is important for textual data and perhaps categories, since numbers are rather immune to encoding changes. The default is utf-8, which conveniently includes ASCII as a subset. Other ISO encodings can be specified here, as well. Others, such as non-Unicode Asian encodings, or exotic encodings like EBCDIC, are explicitly not supported. Unicode is simply the way to go.
- type (mandatory). This can be delimited or fixed. Delimited means that there is a delimiter character that separates the fields on each line. Fixed means that the values are located at specific character indices on each line.
There are two shortcuts here, csv and tsv. These specify a delimiter of a comma and tab, respectively.

- delimiter (optional). Given for delimited files, this is the character that separates values on each line. If not given for a delimited file, it defaults to a comma. Any character can be used here.
- quotation-character (optional). For delimited files, this is used to enclose fields whose values contain the delimiter character. This defaults to double quotes.
- headers (optional). For a delimited file, is there a header line? Value can be true or false; the default is true. If false, the sequence in which dimensions are specified is considered the sequence of dimensions on each line.
- strict (optional). Whether the parser should throw errors when an unspecified value is encountered. The default is false. If set to true, this will throw errors when there are columns that were not defined, categories that were not specified, or when a numerical value is outside the range specified by the minimum and maximum fields below.

Data Meta-Data

This is the place for human-readable information about the dataset. The list below is what seems to be useful for most cases, but there is clearly room for more. The source field therefore can contain additional pieces of information that are not specified here, but that will be shown to the user if present in a qnch file.

- name (optional). The name of the dataset. If not provided, the name of the data file is used.
- description (optional). A longer description of the dataset.
- date (optional). The date of the data set, if applicable. This field can contain any level of precision, from only a year down to a time stamp that is precise to the second.
- source (optional). Information about the data source. This field has a number of named sub-fields, specified below.
- organization (optional). The name of the publisher of the data set.
- organization-url (optional). A URL identifying the publisher.
- contact (optional). The name and email address of a person you can talk to about the data.
- info-url (optional). A URL pointing to the page where the dataset can be found.
- data-url (optional). A URL pointing to the canonical location of the data file itself.
- citation (optional). How to cite this dataset in a publication.
- Sub-fields that are not named above are acceptable, but the program may not be able to do more with them than display them in a list.
- row-pattern (optional). A regular expression that is matched against each line before it is broken up into fields to decide whether to consider it or not. This is useful for U.S. Census data where housing and person records are mixed in the same file. In this case, you need to specify a separate qnch file for each type of record in the dataset.
- dimensions (optional). The list of columns, or data dimensions, in the file using the format described below. This list can be left out if all that is desired is the metadata for the entire file.

Using the U.S. Census Public-Use Microdata Sample (PUMS) as an example, here is what the complete header of a qnch file could look like that applies to the household records (indicated by the letter "H" at the start of the row):

name: U.S. Census 2000, Households
type: fixed
source:
  organization: Census Bureau
  organization-url: http://census.gov/
  info-url: http://www.census.gov/main/www/pums.html
  downloaded: 2009-08-10
row-pattern: ^H
strict: true
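To make the parser fields and defaults above concrete, here is a minimal sketch of how a program might read such a header. The helper name and the use of Python with PyYAML are my own assumptions for illustration; the proposal itself does not prescribe an implementation.

import yaml  # PyYAML

# Documented defaults from the proposal above.
DEFAULTS = {
    "encoding": "utf-8",
    "quotation-character": '"',
    "headers": True,
    "strict": False,
}

def load_qnch(path):
    """Load a qnch description and fill in the documented defaults (hypothetical helper)."""
    with open(path, encoding="utf-8") as f:
        spec = yaml.safe_load(f)
    if "type" not in spec:
        raise ValueError("qnch requires a 'type' field (delimited or fixed)")
    # Resolve the csv/tsv shortcuts into plain delimited types.
    if spec["type"] == "csv":
        spec["type"], spec["delimiter"] = "delimited", ","
    elif spec["type"] == "tsv":
        spec["type"], spec["delimiter"] = "delimited", "\t"
    elif spec["type"] == "delimited":
        spec.setdefault("delimiter", ",")
    for key, value in DEFAULTS.items():
        spec.setdefault(key, value)
    return spec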
Columns/Dimensions

Each column has data associated with it that depends on its type and on the type of file. The first list contains items that both numerical and categorical columns share. Columns can be specified in any order for delimited files with header and for fixed column files; they need to be specified in the order they appear in for delimited files without header.

- name (mandatory). This is the name of the dimension. If no id is given, this also serves as the id that is used in delimited files with header lines to find the right column. This is usually a more human-readable version of the id.
- id (optional). Used to find the column in delimited files with header.
- description (optional). An additional, longer description of the dimension.
- variable-name (optional). This is mostly inspired by the U.S. Census data: provide an additional name that is used as a variable name to hold the value, like in a statistical package.
- type (optional). The type of data in this column: numerical, categorical, text. The default is numerical. Numerical and categorical are specified further below, but text is simply treated as strings that can only be shown as textual information. That means no further processing is performed, and no aggregates are created. This is useful for place names, product descriptions, and other text that is never used for actual analysis.
- characters (mandatory for fixed column format). The range of characters that contains the data for this data dimension. Both ends are inclusive and must be specified (even if the field is only one character long), and counting starts at 1. For a value that starts in the fifth column and whose field is four characters long, this would be: 5-8
- categories (optional). For a categorical dimension, this field contains a list of category definitions as described below.

Numerical columns have the following additional fields that can be specified.

- minimum, maximum (optional). These can be used to scale charts or to show to the user before the actual data is loaded in. The parser must not rely on these values, though, and will accept values outside this range unless strict is set to true.
- special-values (optional). Define strings that are treated as special values in this numerical dimension. The values are specified like categories below. Values are matched as strings before the field is parsed as a number, so special values do not have to be (but can be) valid numbers. This is important to be able to differentiate between 0, 00, 000, 0-0, and other values that can be found in Census data.
- precision (optional). The smallest difference that the acquisition method used can resolve. If not given, this is assumed to be the smallest difference a standard float can encode. If specified, it can be used to show the granularity of the data in a visualization.
- unit (optional). The unit this number is measured in; the default is no unit. Programs can, but do not have to, offer to convert numbers between units. Given the vast number of units out there, we would need to severely limit the allowed units to make this a requirement, though.

Again using the U.S. Census, here is a column definition for a simple numerical column:

- name: Housing unit weight
  variable-name: HWEIGHT
  characters: 102-105
  minimum: 0
  maximum: 1975
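As a quick illustration of the characters convention (1-based, inclusive on both ends), here is how a parser might slice such a field out of a line; the function name is hypothetical.

def extract_field(line, characters):
    """Extract a fixed-column field given a qnch-style "characters" range.
    The range is 1-based and inclusive on both ends, per the proposal above."""
    start, end = (int(part) for part in characters.split("-"))
    return line[start - 1:end]  # convert to Python's 0-based, end-exclusive slicing

# A value that starts in the fifth column and is four characters long:
print(extract_field("H1234000067890", "5-8"))  # -> "4000"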
Almost all numerical columns have special values. Here is the number of bedrooms:

- name: Number of Bedrooms
  variable-name: BEDRMS
  type: numerical
  characters: 124-124
  special-values:
    <blank>: Not in universe
    0: No bedrooms
    5: 5 or more bedrooms

Categories are specified as a list of key-value pairs, with the key being the name found in the file, and the value being the human-readable description. This is the same for special values in numerical dimensions. There is one special value, <blank>, that describes an empty field. Such fields can exist in both delimited and fixed formats, and usually mean "missing" or "not in universe". Here is a definition for the categorical heating fuel field in the Census data:

- name: Heating Fuel
  variable-name: FUEL
  type: categorical
  characters: 132-132
  categories:
    <blank>: Not in universe/unknown
    1: Gas from pipes
    2: Gas from tank, bottles, LP
    3: Electricity
    4: Fuel oil, kerosene, etc.
    5: Coal or coke
    6: Wood
    7: Solar energy
    8: Other fuel
    9: No fuel used

Matching Data and Meta-Data

Matching a CSV or other data file with its qnch file is problematic, because they may have come from different sources, file names can change, and the data file does not contain any metadata to match against (like its canonical URL). The obvious convention is to name the qnch file the same as the data file, but with the extension .qnch. When opening a data file, the parser will look for that file by replacing the extension (or adding it, when the data file has no extension). A similar convention is to look for underscore characters in the data filename and remove the last part of the file name consisting of that underscore and the rest before replacing the extension. This would shorten the file named Households_NC.txt to Households.txt, which would then match Households.qnch. This is only done when no Households_NC.qnch file is found. If none of these conventions turn up a qnch file, the program should ask the user to pick one, or otherwise offer to import the data based on best guesses. (A small sketch of this lookup appears at the end of this post.)

Status

I will incorporate a qnch parser into the next version of Parallel Sets. Eventually, I will separate the parser and the Data Wizard out of the ParSets program and make it a separate project. qnch files for all datasets available for download through Parallel Sets will also be made available. I hope that based on my implementation and the description here, others will contribute qnch files, parsers, and producers.

So now it's your turn! Let me know what you think, what's missing, etc.!
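And here is the promised sketch of the file-matching conventions described above. Again, this is only an illustration under my own assumptions (Python, a hypothetical find_qnch helper); the proposal does not prescribe an implementation.

import os

def find_qnch(data_path):
    """Look for a sidecar qnch file using the conventions described above."""
    base, _ext = os.path.splitext(data_path)
    candidate = base + ".qnch"           # data.csv -> data.qnch
    if os.path.exists(candidate):
        return candidate
    name = os.path.basename(base)
    if "_" in name:                      # Households_NC.txt -> Households.qnch
        trimmed = base[:base.rindex("_")] + ".qnch"
        if os.path.exists(trimmed):
            return trimmed
    return None  # fall back to asking the user, or guessing at the format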
https://eagereyes.org/blog/2009/qnch-data-description-language-for-tabular-data
If you read this blog before you start learning French, it will help you a lot. It gave me a clear idea of what to look for while learning French. This blog made the process of learning French much easier. It's an informative blog to read and learn about the French language, especially for beginners.

I have been learning French for the past few years and feel I am making good progress… except when it comes to understanding spoken French. I can make myself understood in French, but am generally lost if they respond with anything more than a few words. What do you think is the best way to improve comprehension in French – is it particularly difficult, or is it just me?

I'd always assumed the Swedes were just good at everything, hence their omnipresence on North American hockey teams. She firmly denied these superpowers. "English is a lot more like Swedish than you realize."

To improve your German quickly, you must speak from the very first day you start learning German. This speak-from-day-one approach is the fastest and most efficient way to learn German – especially if you speak with native German speakers.

If you are lost when you see "conjugate," conjugating is this: the verb regarder means "to look" in French. If you want to say "I am looking," or "I look," you write "Je regarde," because when you take off the ending of the verb (which in this case is -er) in the Je form (Je means I), you replace it with "e." Now, if you wish to say "They are looking," or "We are looking," you will need a different ending.

They say that Romanian is the closest living language to Latin, and has preserved a lot of Latin's grammatical structure. Articles are a bit of a puzzle in Romanian, with definite articles attached as a suffix to the end of nouns (frate/fratele, brother/the brother), while indefinite articles appear before nouns (copil/un copil, child/a child).

"Accord du verbe. In French, the past participles in compound tenses and moods sometimes have to agree with another part of the sentence, either the subject or the direct object. It's a lot like adjectives: when agreement is required, you need to add e for feminine subjects/objects and s for plural ones."

Danish is said to be the hardest Scandinavian language to learn because of its speaking patterns. It is generally spoken more quickly and more softly than other Scandinavian languages. Danish is also flatter and more monotonous than English.

Another ça phrase in the neighborhood of ça va, ça marche can be used generally to check if someone is okay with something. You can also say "comment ça marche?" to ask how something works (like a vending machine or a cell phone).

Beginning Conversational French is an online course from ed2go that teaches you the basics with audio, written and interactive materials. Lessons are focused around dialogue scenarios, so you'll get a taste of practical French with communication placed at the forefront of learning.

This is one of the first phrases most people learn. Consequently, it's easy to dismiss its importance and incredible versatility. Basically, ça (it, that) is a handy noun and aller (to go) is a handy verb.

In the early stages of your learning I strongly suggest listening to the language as much as possible. This means getting your ears used to the sound of the language and not worrying too much about vocabulary memorization or mastering grammar rules – these come later!

Consider your current level of French.
If you don't feel confident in your ability to fully understand native speakers, you'll want to consider video sources that are accompanied by a transcript, subtitles or a "cheat sheet." Many popular French learning podcasts offer transcripts for their listeners. All of FluentU's French language videos have interactive subtitles which allow you to see every single word's definition on-screen, if desired. These kinds of resources are ideal if you need help while watching videos. You'll still want to try without looking, but this way you can check yourself and make sure you're not getting things mixed up in your mind. If in doubt, play it safe. French as a language uses a lot of similar sounds, and it's easy to mistake certain combinations of words for others.

This was typical. In fact, I was a good student, and did better than most of my classmates in French. I passed all the grammar tests and other school French tests with high marks. Yet when it came time to speak, I could only string words together with great uncertainty, and really didn't understand what I heard. I certainly didn't read French newspapers, which were available in Montreal. Nor did I watch French movies. I couldn't understand them.

Learning one-on-one with a tutor allows for a completely tailored learning experience and more opportunities to practice speaking. Compared to a classroom where the teacher has to split attention among dozens of pupils, private tutoring usually yields quicker results. However, private tutoring doesn't come cheap, and you'll need to be prepared to pay a high hourly rate for an experienced tutor.

I didn't know the word for "meaning" in French, so I said the English word "connotation" with a thick French accent. I paused and studied my teacher coyly, waiting for her to correct me. She looked at me expectantly as if to say, "Well, duh! Connotation! Everyone knows connotation!"

(And see how easy it actually is to learn French… even if you've tried and failed before)

No great achievement ever happens overnight, and learning French is no different. The first step to learn French is to set some smart, realistic goals to help yourself organize your time and plan your studies.

Browsing italki. italki is my go-to place to find native German speakers. The prices are reasonable (especially compared to private, face-to-face lessons) and you can meet in the comfort of your own home.

Spaced Repetition Systems (SRS). SRS is a great method for memorising vocabulary and phrases using virtual flashcards. My favourite SRS tool, Anki, is free and allows you to create your own flashcards, so you can build a deck from your personalised French phrasebook.

It opens the door to a history and culture. Learning French is your gateway into the fascinating French-speaking world. You'll be able to access the great works of French writers in their original versions, enjoy wonderful French movies, and understand beautiful French songs. This is true for any of the many places throughout the world where French is spoken.

Every day, start a new "entry" in a notebook by marking the date. Play your video. Try to understand and hold as much of each sentence in your memory as you can. When the sentence ends, pause. Begin writing out the sentence and speak each word out loud as you're writing it. You might have to replay a few times to get the entire sentence.
You might have to do some quick research, or look through a dictionary for a mystery word when you only have a vague idea of how it's spelled beyond the first few letters. You might need to turn to an internet message board to ask a question about the usage of a particular phrase, and then observe the resulting debate between native speakers. This is a process. Enjoy it.

French Today has lessons and audiobooks that focus on teaching French the way it's actually spoken, first and foremost. Using their materials, you can become familiar with grammar and vocabulary concepts while also developing an understanding of what that grammar and vocabulary really sound like in action.

First of all, anything is possible with the right method, motivation and dedication. Some language programs will definitely prepare you with practical language elements within the timeframe they promise, but you will definitely not be fluent. You won't be able to talk with anyone about absolutely anything in French, but you will know some of the basics that can help you survive in France without being completely lost.

Benny Lewis is, I think, the most successful polyglot blogger on the Internet; the one with the greatest reach. With this website, Fluent in 3 Months, he was one of the earliest language learners to use the Internet to encourage others to learn languages, and to talk about it.

I too am what you would call… I already have a very good basis in French. I never regarded the pronunciation as a monster to conquer; actually, it's the most delicate thing that attracted me to want to master French. My problem is that I have a big shortage of vocabulary, which stands as a barrier to becoming fluent en français, as does the structure of phrases and daily expressions, which turn out to be less complex than the phrases I try to come up with using my humble list of vocabulary.

Some French videos on YouTube are really well done, and provide fun support for learning French. So do French songs, French movies, French blogs, French podcasts, and the many French apps… There is so much to choose from nowadays!

For Business – being bilingual isn't just good for your resumé, it can change your career. As a major language for global commerce, knowing some French can be extremely advantageous for anyone doing business in western Europe or the western half of Africa. Countries in West Africa represent rapidly emerging markets that will be harder to access if you can't understand French. In Europe, French remains an important language for many businesses.

As a language nerd, I'm a big fan of Benny Lewis, whose "Speak from Day One" approach should be, I think, language-learning gospel. He's written several posts about why learning Czech, Turkish, German, Mandarin Chinese, Hungarian, and other languages is not as hard as you think. His point is that with the right attitude and approach, learning a new language (despite what detractors might claim) is never as difficult a task as it's often made out to be.

Knowing some common French greetings and good-byes will be indispensable when traveling in French-speaking countries. Saying hello and good-bye in French will quickly become second nature because you'll use them day in and day out with everyone you come across.

I will most definitely take your advice. I am learning French at school and I'm not doing too well at all. We had exams earlier this month and I am sure that I failed, because I did not finish the papers. The rest of the students did, and so I felt stupid and wanted to just quit the class.
My teacher said my biggest problem is my lack of vocabulary, since the way that I speak is quite nice. Reading this article, though, has just given me the extra push that I need to stick with it. I really believe that I can do it now. Thanks for the inspiration! 🙂

However, it's highly recommended that you gradually expand your vocabulary at least to the 1,000 most commonly used words in French. With just 1,000 words, you'll be able to understand about 80% of written texts.

The traditional meaning of quand même is along the lines of "all the same," or "still," and it's used this way. But it also tends to be used as a filler word quite often, to the point where it's difficult to say exactly what its function is. A lot of the time, you'll find that it's used for emphasis.

This step is crucial. Why do you want to learn French? Is it because you have family or French origins? Is it because you're going to visit France soon? Is it because it'll help your professional or personal endeavors? Is it because you want to read the original French text of Les Misérables or Madame Bovary? Whatever the reason, you need to take it, write it down, and place it somewhere you'll notice often. This will be your motivation during those days you don't feel like practicing… it's all psychological. Without the will power or dedication, you won't be any closer to French fluency. Especially if you're learning French by yourself. I just started learning Italian on my own, and my motivation is speaking to my girlfriend and my upcoming trip to Italy.

We add new courses on a regular basis, so the opportunities to learn and improve are always growing. And if you own an iPhone, Android, or Windows 8 phone, the key to speaking French is already in your pocket.

IE Languages offers an e-book on informal and spoken French that comes with numerous audio files, so you can study spoken French directly. You can also get this at a discounted rate with their combo pack, which includes the French tutorial (helpful if you're still struggling with grammar concepts or you want a complete overview of the language).

What do the methods mentioned above have in common? They all cost money. For thrifty folks who have a little more patience and motivation than the average learner, there are ways to learn French for free.

I would love to get in contact with a native speaker to practice. I have been teaching 12-14 year olds French, but I am forgetting the upper-level grammar. I don't feel as fluent as I used to be. I would love to start by writing… speaking…

Like all Romance languages, French's Latin derivations make much of the vocabulary familiar to English speakers (edifice, royal, village). Linguists debate the concrete number, but it's said that French has influenced up to a third of English vocabulary, giving it more lexical common ground with English than any other Romance language.

Ça va? (literally "it's going?") asks someone how things are. The usual response is ça va, which means things are fine. Ça ne va pas, on the other hand, indicates things are perhaps not going so well.

In most French-speaking countries it's considered good manners to greet everyone. So, whether you're speaking to a clerk, a waiter, or just bumping into someone on the street, take the time to say a polite bonjour before you proceed. This also means that when you step onto the bus or train you should say a quick bonjour to anyone within hearing distance.
Famous Hungarian polyglot Kato Lomb once said that language learning success is a function of motivation plus time divided by inhibition. I would use the word resistance instead of inhibition. A person's inhibition is only one form of resistance to learning a language. Frustration with teaching methods is another, and in some ways more important, form of resistance.

Modern spoken French and the French you might have studied in books/schools are VERY different. In any language there will always be a difference between the spoken and written forms, but the French really take this to the next level!

It may be so. You may have "covered" it. But would you be able to remember all these words after… a week? Let alone be able to use them in a conversation, or deduce by yourself the grammar constructions that rule the sentences.

This exclamation is typically followed by exasperated hand wringing over the difficulty of the pronunciation, the seemingly endless list of exceptions to every grammar rule, the conjugations, and so on. Now that I've officially eclipsed the three-month milestone in my French language studies, I'd like to dispel, once and for all, the (surprisingly) pervasive notion that French is somehow impossibly difficult to learn. Spoiler alert: it's not.

Keep it fun. The selection process itself should be enjoyable. Look for sources you can watch multiple times in a row, and look for content that you find genuinely interesting. What film character would you most like to be for Halloween? What topics would you like to be able to discuss fluently? If you love soccer, track down some French-language sporting event videos and acquire all the soccer vocabulary you'll need to argue about teams at the bar. If you love movies starring Romain Duris (and who doesn't?), compile your favorites. Look for language you want to make your own.

French can seem difficult to pronounce at first, and even a little difficult to understand. It isn't like English, Swedish or the tonal languages. French tends to roll along in a fairly monotonous range of tones. There are nasal sounds which seem to sound the same, but aren't.
http://learntospeak.it/where-to-learn-french-in-singapore-get-your/
Table Design. Saturday, December 23rd, 2017 - 07:31:30 AM

Quote from The Mirrored Coffee Table Placement for Small Apartment: Do you have a small apartment? Are you unsure of the right arrangement for it? Then the idea of putting in a mirrored coffee table can be a good decision for you. However, you cannot place the item at random, because it will need special preparation to bring about the best decorative result. Every detail that you apply in your apartment will have a specific effect, especially when you only have a small space inside. The mirrored coffee table idea should then take its position as a solution for high-quality item placement in the house.
http://www.fightcardbooks.com/the-mirrored-coffee-table-placement-for-small-apartment/glass-cocktail-tables-wrought-iron-coffee-table-clear-glass-coffee-table-coffee-table-base-tempered-glass-coffee-table-white-round-coffee-table-glass-for-coffee-table/
The ongoing COVID-19 (Coronavirus) pandemic is anticipated to result in a potential downturn in the Blue Glass IRCF market. Get hands-on with our resourceful insights, which draw a roadmap of how companies are using the global Coronavirus crisis for business gains. Our elaborate reports on COVID-19 analysis offer in-depth insight into the current trends and drivers that are likely to influence the market's growth.

A recent market study done by the analysts on the global Blue Glass IRCF market reveals that the global Blue Glass IRCF market is expected to reach a value of ~US$ XX by the end of 20XX, growing at a CAGR of ~XX% during the forecast period (20XX-20XX). The Blue Glass IRCF market study includes a thorough analysis of the overall competitive landscape and the company profiles of leading market players involved in the global Blue Glass IRCF market. Further, the presented study offers accurate insights pertaining to the different segments of the global Blue Glass IRCF market, such as the market share, value, revenue, and more.

The report covers:
- Deep analysis of the regulatory framework and investment scenario of the global Blue Glass IRCF market
- Information related to the ongoing and pipeline research and development projects
- Impact of the technological advances on the growth of the Blue Glass IRCF market
- In-depth pricing analysis of the different market segments
- A thorough assessment of the top factors shaping the growth of the Blue Glass IRCF market

The presented report segregates the Blue Glass IRCF market into different segments to ensure the readers gain a complete understanding of the different aspects of the Blue Glass IRCF market.

Competitive Outlook

This section of the report throws light on the recent mergers, collaborations, partnerships, and research and development activities within the Blue Glass IRCF market on a global scale. Further, a detailed assessment of the pricing, marketing, and product development strategies adopted by leading market players is included in the Blue Glass IRCF market report.
The September-October 2007 issue of TR News includes the following articles: Visualization in Transportation 101 A Vision of Visualization: Communicating Problem-Solving Concepts to the Public Visualization as a Common Language for Planning: Good Practices, Caveats, and Areas for Research Visualization Issues for Transportation Agencies: Approaches and Challenges Visualization to Solve Problems in Freight Transportation Visualization and the Larger World of Computer Graphics: What’s Happening Out There? Visualization Education and Training: Learning the New Tools—and New Tools for Learning Research Agenda for Visualization in Transportation: Incorporating New Legislative Directives for Planning Treatment of Soils with Compost to Mitigate Pavement Cracking: Texas Tests and Applies Stabilization Method The TR News is TRB's bimonthly magazine featuring timely articles on innovative and state-of-the-art research and practice in all modes of transportation. It also includes brief news items of interest to the transportation community, research pays off articles, profiles of transportation professionals, workshop and conference announcements, new book notices, and news of TRB activities. Submissions of manuscripts for possible publication are accepted at any time. Copies of the TR News may be purchased individually or ordered on an annual subscription basis.
http://www.trb.org/Publications/Blurbs/159313.aspx
Oct. 23, 2014—New insights into the workings of a protein that moves neurotransmitters across the nerve cell membrane could aid the design of more effective antidepressants.

New view of neurotransmitter transport
Apr. 24, 2014—Dynamic measurements of the bacterial leucine transporter shed light on the transporters that play roles in neuropsychiatric and addiction disorders.

Neurotransmitter's role in bone balance
Nov. 7, 2013—Removal of the neurotransmitter norepinephrine from the space outside cells plays an important role in the regulation of bone remodeling.

New clue to ADHD
May 15, 2012—A rare genetic change adds support to the idea that altered dopamine signaling is a key risk factor for ADHD.
https://news.vanderbilt.edu/tag/neurotransmitter/
Media Evolution: Striving to Serve

Trust takes years to build, seconds to break. How can media earn your trust?

About the Project

Every citizen should have access to news and stories that impact their lives, and should be able to trust that the organization providing the information is factual and unbiased. The changes in our digital world require both a fresh understanding of a community's needs and accessibility, and an understanding of how to communicate the benefits and values of a news organization back to its community. Connecting with the local audience is becoming increasingly difficult due to the growing number of news sources. In rural and remote environments, the availability of local news is limited, and is complicated by the vast geographic area covered and limited connectivity, along with a distrust of media in delivering unbiased news. Knowing and understanding the local audience is key to becoming a curator of local news, through thoughtfully and strategically connected stories that the community might find valuable, that will resonate with them, and that will have a meaningful impact on them.

The year-long project will look at how rural news organizations can better connect with their audiences. The project will tackle the challenges of better understanding and serving communities to enhance the relationship with their varied audiences. To achieve this, the team will interact with identified audiences and stakeholders to develop a better understanding of the audiences energeticcity.ca serves through surveys, research, and community engagement activities. The Media Evolution: Striving to Serve project will drive digital innovation, assist the publisher to better understand their communities, develop new business models, and share learnings with the wider industry. The objective of this public consultation is to develop a plan to increase reader trust and engagement at Energeticcity.ca.
https://energeticcity.ca/evolution/
Ellava Row Limited was set up on Friday the 16th of November 2018. Its current partial address is Dublin, and the company status is Strike Off Listed. The company's current director has been the director of 0 other Irish companies. Ellava Row Limited has 1 shareholder.

Company Vitals
- Company Name: Ellava Row Limited
- Time in Business: 3 Years
- Company Number: 637808
- Current Status: Strike Off Listed
- May Trade As: Ellava Row Ltd
- Size: Small Company
- Principal Activity: Hairdressing and Other Beauty Treatment
- Partial Address: Dublin
- RiskWatch: CEASED TRADING NOTIFICATION

Documents
|Document|Pages|Effective|Received|
|Advertisement|1|11/07/2022|11/07/2022|
|Form H15: Application for Voluntary Strike-Off|3|11/07/2022|11/07/2022|
|Letter Of No Objection|2|11/07/2022|11/07/2022|
|Form B1B73 - Annual Return and Change of ARD|3|16/05/2022|27/06/2022|
|Financial Statement|10|16/05/2021|22/06/2022|
|Constitution|15|16/11/2018|19/09/2018|

This company has 7 other documents.
https://www.businessbarometer.ie/Risk-Management/Ellava-Row-Limited-637808
Adecco Group Institute, the study and dissemination center of the Adecco group, wishes to know the level of satisfaction of the average worker in each of the Spanish autonomous communities. To do this, the Adecco Monitor for Job Opportunities and Satisfaction elaborates on this level of satisfaction, as well as on employment opportunities in the labor market.

Labor disputes are down

The restrictions on the normal development of economic activity resulting from the measures to combat the pandemic have led to a collapse in the number of strikes and in participation in them. The number of strikes in Spain has fallen for the third consecutive quarter, leaving 8.5 disputes per 100,000 companies. Not only does this figure represent a year-on-year reduction of 49.2%, it is also the smallest on record in at least 20 years. The number of conflicts fell in all autonomous regions for the third consecutive quarter, which has also not happened in at least the last 20 years. A clear example of the decline in the number of strikes is that, for the first time, the statistics show an autonomy that has not recorded a single one. This is La Rioja, where the year-on-year drop was therefore 100%. Leaving aside the case of La Rioja, the Balearic Islands, the Canary Islands, Cantabria, Asturias, the Valencian Community, Extremadura, Galicia and Navarra recorded annual decreases in the number of strikes of at least 65%. After La Rioja, without conflict, came the Valencian Community, the Canary Islands and the Balearic Islands, with fewer than 2 strikes per 100,000 companies. The three regions with the highest number of conflicts remain the Basque Country (73.5 strikes per 100,000 companies, after a decrease of 39.4%), Navarre (now with 47.4 conflicts, a decrease of 66.8%) and Asturias (18.3 strikes, again per 100,000 companies; -66.1%).

The number of workers participating in strikes has decreased in line with the number of conflicts. Across Spain, strike participation fell 49.7% year-on-year, leaving 16.8 strikers for every 10,000 workers. This is the lowest figure in at least 20 years. In 15 autonomies the number of strikers decreased, and in the other two it increased. The only ones showing an increase are the Region of Murcia and Extremadura. In Murcia, participation rose to 26.6 strikers per 10,000 employees (+188% over one year). In Extremadura, the increase, although significant (+65.2%), brought strike participation to just 2.4 strikers per 10,000 employees, less than a fifth of the national average. La Rioja is, once again, an exceptional case: having had no strikes, it has no strikers either. It is followed, with the lowest participation in strikes, by the Canary Islands and Castilla-La Mancha, with 0.6 strike participants in both cases (year-on-year decreases of 84.9% and 64.2%, respectively). The Balearic Islands and Andalusia have 1.7 strikers per 10,000 employees (drops of 86.9% and 89.1%, respectively). These five regions are the only ones with fewer than 2 strike participants for every 10,000 people employed. The Basque Country remains the region with the highest participation in strikes, despite a drop of 67.5%, leaving 66.9 strikers, again per 10,000 people employed.

It is followed by Catalonia (43 participants in conflicts, after an annual decrease of 10.3%, the smallest reduction among the 15 autonomies where strike participation fell). The third autonomy with the highest proportion of strikers is the Region of Murcia, with the figures indicated above.

A historic high in highly skilled jobs

Over the past year, employment in our country has fallen in 14 of the 17 autonomies. However, when we break down the variation in the number of employed people into two broad categories, depending on the level of training each position requires, we find that 99% of the jobs lost were of medium or low qualification. Indeed, while 615,900 medium- or low-qualification jobs were lost (-4.6% over one year), only 6,700 highly qualified positions were eliminated (-0.1%). This is confirmed by observing that while only two autonomies show an increase in employment in medium- or low-skill positions, eight communities increased employment in highly skilled positions. Eight autonomies followed the general pattern of job destruction in both categories. Among them, the cases of the Canary Islands (-15.2% over one year in jobs with medium or low qualification and -4% in those with a high level of qualification) and the Valencian Community (-4.2% and -1.7%, respectively) stand out. Seven other regions saw their number of people in highly skilled jobs increase at the same time as employment in the others declined. The Balearic Islands present the greatest contrast, with a 15.5% drop in medium- or low-qualification employment (the largest drop of all autonomies) and a 10.6% increase in people employed in highly skilled tasks (the second largest regional increase). In the Region of Murcia, too, the contrast is marked (-5.3% for medium or low qualification and +15.4% for high qualification), but with the advantage of having achieved an increase in total employment. There are two special cases. One is La Rioja, the only autonomous region where employment increased in both professional categories (+0.8% over one year in highly qualified positions and +0.1% in medium or low). The other is Extremadura, the only one to have destroyed highly qualified jobs (-8.8%) while creating jobs with medium or low qualifications (+5.2%). As a result, there has been a general increase in the proportion of highly skilled jobs in total employment. In Spain as a whole, it rose to 34.5%, 1.3 percentage points higher than a year earlier. This is the highest proportion recorded in the statistics. The Community of Madrid (45.7%), the Basque Country (38.8%) and Catalonia (37.3%) have the highest proportions of skilled jobs. However, the largest increase corresponds to the Balearic Islands (+5.3 pp, reaching 32.3%). Extremadura (24.7%), Castilla-La Mancha (28.2%) and La Rioja (28.5%) are at the opposite end.

People who want to work more hours and cannot find where

Hourly underemployment is the situation faced by those who work less than full time, and who want and are available to work more hours, but cannot find where to do so. This group shrank continuously from March 2014 to June 2020, but from there it started to increase. In Spain, there are just over 1.8 million people in a situation of hourly underemployment, 15,600 more than a year earlier (+0.9% over one year) and the highest number since June 2018. This is a modest increase in comparison with the extent of the decline in economic activity.

The explanation is that the underemployed are, by definition, part-time workers. The latter group experienced a year-over-year decline of 145,700 people (-4.9%). It can be deduced that some underemployed people became unemployed or inactive, which limited the increase in the group analyzed. Taking into account the moving average of the last four quarters, the proportion of people in a situation of underemployment in total employment remained the same as the previous year, at 8.8%. In only three autonomous communities is this proportion 10% or more: Extremadura (11.2%, after a drop of one tenth year-on-year), the Region of Murcia (10.8%, a reduction of 3 tenths) and Navarre (10.4%, the same as the previous year). At the opposite extreme, three autonomous regions stand out with 7% or less of employees in a situation of hourly underemployment: La Rioja (6.6%), Catalonia (6.8%) and the Basque Country (7%; in all three cases, a decrease of one tenth year-on-year).
https://bcfocus.com/the-number-of-people-who-want-to-work-longer-hours-and-cannot-find-a-place-to/
In addition to 27 of Connecticut's acute care hospitals, CHA's membership includes a variety of other facilities and related healthcare organizations, including psychiatric hospitals, rehabilitation centers, nursing homes, infirmaries, and clinics. Physician group practices, insurance companies, health maintenance organizations (HMOs), preferred provider organizations (PPOs), and other organizational members round out the list. CHA's members are listed by category below: Acute Care Hospitals - Short-term general and children's general hospitals, including their related healthcare organizations, but excluding federal general hospitals. Other Hospitals - Non-governmental psychiatric hospitals; non-governmental hospitals devoted to rehabilitation care or to the diagnosis and care of chronic disease, substance abuse, or the terminally ill; federal hospitals and state-operated long-term hospitals. Non-Hospital Institutional Healthcare Providers - Chronic and convalescent nursing homes, rest homes with nursing supervision, homes for the aged and similar institutions; infirmaries, dispensaries, clinics, home health agencies, rehabilitation centers, and other similar organizations for the diagnosis, care, or treatment of patients, but not rendering inpatient care. Other Organizations - Physician group practices; organizations established for the purpose of insuring care, such as insurance companies, health maintenance organizations, and preferred provider organizations; organizations that are not directly involved in providing healthcare but have a substantial interest in health matters, as determined by the Board of Trustees.
https://cthosp.org/about-cha/membership-listing/?mobileFormat=true
Biomechanics in sport incorporates detailed analysis of sport movements in order to minimise the risk of injury and improve sports performance. Sport and exercise biomechanics encompasses the area of science concerned with the analysis of the mechanics of human movement. It refers to the description, detailed analysis and assessment of human movement during sport activities. Mechanics is a branch of physics concerned with the description of motion and with how forces create motion. In other words, sport biomechanics is the science of explaining how and why the human body moves in the way that it does. In sport and exercise, that definition is often extended to also consider the interaction between the performer and their equipment and environment. Biomechanics is traditionally divided into two areas: kinematics, the branch of mechanics that deals with the geometry of the motion of objects, including displacement, velocity, and acceleration, without taking into account the forces that produce the motion; and kinetics, the study of the relationships between the force system acting on a body and the changes it produces in body motion. Alongside these, there are skeletal, muscular and neurological considerations to take into account when describing biomechanics.
Application
According to Knudson, human movement performance can be enhanced in many ways, as effective movement encompasses anatomical factors, neuromuscular skills, physiological capacities and psychological/cognitive abilities. Biomechanics is essentially the science of movement technique and as such tends to be most utilised in sports where technique is a dominant factor, rather than physical structure or physiological capacities. The following are some of the areas where biomechanics is applied, either to support the performance of athletes or to solve issues in sport or exercise:
- The identification of optimal technique for enhancing sports performance
- The analysis of body loading to determine the safest method for performing a particular sport or exercise task
- The assessment of muscular recruitment and loading
- The analysis of sport and exercise equipment, e.g., shoes, surfaces and racquets.
Biomechanics is utilised to attempt to either enhance performance or reduce the injury risk in the sport and exercise tasks examined.
Principles of Biomechanics
It is important to know several biomechanical terms and principles when examining the role of biomechanics in sport and exercise.
Forces and Torques
A force is simply a push or a pull, and it changes the motion of a body segment or the racquet. Motion is created and modified by the actions of forces (mostly muscle forces, but also external forces from the environment). When a force rotates a body segment or the racquet, this effect is called a torque or moment of force. Example - Muscles create a torque to rotate the body segments in all tennis strokes. In the service action, internal rotation of the upper arm, so important to the power of the serve, is the result of an internal rotation torque at the shoulder joint caused by muscle actions (latissimus dorsi and parts of the pectoralis major and deltoid). To rotate a segment with more power, a player would generally apply more muscle force.
Newton's Laws of Motion
Newton's Three Laws of Motion explain how forces create motion in sport. These laws are usually referred to as the Laws of Inertia, Acceleration, and Reaction.
- Law of Inertia - Newton's First Law of inertia states that objects tend to resist changes in their state of motion. An object in motion will tend to stay in motion and an object at rest will tend to stay at rest unless acted upon by a force. Example - The body of a player quickly sprinting down the field will tend to retain that motion unless muscular forces can overcome this inertia; likewise, a skater gliding on ice will continue gliding with the same speed and in the same direction, barring the action of an external force.
- Law of Acceleration - Newton's Second Law precisely explains how much motion a force creates. The acceleration (the tendency of an object to change speed or direction) an object experiences is proportional to the size of the force and inversely proportional to the object's mass (F = ma). Example - When a ball is thrown, kicked, or struck with an implement, it tends to travel in the direction of the line of action of the applied force. Similarly, the greater the amount of force applied, the greater the speed the ball has. If a player improves leg strength through training while maintaining the same body mass, they will have an increased ability to accelerate the body using the legs, resulting in better agility and speed. This also relates to the ability to rotate segments, as mentioned above.
- Law of Reaction - The Third Law states that for every action (force) there is an equal and opposite reaction force. This means that forces do not act alone, but occur in equal and opposite pairs between interacting bodies. Example - The force created by the legs "pushing" against the ground results in ground reaction forces in which the ground "pushes back" and allows the player to move across the court (as the Earth is much more massive than the player, the player accelerates and moves rapidly, while the Earth does not perceptibly accelerate or move at all). This action-reaction also occurs at impact with the ball, as the force applied to the ball is matched with an equal and opposite force applied to the racquet/body.
Momentum
Newton's Second Law is also related to the variable momentum, which is the product of an object's velocity and mass. Momentum is essentially the quantity of motion an object possesses, and it can be transferred from one object to another. There are different types of momentum, each of which has a different impact on the sport.
Linear Momentum
Linear momentum is momentum in a straight line, e.g. linear momentum is created as the athlete sprints in a straight line down the 100m straight on the track.
Angular Momentum
Angular momentum is rotational momentum and is created by the rotations of the various body segments, e.g. the open-stance forehand uses significant angular momentum. The tremendous increase in the use of angular momentum in ground strokes and serves has had a significant impact on the game of tennis. One of the main reasons for the increase in power of the game today is the incorporation of angular momentum into ground stroke and serve techniques. In tennis, the angular momentum developed by the coordinated action of body segments transfers to the linear momentum of the racquet at impact (a simple numeric sketch of these quantities follows below).
Centre of Gravity
The Center of Gravity (COG) is an imaginary point around which body weight is evenly distributed. The center of gravity of the human body can change considerably because the segments of the body can move their masses with joint rotations. This concept is critical to understanding balance and stability and how gravity affects sport techniques.
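The quantities in the two sections above reduce to simple arithmetic: velocity and acceleration are successive rates of change of position (kinematics), force relates to acceleration through F = ma, and linear momentum is p = mv. A minimal Python sketch, with all masses, forces and positions chosen as hypothetical round numbers rather than measured data:

```python
# Kinematics: estimate velocity and acceleration from sampled positions
# using finite differences (hypothetical 10 Hz motion-capture data, metres).
dt = 0.1
position = [0.00, 0.02, 0.05, 0.09, 0.14, 0.20]

velocity = [(position[i + 1] - position[i]) / dt
            for i in range(len(position) - 1)]        # m/s
acceleration = [(velocity[i + 1] - velocity[i]) / dt
                for i in range(len(velocity) - 1)]    # m/s^2, constant 1.0 here

# Kinetics: Newton's Second Law, a = F / m.
player_mass = 75.0       # kg
leg_drive_force = 375.0  # N, assumed net horizontal force from the legs
a = leg_drive_force / player_mass
print(f"acceleration from leg drive: {a:.1f} m/s^2")  # 5.0 m/s^2

# Linear momentum, p = m * v, for a sprinter moving at 8 m/s.
momentum = player_mass * 8.0
print(f"linear momentum: {momentum:.0f} kg*m/s")      # 600 kg*m/s
```

More force at the same mass gives proportionally more acceleration, which is exactly the training point made in the Law of Acceleration example.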
The direction of the force of gravity through the body is downward, towards the center of the earth, and through the COG. This line of gravity is important to understand and visualise when determining a person's ability to successfully maintain balance. When the line of gravity falls outside the Base of Support (BOS), a reaction is needed in order to stay balanced (a worked sketch of this check appears below). Finding the center of gravity of a squash racquet is a far simpler process: it is usually the point where the racquet balances on your finger or another narrow object.
Balance
Balance is the ability of a player to control their equilibrium or stability. You need to have a good understanding of both static and dynamic balance:
Static Balance
The ability to control the body while the body is stationary. It is the ability to maintain the body in some fixed posture. Static balance is the ability to maintain postural stability and orientation with the center of mass over the base of support and the body at rest.
Dynamic Balance
The ability to control the body during motion. Defining dynamic postural stability is more challenging. Dynamic balance is the ability to transfer the vertical projection of the center of gravity around the supporting base of support; that is, to maintain postural stability and orientation with the center of mass over the base of support while the body parts are in motion.
Correct Biomechanics
As mentioned above, correct biomechanics provide efficient movement and may reduce the risk of injury. In sport, it is always good to consider abnormal or faulty biomechanics as a possible cause of injury. These abnormal biomechanics can be due to anatomical or functional abnormalities. Anatomical abnormalities, such as leg length discrepancies, cannot be changed, but their secondary effects can be addressed, for example with a shoe build-up or orthotics. Functional abnormalities include muscle imbalances after a long period of immobilisation. In biomechanics, the different planes of motion and axes are often referred to; have a look at this video to refresh your memory. Incorrect technique can cause abnormal biomechanics which can lead to injuries. Below are some examples of the relationship between faulty technique and associated injuries.
| Sport | Technique | Injury |
| Cricket | Mixed bowling action | Pars interarticularis stress fractures |
| Tennis | Excessive wrist action with backhand | Extensor tendinopathy of the elbow |
| Swimming | Decreased external rotation of the shoulder | Rotator cuff tendinopathy |
| Running | Anterior pelvic tilt | Hamstring injuries |
| Rowing | Change from bow side to stroke side | Rib stress fractures |
| Ballet | Poor turnout | Hip injuries |
Lower Limb Biomechanics
As humans, ambulation is our main form of movement: we walk upright and are very reliant on our legs to move us about. How the foot strikes the ground, and the knock-on effect this has up the lower limbs to the knee, hips, pelvis and low back in particular, has become a subject of much debate and controversy in recent years. Lower limb biomechanics refers to a complex interplay between the joints, muscles and nervous system which results in a certain patterning of movement, often referred to as 'alignment'. Much of the debate centers around what is considered 'normal' and what is considered 'abnormal' in biomechanical terms, as well as the extent to which we should intervene should abnormal findings be found on assessment.
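The centre-of-gravity and balance ideas above can be made concrete: the body's COG is the mass-weighted average of its segment positions, and static balance requires the COG's vertical projection to fall inside the base of support. A simplified two-dimensional sketch, with segment masses and positions invented purely for illustration:

```python
# Centre of gravity as a mass-weighted average of segment positions,
# plus a static-balance check against the base of support.
segments = [
    # (mass in kg, horizontal position x in metres)
    (40.0, 0.00),   # trunk and head
    (20.0, 0.05),   # legs
    (10.0, -0.05),  # arms reaching to one side
]

total_mass = sum(m for m, _ in segments)
cog_x = sum(m * x for m, x in segments) / total_mass

base_of_support = (-0.12, 0.12)  # x-extent of the feet, in metres

balanced = base_of_support[0] <= cog_x <= base_of_support[1]
print(f"COG x = {cog_x:.3f} m, balanced: {balanced}")
```

Moving a segment (say, reaching the arms further to one side) shifts the weighted average, which is why joint rotations can move the COG and even push its projection outside the base of support.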
This section examines the biomechanics of the lower extremity, in particular the anatomy and biomechanics of the foot and ankle, the impact of the Q angle on the mechanics of the hip and knee, and finally the implications of this for gait.
Foot and Ankle Biomechanics
The foot and ankle form a complex system which consists of 26 bones, 33 joints and more than 100 muscles, tendons and ligaments. It functions as a rigid structure for weight bearing, and it can also function as a flexible structure to conform to uneven terrain. The foot and ankle provide several important functions: supporting body weight, providing balance, absorbing shock, transferring ground reaction forces, compensating for proximal malalignment, and substituting for hand function in individuals with upper extremity amputation/paralysis. All of these are key in any exercise or sport involving the lower limbs. This page examines in detail the biomechanics of the foot and ankle and its role in locomotion. Go to Page
Q Angle
An understanding of the normal anatomical and biomechanical features of the patellofemoral joint is essential to any evaluation of knee function. The Q angle, formed by the vector for the combined pull of the quadriceps femoris muscle and the patellar tendon, is important because of the lateral pull it exerts on the patella. The direction and magnitude of force produced by the quadriceps muscle has great influence on patellofemoral joint biomechanics. The line of force exerted by the quadriceps is lateral to the joint line, mainly due to the large cross-sectional area and force potential of the vastus lateralis. Since there exists an association between patellofemoral pathology and excessive lateral tracking of the patella, assessing the overall lateral line of pull of the quadriceps relative to the patella is a meaningful clinical measure. Such a measure is referred to as the quadriceps angle or Q angle (a geometric sketch of the calculation appears at the end of this section). It was initially described by Brattstrom. Go to Page
Biomechanics of Gait
Sandra J. Shultz describes gait as: "...someone's manner of ambulation or locomotion, involves the total body. Gait speed determines the contribution of each body segment. Normal walking speed primarily involves the lower extremities, with the arms and trunk providing stability and balance. The faster the speed, the more the body depends on the upper extremities and trunk for propulsion as well as balance and stability. The legs continue to do the most work as the joints produce greater ranges of motion through greater muscle responses. In the bipedal system the three major joints of the lower body and pelvis work with each other as muscles and momentum move the body forward. The degree to which the body's center of gravity moves during forward translation defines efficiency. The body's center moves both side to side and up and down during gait." Bipedal walking is an important characteristic of humans. This page will present information about the different phases of the gait cycle and the important functions of the foot while walking. Go to Page
Upper Limb Biomechanics
Correct biomechanics are as important in upper limb activities as they are in lower limb activities. The capabilities of the upper extremity are varied and impressive.
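The Q angle described above is, geometrically, the angle between the quadriceps pull line (ASIS to patella centre) and the patellar tendon line (patella centre to tibial tuberosity). A hedged sketch of how it could be estimated from three frontal-plane landmark coordinates; the landmark values are invented, and clinical measurement conventions vary:

```python
import math

# Hypothetical 2D frontal-plane landmark coordinates, in centimetres.
asis = (10.0, 40.0)              # anterior superior iliac spine
patella = (0.0, 0.0)             # centre of the patella
tibial_tuberosity = (0.0, -8.0)  # tibial tuberosity

def angle_at(vertex, p, q):
    """Angle at `vertex` between rays vertex->p and vertex->q, in degrees."""
    v1 = (p[0] - vertex[0], p[1] - vertex[1])
    v2 = (q[0] - vertex[0], q[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norms = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norms))

# The Q angle is 180 degrees minus the angle the two landmark rays make
# at the patella, i.e. the angle between the quadriceps line (extended
# through the patella) and the patellar tendon line.
q_angle = 180.0 - angle_at(patella, asis, tibial_tuberosity)
print(f"Q angle: {q_angle:.1f} degrees")  # about 14 degrees here
```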
With the same basic anatomical structure of the arm, forearm, hand, and fingers, major league baseball pitchers throw fastballs at 40 m/s, swimmers cross the English Channel, gymnasts perform the iron cross, and Olympic boxers in weight classes ranging from flyweight to super heavyweight have shown a range of 447 to 1,066 pounds of peak punching force. The structure of the upper extremity is composed of the shoulder girdle and the upper limb. The shoulder girdle consists of the scapula and clavicle, and the upper limb is composed of the arm, forearm, wrist, hand, and fingers. However, a kinematic chain extends from the cervical and upper thoracic spine to the fingertips. Only when certain multiple segments are completely fixed can these parts function independently in mechanical roles. This section reviews the anatomical structures enabling these different types of movement and examines the biomechanics, or the ways in which the muscles cooperate to achieve the diversity of movement of which the upper extremity is capable.
Scapulohumeral Rhythm
Scapulohumeral rhythm (also referred to as glenohumeral rhythm) is the kinematic interaction between the scapula and the humerus, first published by Codman in the 1930s. This interaction is important for the optimal function of the shoulder. When there is a change in the normal position of the scapula relative to the humerus, this can cause a dysfunction of the scapulohumeral rhythm; the change in normal position is also called scapular dyskinesia. Various studies of the mechanism of the shoulder joint that have attempted to describe its global motion capacity refer to that description. Can you evaluate the shoulder to see whether its function is correct, and explain the complex interactions between the components involved in placing the hand in space? Go to Page
Sport Specific Biomechanics
Running Biomechanics
Running is similar to walking in terms of locomotive activity, but there are key differences. Having the ability to walk does not mean that the individual has the ability to run. There are some differences between the gait and run cycles - the gait cycle is one third longer in time, the ground reaction force is smaller in the gait cycle (so the load is lower), and the velocity is much higher in running. In running, there is also just one stance phase, while in stepping there are two. Shock absorption is also much larger in comparison to walking, which explains why runners have more overload injuries. Running requires:
- Greater balance
- Greater muscle strength
- Greater joint range of movement
Go to Page
Cycling Biomechanics
Cycling was initially invented by Baron Carl von Drais in 1817, but not as we know it. His machine had two wheels connected by a wooden plank, with a rudder device for steering. It involved people running along the ground whilst sitting down, giving it the name of a 'running machine' (in all senses) or a velocipede. It was solely used by the male population at the time of its invention. The velocipede then made a huge design development in the 1860s at the Michaux factory in Paris, where lever arms propelled by pedals at the feet were added to the front wheel. This was the first conventional bicycle, and since then and up to the current day the bicycle has made great design and technological advances. A survey in 2014 estimated that over 43% of the United Kingdom population have, or have access to, a bike, and that 8% of the population aged 5 and above cycled 3 or more times a week.
With such a large number of people cycling, whether professionally, recreationally or for commuting, the chance of developing an injury increases, so it is time we understood the biomechanics of cycling. Go to Page
Baseball Pitching Biomechanics
Baseball pitching is one of the most intensely studied athletic motions. Although the focus has been more on shoulder movement, entire body movement is required to perform baseball pitching. Throwing is also considered one of the fastest human motions performed, with maximum humeral internal rotation velocity reaching about 7,000 to 7,500°/second. Go to Page
Tennis Biomechanics
Tennis biomechanics is a very complex task. Consider hitting a tennis ball. First the athlete needs to see the ball coming off their opponent's racquet. Then, in order, they have to judge the speed, spin, trajectory and, most importantly, the direction of the tennis ball. The player then needs to adjust their body position quickly to move around the ball. As the player prepares to hit the ball, the body is in motion, the ball is moving both in a linear and a rotational direction if there is spin on the ball, and the racquet is also in motion. The player must coordinate all these movements in approximately half a second so that they strike the ball as close to the center of the racquet as possible, in order to produce the desired spin, speed and direction for the return of the ball. A mistake in any of these movements can create an error. The International Tennis Federation (ITF) provides detailed resources on tennis biomechanics, including a number of presentations below. Biomechanics of Tennis: An Introduction Biomechanical Principles for the Serve in Tennis Biomechanics of the Forehand Stroke These articles provide some more detailed information on serve and ground stroke biomechanics and also look at the implications for strength training and rehabilitation. Tennis Serve Biomechanics in Relation to Ball Velocity and Upper Limb Joint Injuries Biomechanics of the Tennis Ground Strokes: Implications for Strength Training
References
- ↑ 1.0 1.1 Hall SJ. What Is Biomechanics?. In: Hall SJ, ed. Basic Biomechanics. 8th ed. New York, NY: McGraw-Hill; 2019. http://accessphysiotherapy.mhmedical.com/content.aspx?bookid=2433&sectionid=191508967 (last accessed 3 June 2019).
- ↑ 2.0 2.1 2.2 2.3 2.4 Brukner P. Brukner and Khan's Clinical Sports Medicine. North Ryde: McGraw-Hill; 2012.
- ↑ The British Association of Sport and Exercise Sciences. More About Biomechanics. http://www.bases.org.uk/Biomechanics (accessed 2 May 2016).
- ↑ Basic Biomechanics. Online lecture notes. Available from: http://www.mccc.edu/~behrensb/documents/Week1KinesiologyFINAL-MICKO_000.pdf (last accessed 3 June 2019).
- ↑ 5.0 5.1 Knudson D. Fundamentals of Biomechanics. Springer Science and Business Media; 2007.
- ↑ Flip Teach. Basic Biomechanics Part 1. Published 22 August 2013. Available from: https://www.youtube.com/watch?v=XMzh37kwnV4 (last accessed 3 June 2019).
- ↑ Hall SJ. Kinetic Concepts for Analyzing Human Motion. In: Hall SJ, ed. Basic Biomechanics. 8th ed. New York, NY: McGraw-Hill; 2019. http://accessphysiotherapy.mhmedical.com/content.aspx?bookid=2433&sectionid=191509336 (last accessed 3 June 2019).
- ↑ 8.0 8.1 8.2 8.3 8.4 8.5 8.6 Hall SJ. Basic Biomechanics. Boston, MA: McGraw-Hill; 2007.
- ↑ 9.0 9.1 9.2 9.3 Hall SJ. Linear Kinetics of Human Movement. In: Hall SJ, ed. Basic Biomechanics. 8th ed. New York, NY: McGraw-Hill; 2019. http://accessphysiotherapy.mhmedical.com/content.aspx?bookid=2433&sectionid=191511320 (last accessed 3 June 2019).
- ↑ Hall SJ. Kinetic Concepts for Analyzing Human Motion. In: Hall SJ, ed. Basic Biomechanics. 8th ed. New York, NY: McGraw-Hill; 2019. http://accessphysiotherapy.mhmedical.com/content.aspx?bookid=2433&sectionid=191509336 (last accessed 3 June 2019).
- ↑ Hall SJ. Equilibrium and Human Movement. In: Hall SJ, ed. Basic Biomechanics. 8th ed. New York, NY: McGraw-Hill; 2019. http://accessphysiotherapy.mhmedical.com/content.aspx?bookid=2433&sectionid=191511590 (last accessed 3 June 2019).
- ↑ Bannister R. Brain's Clinical Neurology. 3rd ed. New York, NY: Oxford University Press, Inc; 1969. pp 51-54, 102.
- ↑ 13.0 13.1 O'Sullivan SB, Portney LG. Physical Rehabilitation. 6th ed. Philadelphia: FA Davis; 2014.
- ↑ Goldie PA, Bach TM, Evans OM. Force Platform Measures for Evaluating Postural Control - Reliability and Validity. Arch Phys Med Rehabil. 1989;70:510-517.
- ↑ The SalmonellaPlace. Anatomical Planes, Axes and Directions. Published on December 23, 2012. Available from: https://www.youtube.com/watch?v=uKQGNh_herE
- ↑ Forrest MRL et al. Risk Factors for Non-Contact Injury in Adolescent Cricket Pace Bowlers: A Systematic Review. Sports Medicine. 2017;47(12):2603-2619.
- ↑ Stuelcken M, Mellifont D, Gorman A, et al. Wrist Injuries in Tennis Players: A Narrative Review. Sports Med. 2017;47:857.
- ↑ Johnston TR, Abrams GD. Shoulder Injuries and Conditions in Swimmers. In: Miller T, ed. Endurance Sports Medicine. Springer, Cham; 2016:127-138.
- ↑ Goom TS, Malliaras P, Reiman MP, Purdam CR. Proximal Hamstring Tendinopathy: Clinical Aspects of Assessment and Management. J Orthop Sports Phys Ther. 2016 Jun;46(6):483-93.
- ↑ D'Ailly PN, Sluiter JK, Kuijer PP. Rib stress fractures among rowers: a systematic review on return to sports, risk factors and prevention. The Journal of Sports Medicine and Physical Fitness. 2015;56(6):744-753.
- ↑ Bowerman EA, Whatman C, Harris N, Bradshaw E. Review of the Risk Factors for Lower Extremity Overuse Injuries in Young Elite Female Ballet Dancers. Journal of Dance Medicine & Science. 2015;19:51-56.
- ↑ 22.0 22.1 Houglum PA, Bertoti DB. Brunnstrom's Clinical Kinesiology. FA Davis; 2012.
- ↑ Horton MG, Hall TL. Quadriceps Femoris Muscle Angle: Normal Values and Relationships with Gender and Selected Skeletal Measures. Phys Ther. 1989;69:17-21.
- ↑ Brattstrom H. Shape of the intercondylar groove normally and in recurrent dislocation of patella. Acta Orthop Scand Suppl. 1964;68:1-40.
- ↑ 25.0 25.1 Shultz SJ et al. Examination of Musculoskeletal Injuries. 2nd ed. North Carolina: Human Kinetics; 2005. p55-60.
- ↑ Codman EA. The Shoulder. Boston: G. Miller and Company; 1934.
- ↑ Kibler WB. The Role of the Scapula in Athletic Shoulder Function. Am J Sports Med. 1998;26:325-337. Level of Evidence: 3B.
- ↑ Norkin C, Levangie P. Joint Structure and Function: A Comprehensive Analysis. 2nd ed. Davis Company; 1992.
- ↑ 29.0 29.1 Subotnick S. Sports Medicine of the Lower Extremity. Harcourt (USA): Churchill Livingstone; 1999.
- ↑ iSport Cycling. History of Cycling. http://cycling.isport.com/cycling-guides/history-of-cycling (accessed 24 May 2016).
- ↑ Cycling UK. Cycling UK Cycling Statistics. http://www.cyclinguk.org/resources/cycling-uk-cycling-statistics#How many people cycle and how often? (accessed 24 May 2015).
- ↑ Seroyer ST, Nho SJ, Bach BR, Bush-Joseph CA, Nicholson GP, Romeo AA. The Kinetic Chain in Overhand Pitching: Its Potential Role for Performance Enhancement and Injury Prevention.
Sports Health: A Multidisciplinary Approach. 2010 Mar 1;2(2):135-46.
https://www.physio-pedia.com/Biomechanics_In_Sport
ABSTRACT: BACKGROUND: Understanding the large-scale patterns of microbial functional diversity is essential for anticipating climate change impacts on ecosystems worldwide. However, studies of functional biogeography remain scarce for microorganisms, especially in freshwater ecosystems. Here we study 15,289 functional genes of stream biofilm microbes along three elevational gradients in Norway, Spain and China. RESULTS: We find that alpha diversity declines towards high elevations and assemblage composition shows increasing turnover with greater elevational distances. These elevational patterns are highly consistent across mountains, kingdoms and functional categories, and exhibit the strongest trends in China, which spans the largest environmental gradients. Across mountains, functional gene assemblages differ in alpha diversity and composition between the mountains in Europe and Asia. Climate, such as mean temperature of the warmest quarter or mean precipitation of the coldest quarter, is the best predictor of alpha diversity and assemblage composition at both mountain and continental scales, with local non-climatic predictors gaining more importance at the mountain scale. Under future climate, we project substantial variations in alpha diversity and assemblage composition across the Eurasian river network, occurring primarily in northern and central regions, respectively. CONCLUSIONS: We conclude that climate controls microbial functional gene diversity in streams at large spatial scales; therefore, the underlying ecosystem processes are highly sensitive to climate variations, especially at high latitudes. This biogeographical framework for microbial functional diversity serves as a baseline to anticipate ecosystem responses and biogeochemical feedbacks to ongoing climate change.
https://www.omicsdi.org/dataset/biostudies/S-EPMC7293791
New tree species discovered in Binningup After years of scientific investigation, botanists have confirmed the discovery of a new species of acacia tree in Binningup. Workers made the discovery by chance during extensive regeneration work carried out around the Water Corporation's Southern Seawater Desalination Plant back in 2015. The Water Corporation commissioned botanists Geoff Cockerton and Kevin Thiele in 2016 and 2017 to search for other populations of the acacia to confirm the new species, with the search stretching through the bushlands from Yanchep to Albany. Water Corporation South West Regional Manager John Janssen said botanists made another effort to find the last piece of the puzzle in February 2018, to confirm beyond doubt that the Binningup plants were a new species rather than a variation. "While driving through the Harvey region the botanists had two separate chance sightings of plants with similar characteristics to the Binningup acacia growing on the side of the road," Mr Janssen said. "These chance sightings were a significant breakthrough, as they proved the plants found at Binningup were not a variation of an existing species due to the local conditions, and provided the further evidence needed to confirm the presence of a new species." Mr Janssen said a submission to the WA Herbarium was made in July 2018, and in September Acacia sp. Binningup was officially recognised and named as a new species. "We are thrilled to have played a part in the discovery of this new species near our plant, as it illustrates the thriving biodiversity we want to nurture right on our doorstep," Mr Janssen said. Desalination plant environmental engineer Grant Griffith said the botanists' work at the site had revealed other discoveries which could lead to more new species being declared. "Out of this work they think there could be five or six new species," he said. He said one of the Acacia sp. Binningup plants could be more than 100 years old. Harvey shire president Tania Jackson, who is also a member of the desalination plant's Community Reference Group, said the rehabilitation of the site and ongoing environmental work in the dune system near the plant had improved the quality and sustainability of the area.
https://www.harveyreporter.com.au/news/harvey-waroona-reporter/new-species-discovered-in-binningup-ng-b881001253z
Collectors: Ushakov P V. Collection date: 04.04.1956. Administrative regions: AU - State of South Australia. Place of collection: Macquarie Island, the ship "Ob". Groups of specimens: Algae specimens; Algae type specimens. Original label text: Myriogramme macquariensis sp. nov. A. Zin. О-в Маккуори, судно "Обь". Литораль, скалы, 4 IV 1956, leg. П.В. Ушаков, det. А.Д. Зинова. Record creation: 2021-02-13, Stepan Ivanov, Microtek 1600. Citation: Specimen LE A0000219 // Virtual herbarium of Komarov Botanical Institute RAS — http://re.herbariumle.ru/A0000219
https://en.herbariumle.ru/?t=occ&id=75191
Pertussis, commonly known as whooping cough, is an acute and highly contagious cough illness lasting at least 2 weeks, with one of the following: paroxysms of coughing, an inspiratory "whoop", or post-tussive vomiting (vomiting immediately after coughing). The incubation period of pertussis is commonly 9–10 days, with a range of 6–20 days. The clinical course of the illness is divided into three stages. In the first stage, the catarrhal stage, the patient has coryza (runny nose), sneezing, low-grade fever, and a mild, occasional cough, similar to the common cold. In the second stage, the paroxysmal stage, the patient has bursts, or paroxysms, of numerous, rapid coughs. In the third stage, the convalescent stage, recovery is gradual: the cough becomes less paroxysmal and disappears in 2 to 3 weeks. Pertussis can strike at any age but is particularly dangerous for babies. Adolescents and adults are the most important reservoir for Bordetella pertussis and are often the source of infection for infants. Pertussis is caused by the bacterium B. pertussis, a small, aerobic, gram-negative rod. It is fastidious and requires special media for isolation. Outbreaks of pertussis were first described in the 16th century, and the organism was first isolated in 1906.
Epidemiology
Pertussis occurs worldwide. In recent years, the annual number of confirmed pertussis cases in Taiwan has been about 40 to 90. Pertussis outbreaks have occurred in families, schools, hospitals, and child care centers. The monthly distribution of pertussis cases shows that seasonal peaks are not as noticeable as in some other countries. According to the analysis of data from the notifiable infectious disease reporting system, the age distribution of confirmed cases indicates that most cases occur in infants and adolescents. Figure: Pertussis by year, Taiwan Pertussis Surveillance in Taiwan Taiwan National Infectious Disease Statistics System
Pertussis Prevention and Control
Prevention methods In addition to awareness campaigns, timely vaccination remains essential. Vaccination The Taiwan government provides free immunizations to children, including the 5-in-1 vaccine (diphtheria and tetanus toxoids with acellular pertussis, Haemophilus influenzae type b, and inactivated polio, DTaP-Hib-IPV) and the diphtheria and tetanus toxoids with acellular pertussis and inactivated polio vaccine (DTaP-IPV). Parents of newborns are given a children's health handbook with a recommended immunization schedule. Children can receive vaccinations at health stations and contracted hospitals and clinics across Taiwan. Health stations regularly carry out health promotion programs to improve the coverage rate. Control methods Surveillance: case detection and reporting. Pertussis belongs to the third category of notifiable infectious diseases. If a doctor treats a patient suspected of having a notifiable infectious disease, the doctor must report the case within one week. Other control methods Case management should include infection control, isolation, treatment and follow-up. Other control methods include management of contacts, education of cases, travel advice, laboratory diagnosis, and research and development. Pertussis outbreaks should be promptly investigated. Outbreak investigations should include the date of onset, age, immunization status, geographic location and outcome (alive or dead) for each case.
https://www.cdc.gov.tw/En/Category/ListContent/bg0g_VU_Ysrgkes_KRUDgQ?uaid=tRFoq7YhMoz6moj0QsJcqA
The draft NSW Clean Air Strategy aims to support liveable communities, healthy environments and the NSW economy by reducing the adverse effects of air pollution on human health. Actions in the draft Strategy reflect the substantial and growing body of evidence on air pollution and its impacts and costs in New South Wales. The draft NSW Clean Air Strategy was released for community and stakeholder consultation on 18 March 2021. The consultation closed on 23 April 2021. The next steps are to provide the Minister for Energy and Environment with the final Strategy and the submissions report. NSW Government will then consider the final Strategy. Once approved, the final Strategy and documentation from public consultation will be published and stakeholders, including those who made a submission on the draft Strategy, will be notified. Priority areas in the draft Strategy are: - better preparedness for pollution events: improve information and how it is communicated to help reduce health impacts of air pollution on NSW communities, including impacts from bushfires, hazard reduction burns and dust storms - cleaner industry: drive improved management of air emissions by industry - cleaner transport, engines and fuels: further reduce air emissions and impacts from vehicles, fuels and non-road diesel sources - healthier households: support reducing air emissions from household activities, with the main priority being wood heater emissions - better places: reduce impacts of air pollution on communities through better planning and design of places and buildings. Please direct enquiries on the NSW Clean Air Strategy to [email protected].
https://www.environment.nsw.gov.au/topics/air/clean-air-strategy
Where Is The Coolest Place You’ve Been Tattooed? Before you start posting inappropriate pictures in the comment section below, I'm talking about an ACTUAL location, not a spot on your body. Chuck Zito is working hard to make The Vault in NY the coolest place on the planet. Chuck, best known for his work as Chucky Pancamo on the HBO show Oz, is opening a tattoo parlor inside a bank vault! The former Hells Angel has already invested over $100,000 to convert the vault into a tattoo parlor. Seriously, how cool is that?!
https://1057thehawk.com/where-is-the-coolest-place-youve-been-tattooed/
The Skills and Abilities Required by a Personal Injury Paralegal Personal injury paralegals assist personal injury lawyers in all aspects of personal injury litigation, from case inception through appeal. Jamie Collins, a paralegal for Yosha Cook Shartzer & Tisch in Indianapolis, Indiana, and founder of The Paralegal Society, relates the skills and knowledge necessary to succeed as a personal injury paralegal. Below is information on the role of paralegals in the area of personal injury law, including daily responsibilities, challenges, and tips. Client Service A paralegal working on personal injury/wrongful death cases must know how to interview and screen prospective clients. The paralegal must review a file to determine what the client's case involves and to determine its current status. Medical Analysis Personal injury paralegals must understand the medical aspects of a case to ascertain which medical records and bills to acquire and to determine if future cost projections or experts are required. The paralegal must be familiar with medical terminology and know how to prepare medical chronologies, medical expense itemizations, deposition summaries, and demand packages. A paralegal will address prescription medications and identify which ones may be related to a client's claim. This means understanding the typical nerve root distribution pattern for injuries involving radicular symptoms (pain that radiates from the spine into a person's extremities), becoming familiar with human anatomy, and gaining knowledge of various types of injuries (e.g., whether they have permanent implications or may necessitate future surgery or lifelong expenses). Drafting Skills Drafting skills should be part of a personal injury paralegal's capabilities. A paralegal should be able to draft discovery responses and assert all necessary objections to ensure that they are nearly perfect prior to the attorney's review. The paralegal must also prepare witness and exhibit lists, draft motions, final instructions, and verdict forms, and be ready to tackle writing projects. Trial Preparation Personal injury paralegals are well-versed in the trial realm. Important tasks include witness preparation (helping to prepare the clients for trial) and preparing voir dire outlines, opening and closing statements, and witness outlines. A paralegal often determines the exhibits to be used and prepares them for viewing. Trial Personal injury paralegals play an important role at trial.
At trial, the personal injury paralegal may perform the following functions: - Assist the attorney with the entire voir dire process (e.g., taking notes, striking, and selection of jurors) - Pull and pass exhibits to the attorney as needed - Act as a liaison to the client throughout trial - Ensure the attorney does not inadvertently waive an objection during trial by allowing certain evidence to be read into the record - Communicate with the bailiff or court reporter if issues arise or information needs to be shared - Bring witnesses into the courtroom when it is their turn to testify - Rework exhibit binders if an exhibit is added or needs to be removed prior to presenting it to the jury (this is an event which often transpires just outside the courtroom when a last minute issue arises with an exhibit) - Help the attorney to elicit key pieces of testimony from each witness based upon personal knowledge of the case - Assist with all aspects of trial strategy and act as a second set of eyes and ears (and another legal mind) in the courtroom It is also helpful to know the trial rules in the relevant geographic area, the Federal Rules of Civil Procedure, and the Federal Rules of Evidence for trial purposes. Trials are very exciting. As with any area of litigation, a personal injury paralegal must also have the following characteristics:
https://www.thebalancecareers.com/personal-injury-paralegal-skills-and-abilities-2164279
Cartography (/kɑːrˈtɒɡrəfi/; from Greek χάρτης chartēs, "papyrus, sheet of paper, map"; and γράφειν graphein, "write") is the study and practice of making and using maps. Combining science, aesthetics, and technique, cartography builds on the premise that reality (or an imagined reality) can be modeled in ways that communicate spatial information effectively. The fundamental objectives of traditional cartography are to:
- Set the map's agenda and select traits of the object to be mapped. This is the concern of map editing. Traits may be physical, such as roads or land masses, or may be abstract, such as toponyms or political boundaries.
- Represent the terrain of the mapped object on flat media. This is the concern of map projections.
- Eliminate characteristics of the mapped object that are not relevant to the map's purpose. This is the concern of generalization.
- Reduce the complexity of the characteristics that will be mapped. This is also the concern of generalization.
- Orchestrate the elements of the map to best convey its message to its audience. This is the concern of map design.
Modern cartography constitutes many theoretical and practical foundations of geographic information systems and geographic information science.
History
Ancient times
What the earliest known map is remains a matter of some debate, both because the term "map" is not well defined and because some artifacts that might be maps might actually be something else. A wall painting that might depict the ancient Anatolian city of Çatalhöyük (previously known as Catal Huyuk or Çatal Hüyük) has been dated to the late 7th millennium BCE. Among the prehistoric alpine rock carvings of Mount Bego (France) and Valcamonica (Italy), dated to the 4th millennium BCE, geometric patterns consisting of dotted rectangles and lines are widely interpreted in archaeological literature as depictions of cultivated plots. Other known maps of the ancient world include the Minoan "House of the Admiral" wall painting from c. 1600 BCE, showing a seaside community in an oblique perspective, and an engraved map of the holy Babylonian city of Nippur, from the Kassite period (14th – 12th centuries BCE). The oldest surviving world maps are from 9th century BCE Babylonia. One shows Babylon on the Euphrates, surrounded by Assyria, Urartu and several cities, all, in turn, surrounded by a "bitter river" (Oceanus). Another depicts Babylon as being north of the center of the world. The ancient Greeks and Romans created maps from the time of Anaximander in the 6th century BCE. In the 2nd century CE, Ptolemy wrote his treatise on cartography, Geographia. This contained Ptolemy's world map – the world then known to Western society (Ecumene). As early as the 8th century, Arab scholars were translating the works of the Greek geographers into Arabic. In ancient China, geographical literature dates to the 5th century BCE. The oldest extant Chinese maps come from the State of Qin, dating back to the 4th century BCE, during the Warring States period. The book Xin Yi Xiang Fa Yao, published in 1092 by the Chinese scientist Su Song, contains a star map on the equidistant cylindrical projection. Although this method of charting seems to have existed in China even before this publication, the greatest significance of Su Song's star maps is that they represent the oldest existing star maps in printed form. Early forms of cartography of India included depictions of the pole star and surrounding constellations.
These charts may have been used for navigation. Middle Ages and Renaissance Mappae mundi ("maps of the world") are the medieval European maps of the world. About 1,100 of these are known to have survived: of these, some 900 are found illustrating manuscripts and the remainder exist as stand-alone documents. The Arab geographer Muhammad al-Idrisi produced his medieval atlas Tabula Rogeriana (Book of Roger) in 1154. By combining the knowledge of Africa, the Indian Ocean, Europe, and the Far East (which he learned through contemporary accounts from Arab merchants and explorers) with the information he inherited from the classical geographers, he was able to write detailed descriptions of a multitude of countries. Along with the substantial text he had written, he created a world map influenced mostly by the Ptolemaic conception of the world, but with significant influence from multiple Arab geographers. It remained the most accurate world map for the next three centuries. The map was divided into seven climatic zones, with detailed descriptions of each zone. As part of this work, a smaller, circular map was made depicting the south on top and Arabia in the center. Al-Idrisi also made an estimate of the circumference of the world, accurate to within 10%. In the Age of Exploration, from the 15th century to the 17th century, European cartographers both copied earlier maps (some of which had been passed down for centuries) and drew their own, based on explorers' observations and new surveying techniques. The invention of the magnetic compass, telescope and sextant enabled increasing accuracy. In 1492, Martin Behaim, a German cartographer, made the oldest extant globe of the Earth. In 1507, Martin Waldseemüller produced a globular world map and a large 12-panel world wall map (Universalis Cosmographia) bearing the first use of the name "America". Portuguese cartographer Diego Ribero was the author of the first known planisphere with a graduated Equator (1527). Italian cartographer Battista Agnese produced at least 71 manuscript atlases of sea charts. Johannes Werner refined and promoted the Werner projection. This was an equal-area, heart-shaped world map projection (generally called a cordiform projection) which was used in the 16th and 17th centuries. Over time, other iterations of this map type arose; most notable are the sinusoidal projection and the Bonne projection. The Werner projection places its standard parallel at the North Pole; a sinusoidal projection places its standard parallel at the equator; and the Bonne projection is intermediate between the two. In 1569, mapmaker Gerardus Mercator first published a map based on his Mercator projection, which uses equally-spaced parallel vertical lines of longitude and parallel latitude lines spaced farther apart as they get farther away from the equator. By this construction, courses of constant bearing are conveniently represented as straight lines for navigation. The same property limits its value as a general-purpose world map because regions are shown as increasingly larger than they actually are the further from the equator they are. Mercator is also credited as the first to use the word "atlas" to describe a collection of maps. In the later years of his life, Mercator resolved to create his Atlas, a book filled with many maps of different regions of the world, as well as a chronological history of the world from the Earth's creation by God until 1568. He was unable to complete it to his satisfaction before he died. 
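The Mercator construction described above has a compact mathematical form: meridians are equally spaced in x, while the y-spacing of parallels grows with latitude, which is what makes constant-bearing courses plot as straight lines and inflates apparent area toward the poles. A minimal sketch assuming a spherical Earth, with R as an arbitrary scale factor:

```python
import math

R = 1.0  # map scale factor (sphere radius)

def mercator(lat_deg, lon_deg):
    """Project latitude/longitude in degrees to Mercator x, y."""
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = R * lam
    y = R * math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y

# Parallels at 0, 30 and 60 degrees show the growing spacing that
# exaggerates areas far from the equator:
for lat in (0, 30, 60):
    print(lat, round(mercator(lat, 0)[1], 3))  # 0.0, 0.549, 1.317
```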
Additions were nevertheless made to the Atlas after his death, and new editions were published posthumously. In the Renaissance, maps were used to impress viewers and establish the owner's reputation as sophisticated, educated, and worldly. Because of this, towards the end of the Renaissance, maps were displayed with the same importance as paintings, sculptures, and other pieces of art. In the sixteenth century, maps became increasingly available to consumers through the introduction of printmaking, with about 10% of Venetian homes having some sort of map by the late 1500s. There were three main functions of maps in the Renaissance:
- General descriptions of the world
- Navigation and wayfinding
- Land surveying and property management
In medieval times, written directions for how to get somewhere were more common than the use of maps. With the Renaissance, cartography began to be seen as a metaphor for power. Political leaders could lay claim to territories through the use of maps, and this was greatly aided by the religious and colonial expansion of Europe. The most commonly mapped places during the Renaissance were the Holy Land and other religious places. From the late 1400s to the late 1500s, Rome, Florence, and Venice dominated map making and trade. It started in Florence in the mid to late 1400s. The map trade quickly shifted to Rome and Venice but was then overtaken by atlas makers in the late 16th century. Map publishing in Venice was undertaken with the humanities and book publishing in mind, rather than just informational use.
Printing technology
There were two main printmaking technologies in the Renaissance: woodcut and copper-plate intaglio, referring to the medium used to transfer the image onto paper. In woodcut, the map image is created as a relief chiseled from medium-grain hardwood. The areas intended to be printed are inked and pressed against the sheet. Being raised from the rest of the block, the map lines cause indentations in the paper that can often be felt on the back of the map. There are advantages to using relief to make maps. For one, a printmaker doesn't need a press, because the maps can be developed as rubbings. Woodblock is durable enough to be used many times before defects appear. Existing printing presses can be used to create the prints, rather than having to build new ones. On the other hand, it is hard to achieve fine detail with the relief technique. Inconsistencies in linework are more apparent in woodcut than in intaglio. To improve quality in the late fifteenth century, a style of relief craftsmanship developed using fine chisels to carve the wood, rather than the more commonly used knife. In intaglio, lines are engraved into workable metals, typically copper but sometimes brass. The engraver spreads a thin sheet of wax over the metal plate and uses ink to draw the details. Then, the engraver traces the lines with a stylus to etch them into the plate beneath. The engraver can also use styli to prick holes along the drawn lines, trace along them with colored chalk, and then engrave the map. Lines going in the same direction are carved at the same time, and then the plate is turned to carve lines going in a different direction. To print from the finished plate, ink is spread over the metal surface and scraped off such that it remains only in the etched channels. Then the plate is pressed forcibly against the paper so that the ink in the channels is transferred to the paper.
The pressing is so forceful that it leaves a "plate mark" around the border of the map at the edge of the plate, within which the paper is depressed compared to the margins. Copper and other metals were expensive at the time, so the plate was often reused for new maps or melted down for other purposes. Whether woodcut or intaglio, the printed map is hung out to dry. Once dry, it is usually placed in another press to flatten the paper. Any type of paper that was available at the time could be used to print the map on, but thicker paper was more durable. Both relief and intaglio were used about equally by the end of the fifteenth century.
Lettering
Lettering in mapmaking is important for denoting information. Fine lettering is difficult in woodcut, where it often turned out square and blocky, contrary to the stylized, rounded writing style popular in Italy at the time. To improve quality, mapmakers developed fine chisels to carve the relief. Intaglio lettering did not suffer the troubles of a coarse medium and so was able to express the looping cursive that came to be known as cancellaresca. Custom-made reverse punches were also used in metal engraving alongside freehand lettering.
Color
The first use of color in map making cannot be narrowed down to one reason. There are arguments that color started as a way to indicate information on the map, with aesthetics coming second. There are also arguments that color was first used on maps for aesthetics but then evolved into conveying information. Either way, many maps of the Renaissance left the publisher without being colored, a practice that continued all the way into the 1800s. However, most publishers accepted orders from their patrons to have their maps or atlases colored if they wished. Because all coloring was done by hand, the patron could request simple, cheap color, or more expensive, elaborate color, even going so far as silver or gold gilding. The simplest coloring was merely outlines, such as of borders and along rivers. Wash color meant painting regions with inks or watercolors. Limning meant adding silver and gold leaf to the map to illuminate lettering, heraldic arms, or other decorative elements.
Early Modern Period
The Early Modern Period saw the convergence of cartographical techniques across Eurasia and the exchange of mercantile mapping techniques via the Indian Ocean. In the early seventeenth century, the Selden map was created by a Chinese cartographer. Historians have put its date of creation around 1620, but this is debated. This map's significance derives from historical misconceptions of East Asian cartography, the main one being the belief that East Asians did not practise cartography until Europeans arrived. The map's depiction of trading routes, a compass rose, and a scale bar points to the culmination of many map-making techniques incorporated into Chinese mercantile cartography. In 1689, representatives of the Russian tsar and the Qing Dynasty met near the border town of Nerchinsk, close to the disputed border of the two powers in eastern Siberia. The two parties, with the Qing negotiating party bringing Jesuits as intermediaries, managed to work out a treaty which placed the Amur River as the border between the Eurasian powers and opened up trading relations between the two. This treaty's significance derives from the interaction between the two sides and the intermediaries, who were drawn from a wide variety of nationalities.
The Enlightenment
Maps of the Enlightenment period almost universally used copper-plate intaglio, having abandoned the fragile, coarse woodcut technology. Use of map projections evolved, with the double hemisphere being very common and Mercator's prestigious navigational projection gradually making more appearances. Due to the paucity of information and the immense difficulty of surveying during the period, mapmakers frequently plagiarized material without giving credit to the original cartographer. For example, a famous map of North America known as the "Beaver Map" was published in 1715 by Herman Moll. This map is a close reproduction of a 1698 work by Nicolas de Fer. De Fer, in turn, had copied images that were first printed in books by Louis Hennepin, published in 1697, and François Du Creux, in 1664. By the late 18th century, mapmakers often credited the original publisher with something along the lines of "After [the original cartographer]" in the map's title or cartouche.
Modern period
In cartography, technology has continually changed in order to meet the demands of new generations of mapmakers and map users. The first maps were produced manually, with brushes and parchment; they therefore varied in quality and were limited in distribution. The advent of magnetic devices, such as the compass and, much later, magnetic storage devices, allowed for the creation of far more accurate maps and the ability to store and manipulate them digitally. Advances in mechanical devices such as the printing press, quadrant and vernier allowed the mass production of maps and the creation of accurate reproductions from more accurate data. Hartmann Schedel was one of the first cartographers to use the printing press to make maps more widely available. Optical technology, such as the telescope, sextant and other devices that use telescopes, allowed accurate land surveys and allowed mapmakers and navigators to find their latitude by measuring angles to the North Star at night or the Sun at noon. Advances in photochemical technology, such as the lithographic and photochemical processes, made possible maps with fine details, which do not distort in shape and which resist moisture and wear. This also eliminated the need for engraving, which further sped up map production. In the 20th century, aerial photography, satellite imagery, and remote sensing provided efficient, precise methods for mapping physical features, such as coastlines, roads, buildings, watersheds, and topography. The United States Geological Survey has devised multiple new map projections, notably the Space Oblique Mercator for interpreting satellite ground tracks for mapping the surface. The use of satellites and space telescopes now allows researchers to map other planets and moons in outer space. Advances in electronic technology ushered in another revolution in cartography: the ready availability of computers and peripherals such as monitors, plotters, printers, scanners (remote and document) and analytic stereo plotters, along with computer programs for visualization, image processing, spatial analysis, and database management, democratized and greatly expanded the making of maps. The ability to superimpose spatially located variables onto existing maps created new uses for maps and new industries to explore and exploit these potentials. See also digital raster graphic.
In the early years of the new millennium, three key technological advances transformed cartography: the removal of Selective Availability in the Global Positioning System (GPS) in May 2000, which improved the locational accuracy of consumer-grade GPS receivers to within a few metres; the founding of OpenStreetMap in 2004, a global digital counter-map that allowed anyone to contribute and use new spatial data without complex licensing agreements; and the launch of Google Earth in 2005 as a development of the virtual globe EarthViewer 3D (2004), which revolutionised access to satellite and aerial imagery. These advances brought more accuracy to geographical and location-based data and widened the range of applications for cartography, for example in the development of satnav devices.

These days most commercial-quality maps are made using software of three main types: CAD, GIS and specialized illustration software. Spatial information can be stored in a database, from which it can be extracted on demand. These tools lead to increasingly dynamic, interactive maps that can be manipulated digitally. Field-rugged computers, GPS, and laser rangefinders make it possible to create maps directly from measurements made on site.

Deconstruction

There are technical and cultural aspects to producing maps. In this sense, maps can sometimes be said to be biased. The study of bias, influence, and agenda in making a map is what comprises a map's deconstruction. A central tenet of deconstructionism is that maps have power. Other assertions are that maps are inherently biased and that we search for metaphor and rhetoric in maps. It is claimed that the Europeans promoted an "epistemological" understanding of the map as early as the 17th century. An example of this understanding is that "[European reproduction of terrain on maps] reality can be expressed in mathematical terms; that systematic observation and measurement offer the only route to cartographic truth…". 17th-century map-makers were careful and precise in their strategic approaches to maps based on a scientific model of knowledge. Popular belief at the time was that this scientific approach to cartography was immune to the social atmosphere. A common belief is that science heads in a direction of progress, and thus leads to ever more accurate representations in maps. On this belief, European maps must be superior to others, which necessarily employed different map-making skills. "There was a 'not cartography' land where lurked an army of inaccurate, heretical, subjective, valuative, and ideologically distorted images. Cartographers developed a 'sense of the other' in relation to nonconforming maps." Although cartography has been a target of much criticism in recent decades, the cartographer's "black box" always seemed to be naturally defended to the point where it overcame the criticism. To later scholars in the field, however, it was evident that cultural influences dominate map-making. For instance, the conventions of maps and of the map-making community itself reflect social influences on the production of maps. This social dimension of cartographic knowledge "…produces the 'order' of [maps'] features and the 'hierarchies of its practices.'" Depictions of Africa are a common target of deconstructionism. According to deconstructionist models, cartography was used for strategic purposes associated with imperialism and as instruments and representations of power during the conquest of Africa.
The depiction of Africa and the low latitudes in general on the Mercator projection has been interpreted as imperialistic and as symbolic of subjugation, owing to the diminished proportions of those regions compared to the higher latitudes where the European powers were concentrated. Maps furthered imperialism and the colonization of Africa in practical ways by showing basic information like roads, terrain, natural resources, settlements, and communities. Through this, maps made European commerce in Africa possible by showing potential commercial routes, and made natural resource extraction possible by depicting the locations of resources. Such maps also enabled military conquests and made them more efficient, and imperial nations further used them to put their conquests on display. These same maps were then used to cement territorial claims, such as at the Berlin Conference of 1884–1885. Before 1749, maps of the African continent showed African kingdoms with assumed or contrived boundaries, with unknown or unexplored areas filled with drawings of animals, imaginary physical geographic features, and descriptive texts. In 1748, Jean B. B. d'Anville created the first map of the African continent that left blank spaces to represent unknown territory. This was revolutionary in cartography and in the representation of power associated with map making.

Map types

General vs. thematic cartography

In understanding basic maps, the field of cartography can be divided into two general categories: general cartography and thematic cartography. General cartography involves those maps that are constructed for a general audience and thus contain a variety of features. General maps exhibit many reference and location systems and are often produced in a series. For example, the 1:24,000 scale topographic maps of the United States Geological Survey (USGS) are a standard, as compared to the 1:50,000 scale Canadian maps. The government of the UK produces the classic 1:50,000 (replacing the older 1 inch to 1 mile) "Ordnance Survey" maps of the entire UK, along with a range of correlated larger- and smaller-scale maps of great detail. Many private mapping companies have also produced thematic map series.

Thematic cartography involves maps of specific geographic themes, oriented toward specific audiences. A couple of examples might be a dot map showing corn production in Indiana or a shaded area map of Ohio counties divided into numerical choropleth classes. As the volume of geographic data has exploded over the last century, thematic cartography has become increasingly useful and necessary to interpret spatial, cultural and social data.

A third type of map is known as an "orienteering," or special purpose, map. This type of map falls somewhere between thematic and general maps, combining general map elements with thematic attributes in order to design a map with a specific audience in mind. Often the audience of an orienteering map is a particular industry or occupation. An example of this kind of map would be a municipal utility map.

Topographic vs. topological

A topographic map is primarily concerned with the topographic description of a place, including (especially in the 20th and 21st centuries) the use of contour lines showing elevation. Terrain or relief can be shown in a variety of ways (see Cartographic relief depiction).
In the present era, one of the most widespread and advanced methods of producing topographic maps is to use computer software to generate digital elevation models which show shaded relief. Before such software existed, cartographers had to draw shaded relief by hand. One cartographer who is respected as a master of hand-drawn shaded relief is the Swiss professor Eduard Imhof, whose efforts in hill shading were so influential that his method came to be used around the world, despite being very labor-intensive.

A topological map is a very general type of map, the kind one might sketch on a napkin. It often disregards scale and detail in the interest of clearly communicating specific route or relational information. Beck's London Underground map is an iconic example. Although it is the most widely used map of "the Tube," it preserves little of reality: it varies scale constantly and abruptly, it straightens curved tracks, and it contorts directions. The only topography on it is the River Thames, letting the reader know whether a station is north or south of the river. That, together with the topology of station order and interchanges between train lines, is all that is left of the geographic space. Yet those are all a typical passenger wishes to know, so the map fulfills its purpose.

Map design

Modern technology, including advances in printing, the advent of geographic information systems and graphics software, and the Internet, has vastly simplified the process of map creation and increased the palette of design options available to cartographers. This has led to a decreased focus on production skill and an increased focus on quality design: the attempt to craft maps that are both aesthetically pleasing and practically useful for their intended purposes.

Map purpose and audience

A map has a purpose and an audience. Its purpose may be as broad as teaching the major physical and political features of the entire world, or as narrow as convincing a neighbor to move a fence. The audience may be as broad as the general public or as narrow as a single person. Mapmakers use design principles to guide them in constructing a map that is effective for its purpose and audience.

Cartographic process

The cartographic process spans many stages, starting from conceiving the need for a map and extending all the way through its consumption by an audience. Conception begins with a real or imagined environment. As the cartographer gathers information about the subject, they consider how that information is structured and how that structure should inform the map's design. Next, the cartographer experiments with generalization, symbolization, typography, and other map elements to find ways to portray the information so that the map reader can interpret the map as intended. Guided by these experiments, the cartographer settles on a design and creates the map, whether in physical or electronic form. Once finished, the map is delivered to its audience. The map reader interprets the symbols and patterns on the map to draw conclusions and perhaps to take action. Through the spatial perspectives they provide, maps help shape how we view the world.

Aspects of map design

Designing a map involves bringing together a number of elements and making a large number of decisions. The elements of design fall into several broad topics, each of which has its own theory, its own research agenda, and its own best practices.
That said, there are synergistic effects between these elements, meaning that the overall design process is not just working on each element one at a time, but an iterative feedback process of adjusting each to achieve the desired gestalt.

- Map projections: The foundation of the map is the plane on which it rests (whether paper or screen), but projections are required to flatten the surface of the earth. All projections distort this surface, but the cartographer can be strategic about how and where distortion occurs.
- Generalization: All maps must be drawn at a smaller scale than reality, requiring that the information included on a map be a very small sample of the wealth of information about a place. Generalization is the process of adjusting the level of detail in geographic information to be appropriate for the scale and purpose of a map, through procedures such as selection, simplification, and classification.
- Symbology: Any map visually represents the location and properties of geographic phenomena using map symbols, graphical depictions composed of several visual variables, such as size, shape, color, and pattern.
- Composition: As all of the symbols are brought together, their interactions have major effects on map reading, such as grouping and visual hierarchy.
- Typography or labeling: Text serves a number of purposes on the map, especially aiding the recognition of features, but labels must be designed and positioned well to be effective.
- Layout: The map image must be placed on the page (whether paper, web, or other media), along with related elements, such as the title, legend, additional maps, text, images, and so on. Each of these elements has its own design considerations, as does their integration, which largely follows the principles of graphic design.
- Map type-specific design: Different kinds of maps, especially thematic maps, have their own design needs and best practices.

Cartographic errors

Some maps contain deliberate errors or distortions, either as propaganda or as a "watermark" to help the copyright owner identify infringement if the error appears in competitors' maps. The latter often come in the form of nonexistent, misnamed, or misspelled "trap streets". Other names and forms for this are paper townsites, fictitious entries, and copyright easter eggs. Another motive for deliberate errors is cartographic "vandalism": a mapmaker wishing to leave his or her mark on the work. Mount Richard, for example, was a fictitious peak on the Rocky Mountains' continental divide that appeared on a Boulder County, Colorado map in the early 1970s. It is believed to be the work of draftsman Richard Ciacci. The fiction was not discovered until two years later. Sandy Island (New Caledonia) is an example of a fictitious location that stubbornly survives, reappearing on new maps copied from older maps while being deleted from other new editions.

Professional and learned societies

Professional and learned societies include:

- International Cartographic Association (ICA), the world body for mapping and GIScience professionals
- British Cartographic Society (BCS), a registered charity in the UK dedicated to exploring and developing the world of maps
- Society of Cartographers, which supports the practising cartographer in the UK and encourages and maintains a high standard of cartographic illustration
- Cartography and Geographic Information Society (CaGIS), which in the U.S. promotes
research, education, and practice to improve the understanding, creation, analysis, and use of maps and geographic information. The society serves as a forum for the exchange of original concepts, techniques, approaches, and experiences by those who design, implement, and use cartography, geographical information systems, and related geospatial technologies.
- North American Cartographic Information Society (NACIS), a North American-based cartography society aimed at improving communication, coordination and cooperation among the producers, disseminators, curators, and users of cartographic information. Its members are located worldwide and it meets annually.
- Canadian Cartographic Association (CCA)

See also

- Animated mapping – The application of animation to add a temporal component to a map displaying change
- Cartogram – Map in which geographic space is distorted based on the value of a thematic mapping variable
- Terrain cartography, also known as cartographic relief depiction
- Counter-mapping – Mapping by communities to contest state maps
- Critical cartography – Mapping practices and methods of analysis grounded in critical theory
- Figure-ground in map design
- Geoinformatics – The application of information science methods in geography, cartography, and geosciences
- Geovisualization
- Geo warping – Adjustment of geo-referenced radar video data to be consistent with a geographical projection
- History of cartography – The development of cartography, or mapmaking technology
- History of Cartography Project – A publishing project in the Department of Geography at the University of Wisconsin–Madison
- OpenStreetMap – Collaboratively edited world map available under a free Open Database License
- Pictorial map – Map that uses pictures to represent features
- Planetary cartography – Cartography of solid objects outside of the Earth
- Seafloor mapping – Measurement and presentation of the water depth of a given body of water
- Scribing (cartography)
- Page layout (cartography)
- ↑ "Map Imitation" in Detecting the Truth: Fakes, Forgeries and Trickery , a virtual museum exhibition at Library and Archives Canada - ↑ Snyder, John (1987). Map projections: A Working Manual. https://pubs.er.usgs.gov/publication/pp1395. - ↑ Kent, Alexander (2014). "A Profession Less Ordinary? Reflections on the Life, Death and Resurrection of Cartography". The Bulletin of the Society of Cartographers 48 (1,2): 7–16. https://www.researchgate.net/publication/282123268. Retrieved 24 September 2015. - ↑ 36.0 36.1 36.2 36.3 36.4 Harley, J. B. (1989). "Deconstructing the Map". Cartographica, Vol. 26, No. 2. pp 1-5 - ↑ Michel Foucault, The Order of Things: An Archaeology of the Human Sciences. A Translation of Les mots et les choses. New York: Vintage Books, 1973. - ↑ Stone, Jeffrey C. (1988). "Imperialism, Colonialism and Cartography". Transactions of the Institute of British Geographers, N.S. 13. Pp 57. - ↑ 39.0 39.1 39.2 Bassett, J. T. (1994). "Cartography and Empire Building in the Nineteenth-Century West Africa". Geographical Review 84 (3): 316–335. doi:10.2307/215456. - ↑ Monmonier, Mark (2004). Rhumb Lines and Map Wars: A Social History of the Mercator Projection p. 152. Chicago: The University of Chicago Press. (Thorough treatment of the social history of the Mercator projection and Gall–Peters projections.) - ↑ Dutton, John. "Cartography and Visualization Part I: Types of Maps". https://www.e-education.psu.edu/geog486/node/1848. - ↑ Kennelly, Patrick (2006). "A Uniform Sky Illumination Model to Enhance Shading of Terrain and Urban Areas". Cartography and Geographic Information Science 33: 21–36. doi:10.1559/152304006777323118. - ↑ Ormeling, F.J. (1986-12-31). "Eduard Imhof (1895–1986)". https://icaci.org/eduard-imhof-1895-1986/. - ↑ Ovenden, Mark (2007). Transit Maps of the World. New York, New York: Penguin Books. pp. 22, 60, 131, 132, 135. ISBN 978-0-14-311265-5. - ↑ Devlin, Keith (2002). The Millennium Problems. New York, New York: Basic Books. pp. 162–163. ISBN 978-0-465-01730-0. https://archive.org/details/millenniumproble00keit. - ↑ "3.1 The Cartographic Process | GEOG 160: Mapping our Changing World". https://www.e-education.psu.edu/geog160/node/1882. - ↑ Albrecht, Jochen. "Maps projections". http://www.geo.hunter.cuny.edu/~jochen/gtech201/lectures/lec6concepts/map%20coordinate%20systems/how%20to%20choose%20a%20projection.htm. - ↑ Jill Saligoe-Simmel,"Using Text on Maps: Typography in Cartography" - ↑ Monmonier, Mark (1996). 2nd.. ed. How to Lie with Maps. Chicago: University of Chicago Press. p. 51. ISBN 978-0-226-53421-3. - ↑ Openstreetmap.org Copyright Easter Eggs Bibliography - Ovenden, Mark (2007). Transit Maps of the World. New York, New York: Penguin Books. ISBN 978-0-14-311265-5. Further reading - Mapmaking - MacEachren, A.M. (1994). Some Truth with Maps: A Primer on Symbolization & Design. University Park: The Pennsylvania State University. ISBN 978-0-89291-214-8. - Monmonier, Mark (1993). Mapping It Out. Chicago: University of Chicago Press. ISBN 978-0-226-53417-6. - Kraak, Menno-Jan; Ormeling, Ferjan (2002). Cartography: Visualization of Spatial Data. Prentice Hall. ISBN 978-0-13-088890-7. - Peterson, Michael P. (1995). Interactive and Animated Cartography. Upper Saddle River, New Jersey: Prentice Hall. ISBN 978-0-13-079104-7. - Slocum, T. (2003). Thematic Cartography and Geographic Visualization. Upper Saddle River, New Jersey: Prentice Hall. ISBN 978-0-13-035123-4. - History - Ralph E Ehrenberg (October 11, 2005). 
Mapping the World: An Illustrated History of Cartography. National Geographic. p. 256. ISBN 978-0-7922-6525-2. https://archive.org/details/mappingworldillu00ehre/page/256.
- J. B. Harley and David Woodward (eds) (1987). The History of Cartography Volume 1: Cartography in Prehistoric, Ancient, and Medieval Europe and the Mediterranean. Chicago and London: University of Chicago Press. ISBN 978-0-226-31633-8.
- J. B. Harley and David Woodward (eds) (1992). The History of Cartography Volume 2, Book 1: Cartography in the Traditional Islamic and South Asian Societies. Chicago and London: University of Chicago Press. ISBN 978-0-226-31635-2.
- J. B. Harley and David Woodward (eds) (1994). The History of Cartography Volume 2, Book 2: Cartography in the Traditional East and Southeast Asian Societies. Chicago and London: University of Chicago Press. ISBN 978-0-226-31637-6.
- David Woodward and G. Malcolm Lewis (eds) (1998). The History of Cartography Volume 2, Book 3: Cartography in the Traditional African, American, Arctic, Australian, and Pacific Societies [Full text of the Introduction by David Woodward and G. Malcolm Lewis]. Chicago and London: University of Chicago Press. ISBN 978-0-226-90728-4.
- David Woodward (ed) (2007). The History of Cartography Volume 3: Cartography in the European Renaissance. Chicago and London: University of Chicago Press. ISBN 978-0-226-90733-8.
- Mark Monmonier (ed) (2015). The History of Cartography Volume 6: Cartography in the Twentieth Century. Chicago and London: University of Chicago Press. ISBN 978-0-226-53469-5.
- Matthew Edney and Mary S. Pedley (eds). The History of Cartography Volume 4: Cartography in the European Enlightenment. Chicago and London: University of Chicago Press.
- Roger J. P. Kain et al. (eds). The History of Cartography Volume 5: Cartography in the Nineteenth Century. Chicago and London: University of Chicago Press.

Meanings
- Monmonier, Mark (1991). How to Lie with Maps. Chicago: University of Chicago Press. ISBN 978-0-226-53421-3. https://archive.org/details/howtoliewithmaps00monm.
- Wood, Denis (1992). The Power of Maps. New York/London: The Guilford Press. ISBN 978-0-89862-493-9. https://archive.org/details/powerofmaps00wood.
https://handwiki.org/wiki/Earth:Cartography
The Nairobi Metropolitan Services (NMS) and KenGen have shelved the construction of the multi-billion Dandora power plant after multiple delays in the tendering process and feasibility studies for the project. Prospective contractors were set to be vetted in December 2020; however, the tendering process was pushed to January 2021 over undisclosed technicalities. The plan will thus be revived after six months (in June 2021), with KenGen saying that it needs that period to conduct feasibility studies for the waste plant. KenGen added that it will also release a report to ascertain whether the project is viable and to disclose its target cost. The project implementer will guide NMS either to proceed with the construction or to abandon the plan entirely.

"We have signed a consultancy contract with a firm to carry out a feasibility study for the Waste-to-Energy project.

"The consultant has started work and the feasibility study will be comprehensive and will help to fast-track the project’s implementation," KenGen noted.

Nairobi MCAs are also wary of the plan and have warned that the multiple delays may culminate in a waste of money and time. NMS Director General Mohamed Badi had said that the plant would solve the county’s garbage problem and contribute to the renewable energy pool.
https://www.kenyans.co.ke/news/63606-nms-postpones-mult-million-nairobi-project-after-setback
Hannah Kosstrin is a dance historian whose work engages dance, Jewish, and gender studies. Her research and teaching interests include dance histories of the United States, Israel and the Jewish diaspora, Latin America, Europe, South Asia, and the African diaspora; gender and queer theory; nationalism and diaspora studies; Laban movement notation and analysis; and digital humanities. Her monograph, Honest Bodies: Revolutionary Modernism in the Dances of Anna Sokolow (Oxford University Press, 2017), examines the transnational circulation of American modernism through Anna Sokolow’s choreography among communist and Jewish currents of the international Left from the 1930s to the 1960s in the United States, Mexico, and Israel. Kosstrin’s work appears in Dance Research Journal, Dance Chronicle, The International Journal of Screendance, Dance on Its Own Terms: Histories and Methodologies (ed. Bales and Eliot, Oxford UP, 2013), and Queer Dance: Meanings and Makings (ed. Croft, Oxford UP, 2017). She is project director for KineScribe, a Labanotation iPad app supported by the National Endowment for the Humanities, Reed College, and Ohio State. Kosstrin is Treasurer of the Dance Studies Association, and a member of the Society of Dance History Scholars Editorial Board and the Dance Notation Bureau Professional Advisory Committee. From 2004 to 2007 she worked with Columbus Movement Movement, which was named one of Dance Magazine’s “25 to Watch” in 2007. She joined the Ohio State faculty in 2014, and previously taught at Reed College, Wittenberg University, and Ohio University Pickerington Center. She holds BA and MA degrees in dance from Goucher College and Ohio State, a PhD in Dance Studies with a minor field in women’s history from Ohio State, and Labanotation Teacher Certification from the Dance Notation Bureau. Kosstrin is a past recipient of the Samuel M. Melton Graduate Fellowship in Jewish Studies.
https://meltoncenter.osu.edu/people/kosstrin.1
Nigeria lost one of its most iconic, outspoken and passionate advocates for social and environmental justice last week with the cruelly early death of Oronto Douglas at the age of forty-eight. Many of us who had been involved in the campaign against Shell in Nigeria in the early nineties worked closely with Oronto, and came to treasure and value his passion for justice, infectious laugh, cheeky smile and boundless energy. His death leaves a void that is difficult to fill for many who knew him, both in Nigeria and abroad. Nnimmo Bassey, who set up ERA with him before becoming the chair of FOE International, wrote last week that: “Oronto’s passing hit me in a deeply emotional manner that cannot be captured in words.” He added that “Men like Oronto Douglas do not die. They may no longer be visible, but their ideas, passions and inspiration live on. He lived a truly unforgettable life”. Oronto was one of the leaders of the anti-Shell campaign in Nigeria, fighting against Shell’s blatant double standards, constant pollution and endemic indifference to the plight of the local communities. Although he was not as well known internationally as the Ogoni writer Ken Saro-Wiwa, who was murdered by the Nigerian military in 1995, Oronto had been pivotal to the struggle against Shell since the early nineties. As James Marriott, Lorne Stockman and I wrote in the book The Next Gulf, published a decade ago, “If Saro-Wiwa had been the elder statesman of the Delta protest movement, Douglas, who had been a junior counsel on Saro-Wiwa’s defence team at his trial, was one of the leaders of the next generation. He was committed to non-violence, with a long standing belief that social justice cannot be achieved without ecological justice too”. Douglas was both a lawyer and an environmental rights activist, and one of the founders of Environmental Rights Action (ERA), the Nigerian affiliate of Friends of the Earth. He was also one of the founders of the pan-Niger Delta Chicoco movement in 1997, named after the organic soil found in the Delta. I interviewed him a month after Chicoco had been formed. This morning I dusted off the old analogue tape of the interview and pressed play. It was reassuring to hear his voice again, but deeply painful to know that this would be the last time. In the interview, I asked Oronto what living with Shell had been like for someone who was born in a village near Oloibiri, where Shell first found oil in the late fifties. “To us the last forty years have been forty years of sorrow, forty years of blood, forty years of desecration of our customs and traditions, forty years of the total elimination of our livelihoods that we hold so dear – I mean our land, our air and our water,” Oronto replied. He continued, his voice getting more animated and emotional: “We are being systematically wiped out by the multinational corporations in Nigeria, principally Shell.” He called on the oil giant to “directly redress the ecological war they have waged on our land for forty years”. The following year Douglas was one of the leaders of the “Kaiama Declaration”, which demanded an end to oil production and military operations in his homeland, Ijawland. It stated: “We are tired of gas flaring, oil spillages, blowouts and being labelled saboteurs and terrorists”. As one of the organisers, Douglas knew he might be signing his own death warrant, and already feared for his safety.
In 2002, he founded the Community Defence Law Foundation (CDLF) in Port Harcourt, Rivers State, whose mission remains to “encourage, support, defend and sponsor the growth of intellectual and material capital with a view to democratize human happiness” by providing “formal and non-formal support to disadvantaged individuals and communities, defending their human and environmental rights so as to aid survival”. A year later, Oronto co-authored what is seen by many as the seminal book on oil in the region, Where Vultures Feast: Shell, Human Rights, and Oil in the Niger Delta, which was published in 2003. The book was typically hard-hitting: “All available evidence suggests that Shell’s destruction of the Niger Delta is informed by a near-total disregard for the welfare of the local people.” Two years later, Douglas surprised some by taking a position within the government of Bayelsa State, but he remained an outspoken advocate for the people of the Niger Delta. In 2005, I interviewed him again for the book The Next Gulf, and we talked about the politics of oil in the Delta. “We are talking about a resource that destroys the present and destroys the future. The sooner we consign it to the dustbin of the energy future, the better for all of us,” he said. As we pointed out in the book, these were powerful words from a government minister of a state which produced a significant proportion of Nigeria’s oil. For nearly the last decade, Oronto was also an influential advisor to Goodluck Jonathan, who went on to become Nigerian President. Jonathan recently lost the country’s general election, and by all accounts Oronto was, even in his failing health, a passionate architect of Nigeria’s first truly peaceful transition. But Oronto had also been fighting stomach cancer for years. Many of us believed that the situation was under control, but cancer has a nasty habit of returning time and again. A lingering question that will never be answered is whether the oil industry that Oronto fought so hard against caused his untimely death. As one person pointed out last week, the “Niger Delta’s carcinogens consumed a very significant victim: Oronto Natei Douglas”. To put this into context, when UNEP concluded its investigation into the impact of oil exploration in the Niger Delta, especially in Ogoniland, in 2011, it found levels of cancer-causing chemicals “up to 900 times above World Health Organisation (WHO) standards”. It has long been noted that the routine and chronic pollution caused by blow-outs, spills, and flaring has produced an unusually high cluster of cancers in the region. In the interview back in 1997, Douglas had called on Shell to act immediately to change its ways. He had spoken passionately: “In the next forty years we want Shell to give due regard to ethics, a company which would accept our customs and traditions, a company that would not look down on us as sub-human beings, a company that would not practice double standards and environmental racism on our land.” He had added that “we would sincerely want a relationship that ensures that we live and not die. Our fight today is a fight for survival.” Tragically, last week Oronto lost his own seven-year fight for survival. There will be many who blame Shell and the other oil companies, with their chronic pollution, for his death. Apparently, the week before he died, after being told by doctors that they could do no more for him, Oronto said simply, “I have nothing to regret.” How many of us will say that on our deathbeds too?
Rest in Peace, Oronto.
https://priceofoil.org/2015/04/13/men-like-oronto-douglas-die/
National Realty Investment Advisors (NRIA), a Secaucus-based developer, kicked off the construction of The Station, a 10-story luxury apartment complex at 4901 Bergenline Ave., with Mayor Gabriel Rodriguez and U.S. Senator Bob Menendez on hand to support the project at a ceremonial groundbreaking. The $49 million mixed-use development will have 97 luxury one- and two-bedroom apartments, extensive amenities, a rooftop deck, and ground-floor retail. The ten-story high-rise will be the tallest building in the Bergenline Avenue business district and the only residential project of its size there. The developer will construct 80 underground parking spaces and 25 spaces off-site.

“NRIA will deliver several multi-family projects on the Gold Coast,” NRIA’s Glenn LaMattina said. “The Station is our second project, with many more planned in the years ahead. It will be the tallest and most luxurious building in West New York, and will be the most accessible to public transportation. The Station will give West New York a product comparable to other Gold Coast towns, like what you’d find across the river but more attractively priced.”

Public transit is the key

The development will rise at the former site of a Duane Reade pharmacy near the Union City border, across the street from the Bergenline Light Rail station. Public officials who spoke at the event were optimistic about its proximity to the light rail station. Through the “Transit Village Initiative,” the state Department of Transportation and New Jersey Transit call for pedestrian-friendly growth near public transit infrastructure. The goal is to increase public transit ridership and create less reliance on motor vehicles, in order to ease burdens on the state’s transportation sector and the environment.

“This is a testament to positive development, and being conscious not only to our community but to the quality of life of our residents, which is the most important part of public service,” Rodriguez said. “To see this come to life today is a pleasure as both a mayor and a lifelong West New York resident.”

Menendez, whose offices were once across the street from the development, said, “These apartments are exactly what I envision with transit-oriented development. Families and young people who reside here will be able to roll out of bed and catch a train to other parts of the county, to the metropolitan area, and beyond.”

Big residential projects on Bergenline Avenue had been inevitable since the creation of a light rail station there in 2006. Other developments near public transit may be on the horizon.
https://hudsonreporter.com/2019/11/19/ground-broken-on-first-high-rise-in-wnys-retail-district/
OMOWUMI ETIKO is the CEO of Nusafiri, a unique travel concierge company focused solely on the client and on making travel dreams a reality. With a background in administration, human resources, and learning and development, she focuses on helping her clients experience the art of travel as a seamless and stress-free means of learning, expanding their minds and taking in new cultures, tastes, sights and sounds. She works with a team that includes international partners, event planners, venues and more to provide end-to-end travel and logistics and bespoke planning for destination events, from inception to full execution. She is also focused on expanding tourism opportunities in uncommon places, particularly in uncharted Nigeria and Africa as a whole, and on creating experiences that match the appetites and budgets of her clients.
https://tourismvirtualsummit.com/ts18-luxury-travel/
Initial approaches to recognizing handwritten text concentrated primarily on criteria for matching the input pattern against reference patterns, and on selecting the reference symbol that best matched the input pattern as the recognized pattern. This family of matching criteria is often called "shape matching". A wide variety of shape-based recognition systems are known in the art. Representatives of such shape-based recognition systems are described in U.S. Pat. Nos. 3,930,229; 4,284,975 and 4,653,107. Common to all shape-based systems is that, irrespective of the matching criteria used, the information is processed locally, and that such systems are not very accurate at resolving ambiguities among reference patterns which exhibit shape similarity. The use of character context or linguistic rules as additional information for recognizing characters has been extensive in a variety of fields, such as cryptography and natural language processing. Systems introducing global context and syntax criteria have been offered to improve shape-based recognition, in order to distinguish among members of a "confusion set." A system representative of this approach is described in U.S. Pat. No. 4,754,489. The system of this patent uses conditional probabilities of English characters appearing after a given character sequence, and probabilities of groups of English characters, to suggest syntax rules. Another representative system is described in U.S. Pat. No. 4,718,102, which is directed to ideogram-based languages such as Kanji, in which a shape-based algorithm producing a confusion set is disambiguated by simulating human experience. The disambiguating routines are based on actual studies of particular characters. It appears that the approaches taken heretofore resulted in systems suffering from the following problems: they are not sufficiently general to cover different languages; they require computationally prohibitive time and memory resources; they do not include the shape information in a statistically meaningful fashion; they are not adaptive to texts of varying linguistic and syntactic content; and they operate as post-processes, not contributing to the segmentation of input patterns.
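The context-based disambiguation described above can be illustrated with a toy example. The following is a minimal sketch, illustrative only and not the method claimed in any of the patents cited above; the corpus, confusion set and function names are all hypothetical. A bigram model of character context selects, from a shape matcher's confusion set, the candidate that is most probable after the preceding character:

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each character follows each other character."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1
    return counts

def disambiguate(prev_char, confusion_set, counts):
    """Pick the confusion-set member with the highest conditional
    probability P(candidate | prev_char). Shape scores are assumed
    equal here, so context alone decides; ties resolve arbitrarily."""
    total = sum(counts[prev_char].values()) or 1
    return max(confusion_set, key=lambda c: counts[prev_char][c] / total)

# Hypothetical training text.
corpus = "the quick brown fox jumps over the lazy dog " * 100
model = train_bigrams(corpus)

# Suppose the shape matcher could not separate a glyph between 'h' and
# 'b' after the letter 't'; bigram context strongly favours 'h' ("the").
print(disambiguate("t", {"h", "b"}, model))  # -> 'h'
```

Even this toy model exhibits the limitations listed above: it is language-specific, it treats shape and context separately rather than within one statistical framework, and it runs strictly as a post-process after segmentation.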
Life has many seasons, with its rise and fall, ups and downs, sunshine, and storms. Each season has its unique qualities, experiences, and imparted growth. Each one stretches and shapes you, some more than others. When I think about the seasons of life, I tend to divide them into the following categories:

The Young Years
The Elementary Years
The Middle School Years
High School
College-Age Years
The Single Years
Young(er)/Newly Married Years
The Family Years
The Empty Nest Years
The “Golden” Years (although a patient said to me this week: “Whoever called them the golden years obviously didn’t live them. There’s nothing “golden” about getting old.” 🙂 )

Of course, your divisions may take on a slightly different, more individualized form, but my thought in dividing them this way was to focus on the major growth periods caused by extreme change in life. These fit that definition. If you’re like me, even reading the caption that categorizes the specific time period brings images, thoughts, and emotions to the surface. Each of these major stages of our lives brings large amounts of growth, varying degrees of change, and usually major social, economic, and emotional adjustment.

While talking with my mom the other day, she shared how challenging it has been to transition into the empty nesting years. After twenty-five-plus years devoted to raising her daughters, she finds herself suddenly in a completely different stage. As we continued to talk, I found myself identifying with some of her feelings, even as I have been navigating a different life transition recently, from the Single Years to the Young Married Years. This change in my personal life season has resulted in job changes, a move across the country, learning to live with someone 24/7, and acquiring many new communication skills. As all these changes during a transition between life seasons occur, the resulting emotional turmoil can seem to emulate the stages of grief.

Grief can be such a “dirty” word to some — a word reserved only for a tragic death or devastating diagnosis. But I would argue that grief in and of itself is actually vital, when processed appropriately, in its ability to help us move from one season of life to another. Although typically thought of as a cycle for relatives left behind after a loved one’s death, Elisabeth Kübler-Ross’s theory of the “Five Stages of Grief” can be applicable to other circumstances in life as well. Her stages are the following: denial, anger, bargaining, depression, and acceptance.

To preface, I think that each season of life transition may not go through each grief stage exactly, but what I do see is varying shades of each emotion as people transition from one to the other. For example, your transition from high school to college may have been easy breezy as you settled right into campus life; however, others may have fallen into a cycle of grieving, which could look like depression (late night crying sessions, coping with food = freshman fifteen) or denial (going home every weekend to keep up high school routines with friends still there) as the adjustment to a whole new way of life overwhelms them. Or for the young married, it may be expressed as anger toward their new spouse as they attempt to process their emotions, or perhaps denial comes into play as selfishness attempts to convince the new spouses that none of their “while single” habits need adjusting and that no compromising is required. How about the older years?
People tend to compensate mostly with denial – attempting things such as climbing ladders to put up Christmas lights, continuing to do home maintenance, or postponing doctor’s visits — all in an attempt to delay the inevitable: getting old. Perhaps you will see anger, depression, or bargaining surface as independence is slowly whittled away.

As life changes, we change. Growing, stretching, falling down, getting back up. Tragedies strike, businesses fail, career paths suddenly curve in a different direction. These transitions aren’t easy, and if we deny our natural reaction to grieve the loss of dreams, ideas, expectations, goals, and hopes just as much as we would broken friendships, destroyed marriages, prodigal children, losses of loved ones, and the seeming finality of death, we do ourselves such a disservice. I believe investing in our own unique grief process during the seasons of life and cultivating our ability to allow ourselves to truly feel the hurt, the pain, the disappointment, the confusion, the sadness is absolutely essential to shaping us into the person we are meant to become.

So Cry Loudly. Laugh Freely. Rejoice Openly. Be Angry. Acknowledge your Disappointment. Bargain Relentlessly. Grieve Intentionally.

And one day: Accept Fully.

Accept the loss, the sting, the hurt, the pain, the turmoil and let it mold you, make you into the person you are becoming — the one who is not destroyed by the changing of seasons or by twists and turns in life, but the one who persevered through them, the one who now stands taller, wiser, and more willing to share their own story with others — imparting freely that when the leaves change and winter comes, the promise of spring and summer will forever remain, just as greater purposes and plans lie ahead as you navigate the seasons of grief in life.

What do you think? Have you experienced grief symptoms with life transitions?
http://www.jessicaachance.com/2015/12/grief-through-the-seasons-of-life/
1. Introduction {#sec0005}
===============

High-resolution computed tomography (HRCT) of the lungs is the best non-invasive method to assess the lung parenchyma [@bib0005]. Even subtle changes in the lung tissue can be demonstrated in HRCT images thanks to the thin slices and the high spatial frequency reconstruction algorithm. Since its introduction in the 1980s, the examination technique has continuously evolved. Nowadays, multi-detector computed tomography (MDCT) enables continuous thin slices and multiplanar reconstructions [@bib0010], [@bib0015].

The publication of the ATS/ERS/JRS/ALAT guideline for the diagnosis of idiopathic pulmonary fibrosis (IPF) in 2011 emphasized the importance of HRCT in interstitial lung disease (ILD) [@bib0020]. The identification of a typical HRCT appearance is sometimes sufficient to provide a definite diagnosis. In the appropriate clinical context, if the HRCT findings meet the criteria for Usual Interstitial Pneumonia (UIP), IPF can be confidently diagnosed, obviating the need for a surgical lung biopsy [@bib0020], [@bib0025], [@bib0030]. The 2011 guideline, with its HRCT criteria for "UIP Pattern", "Possible UIP Pattern" and "Inconsistent with UIP Pattern", was an important milestone in standardizing the assessment of IPF using HRCT [@bib0020]. The ATS/ERS/JRS/ALAT guideline was updated in 2018. In the 2018 UIP criteria, the HRCT is classified into four categories: "UIP", "Probable UIP", "Indeterminate for UIP" and "Alternative Diagnosis" [@bib0030].

Interpretation of HRCT, in IPF as well as in other ILD, relies on the identification of typical parenchymal patterns and their distribution within the lungs [@bib0030], [@bib0035], [@bib0040]. However, most patterns are nonspecific, there is overlap in the radiological appearance of different diseases, and the same disease may show many different appearances. This complexity underlines the importance of multidisciplinary collaboration for the correct interpretation of HRCT in IPF, but also in other ILD [@bib0030], [@bib0045].

From the radiologists' perspective, a consistent assessment of the typical HRCT patterns is crucial for accurate interpretation of HRCT. Consequently, several studies have investigated the intra- and interobserver variation in HRCT. Previous studies have focused on specific lung diseases and specific patterns, for example the 2011 UIP criteria, interstitial lung diseases, bronchiectasis and asbestos-related changes [@bib0050], [@bib0055], [@bib0060], [@bib0065], [@bib0070], [@bib0075], [@bib0080], [@bib0085]. However, to the best of our knowledge, no study has simultaneously addressed the interobserver variability for the wide range of typical HRCT patterns. The difference in interobserver variability between specific patterns remains unknown.

The purpose of the present study was, therefore, to quantify the interobserver variability among the most frequently encountered parenchymal patterns in HRCT, and to compare the interobserver variability in the application of the 2011 and 2018 UIP criteria.

2. Material and methods {#sec0010}
=======================

The study was performed in three phases. The first phase was the creation of an HRCT image databank including several examples each of a predefined list of typical parenchymal patterns. Subsequently, two readings of the databank were performed to assess the interobserver variability for the parenchymal patterns and for the 2011 and 2018 UIP criteria.
The regional research ethics board approved the study protocol and waived the informed consent requirement.

2.1. HRCT image databank {#sec0015}
------------------------

Because of the uneven distribution of parenchymal patterns in any clinical cohort, a specially created databank is necessary for the analysis of interobserver variability of the typical HRCT patterns. An anonymous HRCT databank was created for the study, consisting of 126 HRCT examinations with several examples of each of the typical patterns, and also examples of examinations demonstrating no pathological parenchymal patterns. The inclusion in the databank followed a predefined list of patterns. The inclusion criterion was a continuous slice HRCT examination demonstrating any of the predefined patterns. The patterns included in the databank were perilymphatic nodules, tree-in-bud nodules, other centrilobular nodules, ground glass opacities, thickened interlobular septa, intralobular lines, septations and lines, consolidation with and without air bronchogram, crazy paving, emphysema, honeycombing and other cystic patterns. Exclusion criteria were contrast-enhanced examination and considerable artifacts, for example artifacts from respiratory motion or metal implants. The inclusion of examinations with each pattern was discontinued when the predefined number of examinations demonstrating the specific pattern was obtained. Examples of patterns included in the databank are shown in [Fig. 1](#fig0005).

Fig. 1. Examples of parenchymal patterns. A. Honeycombing. B. Emphysema. C. Reticular pattern. D. Consolidation. E. Tree-in-bud (arrow). F. Ground glass.

One observer (with 4 years of experience of thoracic imaging) performed the inclusion in the databank. The databank was retrospectively created by reviewing all HRCT examinations performed during 39 randomly selected months between 2011 and 2016 in the region of Örebro, Sweden, which consists of one university hospital and two smaller associated hospitals. Inclusion during randomly selected months was part of the anonymization process. A continuous slice HRCT was defined as a thoracic scan in the supine position with breath hold at full inspiration, with continuous ≤ 1 mm images reconstructed using a sharp kernel. All examinations were of normal radiation dose (ref-mAs ∼150, ref-kV 120). The CTDI was not available in the anonymized databank. The studies were acquired with Philips Ingenuity CT (n = 4), Ingenuity Core 128 (n = 13), Brilliance 16 (n = 1), Brilliance 40 (n = 7) or with Siemens Biograph 40 (n = 12), Somatom Definition AS (n = 55), Somatom Definition AS+ (n = 2) and Somatom Definition Flash (n = 32). Reconstruction algorithms were L (n = 25), B70f (n = 68), I70f\1 (n = 20), I70f\2 (n = 11) or I70f\3 (n = 2). The images were acquired at 100 kVp (n = 20), 120 kVp (n = 97) or 140 kVp (n = 9).

2.2. Image analysis {#sec0020}
-------------------

Two readers (radiologists with 4 and 6 years of experience of thoracic imaging) independently evaluated the 126 HRCT examinations in the databank in two separate readings. In the first session, the readers noted all identifiable patterns in each HRCT, using a score sheet with the same list of patterns as for the creation of the databank. The readers also assessed whether the HRCT findings met the criteria for "UIP Pattern" according to the 2011 ATS/ERS/JRS/ALAT criteria [@bib0020]. Scans classified as "Possible UIP Pattern" or "Inconsistent with UIP Pattern" were not separated.
In a second reading, separated by more than one year from the first reading, the readers classified the HRCTs according to the 2018 UIP criteria update [@bib0030]. In this classification, each HRCT was assigned to one of the four classes "UIP", "Probable UIP", "Indeterminate for UIP" or "Alternative Diagnosis". The first observer was also the one who had created the databank; the creation of the databank and the first reading were separated by at least three months.

2.3. Statistical analysis {#sec0025}
-------------------------

The interobserver variability was evaluated using Cohen's kappa with 95 % confidence intervals (CI). The kappa values were computed for each pattern separately with the binary classes "pattern existent" vs. "pattern non-existent" in an examination. Interlobular septations, intralobular lines and the combination of septations and lines were analyzed separately and grouped as "reticular pattern". Consolidations with air bronchogram and consolidations without air bronchogram were also analyzed grouped as "consolidation". The kappa values were graded using the classification proposed by Landis and Koch [@bib0090]: k < 0.00, no agreement; 0.00–0.20, slight; 0.21–0.40, fair; 0.41–0.60, moderate; 0.61–0.80, substantial; 0.81–1.00, near perfect. The 95 % CIs of the kappa values for the 2011 and 2018 UIP criteria were compared for overlap. Non-overlapping 95 % CIs were considered statistically significant differences at the p = 0.05 level. For each reader, the contingency table of HRCT classified as UIP pattern using the 2011 and 2018 criteria was analyzed with the McNemar test.

3. Results {#sec0030}
==========

3.1. Interobserver variation in pattern assessment {#sec0035}
--------------------------------------------------

In the first reading, two observers independently evaluated the examinations and noted all identifiable patterns in each examination. The interobserver variability for the different patterns is shown in [Table 1](#tbl0005).

Table 1. Interobserver variability for parenchymal patterns.

| Pattern | n* | Agreement** | Kappa (95 % CI) | Kappa class |
|---|---|---|---|---|
| **Nodular pattern** | | | | |
| Perilymphatic nodules | 9.5 | 123/126 (98 %) | 0.83 (0.64–1.02) | Near perfect |
| Tree-in-bud nodules | 15.0 | 122/126 (97 %) | 0.85 (0.70–0.99) | Near perfect |
| Non-tree-in-bud nodules | 6.5 | 119/126 (94 %) | 0.44 (0.04–0.84) | Moderate |
| **Ground glass** | 28.5 | 101/126 (80 %) | 0.44 (0.24–0.64) | Moderate |
| **Reticular pattern** | 57.5 | 107/126 (85 %) | 0.70 (0.57–0.82) | Substantial |
| Interlobular septations | 9.5 | 121/126 (96 %) | 0.72 (0.47–0.96) | Substantial |
| Intralobular lines | 19.5 | 101/126 (80 %) | 0.28 (0.03–0.53) | Fair |
| Septations + lines | 43.0 | 104/126 (83 %) | 0.61 (0.46–0.76) | Substantial |
| **Crazy paving** | 13.5 | 113/126 (90 %) | 0.47 (0.19–0.74) | Moderate |
| **Consolidation** | 43.0 | 110/126 (87 %) | 0.72 (0.59–0.85) | Substantial |
| Without air bronchogram | 27.5 | 107/126 (85 %) | 0.56 (0.38–0.74) | Moderate |
| With air bronchogram | 19.5 | 119/126 (94 %) | 0.79 (0.64–0.94) | Substantial |
| **Decreased attenuation** | | | | |
| Emphysema | 24.0 | 110/126 (87 %) | 0.61 (0.44–0.79) | Substantial |
| Honeycombing | 27.0 | 118/126 (94 %) | 0.81 (0.69–0.94) | Near perfect |
| Other cystic patterns | 6.0 | 124/126 (98 %) | 0.83 (0.59–1.07) | Near perfect |
| **Normal** | 16.0 | 124/126 (98 %) | 0.93 (0.83–1.03) | Near perfect |

There was near perfect agreement as to whether the examination was normal or contained one or more patterns (kappa 0.93). For the different parenchymal patterns, there was a large variation in interobserver agreement.
3. Results {#sec0030}
==========

3.1. Interobserver variation in pattern assessment {#sec0035}
--------------------------------------------------

In the first reading, two observers independently evaluated the examinations and noted all identifiable patterns in each examination. The interobserver variability for the different patterns is shown in [Table 1](#tbl0005){ref-type="table"}.

Table 1. Interobserver variability for parenchymal patterns.

| Pattern | n\* | Agreement\*\* | Kappa (95 % CI) | Kappa class |
| --- | --- | --- | --- | --- |
| **Nodular pattern** | | | | |
| Perilymphatic nodules | 9.5 | 123/126 (98 %) | 0.83 (0.64-1.02) | Near perfect |
| Tree-in-bud nodules | 15.0 | 122/126 (97 %) | 0.85 (0.70-0.99) | Near perfect |
| Non-tree-in-bud nodules | 6.5 | 119/126 (94 %) | 0.44 (0.04-0.84) | Moderate |
| **Ground glass** | 28.5 | 101/126 (80 %) | 0.44 (0.24-0.64) | Moderate |
| **Reticular pattern** | 57.5 | 107/126 (85 %) | 0.70 (0.57-0.82) | Substantial |
| Interlobular septations | 9.5 | 121/126 (96 %) | 0.72 (0.47-0.96) | Substantial |
| Intralobular lines | 19.5 | 101/126 (80 %) | 0.28 (0.03-0.53) | Fair |
| Septations + lines | 43.0 | 104/126 (83 %) | 0.61 (0.46-0.76) | Substantial |
| **Crazy paving** | 13.5 | 113/126 (90 %) | 0.47 (0.19-0.74) | Moderate |
| **Consolidation** | 43.0 | 110/126 (87 %) | 0.72 (0.59-0.85) | Substantial |
| Without air bronchogram | 27.5 | 107/126 (85 %) | 0.56 (0.38-0.74) | Moderate |
| With air bronchogram | 19.5 | 119/126 (94 %) | 0.79 (0.64-0.94) | Substantial |
| **Decreased attenuation** | | | | |
| Emphysema | 24.0 | 110/126 (87 %) | 0.61 (0.44-0.79) | Substantial |
| Honeycombing | 27.0 | 118/126 (94 %) | 0.81 (0.69-0.94) | Near perfect |
| Other cystic patterns | 6.0 | 124/126 (98 %) | 0.83 (0.59-1.07) | Near perfect |
| **Normal** | 16.0 | 124/126 (98 %) | 0.93 (0.83-1.03) | Near perfect |

[^1]

There was a near perfect agreement as to whether the examination was normal or contained one or more patterns, kappa 0.93. For the different parenchymal patterns, there was a large variation in interobserver agreement. Tree-in-bud nodules, perilymphatic nodules, honeycombing and other cystic patterns showed near perfect agreement, while intralobular lines showed the lowest interobserver agreement.

3.2. Interobserver variation in 2011 and 2018 UIP criteria {#sec0040}
----------------------------------------------------------

In addition to the identifiable parenchymal patterns, the observers also evaluated whether the findings met the criteria for "UIP Pattern" according to the 2011 UIP criteria in the first reading. In the second reading, the HRCTs were classified according to the 2018 UIP criteria in the four categories "UIP", "Probable UIP", "Indeterminate for UIP" and "Alternative diagnosis". The kappa value for the four-class interobserver agreement in the 2018 UIP criteria between readers 1 and 2 was 0.62, substantial agreement. The confusion matrix for the four-class classification is shown in [Table 2](#tbl0010){ref-type="table"}.

Table 2. Confusion matrix, 2018 UIP criteria.

| Reader 1 \\ Reader 2 | UIP | Probable UIP | Indeterminate for UIP | Alternative diagnosis |
| --- | --- | --- | --- | --- |
| UIP | 12 | 0 | 1 | 1 |
| Probable UIP | 1 | 2 | 2 | 0 |
| Indeterminate for UIP | 5 | 1 | 5 | 6 |
| Alternative diagnosis | 1 | 1 | 2 | 86 |

[^2]

The kappa values using the 2011 and 2018 UIP criteria were similar, see [Table 3](#tbl0015){ref-type="table"}. In the 2018 criteria assessment, dichotomization at two different levels did not reveal any significant differences in the agreement.

Table 3. Inter-reader variation in UIP assessment.

| Criteria and dichotomization | Kappa (95 % CI) |
| --- | --- |
| 2011 criteria (UIP Pattern vs. Possible UIP or Inconsistent with UIP) | 0.58 (0.32-0.83)[a](#tblfn0005){ref-type="table-fn"} |
| 2018 criteria (UIP Pattern vs. probable UIP, indeterminate for UIP or alternative diagnosis) | 0.69 (0.49-0.88)[a](#tblfn0005){ref-type="table-fn"} |
| 2018 criteria (UIP pattern or probable UIP vs. indeterminate for UIP or alternative diagnosis) | 0.66 (0.47-0.84)[a](#tblfn0005){ref-type="table-fn"} |

[^3][^4]

The 95 % confidence intervals of the kappa values overlapped. The null hypothesis of equal kappa values using the 2011 and 2018 UIP criteria could not be rejected; there was no statistically significant difference in agreement using the 2011 and 2018 criteria.

3.3. Consistency between 2011 and 2018 UIP criteria {#sec0045}
---------------------------------------------------

Reader 1 classified nine scans as "UIP Pattern" using the 2011 criteria and 14 scans as "UIP" using the 2018 criteria. Reader 2 classified 17 scans as "UIP Pattern" using the 2011 criteria and 19 scans as "UIP" using the 2018 criteria. The confusion matrices for readers 1 and 2 are shown in [Tables 4a, 4b](#tbl0020){ref-type="table"}. Using the McNemar test, there was no statistically significant difference in the number of HRCTs classified as UIP, for either reader 1 (p = 0.13) or reader 2 (p = 0.73).

Table 4a. Confusion matrix, 2011 vs. 2018 UIP criteria, reader 1.

| | 2011 UIP pattern | 2011 Possible UIP or Inconsistent with UIP pattern |
| --- | --- | --- |
| 2018 UIP | 8 | 6 |
| 2018 Probable UIP, Indeterminate for UIP or Alternative diagnosis | 1 | 111 |

Table 4b. Confusion matrix, 2011 vs. 2018 UIP criteria, reader 2.

| | 2011 UIP pattern | 2011 Possible UIP or Inconsistent with UIP pattern |
| --- | --- | --- |
| 2018 UIP | 14 | 5 |
| 2018 Probable UIP, Indeterminate for UIP or Alternative diagnosis | 3 | 104 |

[^5]
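The exact McNemar computation on the discordant counts in Tables 4a and 4b can be reproduced in a few lines (a sketch, not the authors' code):

```python
from math import comb

# Exact two-sided McNemar test on the discordant cell counts b and c
# (a simple sketch; ties and continuity corrections are ignored).
def mcnemar_exact(b, c):
    n = b + c
    k = max(b, c)
    p = 2 * sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(p, 1.0)

print(mcnemar_exact(6, 1))  # reader 1, Table 4a: 0.125, reported as p = 0.13
print(mcnemar_exact(5, 3))  # reader 2, Table 4b: 0.7265625, reported as p = 0.73
```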
4. Discussion {#sec0050}
=============

In the present study, the interobserver variation in HRCT reading was quantified for the most commonly encountered parenchymal patterns, and for the UIP criteria published in 2011 and 2018. To the best of our knowledge, this is the first study that quantifies the substantial variation in interobserver agreement between the different patterns. The interobserver agreement for the different patterns ranged from fair to near perfect. Tree-in-bud nodules, perilymphatic nodules and honeycombing showed near perfect agreement, which suggests that these patterns are more easily identified. Lung diseases that predominantly show these patterns, for example bronchiolitis and sarcoidosis \[[@bib0095],[@bib0100]\], might therefore have a better interobserver agreement than other lung diseases. The lowest interobserver agreement was seen for intralobular lines as an isolated finding (kappa 0.28). A possible explanation is the difficulty in distinguishing between subtle fibrotic changes and normal hypoventilation when only supine images are used. The agreement for reticular pattern including intralobular lines was considerably higher. Kappa values for interobserver agreement cannot be directly compared between different studies using different cohorts. However, the interobserver agreement in the present study is similar to kappa values found in several previous studies. For example, previous studies have shown kappa values for honeycombing between 0.37 and 0.84, compared to 0.81 in the present study \[[@bib0080],[@bib0105]\]. Of particular interest is the interreader variability in the application of the criteria for UIP according to the ATS/ERS/JRS/ALAT guidelines. In the present study, there was no significant difference between the agreement using the 2011 criteria and the 2018 update. The kappa value of 0.58 for the 2011 UIP criteria in the present study is also comparable to the results in several other reports. In a large study, Walsh et al. found interobserver variability with kappa values between 0.36 and 0.41 for the same binary score as in the present study, and between 0.45 and 0.51 for weighted kappa including the class "Possible UIP" \[[@bib0050]\]. In contrast, a near perfect interreader agreement (kappa 0.92) was seen in one study \[[@bib0105]\]. The presence of honeycombing on HRCT is a necessary, but not sufficient, condition for UIP Pattern; the distribution should have subpleural and basal predominance, and findings suggestive of another diagnosis should not be present \[[@bib0030],[@bib0045]\]. An interesting finding in the present study was that, although the agreement for honeycombing was near perfect (kappa 0.81), the agreement for the UIP criteria was lower, moderate to substantial (kappa 0.58-0.69), see [Table 3](#tbl0015){ref-type="table"}. This finding suggests that the interobserver variation in the UIP criteria to a large degree relates to the distribution of the honeycombing within the lungs and to signs suggestive of another diagnosis. The present study underlines that, although diagnostic criteria are clearly stated, the application of these criteria remains an area of subjective image interpretation. An additional challenge is that, besides the 2018 ATS/ERS/JRS/ALAT update \[[@bib0030]\], the Fleischner Society published a white paper on IPF diagnosis the same year \[[@bib0045]\]. Although largely the same, the wordings of the diagnostic categories are not identical, and the different wordings may result in slightly different interpretations.
Even more importantly for clinical management, the two documents reach different conclusions regarding the need for surgical lung biopsy in patients whose HRCT demonstrates a "Probable UIP" pattern \[[@bib0030],[@bib0045]\]. It is always necessary to correlate the imaging findings with the clinical findings. Considering the interobserver variations in the present and previous studies, this is especially true in IPF. In IPF, multi-disciplinary collaboration including radiologists is therefore essential for correct management of the patients \[[@bib0020],[@bib0030],[@bib0045]\].

The inclusion of two readers from the same institution is both a limitation and an advantage in the present study. More readers would have improved the generalizability. On the other hand, it is an advantage that both readers were thoracic radiologists used to reporting HRCT clinically using the same nomenclature. Thereby, the differences in observed kappa values between the patterns are more likely to be caused by inherent characteristics of the patterns than by local interpretations of the definitions of terms. With the exception of UIP, we only evaluated patterns and not the diagnostic interpretation of the HRCTs in the study. From a clinical point of view, the diagnosis is obviously more important than the pattern. However, since pattern description is the first step in the interpretation of HRCT, the interobserver variation in the pattern description remains important. The uneven distribution of parenchymal patterns in HRCT in a clinical context necessitated a separate HRCT databank, created by one of the readers, with selected cases for the evaluation of the interreader variation. The reader variations in clinical reading might be larger, since unclear cases might not be included in the databank.

In conclusion, there are relatively large interobserver variations in the HRCT assessment of certain patterns and of the 2011 and 2018 UIP criteria. The interreader variations are important to keep in mind, especially when there is discordance between the clinical context and the HRCT report.

Declaration of Competing Interest
=================================

The authors declare that there is no conflict of interest. This work has been supported by the Örebro County Council \[grant numbers OLL-675951, OLL-713971\].

[^1]: Note: \* Average number of cases is the sum of the two observers' findings divided by two. \*\* The number of HRCTs, out of the total 126, in which the two observers agreed on whether a pattern was present or not.

[^2]: Note: UIP -- usual interstitial pneumonia.

[^3]: Note: UIP -- usual interstitial pneumonia. CI -- confidence interval.

[^4]: All confidence intervals overlap, indicating no statistically significant differences (p \> 0.05).

[^5]: Note: UIP -- usual interstitial pneumonia.
The food to go market was expected to exceed £20bn this year but, as a result of coronavirus, the market has contracted by £6bn. Despite this, the sector is better insulated than other parts of the wider eating out market and, as such, is set to gain share this year, as ‘to go’ remains more popular than ‘dine in’. Following a decade of strong turnover growth, physical expansion in the FTG sector had peaked before coronavirus. The restricted movement enforced by the pandemic has resulted in a forecast of -28% for the full year 2020. However, FTG did not suffer as deeply as other channels and, as such, a swifter recovery is expected. The Lumina Intelligence Food to Go After Lockdown Report reflects the significant challenges brought about by coronavirus and provides a full revision of forecasted market performance by sub-channel; key operator developments; an update on consumer and shopper trends; as well as key growth opportunities and future outlook for the sector.

Executive summary

Market update
- What is the value of the market in light of coronavirus?
- What is the revised growth forecast for 2020?
- How does that differ by sub-channel?
- What is food to go’s share of the total eating out market?
- What is driving the changes?

Competitive landscape
- Who are the top ten players in the FTG market?
- How have they performed?
- Who are the key players by sub-channel?
- How did FTG operators respond to the coronavirus pandemic?
- What NPD has been introduced in 2020?

Consumer insight
- How are consumer trends changing and how has this been impacted by coronavirus?
- Who is the food to go consumer?
- Why do they buy FTG?
- Where is food to go purchased and where is it consumed?
- The food to ‘go home’ opportunity

Growth opportunities
- What are the seven key trends impacting the food to go market?
- How have these trends evolved?
- What are the growth opportunities, post pandemic?
- How can operators and retailers maximise these trends and stay relevant in 2020 and beyond?

Future outlook
- How will the FTG sector evolve in the next three years?
- What are the Lumina Intelligence growth forecasts to 2023?
- How does this vary by sub-channel?
- How will food to go bounce back from the 2020 decline?
- What will be the key factors to shape future market growth?

Methodology
- 72,000 online surveys (6,000 per month) through Lumina Intelligence’s Eating Out Panel, FY 2018-2019.
- Insight from Lumina Intelligence’s Channel Pulse: 1,000 weekly interviews with consumers engaged with the UK food and drink market. Surveys conducted between 23 March and 6 September 2020.
- Extracts from the Lumina Intelligence Operator Data Index, a comprehensive database of over 700 UK hospitality operators.
https://store.lumina-intelligence.com/product/food-to-go-after-lockdown-downturn-and-recovery/
Base excision repair (BER) may become less effective with ageing resulting in accumulation of DNA lesions, genome instability and altered gene expression that contribute to age-related degenerative diseases. The brain is particularly vulnerable to the accumulation of DNA lesions; hence, proper functioning of DNA repair mechanisms is important for neuronal survival. Although the mechanism of age-related decline in DNA repair capacity is unknown, growing evidence suggests that epigenetic events (e.g., DNA methylation) contribute to the ageing process and may be functionally important through the regulation of the expression of DNA repair genes. We hypothesize that epigenetic mechanisms are involved in mediating the age-related decline in BER in the brain. Brains from male mice were isolated at 3-32 months of age. Pyrosequencing analyses revealed significantly increased Ogg1 methylation with ageing, which correlated inversely with Ogg1 expression. The reduced Ogg1 expression correlated with enhanced expression of methyl-CpG binding protein 2 and ten-eleven translocation enzyme 2. A significant inverse correlation between Neil1 methylation at CpG-site2 and expression was also observed. BER activity was significantly reduced and associated with increased 8-oxo-7,8-dihydro-2-deoxyguanosine levels. These data indicate that Ogg1 and Neil1 expression can be epigenetically regulated, which may mediate the effects of ageing on DNA repair in the brain.
https://cris.maastrichtuniversity.nl/en/publications/the-ageing-brain-effects-on-dna-repair-and-dna-methylation-in-mic
It has already profoundly altered the political structure of society. These social, technological and market changes in the global environment of firms have left Fordism high and dry. Sloan repositioned the car companies to create a five-model product range from Chevrolet to Cadillac and established a radically decentralized administrative control structure. Consequently, along with reduced economies of scale and scope, average firm size has been falling for the last twenty years. Labor has also been feminized. Both were efficient, just for different purposes. During the 1970s-80s, the mass market which stabilised the Fordist system was breaking up. This meant that there was a strong drive for volume. The Fordist production system has four key elements: the separation of different work tasks between different groups of workers, in which unskilled workers execute simple, repetitive tasks and skilled workers undertake functions related to research, design, marketing and quality control. There was no prospect of going up the hierarchy ladder and therefore no motivation. Post-Fordist companies market items to niche markets and do not target mass consumption as they did before. During the 1970s, however, its underlying crisis tendencies became more evident. The system is named in honor of Henry Ford, and it is employed in social, economic, and management theory concerning consumption, production, working conditions and other associated concepts, particularly regarding the 20th century. The enormous impact on social tastes and expectations of placing motor vehicles and other products within the reach of the mass of people has meant that consumers have become much more sophisticated; they are no longer satisfied with standard products. The hallmark of his system was standardization: standardized components, standardized manufacturing processes, and a simple, easy to manufacture and repair standard product. Another feature is the existence of political regimes that are keen on innovation and global competitiveness and which adopt market-friendly and flexible forms of economic governance. Post-Fordism: the era after Fordism is commonly called Post-Fordist or Neo-Fordist. It also engendered a variety of public policies, institutions, and governance mechanisms intended to mitigate the failures of the market, and to reform modern industrial arrangements and practices (Polanyi, 1944). As Fordism extended living standards, it could be argued a growth in the middle class was evident, but not everyone benefited. Taylor's scientific management methods underpinned Fordism.
Critically examine the differences between Fordism and post-Fordism. As the economy has changed significantly over the past years, many people have argued that there was a transition from Fordism to post-Fordism. Firstly, products and their components were standardised. These innovations were implemented by Alfred P. Sloan. The rise of computer technology and improved international telecommunications meant that mass production could be completely redesigned. Surprisingly, one of the first was Antonio Gramsci, who coined the term Fordism. Importantly, consumption has become specialized as well. Taylor inspired the notion of scientific management. Thorough definitions of the two types of work production are required before identifying and analysing similarities and differences between them. The most important comparisons are between flexibility and rigidity. Taylor emphasized time and motion studies and pay for performance, two concepts which are still with us today. Ford's production system was based on specialization, synchronization, and precision. And how we make things dictates not only how we work but what we buy, how we think, and the way we live. Assembly line work is unpleasant in a mass production environment. Henry Ford, an ardent pacifist, established the Ford Foundation to provide ongoing grants for research, education, and development; he died on April 7, 1947. What is Fordism? Although his ideas have become a universal part of modern management concepts, some writers continue to associate him with Frederick Winslow Taylor. This means workers are not attached to a single job, which in effect makes labour more flexible. This method works because more and more people spend time in it. And there seemed to be no natural limits to this conclusion. As a consequence, a part of the old working class will be eliminated from the world of work, and perhaps from the world. As a national accumulation or growth regime, Fordism involves a virtuous cycle of mass production and mass consumption. Coordinated wage setting between national associations of employers and national labor organizations, usually led by blue-collar unions, achieved both high wages and considerable income equality, almost without strikes (Scharpf, 1991). It is physically demanding, requires high levels of concentration, and can be excruciatingly boring. It is transforming not only how we make things, but also how we live and what we consume. This was started when Henry Ford perfected the assembly line. Ford was also one of the first to realize the potential of the electric motor to reconfigure work flow. Fordism was a strategy based on cost reduction. Housekeepers periodically cleaned the work area. Increasingly the demand is for customised products that incorporate quality and design features. I will also focus on Taylor's scientific management principles, as they were highly influential in how Fordism developed.
http://roundtaiwanround.com/fordism-vs-post-fordism.html
Differential expression of periostin in the nasal polyp may represent distinct histological features of chronic rhinosinusitis. Chronic rhinosinusitis (CRS) is thought to be a multifactorial disease, and it is classified into a number of subtypes according to clinicohistological features. Periostin, a 90-kDa secreted protein, was reported to exist in nasal polyps (NPs) associated with CRS. We compared the expression of periostin with the degree of eosinophilic infiltration as well as tissue remodeling. Tissue samples were collected from 28 patients of CRS with NPs, and clinicohistological features were evaluated. The pattern of periostin expression was assessed immunohistochemically. Two patterns of periostin expression were observed in nasal polyps: "diffuse type", in which periostin was expressed throughout the lamina propria starting just below the basement membrane, and "superficial type", in which the protein was detected only in the subepithelial layers between the basement membrane and the nasal gland. The average infiltrated eosinophil count in the diffuse type was significantly higher than that in the superficial type (diffuse type 360.5±393.0 vs. superficial type 8.46±13.81, p=0.001). Tissue remodeling was observed in 17 (85.0%) of the 20 diffuse-type nasal polyps, but in only one (12.5%) of the eight superficial-type nasal polyps (p<0.001). At least two distinct patterns of periostin expression were observed in the nasal polyps associated with CRS, in accordance with the heterogeneous mechanisms underlying the pathogenesis of CRS with NPs.
What was the Broad Gauge?

Travel by train in Britain today and, on most railways, you will be travelling on rails 4 feet 8½ inches (1435mm) apart. But back in the 1830s, when our railway network was starting to be built, there was no “standard gauge”. Isambard Kingdom Brunel, the young engineer to the Great Western Railway (GWR), decided to use rails that were about 7 feet apart, as he felt this would allow him to build a better railway. With the wheels further apart there would be more space to fit boilers between them. On the 4 feet 8½ inch gauge favoured by other engineers such as George and Robert Stephenson, boilers had to be mounted higher, or kept smaller. The wider gauge would also allow carriages and goods wagons to be built with the body between the wheels, which would result in a lower centre of gravity, so they would run more steadily and safely. In practice, though, few were built in this fashion; instead, they were built wider, and so could carry more goods or passengers in shorter, and therefore lighter, trains. Brunel also reasoned that wheels mounted outside of boilers or carriage bodies could be larger, which would create less friction and make his trains run more freely. But while Brunel was building his novel railway from London to Bristol, there was a huge network of narrower lines being built in other parts of the country. In 1845, four years after the GWR was completed, a Gauge Commission was created to report on whether all railways should be built to "one uniform gauge". Several witnesses to the Commission thought that a gauge somewhere between the two would be best. Brunel, however, stated that he would consider an even wider gauge if he was starting again. He also accepted that there was a place for the narrower gauge on certain railways; indeed, he used it for the coal-carrying Taff Vale Railway. He suggested that locomotive trials should be conducted to compare the merits of the two gauges. The broad gauge tests were carried out between London and Didcot, and the locomotives proved able to pull heavier loads at higher speeds than their narrow gauge counterparts. They also stayed on the tracks, unlike one of the narrow gauge locomotives! Despite the outcome of these tests, the Gauge Commission declared that 4 feet 8½ inches would become the standard, mainly because 87% of railways had already been built to that gauge. Standard gauge was also cheaper to construct, and conversion from broad to narrow was easier to carry out, as no additional track bed widening would be required.
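For readers who want to check the conversion quoted above, here is the arithmetic behind the two gauge figures (the 7-foot value is the article's own approximation; no exact broad gauge dimension is assumed):

```python
# Quick arithmetic for the gauges quoted in the article (feet/inches to mm).
INCH_MM = 25.4

def gauge_mm(feet, inches):
    """Convert a rail gauge given in feet and inches to millimetres."""
    return (feet * 12 + inches) * INCH_MM

print(round(gauge_mm(4, 8.5), 1))  # standard gauge: 1435.1, i.e. the 1435mm in the text
print(round(gauge_mm(7, 0), 1))    # Brunel's "about 7 feet": 2133.6
```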
http://www.broadgauge.org.uk/history/bg_what_is.html
Kashyap, R.

Author's Affiliation

Abstract

In the pharma industry, packaging and labeling play a very important role in making products appealing to consumers. This study provides an overview of regulatory requirements and tests for quality control and suitability of packaging and labeling of prescription and over-the-counter (OTC) products in the USA and India. The study is informative in understanding the need for, and importance of, the labeling requirements of pharmaceuticals, which protect consumers by providing suitable instructions for the use of the drug product in a suitable place and format. The guidance provides recommendations for submission.

Keywords

Innovator, Generics, PLR (Physician Labeling Rule), OTC, FPL (Final Printed Labeling), Testing of packages, ANDA, Regulatory requirements, USA, India Market

Cite This Article

Kashyap, R. (2016). Regulatory Requirements for Packaging and Labeling of Pharmaceuticals in India and USA, International Journal for Pharmaceutical Research Scholars (IJPRS), 5(1), 23-49.
https://www.ijprs.com/article/regulatory-requirements-for-packaging-and-labeling-of-pharmaceuticals-in-india-and-usa/
Responsibilities:
- Manage the day-to-day operational activities of large-scale, high-budget projects across various internal functional departments within nfrastructure.
- Responsible for successful delivery of all project deliverables in a matrix resource environment.
- Set and continually manage project expectations with team members and other stakeholders, including the customer project manager, through regular and proactively scheduled meetings.
- Manage project activities through the effective use of project management principles and the specific methodologies adopted by the PMI and nfrastructure to achieve timely project completion and meet the defined project business objectives.
- Develop and execute project plans for major nfrastructure initiatives, in a matrix capacity with project team members throughout the organization.
- Present recommendations for effective consensus decision making, taking into consideration risk analysis and impact assessment for project milestones.
- Develop project resource and cost estimates.
- Monitor actual project expenditures against project estimates and budgets.
- Provide accurate and timely budget reporting.
- Meet or exceed project budget profit targets.
- Maintain effective communication and coordination with the project sponsor/business owner to ensure that business objectives are met and any changes are clearly communicated.
- Communicate effectively with project team members, senior management and other managers whose staff are involved in projects.
- Conform to customer service level agreements (SLAs) to ensure decreased liability and penalties for not meeting SLA requirements.
- Use advanced technical knowledge to perform and apply new technical procedures.
- Provide technical direction, guidance and support to Associate Project Managers and Project Coordinators.
- Lead and manage teams of multi-skilled resources to ensure goals and deliverables for the projects are being met and/or exceeded.
- Recommend and implement improvements to existing technical procedures based on understanding of new technologies.
- Other duties as assigned.

Requirements:

Education: Bachelor's Degree and a minimum of 10 years of project management experience, with at least three years managing large, complex IT projects which demonstrate a broad and deep understanding of complex business processes, multiple platforms and language technologies, and financial management. A minimum of ten years of hands-on large-scale project management experience will be considered to replace the educational requirement.

Experience: Mandatory
- Proficient in budget analysis; understands and has used standard PMI-based financial formulas to analyze earned value (see the sketch after this list).
- Demonstrated track record of managing complex operational projects, including awareness of business activities.
- Demonstrated success at influencing people and initiating changes in business practice.
- Proven experience effectively interacting with customers, including relationship building.
- Ability to work with customer management teams to provide status and set expectations to ensure project success.
- PMI member with expert skills using PMI methods in work products/project execution, including change management, risk management, and issues management.
- Experience delivering presentations to customers as well as the nfrastructure management team.
- Negotiation skills to achieve coordination and results.
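For context on the earned value proficiency listed under the mandatory requirements, the standard PMI formulas are compact; the figures in this sketch are invented for illustration:

```python
# Standard PMI earned value formulas (illustrative sketch; figures invented).
pv = 100_000.0   # Planned Value: budgeted cost of work scheduled to date
ev = 90_000.0    # Earned Value: budgeted cost of work actually performed
ac = 95_000.0    # Actual Cost of the work performed
bac = 400_000.0  # Budget At Completion

cv = ev - ac     # Cost Variance (negative: over budget)
sv = ev - pv     # Schedule Variance (negative: behind schedule)
cpi = ev / ac    # Cost Performance Index
spi = ev / pv    # Schedule Performance Index
eac = bac / cpi  # Estimate At Completion, assuming current cost efficiency holds

print(f"CV={cv:,.0f}  SV={sv:,.0f}  CPI={cpi:.2f}  SPI={spi:.2f}  EAC={eac:,.0f}")
```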
https://www.theladders.com/job/senior-project-manager-nfrastructure-malta-ny_36109515
"Load and unload device drivers" This policy setting allows users to dynamically load a new device driver on a system. An attacker could potentially use this capability to install malicious code that appears to be a device driver. This user right is required for users to add local printers or printer drivers in Windows Vista. When configuring a user right in the SCM enter a comma delimited list of accounts. Accounts can be either local or located in Active Directory, they can be groups, users, or computers. Vulnerability: Device drivers run as highly privileged code. A user who has the Load and unload device drivers user right could unintentionally install malicious code that masquerades as a device driver. Administrators should exercise greater care and install only drivers with verified digital signatures. Counter Measure: Do not assign the Load and unload device drivers user right to any user or group other than Administrators on member servers. On domain controllers, do not assign this user tight to any user or group other than Domain Admins. Potential Impact: If you remove the Load and unload device drivers user right from the Print Operators group or other accounts you could limit the abilities of users who are assigned to specific administrative roles in your environment. You should ensure that delegated tasks will not be negatively affected.
http://www.scaprepo.com/control.jsp?search=CCE-45960-2&command=relation&relationId=CCE-45960-2
Computer Science > Machine Learning

Title: Robust Training and Initialization of Deep Neural Networks: An Adaptive Basis Viewpoint

(Submitted on 10 Dec 2019)

Abstract: Motivated by the gap between theoretical optimal approximation rates of deep neural networks (DNNs) and the accuracy realized in practice, we seek to improve the training of DNNs. The adoption of an adaptive basis viewpoint of DNNs leads to novel initializations and a hybrid least squares/gradient descent optimizer. We provide analysis of these techniques and illustrate via numerical examples dramatic increases in accuracy and convergence rate for benchmarks characterizing scientific applications where DNNs are currently used, including regression problems and physics-informed neural networks for the solution of partial differential equations.

Submission history: From: Mamikon Gulian [view email] [v1] Tue, 10 Dec 2019 18:04:03 GMT (1915kb,D)
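To make the "adaptive basis viewpoint" tangible, here is a minimal sketch of the general idea (not the paper's actual algorithm): treat the hidden layer as a set of basis functions, solve for the output layer exactly by least squares, and take gradient steps only on the basis parameters:

```python
import numpy as np

# Toy hybrid least-squares / gradient-descent loop for y ≈ tanh(xW + b) @ c.
# Illustrative of the adaptive-basis idea only, not the paper's optimizer.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, (200, 1))
y = np.sin(np.pi * x)                            # toy regression target

W = rng.normal(size=(1, 32))
b = rng.normal(size=32)
lr = 1e-2

for step in range(500):
    phi = np.tanh(x @ W + b)                     # adaptive basis, shape (200, 32)
    c, *_ = np.linalg.lstsq(phi, y, rcond=None)  # exact solve for the linear layer
    r = phi @ c - y                              # residual with the optimal c
    g = (r @ c.T) * (1.0 - phi**2)               # backprop through tanh, (200, 32)
    W -= lr * (x.T @ g) / len(x)                 # gradient step on basis parameters
    b -= lr * g.mean(axis=0)

phi = np.tanh(x @ W + b)
c, *_ = np.linalg.lstsq(phi, y, rcond=None)
print("RMSE:", float(np.sqrt(np.mean((phi @ c - y) ** 2))))
```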
http://arxiv-export-lb.library.cornell.edu/abs/1912.04862
3 Technical Controls for Ongoing GDPR Compliance

The General Data Protection Regulation (GDPR) is not a checkbox compliance exercise. It is an ongoing business activity that must be embedded everywhere. GDPR requires that organisations continuously protect their customers’ data privacy using a combination of people, processes, and technology. A comprehensive security strategy and the right security technologies are ideal for maintaining GDPR compliance. Below are three critical technical controls for maintaining GDPR compliance.

Protecting Personal Data

The GDPR is a robust set of data privacy regulations that covers consumer rights and organisational responsibilities. The GDPR’s broadest requirement states that organisations must provide “data protection by design and by default.” Many organisations simply don’t know what data they have or what data could be targeted by attackers in a breach. Under the GDPR, it’s become imperative that you know the answers to these questions. Your security team or security provider must know what types of data you have, where it’s used, how it’s used, and who has access. A data mapping or data classification exercise is commonly one of the first steps to GDPR compliance. A Security Information and Event Management (SIEM) solution can help complete this step by gathering your data from multiple devices (firewalls, network, anti-malware, etc.) into one centralized platform. The SIEM allows a security team or security provider to monitor end user and system activity continuously and correlate the data generated against malicious activity. Under the GDPR, it becomes increasingly important to control how and where personal data is stored and accessed. Your security team or security provider should monitor data, create meaningful alerts in the event of an unauthorized attempt to access data, and encrypt any sensitive personal data. If you have visibility into who has access to your data, you will be able to control who has privileged access. In other words, you must be able to control user access during provisioning and de-provisioning. Privileged accounts and vendor accounts can leave your organisation exposed to a whole host of problems.

Need a high-level overview of the GDPR? Grab our comprehensive guide to the GDPR here.

Continuous Monitoring & Threat Detection

The GDPR puts forth several articles (Articles 25 & 32) which require an organisation to implement data protection principles and continuous monitoring. In the same way that a SIEM can ingest data, it also correlates your data from devices to alert you of urgent security incidents. It also takes data, identifies trends, and aligns those to the cyber attack kill chain. Threat detection and intelligence using a SIEM provides reliable and timely information for when a breach occurs and helps you prioritize the most significant threats to your data. It’s imperative that organisations implement the right technical controls to ensure data security. Continuous monitoring and threat detection can aid greatly in this pursuit.

Breach Response & Notification

Finally, under Article 33 of the GDPR, your organisation must report a data breach within 72 hours, without “undue delay.” You don’t have to report every breach to the Data Protection Authorities (DPAs), only those likely to result in a high risk to the rights and freedoms of EU data subjects. Organisations are also tasked with notifying the public of any serious breach after notifying the DPAs.
Even so, finding out where the breach occurred, what areas have been affected, and how it happened is no easy feat. Therefore, it becomes increasingly important for your organisation to have the processes in place to investigate and report a breach quickly and effectively. Data breaches are inevitable. Organisations must focus on improving their mean-time-to-detect (MTTD) and mean-time-to-respond (MTTR). These two metrics are vitally important in benchmarking how your organisation is protecting EU subject data, as sketched below. A Managed Security Services Provider (MSSP) can help you leverage the power of SIEM technology to speed up your monitoring, detection, and alerting workflows under the GDPR.
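The two metrics named above lend themselves to a simple computation; here is a minimal sketch (the data and field names are invented, not from the article):

```python
from datetime import datetime
from statistics import mean

# Illustrative sketch: deriving MTTD and MTTR from incident timestamps such
# as a SIEM might export. Field names and values are invented.
incidents = [
    {"occurred": datetime(2018, 6, 1, 9, 0),
     "detected": datetime(2018, 6, 1, 14, 0),
     "resolved": datetime(2018, 6, 2, 9, 0)},
    {"occurred": datetime(2018, 6, 10, 22, 0),
     "detected": datetime(2018, 6, 11, 6, 0),
     "resolved": datetime(2018, 6, 11, 18, 0)},
]

def hours(delta):
    return delta.total_seconds() / 3600

mttd = mean(hours(i["detected"] - i["occurred"]) for i in incidents)
mttr = mean(hours(i["resolved"] - i["detected"]) for i in incidents)
print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")  # benchmark against the 72-hour window
```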
https://cipher.com/blog/3-technical-controls-for-ongoing-gdpr-compliance/
Axicabtagene ciloleucel is a chimeric antigen receptor T-cell therapy for the treatment of adult patients with relapsed or refractory large B-cell lymphoma. This includes diffuse large B-cell lymphoma (DLBCL), high-grade B-cell lymphoma, and subtypes of DLBCL (e.g., primary mediastinal large B-cell lymphoma and DLBCL arising from follicular lymphoma). CADTH completed a review of manufacturer-submitted materials and published literature to assess the clinical impact, cost-effectiveness, implementation considerations (including patient and caregiver perspectives and experiences, and other factors such as facilities, eligibility, travel, and resource costs), and ethical considerations associated with the provision of axicabtagene ciloleucel therapy in Canada. The project includes the following key components: - protocols - clinical systematic review - economic review - review of implementation considerations, ethics, and patient and caregiver perspectives and experiences - recommendations report.
https://www.cadth.ca/axicabtagene-ciloleucel-adults-relapsed-or-refractory-large-b-cell-lymphoma
Upon completion, Estate Planning in South Carolina, Second Edition will be a comprehensive six-volume set authoritative on this complex, challenging, and rewarding area of practice. Each Volume was written as a stand-alone resource; therefore, each Volume will be released and sold as it becomes available. This means that Volumes will not be released sequentially. Volume III of the Second Edition of Estate Planning in South Carolina, edited by Andrew W. Chandler and M. Jean Lee, and co-authored by 11 of South Carolina's most experienced and well-respected Estate Planning and Tax Law Practitioners and Specialists, focuses on practical advice and guidance for the South Carolina practitioner who seeks to undertake the challenges, rewards, and benefits of estate planning. Written with the practicing attorney in mind, these excellent volumes give solid, clear advice, practical insight, and unerring guidance. Topics addressed in Volume III include: planning for closely-held businesses, planning for needs-based government benefits, choosing fiduciaries, transferring assets outside of probate, execution and safekeeping of documents, and terminating the representation.
https://cv.scbar.org/cv/cgi-bin/msascartdll.dll/ProductInfo?productcd=223
NASA's newest mission, ICON, expected to launch tonight

Photos: Cool unmanned space missions
- Bright swaths of red in the upper atmosphere, known as airglow, can be seen in this image from the International Space Station. NASA's ICON mission will observe how interactions between terrestrial weather and a layer of charged particles called the ionosphere create the colorful glow.
- NASA's Global-scale Observations of the Limb and Disk mission -- known as the GOLD mission -- will examine the response of the upper atmosphere to force from the sun, the magnetosphere and the lower atmosphere.
- This illustration shows NASA's Dragonfly rotorcraft-lander approaching a site on Saturn's exotic moon, Titan. Taking advantage of Titan's dense atmosphere and low gravity, Dragonfly will explore dozens of locations across the icy world, sampling and measuring the compositions of Titan's organic surface materials to characterize the habitability of Titan's environment and investigate the progression of prebiotic chemistry.
- This is an artist's concept of the Europa Clipper spacecraft. The design is changing as the spacecraft is developed.
- SPHEREx, the Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer, will study the beginning and evolution of the universe and determine how common the ingredients for life are within the planetary systems found in our galaxy, the Milky Way. It is targeted to launch in 2023.
- NASA's Transiting Exoplanet Survey Satellite launched in April and is already identifying exoplanets orbiting the brightest stars just outside our solar system. In the first three months since it began surveying the sky in July, it has found three exoplanets, with the promise of many more ahead.
- This illustration shows the position of NASA's Voyager 1 and Voyager 2 probes outside the heliosphere, a protective bubble created by the sun that extends well past the orbit of Pluto.
- This is an artist's concept of the Solar Probe Plus spacecraft approaching the sun. In order to unlock the mysteries of the corona, but also to protect a society that is increasingly dependent on technology from the threats of space weather, we will send Solar Probe Plus to touch the sun.
TORONTO, Feb. 17, 2022 /CNW/ - (TSX: LUN) (Nasdaq Stockholm: LUMI) Lundin Mining Corporation (“Lundin Mining” or the “Company”) today announces the retirement of Mr. Lukas Lundin from the Chair of Lundin Mining’s Board of Directors (the “Chair” and the “Board”), effective at the time of the Company’s 2022 Annual Shareholders Meeting. Mr. Ashley Heppenstall, Lead Director of Lundin Mining’s Board, said, “On behalf of the Board, I would like to thank Lukas for the invaluable strategic guidance and perspective he has provided as Chair of Lundin Mining. His strong vision and leadership have been paramount to the Company’s success, growing from a small team with an exploration property in 1994 to a leading base metal producer with five operations, a global workforce of over 11,000 people and significant growth potential today.” Peter Rockandel, Lundin Mining’s President and CEO, added, “I would like to thank Lukas for his many years of counsel and support provided to the Board and leadership team as Lundin Mining has grown. The Company and our many stakeholders have benefitted immensely from his pursuit of highly prospective opportunities, and his insight and experience shared over a lifetime of success leading many natural resources companies. Lundin Mining is well positioned to continue to deliver on his vision.” Mr. Lundin concluded, “I am very proud of the many successes Lundin Mining has achieved over the past nearly three decades, though there is still much to accomplish. The outlook for base metals is very constructive and I am fully confident that the culture, leadership, people and prospects are in place for Lundin Mining to continue building on its positive legacy.” About Lundin Mining Lundin Mining is a diversified Canadian base metals mining company with operations in Brazil, Chile, Portugal, Sweden and the United States of America, primarily producing copper, zinc, gold and nickel. The information in this release is subject to the disclosure requirements of Lundin Mining under the EU Market Abuse Regulation. The information was submitted for publication, through the agency of the contact persons set out below, on February 17, 2022 at 15:00 Eastern Time. Cautionary Statement on Forward-Looking Information Certain of the statements made and information contained herein is “forward-looking information” within the meaning of applicable Canadian securities laws. All statements other than statements of historical facts included in this document constitute forward-looking information, including but not limited to statements regarding the Company’s plans, prospects and business strategies; the Company’s guidance on the timing and amount of future production and its expectations regarding the results of operations; expected costs; permitting requirements and timelines; timing and possible outcome of pending litigation; the results of any Preliminary Economic Assessment, Feasibility Study, or Mineral Resource and Mineral Reserve estimations, life of mine estimates, and mine and mine closure plans; anticipated market prices of metals, currency exchange rates, and interest rates; the development and implementation of the Company’s Responsible Mining Management System; the Company’s ability to comply with contractual and permitting or other regulatory requirements; anticipated exploration and development activities at the Company’s projects; and the Company’s integration of acquisitions and any anticipated benefits thereof.
Words such as “believe”, “expect”, “anticipate”, “contemplate”, “target”, “plan”, “goal”, “aim”, “intend”, “continue”, “budget”, “estimate”, “may”, “will”, “can”, “could”, “should”, “schedule” and similar expressions identify forward-looking statements. Forward-looking information is necessarily based upon various estimates and assumptions including, without limitation, the expectations and beliefs of management, including that the Company can access financing, appropriate equipment and sufficient labor; assumed and future price of copper, nickel, zinc, gold and other metals; anticipated costs; ability to achieve goals; the prompt and effective integration of acquisitions; that the political environment in which the Company operates will continue to support the development and operation of mining projects; and assumptions related to the factors set forth below. While these factors and assumptions are considered reasonable by Lundin Mining as at the date of this document in light of management’s experience and perception of current conditions and expected developments, these statements are inherently subject to significant business, economic and competitive uncertainties and contingencies. Known and unknown factors could cause actual results to differ materially from those projected in the forward-looking statements and undue reliance should not be placed on such statements and information. Such factors include, but are not limited to: risks inherent in mining including but not limited to risks to the environment, industrial accidents, catastrophic equipment failures, unusual or unexpected geological formations or unstable ground conditions, and natural phenomena such as earthquakes, flooding or unusually severe weather; uninsurable risks; global financial conditions and inflation; changes in the Company’s share price, and volatility in the equity markets in general; volatility and fluctuations in metal and commodity prices; the threat associated with outbreaks of viruses and infectious diseases, including the COVID-19 virus; changing taxation regimes; reliance on a single asset; delays or the inability to obtain, retain or comply with permits; risks related to negative publicity with respect to the Company or the mining industry in general; health and safety risks; exploration, development or mining results not being consistent with the Company’s expectations; unavailable or inaccessible infrastructure and risks related to ageing infrastructure; actual ore mined and/or metal recoveries varying from Mineral Resource and Mineral Reserve estimates, estimates of grade, tonnage, dilution, mine plans and metallurgical and other characteristics; risks associated with the estimation of Mineral Resources and Mineral Reserves and the geology, grade and continuity of mineral deposits including but not limited to models relating thereto; ore processing efficiency; community and stakeholder opposition; information technology and cybersecurity risks; potential for the allegation of fraud and corruption involving the Company, its customers, suppliers or employees, or the allegation of improper or discriminatory employment practices, or human rights violations; regulatory investigations, enforcement, sanctions and/or related or other litigation; uncertain political and economic environments, including in Brazil and Chile; risks associated with the structural stability of waste rock dumps or tailings storage facilities; estimates of future production and operations; estimates of operating, cash and all-in 
sustaining cost estimates; civil disruption in Chile; the potential for and effects of labor disputes or other unanticipated difficulties with or shortages of labor or interruptions in production; risks related to the environmental regulation and environmental impact of the Company’s operations and products and management thereof; exchange rate fluctuations; reliance on third parties and consultants in foreign jurisdictions; climate change; risks relating to attracting and retaining of highly skilled employees; compliance with environmental, health and safety laws; counterparty and credit risks and customer concentration; litigation; risks inherent in and/or associated with operating in foreign countries and emerging markets; risks related to mine closure activities and closed and historical sites; changes in laws, regulations or policies including but not limited to those related to mining regimes, permitting and approvals, environmental and tailings management, labor, trade relations, and transportation; internal controls; challenges or defects in title; the estimation of asset carrying values; historical environmental liabilities and ongoing reclamation obligations; the price and availability of key operating supplies or services; competition; indebtedness; compliance with foreign laws; existence of significant shareholders; liquidity risks and limited financial resources; funding requirements and availability of financing; enforcing legal rights in foreign jurisdictions; dilution; risks relating to dividends; risks associated with acquisitions and related integration efforts, including the ability to achieve anticipated benefits, unanticipated difficulties or expenditures relating to integration and diversion of management time on integration; activist shareholders and proxy solicitation matters; and other risks and uncertainties, including but not limited to those described in the “Risk and Uncertainties” section of the Annual Information Form and the “Managing Risks” section of the Company’s MD&A for the year ended December 31, 2020, which are available on SEDAR at www.sedar.com under the Company’s profile. All of the forward-looking statements made in this document are qualified by these cautionary statements. Although the Company has attempted to identify important factors that could cause actual results to differ materially from those contained in forward-looking information, there may be other factors that cause results not to be as anticipated, estimated, forecast or intended and readers are cautioned that the foregoing list is not exhaustive of all factors and assumptions which may have been used. Should one or more of these risks and uncertainties materialize, or should underlying assumptions prove incorrect, actual results may vary materially from those described in forward-looking information. Accordingly, there can be no assurance that forward-looking information will prove to be accurate and forward-looking information is not a guarantee of future performance. Readers are advised not to place undue reliance on forward-looking information. The forward-looking information contained herein speaks only as of the date of this document. The Company disclaims any intention or obligation to update or revise forward–looking information or to explain any material difference between such and subsequent actual events, except as required by applicable law.
https://lundinmining.com/news/lundin-mining-announces-retirement-of-mr-lukas-lu-123062/
Search the Community

Showing results for tags 'slow'. Found 18 results

- DWG Issues again
MartinBlomberg posted a question in Troubleshooting
Hi, 1. Why is it that 2D dwgs are super slow in VW when they are quite easy to handle in any other CAD software? How can I make them work better in VW? When I try to move around an imported (or referenced, tried both) dwg, it takes seconds before it moves or anything happens. And it's a blank VW document which has no other files in it. The dwg isn't too complicated and works super smoothly in software like AutoCAD. 2. When I have made a crop of parts of the original dwg I want to use to make my 3D drawing, the crop doesn't "stick" to the dwg. I've changed the actual crop from Screen to Layer but the problem stays. Hope this makes sense. I have a drawing which has both section and plan view in the same dwg, and I just want to split them apart so I can "stand" the section view next to the plan view. Just like Make Doubleday does in this video. Please let me know your thoughts on this 😃 Thanks! // martin

- Mojave 2019 SP3.1 Text Editing Very Slow
Asemblance posted a question in Troubleshooting
Hi All, I've just taken the plunge to Mojave, and straight off the bat have run into a major issue. Editing text in sheet view is absolutely crippling my system. I'm working in the same file as I was prior to updating to Mojave, on the same system, on the same SP3.1 Vectorworks. The only change is the system update, and it is severely slower since the update. Is this a known issue? I can record some footage showing the slowdown if it helps. Thanks, A

- Is this really the performance one could expect?
MartinBlomberg posted a question in Troubleshooting
Honestly, is this what one can expect from VW? Please have a look at the video attached. LINK TO VIDEO: https://photos.app.goo.gl/vCbW8r4R4SMqHK5d8 I'm doing a seating plan for the arena I'm working at, and the file isn't too complex yet, size around 50mb. I've also shot the Task Manager as well, so you can see the stats while I'm clicking about. I'm not doing too much stuff here as you can see, but it is still really slow. Please let me know why this occurs and if there's some kind of fix for it. Many thanks! INFO: VW 2020, SP5, 64-bit Windows 10 Pro CPU: Intel i7 9700 3,00GHz 16GB RAM NVIDIA GeForce RTX 2060 Super

- Zoom pan and orbit very slow
dvdv posted a question in Troubleshooting
Hello everybody, I have searched for this issue and I am really surprised that I couldn't find posts about it. Don't you think that activation of the zoom, pan and orbit commands, both through the mouse wheel and the toolbars, is incredibly slow? (VW 2019 and 2020, both Win and Mac.) I am talking about more or less 1 second, but it is really disturbing and it makes working with VW very frustrating, especially compared with other 3D modeling software. I have a 9th gen i7 and an RTX 2080 that is kind of a "monster" in 3D navigation in any other software, even with realtime rendering and raytracing activated. So how can it happen that navigating an OpenGL model, at a low quality setting with no antialiasing, is so laggy? I think that the problem is not in the model navigation itself; maybe the problem is in the speed of the activation of the zoom, pan and orbit commands, since once the command is finally executed (1 second on average) I can zoom/orbit/pan the model very fast. I hope that I have been able to explain my problem and that you will help me to solve it. Thank you, Davide

- Irrigation Tools...So Slow!
ericjhberg posted a topic in Site Design
When doing a large irrigation system (I'm working on several with over 70 stations), the connection time is incredibly cumbersome and slow. The file I am currently working on takes up to 15 seconds to make one connection. Additionally, we have already turned off the Auto Calculation Update feature, so that is just processing time. Some simple math to show you how cumbersome this actually is...
- 830 outlets of just one type (over 2000 total in the project) = 830 connections
- 830 connections at 15 seconds each = 12,450 seconds = 207.5 minutes = approximately 3.5 hours of drafting to make the connections
When compared to our traditional methods, we could do this drafting in approximately 1 hour or less. So for 2.5 extra hours, what do I get? Some data attached to the pipe network? That would be great, except half the time there is at least 1 error for every 20 outlets, which has to be found, diagnosed, and corrected, adding at least another hour to the mix. I get that we are supposed to be moving into the "smarter, not harder" category, but this is only coming at it from the "smarter" perspective while making it a whole lot harder to meet your targets. Expand your horizons, Landmark, and start thinking bigger than a single-lot residential project. We need BIG applications here that scale easily!

- Missing Geo in open gl
A_L_B_O_T posted a question in Troubleshooting
Dear all, I have recently updated to VW2020 + Catalina 10.15.3 on my iMac (Retina 5K, 27-inch, Late 2015 / AMD Radeon R9 M395X 4 GB / 4 GHz Quad-Core Intel Core i7 / 32 GB 1867 MHz DDR3). I am having an issue with the slowing pace of Vectorworks and several post-crash issues.

1. Update issue: Firstly, Vectorworks crashes frequently since updating from 2019. The pace of response to simple commands dies to a spinning-wheel-of-death halt over about a 15 min period. I have force quit the program in order to do my job, which at the moment is in short 15 min shifts of grinding fun. I work as economically as possible because I prefer my VW to be responsive. I have my settings on low throughout. I am not using textures or lights. I am only using the model to create viewports for hidden line renders. Quite simple demands, with unreasonable response. Commands that kill: writing any annotation (even with default VW fonts), navigating geometry, updating all viewports collectively, pasting (cmd + v). The performance has improved slightly but the wheel of death is never far behind. Things I have already done: I have actively sought solutions to the poor performance of my iMac. I have removed several pieces of out-of-date software from my Mac. Decluttered 600gb of files. Removed live desktop from my Mac. Reset the PRAM. Safe rebooted. Updated my operating system :-S

2. Post-crash issues: Upon loading files from my backup folder, I notice that geometry in my models has now disappeared completely from view; you can highlight the Geo and it is visible in wireframe, however not in OpenGL or Final RW / Custom RW. Also, symbols or flipped Geo have now become more abstract. Various design elements have now flown off to completely random parts of my design layer or are now completely missing. I have tried removing symbols and creating generic solids to combat this, but it hasn't tackled these issues. Sometimes if I ungroup the parts of my design, the parts will appear in OpenGL, though not as they were. But this isn't always a solution for all parts. Thanks in advance.
Albert - Hi Guys, Anyone else experiencing issues with the SP3.1 update on W10? Basic functions are taking incredibly long to execute. For example, the softgoods tool freezes VW temporarily for about 5 min every time I use the tool. Extremely frustrating. Importing 3ds files is also taking forever and sometimes crashes VW. I didn't have this issue with SP3. Any help would be appreciated. - Import PDF workflow going really really slow. Advice? tekbench posted a topic in General Discussion: I've imported a PDF. It is 11.6 MB, 4 pages, but I'm only taking in 3 of them. I've un-grouped the PDF. I've deleted the bitmap and a number of rogue rectangles. I also found that AFTER I've un-grouped the PDF, I'm left with an additional, 'hidden' group inside that group. I double click that group, and enter the group edit window. There, I have to manually pull the additional 'layers' from the PDF out to get to the line bits. Then I exit the group edit window and I have my layer of lines. In fact, I'm left with a really nice bunch of lines and polygons. About 700,000 polygons. I run a 'Compose' command on a bunch of them and that drops the number by almost 100,000. I'm still left with a bunch of polygons that have a fill associated with them. I need that fill gone. I've tried several things here, but the result is the same - it takes forever. If I do a custom select, and only select polygons that have a fill associated with them, I get a more manageable number of like 58,744 polygons. I move to the format palette and choose the 'no fill' option. I get a beachball spinning for a long time, the fill state never changes, and I have to apple+option+delete out of Vectorworks. By a long time, I mean hours on hours. If I do a smaller selection (like 15 - 20 polygons by tedious hand selection), or marquee select small sections of the drawing, I can change the fill state and it only takes minutes. Like 10 - 15 minutes. Still, to me, that seems like a really long time. My questions are these: Am I asking too much of the software? Is this 'expected' behavior in real world applications? Is there a better way to remove fills from polygons on a drawing? Is there a good way to 'pre-prep' a PDF for import? This PDF has no layers, and isn't that big. Working on my import-PDF-turn-it-into-a-model workflow. So far, it's great. Except the underwhelming speed of the fill removal. The 'breaking apart' of the PDF takes a little bit of time, but not nearly the time that the fill removal does. Not by a long shot. Repro steps - Set up a new drawing. File - Import PDF. Highlight PDF. Apple+U to un-group. Delete the rectangle and the bitmap so I'm left with just a single 'group' item. Double click that group item. Enter the group editor. Delete the various rectangles that represent the 'pages' or white space in the PDF. Exit the editor. You're left with several 'blobs' or groups of PDF parts. In my case, I imported 3 PDF pages. Now I have three big groups of lines and polys. Click the group of lines and polys. Apple+U (yes, you are doing this a second time). Now you have a bunch of lines and polys on your layer. Custom select: type = Polygon, Fill = (solid black box in my case). Check the OIP, verify there are like 60,000 polys selected now. Click the format palette. On the fill pull-down, select 'No Fill'. Leave the office and literally come back in the morning. Might be done. - VWX 2019 Significantly Slower Asemblance posted a question in Troubleshooting: We've been doing some testing on some live projects, with the hope to move things onto 2019.
However, disappointingly, we're finding 2019 seems to be significantly slower, with a lot of 'hangups' in between actions. In particular I've been noticing this going through and editing sheets. Doing the same actions in identical files on VWX 2018 (SP4) is smoother and faster. This is quite the opposite of what we were hoping for! Has anybody else found this? - Super slow file opening! ericjhberg posted a question in Troubleshooting: When I conceived this post I wanted to do a comparison of file opening times from VW2018 to VW2019. We have been experiencing dramatic lag in a couple of our files, and I thought it was because of VW2019... however... when I converted back to VW2018 and ran a test, it took 13 minutes and 32 seconds for the file to open! Proof, watch the video... https://screencast-o-matic.com/watch/cqVbFy3bS8 While I am fairly certain VW2019 is still slower, or at least close, I don't really want to take the time to find out and do the comparison. This is just one example of where VW needs to dramatically improve. If you don't want to watch a still 13:32 video of a file opening, what makes you think I do? - 3D Urban Model too slow chrispolo posted a question in Troubleshooting: I've been working on this model for a while, and it's been difficult since the beginning with constant freezes and crashes of the program. I'm actually using VW 2019 SP2 on Windows 10, on a Lenovo Y50-70 with an i7 processor and 960M Nvidia graphics. My problem is that rendering the model to Unshaded Polygon and then exporting a viewport is taking forever, and eventually the program freezes. I'm using 150 DPI for the sheet layer. Somebody please help me, I have no idea what the problem is. The 3D model has multiple extrudes, but I'd expect my laptop to be able to handle it. - Push/Pull Tool Impossibly Slow Asemblance posted a question in Troubleshooting: Hi All, Bit of a problem at the moment - for some reason the 'Push/Pull' tool is impossibly slow in certain files. Generally speaking the file I'm working on runs fine; I'm able to work on it smoothly in OpenGL mode and that's how I'm working for the most part. However, for some reason selecting the 'Push/Pull' tool causes my system to all but freeze up (in any rendering mode, including wireframe). If necessary I'll record a video, but it basically shows the following: - 2 storey building with some simple doors, windows, stairs, a couple of references - Roaming around and spinning about smoothly in 3D in OpenGL - Find a simple cube which I want to extrude a little - Select 'Push/Pull' tool - Move mouse over to cube, but sit watching the 'spinning wheel' of Mac loading for about 5 minutes. - Nothing happens, give up and deselect. After another few minutes, the system will unselect the Push/Pull tool and the file works fine again. Any ideas? Is this a known issue?? Thanks, A - VW2018 slowing down in Sheet Layers Joshec posted a question in Troubleshooting: Hi, 2018 keeps grinding to a completely unusable halt in my sheet layers after introducing a 3rd or 4th viewport. My drawings are pretty light and not too complex, and I'm only rendering in wireframe (to consider another rendering format is unthinkable). The program then becomes unusable with massive delays, jerky zoom and content disappearing when zoom is halted. The same file in 2017 works fine. Is this a known issue with 2018?
A quick check on the system shows the CPU running @ 6% and RAM only @ 8gb of 16gb - Vectorworks becomes slow and screen refresh/redraw becomes laggy hughmitchell posted a question in Troubleshooting: Hi, I am finding that after ~30 mins of using Vectorworks 2017 & 2016 the redraw rate drops, the object info boxes flicker and the program becomes really slow. Changing between tools and moving objects around the screen is laggy to the point where it affects my workflow. Text boxes are almost impossible to work with as I can't select text or move text without waiting a few seconds per operation. I would like to think that I am running powerful enough hardware not to have a problem, but clearly something is not right and I can't seem to track it down. I phoned up the Vectorworks support line and they tried to fix the issue but to no avail. The girl was very helpful but basically just changed some of the graphics properties within Vectorworks, which made no difference. It does not matter how large or small the file is, how complex or how simple the geometry is. I am working only in 2D plan. Sketchup, 3DS Max, Photoshop, Illustrator, Premiere Pro etc. all work absolutely fine without any issues, but AutoCAD suffers the same issue. I am hoping someone else has had the same problem or knows where I can next look for a solution. My specs: AMD FX 8350 CPU (8 cores overclocked to 4.7GHz), AMD Radeon R9 270x 2gb dedicated RAM (this is a dedicated graphics card), 16GB Corsair DDR3 1333MHz RAM, both programs running on an SSD (Samsung 840 Pro), ASUS TUF Sabertooth 990FX R2 motherboard, Windows 10, 2 screens @ 1080p each. I would be hugely grateful to anyone who could shed some light on the problem. I am not sure how to track down whether it's a hardware issue or a software issue. Many thanks 402-102-B-GroundFloorPlan.vwx - Vwx 2017 Data Visualization bug (& slow) Asemblance posted a question in Troubleshooting: Hi, We have been attempting to use the new data visualization tool, which clearly has the power to become very useful. However we are having some difficulties. Firstly, even using this tool in a basic way seems to cause extreme slowdown. Secondly (and more critically for us currently), the display for some areas seems to be buggy and 'stuck' on an incorrect hatch colour. Please take a look at the video below which shows the issues. The beginning of the video shows how quickly our systems handle navigation on a normal sheet without data visualization turned on. The second sheet shown is using simple data visualization to show space objects with various coloured hatches applied. You can see how much more slowly this sheet refreshes as it is zoomed or panned, which is infuriating to work with. But the area which is coming up as a black hatch at most zoom levels is the most problematic. This type of issue has cropped up on a couple of our drawings, and is still present on PDFs if printed at this scale. - Rendering Times: Hello everybody, First time posting on here. I did a quick search but really didn't find too many answers. One of my end users builds a lot of large music festival designs in his VW (currently using 2017). He primarily uses the Hidden Line view for all his stage plots. Each page has a different stage and each stage has about 50-100 lights, a stage, a roof, barricade, video walls, and stage risers. Basically, a lot of stuff haha. He was doing this from his MBP and it was working pretty well until the designs got bigger and bigger.
Currently we have built him a nice little setup: an i7, 32GB of RAM, and a GTX 10 series GPU. When he publishes in Hidden Line mode the render times take well over an hour. The system memory has peaked at 28-30GB and the GPU load currently sits at 0%........ What can we do to get some of that workload moved over to the GPU? I understand that the OpenGL render will help offset that, but the OpenGL view will not work for what he is trying to accomplish. He needs line mode so each light and device shows up in a nice sketch-style view with all the names, labels and positions associated with those devices next to the device. OpenGL mode takes all of that stuff out of the render, and that information is more important than the actual render itself. Is there a way to have the Hidden Line mode render through the GPU? Is there anything we can do to speed up the Hidden Line render mode? When we render via OpenGL it works very nicely; I'm just baffled how a Hidden Line render takes longer than an actual full color render does. There HAS to be some way to get the GPU to help the render speed on this, right? Sorry if I'm not clear enough on all of this. I'm more of a computer tech and our end user is too swamped to reach out, so I'm trying my best to help. Thanks for any help you guys can pass my way. Have a great week! *On a side note, does VW work better on multi cores, or a very fast single core?* - VW2017 file unresponsive MRD Mark Ridgewell posted a question in Troubleshooting: I've got a Vectorworks file that's become very unresponsive/pretty much unusable. It's 151mb – is that just too big? It's only one floor so no complex stories or layers going on. It's a 3D model with some Renderworks textures, 3D objects (as symbols), and lighting, but even with these classes turned off it makes no difference. It was created in VW2016 originally, but I've recently set up viewports on about 10 sheets using 'create interior elevation viewport'. There are about 30 sheet layers – is that too many for one file? What's the most efficient way of working out what is slowing it down so much? Suggestions appreciated so I can get drawings out from this file later in the week!
https://forum.vectorworks.net/index.php?/search/&tags=slow
TECHNICAL FIELD DISCLOSURE OF THE INVENTION DESCRIPTION OF THE DRAWING 0001 This invention relates to cellular radio telecommunication systems, and especially private systems and their adaptation to work with public cellular radio telecommunication systems. 0002 Cell planning and frequency reuse within a cellular network become more and more difficult as the traffic density rises and the cell size falls. This is especially so for outdoor base stations covering indoor users, firstly because of reduced propagation loss (from fourth power to square law) with reduced propagation distance, resulting in increased spillover beyond nominal cell boundaries, and secondly because of the insertion loss of walls, ceilings and other obstructions, which requires increased power operation from both base stations and mobiles. These two factors increase the so-called co-channel interference problem, which is to say, the increase in interference from nearby cells and mobiles operating on the same frequency channels. Channels can only be reused at greater and greater distances. Even with private indoor networks supplying indoor coverage, the extremely small size of the cells (less than 50 m diameter) can result in a demand for channels greater than the public operators can provide. 0003 One possible solution to this problem is to use a repeater, which carries the signal into a building where it is most needed. In this way, the power levels for both mobile and basestation can be kept low, and the co-channel interference problem is reduced. The drawback of using repeaters is that they offer no new capacity; they simply bring existing capacity closer to where it is needed. In commonly accepted scenarios where mobile usage will be moving indoors, this approach will not offer the required channel capacity. 0004 Another approach to the problem is to use a technique called Intelligent Underlay-Overlay (IUO), which reuses spectrum differently, depending on its use. In this technique, GSM beacon frequencies (carrying the so-called Basestation Control Channel or BCCH) are reused in a low density pattern, to ensure low interference between beacons, and an extremely low probability of error on these broadcast channels. Traffic channels are reused in a higher density pattern, to provide high capacity at the expense of some interference. The attraction of this scheme is the high spectral efficiency of the telephony traffic. 0005 Although use of a repeater is a viable option for low capacity indoor coverage to ameliorate the co-channel interference problem, the cost of providing this coverage by repeater technology rises unacceptably as the indoor traffic rises. Other micro-cellular techniques using micro- and pico-basestations may be used, such as distributed antenna technology; for example, a leaky feeder, such as a length of coaxial cable with openings made in its outer screen to allow RF energy in and out of the cable. Losses in the cable, its high cost and generally high installation overhead restrict this technology to short cable runs. Other examples use optical fibre to transport the RF and modulate the RF on and off the fibre at special RF head units. Though suitable for long cable runs, the high cost of the optical fibre and the modulation and demodulation units restricts the applicability of this technology. Yet other examples distribute the RF at a lower, intermediate frequency (IF), and heterodyne this up to the required band at special RF head units.
Since the distribution is done at IF, the cable runs may be long and the cable cheap, but again the requirement for specialised RF head units adds cost to the technique. 0006 An object of the invention is to provide an improved cellular radio telecommunication system suitable for in-building coverage, compatible with an external public cellular network and existing unmodified mobile terminals. 0007 This is achieved according to the invention by providing a network of base stations and controlling them so that they use a single broadcast synchronised control channel, and separately handle dedicated traffic and signalling channels in their immediate vicinity. 0008 Such a network can achieve the theoretical minimum radio channel consumption, yet provide the maximum space diversity gain in traffic capacity typical of cellular telephony systems. 0009 The invention is particularly applicable to TDMA systems such as GSM systems. 0010 In order that the over-the-air frame structure transmitted (and received) in the coverage area of the network of the invention is time synchronised for all mobile subscriber units, it is necessary to synchronise the base stations to within a few bit periods (each bit period is approximately 4 μs in GSM). It is not required to synchronise the base stations more closely than this (though it may be convenient to do so) since mobile subscriber units are designed to deal with signals arriving with timing differences of this order. For example, GSM mobiles have an equaliser which can detect two signals in a multipath channel, with delay spreads of several bit periods. This contrasts with normal GSM and other cellular networks, in which it is not required that base stations should be synchronised with each other. 0011 The base stations are, like normal GSM base stations, equipped with the ability to receive, process and report uplink signals for mobile units transmitting to them. In addition, they also have the ability to receive, process and report signals when they are idle, in order to sense active transmissions which are being handled by nearby base stations. In the GSM case, the base stations have the ability to receive in unused timeslots, in any arbitrary radio channel within the uplink band. This ability is used according to a further feature of the invention to gather information on the usage of radio channels in close physical proximity to the basestation on a slot-by-slot basis. 0012 The actual measurement parameters (timeslots, RF channels and measurement schedule in the GSM case) are sent by a controlling agent to each basestation, which then reports its results (signal strength, signal quality and a unique identifier, or RSSI, RXQUAL, burst identifier in the GSM case) to the controlling agent. The burst identifier is a code calculated from the burst to allow it to be compared with other measurements in the controlling agent. For instance it might be an n-bit exclusive-OR operation between adjacent n-bit words of the burst payload, delivering an n-bit identifier. Alternatively it may be an n-bit Forward Error Correction (FEC) code derived from the payload. Bit errors in the burst payload will be concentrated in the burst identifier so calculated. However, at limiting sensitivity, the bit error rate (BER) is 2%, so that for a normal burst (with a payload of 116 bits) just over 2 bits on average will be in error. The burst identifier will therefore have approximately 2 bits in error also.
Therefore n is chosen in the n-bit identifier construction so that the probability of misidentifying a burst is acceptably small. 0013 The controlling agent can rank the base stations in order of proximity to a particular mobile station, based on correlating the uplink measurements from all the base stations with the burst identifiers. 0014 The controlling agent can route the signalling and data traffic to one or more of the closest base stations to the mobile unit, according to example algorithms described below. More than one basestation may be used to achieve reinforcement of the signal received by the mobile unit, this being possible because the base stations are all synchronised. 0015 Just as the downlink data may be multiply routed, so uplink data may be multiply received. If there are unused radio resources (timeslots in the GSM case) in nearby base stations, they may be tuned to receive uplink data from a nearby mobile unit. The uplink data so received may be routed to the controlling agent, and combined there, further to reduce the error rate in the data. For example, if the same data is received through more than two basestation receivers, then a simple majority voting algorithm can be used to correct individual bits within the data stream. This feature can be used either to increase the quality of the received data, or the quality of the data can be maintained and the transmit power of the mobile decreased, so as to decrease interference with any external network. 0016 As the mobile unit moves through the network, the controlling agent can change the routing of the signalling and data traffic, to maintain the connection with the mobile unit, to maximise the traffic throughput of the network, and to minimise interference with the external network, again according to example algorithms described below. 0017 Thus, in contrast with a conventional GSM network system, the mobile subscriber unit is not responsible for signal measurements to identify neighbouring base stations for use in controlling handover; instead it cannot distinguish between base stations, and it is the responsibility of the base stations and controlling agent to track each mobile through the system. 0018 In the foregoing discussion, the term channel may mean a static frequency channel, or a hopping channel with defined hop frequencies and hop sequence. 0019 Preferably, the controlling agent processes the proximity measurement signals over a period of time to build up a control algorithm, which may take the form of a neighbourliness matrix linking each basestation with each of its neighbours in a ranked manner. 0020 The proximity measurement signals may comprise received quality and level measurements at the base stations, and at the mobile subscriber unit, and these measurements may be made in relation to channel request signals or other signalling or traffic signals transmitted by the mobile subscriber unit. These measurements may involve measurement of the carrier-to-interference (C/I) ratios. 0021 In an alternative embodiment of the invention, soft decision values and a soft decision sum generated at the basestations in decoding received signals such as channel request bursts are used as proximity values in place of, or as well as, the level or quality measurements of channel request bursts as described above. 0022 The soft decision sum is calculated in the equaliser in the basestation, and it is calculated before the other quality values, RXQUAL or C/I.
In the demodulation process, each bit of the burst is digitised to lie in a range (which is typically, though not necessarily, 0...7 inclusive). In this range, in the example given, the value 0 indicates the most confident binary 0, and the value 7 the most confident binary 1. Intermediate values indicate binary 0 or 1 with varying confidence, so that the value 1 is likely to indicate binary 0, but with less confidence than value 0, value 2 indicates 0, but with less confidence than 1, value 6 indicates 1, but with less confidence than value 7, and so on. This technique is well known in the art, and equalisers employing this technique are known as soft-decision equalisers. When coupled with decoders (such as the well-known Viterbi decoder) that can make use of the soft-decision values, such equalisers offer superior performance to hard-decision equalisers, where the demodulated bits are unequivocally assigned binary 0 or 1. If the soft-decision value is complemented based on the value of the most significant bit (MSB), so that all values with a 0 MSB are complemented and all values with a 1 MSB are not complemented, and then the MSB is removed from the value, then a set of values is derived (for the example range 0...7 given) of 3, 2, 1, 0, 0, 1, 2, 3. The properties of this set are clear: high values correspond to high levels of confidence in the assigned bit value. If these values are added for every bit in the burst, then a value is obtained, called here the soft-decision sum. The use of this soft-decision sum is attractive in this application since it can be related to signal strength and quality as a single parameter. It therefore folds real-world handover success factors into the statistics captured in the neighbourliness matrix or other control algorithm described below. For instance, handovers into strongly interfered channels are likely to fail, just as they are into channels where the neighbour signal is weak. The calculation of the parameter adds little overhead to the normal operation of the soft-decision equaliser. In the context of the invention, a high soft-decision sum signifies close proximity, and a low value indicates low proximity. 0023 Alternatively, or additionally, the proximity measurements (by any of the methods described above) may be collected by the basestations, by operating on the normal traffic and signalling bursts transmitted by a test mobile in a call as it is moved through the ensemble of basestations. 0024 Alternatively, or additionally, the proximity measurements (by any of the methods described above) may be collected by a test mobile, by operating on the beacon channel signals transmitted by all the basestations as the test mobile is moved through the ensemble of basestations. 0025 The latter two alternative or additional methods using a test mobile may be convenient methods to initialise a neighbourliness matrix, or to set it up for static use, though this is generally not preferred. Best use is made of the neighbourliness matrix from its dynamic properties, as described below. 0026 Some of the base stations broadcast a basestation control channel on a predetermined beacon frequency so that a mobile subscriber unit anywhere within the radio coverage of the system will receive the same control data. However, those base stations not broadcasting the basestation control channel are free to operate at frequencies other than the beacon frequency, and thus serve to provide increased traffic capacity.
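Stepping back to the soft-decision sum of paragraph 0022, the following is a minimal illustrative sketch in Python (the specification itself gives no code); it assumes 3-bit soft values in the range 0...7, and the function name and example values are invented for illustration:

```python
def soft_decision_sum(soft_values, bits=3):
    """Fold per-bit soft-decision values into a per-burst confidence sum.

    Implements the mapping described in paragraph 0022: values whose MSB
    is 0 are complemented, then the MSB is removed, so the range 0..7
    maps onto 3, 2, 1, 0, 0, 1, 2, 3. High totals mean high confidence.
    """
    msb = 1 << (bits - 1)               # 4 for 3-bit soft values
    full = (1 << bits) - 1              # 7, used for complementing
    total = 0
    for v in soft_values:
        if v & msb == 0:                # MSB is 0: complement the value
            v = (~v) & full
        total += v & (msb - 1)          # strip the MSB and accumulate
    return total

# A confidently received burst scores high; an ambiguous one scores low.
confident = [7, 0, 7, 7, 0, 0, 7, 0]   # all bits near the extremes
ambiguous = [4, 3, 5, 2, 4, 3, 4, 3]   # all bits near the decision point
print(soft_decision_sum(confident))    # 24 (8 bits x max confidence 3)
print(soft_decision_sum(ambiguous))    # 2
```

In the invention's terms, a burst measured this way at each basestation yields a single proximity figure that folds signal strength and quality together, which is exactly why the parameter is attractive here.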
0027 When the system of the invention is considered in the context of an external macro cellular radio network, with which it is to be compatible, then said predetermined beacon frequency must be selected to minimise interference with the external network. However, the frequencies used for the traffic channels can be planned separately, for example, using an IUO scheme. The beacon frequency can be transmitted at a lower power because it is transmitted by multiple base stations within the system, and thus interference with the external macro network is reduced. 0028 It will be appreciated that a mobile subscriber unit moving within the network of base stations will receive time-delayed copies of the control data from each basestation, but that the equaliser within the mobile subscriber unit will treat these as multi-path copies and reconstruct them in the usual manner. The mobile subscriber unit will therefore see the network of base stations as a single cell. 0029 The invention will now be described by way of example with reference to the accompanying schematic drawings of a cellular radio telecommunication system according to the invention as applied to an in-building network. BEST MODE OF CARRYING OUT THE INVENTION 0030 Consider the in-building network shown in the drawing. In this example, each basestation BS contains one transceiver (TX/RX). All the base stations are synchronised according to the invention. The network is configured so that a small number of base stations transmit the GSM beacon, so as to cover the whole area of interest at reasonably low power. In this example, we configure BS1 and BS3 to transmit a synchronised beacon. BS2 and BS4 are therefore free to be used according to the invention for the provision of additional traffic capacity, and radio channel measurement, as required. 0031 If this deployment of base stations were configured as a conventional GSM network, then each basestation would have to broadcast its beacon on a separate channel. The frequency re-use properties of this network in this traditional implementation are problematic, since the BCCH frequencies must be re-used on a low density pattern to prevent interference, and probably even lower density than for the macro-network, owing to the square law drop-off in power from each basestation. The channel requirements of this in-building network are uncomfortably large, and the interference problems induced by such a network on the external macro network may be unacceptable. 0032 In the illustrated example, four separate BCCH channels would be needed, one for each basestation BS, with normally a guard channel between each one; therefore the system requires 9 RF channels in total. Even though the base stations would be operating at low power, macro network base stations near the building would have to avoid these frequencies to ensure good reception for mobiles in the vicinity. Even in this extremely small example, therefore, nearly 15% of the operator's allocation of say 60 channels is devoted to this one installation (assuming two operators in the band). 0033 If, however, all the base stations are synchronised together according to the invention, and broadcast the same BCCH information and each beacon channel is tuned to the same RF channel, then the network will consume only one RF channel for BCCH in the whole in-building network.
Mobiles moving within the network will simply receive time-delayed copies of the BCCH information from each basestation, and the mobile equalisers will treat them as multi-path copies, and reconstruct them as normal. 0034 Such a network is similar to a repeater network. It provides good coverage at minimum interference, but only 7 channels of traffic capacity for the whole network. In order to increase its traffic capacity, extra transceivers (TX, RX pairs) are added at some or all of the base stations (BS2 and BS4 in the drawing), and a controller PC is provided according to the invention which is connected to the base stations via a packet-switched local area network LAN to direct traffic by the least interference route through the network, the controller incorporating a mobility management agent MMA, which gives it this functionality. 0035 The basic function of the MMA is to route the maximum amount of traffic (seen as an ensemble) via channels of acceptable quality, determined according to C/I ratios measured at mobile and basestation within the network. It is also a requirement of the network as a whole that it interferes by the least possible amount with the macro network lying beyond its boundaries. This requirement is met by selecting a routing algorithm that minimises the power transmitted by both mobile and basestation for the duration of the call. The algorithms by which we achieve this are described below. 0036 One of the key properties of the network which differentiates it from a repeater network is that even though the network appears to be a single cell from the point of view of the mobile (and the macro network), it is possible to assign a traffic channel to a single basestation within the network, and for all other base stations to remain unaffected. 0037 For all mobiles in idle mode, the network has no idea where the mobile is, or which basestation is nearest. However, as soon as a mobile makes a channel request (via the random access channel RACH), each basestation can report received level RXLEV and received quality RXQUAL for the RACH burst, and the MMA can select the route of the access grant channel AGCH. Note that since the network is synchronised, the AGCH (and any other channel for that matter) may be sent through any available basestation. Moreover it may be sent through more than one basestation to ensure that the target C/I ratio at the mobile is achieved. 0038 As the mobile moves through the network it is the responsibility of the MMA to direct base stations (both serving and non-serving) to make uplink RXQUAL and RXLEV measurements continuously to help track each mobile through the network. The MMA will automatically build and maintain a neighbourliness matrix linking each basestation in a ranked way, based on uplink measurements made by each of the base stations as traffic builds. Immediately after first switch-on, the neighbourliness matrix will be null. On the first assignment request by a mobile subscriber unit, each basestation will report the received strength and quality of the RACH burst, and these will be reported to the MMA. The MMA may combine the two measurements and update the neighbourliness matrix with them, or alternatively it may keep two matrices, one for signal strength, which corresponds to the static physical disposition of the basestations and their surroundings, and one for quality, which corresponds to the instantaneous interference properties of the network.
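Before turning to the example algorithm, it may help to make the burst identifier of paragraph 0012 concrete, since correlating uplink reports from different basestations depends on it. A minimal sketch, assuming the n-bit exclusive-OR variant; the 116-bit payload length is from paragraph 0012, while the function name and the choice n = 8 are illustrative assumptions, not from the specification:

```python
import random

def burst_identifier(payload_bits, n=8):
    """XOR-fold a burst payload into an n-bit identifier.

    Adjacent n-bit words of the payload are exclusive-ORed together
    (the paragraph 0012 variant). Reports from different basestations
    carrying the same identifier can then be matched by the controlling
    agent; n is chosen so that accidental collisions are acceptably rare.
    """
    ident = 0
    for i in range(0, len(payload_bits), n):
        word = 0
        for bit in payload_bits[i:i + n]:   # pack the next n bits into a word
            word = (word << 1) | bit
        ident ^= word                        # fold the word into the identifier
    return ident

# Two basestations hearing the same 116-bit burst report the same code.
random.seed(1)
payload = [random.randint(0, 1) for _ in range(116)]
print(burst_identifier(payload) == burst_identifier(list(payload)))  # True
```

Note that a single bit error in the payload flips exactly one bit of the XOR-folded identifier, which matches the observation in paragraph 0012 that roughly two payload errors at limiting sensitivity produce roughly two identifier errors.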
0039 There are many possible routing algorithms that may be used by the MMA to route call and data traffic through the network. The simplest one may be nearest neighbour routing, where traffic data are routed through a single basestation with the best RF visibility of the mobile terminal. In order to make best use of possibly unused radio capacity within the local network, and to minimise interference with any exterior network, a minimum power routing algorithm may be used. In this method, downlink interference is minimised by routing the downlink traffic data through several base stations, and the downlink power level of each basestation is controlled by commands from the MMA, so that the target C/I ratio and minimum receive power criteria are achieved at the mobile terminal. By this method the theoretical minimum downlink interference level for any given basestation deployment is achieved. Uplink interference is minimised by nearest neighbour routing of the uplink traffic, and by commanding the mobile to transmit at the minimum power level so as to achieve the target receive signal quality and strength at the basestation receiver. In a busy network, the spare capacity required for minimum power routing may not be available, and so the method reduces to nearest neighbour routing. 0040 There are many possible algorithms for estimating the neighbourliness of base stations one to another. One example algorithm is described here, with reference to access bursts received by an in-building network of GSM base stations. 0041 Whenever a mobile station requests a dedicated channel, it sends an access burst on its RACH channel, which is always timeslot zero of the C0 channel. If the measurement made of the k-th access burst by the i-th basestation is m_i, then for each burst k there is a set of measurements { m_i }, one per basestation. 0042 These measurements are processed in order to update the neighbourliness matrix. An important prefilter is based on the maximum power observed in the network. If the power is above a certain threshold, then the burst originates nearby, and the measurement set has meaning. C_ij = Σ_k ( P_i · P_j ) 0043 Having established that the RACH burst originates within the physical coverage area of the network, then the measurements are used to estimate which base stations are nearest to each other. 0044 Where P_i is the generalised proximity measurement made on the k-th burst at basestation i. 0045 In this example, the measured burst is an access burst; but while this restriction is a convenient one for implementation, measurements can be made on any single burst that is identifiable at the respective basestations as originating from a single mobile. The sum is taken over values of k where the signal level at any one basestation i is above a certain threshold. 0046 This matrix captures the probability that two base stations i and j are in close RF proximity. C_ij will only be large if both P_i and P_j are large, which will be true only if both base stations i and j are close to the origin of the burst, and therefore to each other. The MMA preferably ages the measurements over which it calculates C and discards the oldest as newer ones are made.
This helps to keep track of changes in the physical layout and interference environment of the network, and secondly it aids normalisation, keeping the calculation set over a limited number of measurements. 0047 The neighbourliness matrix is maintained by keeping timeslot zero of all base stations as unoccupied as possible. In this way, as many transceivers in the network as possible are able to tune to C0 for timeslot zero and make uplink measurements on any RACHs transmitted during the operation of the network. 0048 The objective of the neighbourliness matrix is to give an a posteriori likelihood measure for the neighbourliness of base stations. By basing this on an average measure made on particular bursts transmitted by particular mobiles, a matrix will eventually be built based on real traffic from real mobiles moving through the network on real physical paths. The matrix in effect incorporates the probabilities of successful handovers between basestations. This is particularly directed at supporting internal handover between base stations, where there are no helpful measurements the mobile can make. 0049 As handover approaches, as detected by power budget, uplink quality or signal strength, or downlink quality or signal strength, the MMA uses the neighbourliness matrix to determine a new route for the traffic. If the currently serving base station is i, then it sorts the i-th column of C to generate an ordered list of possible neighbour base stations. The first entry in the list should correspond to the most likely neighbour, the second should be the next most likely, and so on. It then attempts to find free resources (timeslots in the GSM case) in the candidate neighbour, and having found them, will re-route the traffic to the new neighbour, reassigning the timeslot/hop parameters (intra-cell handover) if necessary. If no resources are free, then the search continues down the list until either free resources are found, a minimum value of neighbourliness is crossed, or the list is exhausted. 0050 When the system is not busy, it may be possible for the MMA to operate without the neighbourliness matrix, and instead simply rely on the latest set of measurements m_i from the mobile to choose the best new route, using the nearest neighbour algorithm. 0051 While all the beacon frequencies in the network are set to the same value, which is selected to minimise interference with the macro network, the frequencies to be used by traffic channels in the base stations are planned separately using a conventional IUO scheme. 0052 An important requirement of the network is to synchronise all of the base stations to the same GSM timebase. This can be achieved by many methods, for instance providing a single synchronisation signal along with the LAN data connection to each of the base stations.
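To tie paragraphs 0041 to 0049 together, here is a minimal sketch of the neighbourliness bookkeeping, assuming the product form C_ij = Σ_k ( P_i · P_j ) given above; the class name, threshold value and measurement figures are all illustrative placeholders rather than values from the specification:

```python
from collections import defaultdict

class NeighbourlinessMatrix:
    """Toy model of the MMA's neighbourliness bookkeeping."""

    def __init__(self, power_threshold=0.5):
        self.power_threshold = power_threshold
        self.C = defaultdict(lambda: defaultdict(float))

    def update(self, measurements):
        """Accumulate one RACH burst's proximity measurements {i: P_i}.

        Prefilter (paragraph 0042): if no basestation hears the burst
        above the threshold, it is taken to originate outside the
        network and the measurement set is discarded.
        """
        if max(measurements.values()) < self.power_threshold:
            return
        for i, p_i in measurements.items():
            for j, p_j in measurements.items():
                if i != j:
                    self.C[i][j] += p_i * p_j   # large only if both are large

    def candidates(self, serving):
        """Rank handover targets for the serving basestation (paragraph 0049)."""
        column = self.C[serving]
        return sorted(column, key=column.get, reverse=True)

m = NeighbourlinessMatrix()
# Bursts heard strongly by BS1 and BS2 mark them as close neighbours.
m.update({"BS1": 0.9, "BS2": 0.7, "BS3": 0.1, "BS4": 0.05})
m.update({"BS1": 0.8, "BS2": 0.6, "BS3": 0.2, "BS4": 0.1})
print(m.candidates("BS1"))  # ['BS2', 'BS3', 'BS4']
```

The ageing of old measurements (paragraph 0046) and the search down the ranked list for free timeslots (paragraph 0049) are omitted here for brevity.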
Welcome to the Translational Research Evaluation TIG Website! Join a TIG Sub-Committee today! Overview The Translational Research Evaluation (TRE) TIG provides a community for all evaluators interested in the evaluation of translational research initiatives. It is hoped that through this community of practice, members will be able to share the specific and unique challenges they face related to all aspects of evaluation of clinical and translational sciences, including (but not limited to) education, frameworks and models, innovative applications, novel methods, data collection techniques and research designs. The TRE TIG offers its members – evaluators, practitioners, program managers and other stakeholders – an opportunity to share mutual interests, evaluation expertise, resources and materials. Purpose The over-arching goal of the TRE TIG is to explore current, state-of-the-art evaluation approaches and applications, foster communication among evaluators and provide opportunities to discuss existing and emerging techniques to evaluate translational research. It is hoped that this TIG, and the community of practice that it fosters, will help members identify and disseminate successful strategies to overcome challenges associated with translational research evaluation. Topics of Interest Join the Translational Research Evaluation e-Group Today! This e-Group will be the primary means of ongoing communication among TIG members. It functions both as a web interface and as a traditional e-mail-based listserv. You can set how frequently you would like e-Group messages to be sent to your e-mail (real time, daily archive, etc.), you can respond from your e-mail, and so on. We'll use this e-Group to send updates about the TIG, discuss potential projects, share thoughts about how to evaluate translational research, and more. Please make sure you join this group. The process only takes a few seconds and it will be your line of communication to our group. 1) Go to the AEA homepage (www.eval.org) and log in 2) Make sure you have joined the TRE TIG (you can do this by selecting the Members Only tab and clicking on Update TIG Selections) 3) Select the Our Community tab 4) Click on Groups, Forums Subscriptions 5) Click on Communities, All Communities 6) In the "20 per page" dropdown box, select "All"
http://comm.eval.org/translationalresearchevaluation/home
Before its April release, fans have already seen numerous fan theories regarding Avengers: Endgame. One such fan theory suggests that our heroes will find themselves travelling amongst alternate realities to undo the events of Infinity War. Originally posted to /r/Marvelstudios by /u/CaptainKyloStark, the theory states that the surviving Avengers will travel across other realities to recruit back the heroes who were dusted away by the Decimation. The user believes that all those dusted by Thanos' snap are actually dead. The remaining heroes will now have to find a way to flip between realities, something Dr Strange hinted at as he viewed the 14,000,605 potential futures in last year's Avengers: Infinity War. The heroes would also travel to multiple realities and pick up that universe's version of the heroes they know, such as an alternate universe version of Clint Barton who has become Ronin, or a universe where Pepper Potts originally dons the iron suit to become "Iron Woman". The theory adds that the heroes travelling amongst alternate realities is unnatural, and that's where "the greater threat" comes into play. Since Tony Stark, Cap and company will be travelling the multiverse, the user believes that one of Marvel's cosmic entities, such as The Living Tribunal or Infinity, will make their MCU debut in order to restore the multiverse to its previous state.
https://www.animatedtimes.com/endgame-fan-theory-suggests-heroes-will-travel-into-alternative-timelines/
What is interesting about this film is the way it plays with the traditional meet-cute formula in act one with a couple that's a bit more mature: Luuk is divorced with children and Masha has never been in a relationship. The typical romcom plot exhausts most of its plot points in this entertaining, funny and charming first act, and then the aforementioned life-changing event alters the path and the genre of the film. This is the film's strongest and most unique point. The sequence when Luuk is diagnosed, and the immediate fallout thereof, is the next strongest section of the film and is ultimately what buoys it over the finish line. However, the film does lose some of its momentum as it pulls into its inevitable conclusion; the button on the story is strong and well-earned, but it does lose a lot of what it could've been in getting there. Much of what slows it down is that the tension amongst makeshift family members ends up being as frustrating for us as it is for Luuk, which does help us identify with him; but it seems that, emotional as those scenes are, the ex-wife/girlfriend tension is ill-timed, repetitive and unfortunate. Kim van Kooten is highly effective in this film and her charms are equally evident in both distinct portions of the narrative. Though his character here follows a similar trajectory to his in Time of My Life, Koen De Graeve is wonderful here, playing a different kind of man in a similar process. In the Heart was released in the Netherlands in January. Should it hit other markets, it is worth looking into if you are intrigued. While I lamented what it could have been, it is still an enjoyable experience with memorable performances.
https://themovierat.com/2015/02/08/
Keto Maple Bacon Donuts

Keto Maple Donuts are the best of all possible worlds. Delicious low carb baked donuts with a rich sugar-free maple glaze and crispy bacon bits.

Ingredients

Donuts
- 3 large eggs, room temperature
- 1/4 cup Swerve Brown
- 1/4 cup Swerve Granular
- 1/4 cup almond milk, room temperature
- 1/4 cup butter, melted but not hot
- 1 teaspoon maple extract
- 1 cup almond flour
- 1/4 cup coconut flour
- 2 teaspoons baking powder
- 1/4 teaspoon salt

Maple Glaze
- 1/3 cup powdered Swerve
- 2 tablespoons heavy whipping cream
- 1 teaspoon maple extract
- water to thin
- 4 pieces bacon, diced and cooked until crisp

Directions
- Preheat oven to 325F and grease a donut pan well. (If your donut pan only holds 6, you will need to work in batches.)
- In a high-powered blender or a food processor, combine the eggs, sweeteners, almond milk, butter, and maple extract. Blend until smooth.
- Add the almond flour, coconut flour, baking powder, and salt and blend until well combined and smooth. Spoon the batter into a pastry bag and fill the donut pan slots about 3/4 full.
- Bake 15 to 20 minutes, until the donuts are golden brown around the edges and firm to the touch. Remove and let cool in the pan 10 to 20 minutes, then flip out onto a wire rack to cool completely.
- In a medium bowl, whisk together the powdered sweetener, whipping cream, and maple extract. Whisk in a teaspoon of water at a time, until it's a dippable consistency.
- Dip each donut in the glaze and place back on the rack. Sprinkle with the chopped bacon and let set about 30 minutes.
https://low-carb-keto-lifestyle.com/2021/01/25/keto-maple-bacon-donuts/
Immigration Reform “Office Hours” Week Two Momentum is building each day for immigration reform, with a full-throated legislative debate set to start in the coming weeks. Today, America’s Voice Education Fund and fellow immigration experts held the second in a series of weekly press briefings, or Immigration Reform “Office Hours.” Each week, a different and diverse group of speakers will share the latest information on the players, politics, legislation and other developments coming down the pipeline as the debate in Congress moves forward. Moderated by Lia Parada, Legislative Director at America’s Voice Education Fund, today’s call featured discussion and analysis from Cristina Parker, Communications Director for Border Network for Human Rights (BNHR); Donna De La Cruz, Press Secretary at Center for Community Change; Josh Stehlik, Employment Attorney at National Immigration Law Center; and David Leopold, Executive Committee Member, American Immigration Lawyers Association. During the call they discussed this week’s immigration happenings on Capitol Hill and at the White House, as well as other immigration-related events. Speakers touched on the following events from the past week: House Border Subcommittee Hearing on What Makes a Secure Border? – Both Republicans and Democrats agreed that money and/or the amount of personnel working at the border are not accurate ways to assess border security. We need smarter border enforcement that is based on measurable outcomes and metrics. These same metrics show right now that the border is more secure than ever. Witnesses and members of Congress stressed that citizenship for the undocumented, as part of broad immigration reform, is an important way to improve border security—not something that should be held hostage to irrational fears or impossible standards that demand the government all but seal the border before any positive reforms are made. House Immigration Subcommittee Hearing on Agriculture and the H2A Visa Program – The hearing focused on the importance of immigration reform to American agriculture. While there were important differences between agri-business owners and farm worker advocate Giev Kashkooli (UFW) on what a program for future farm labor should include, witnesses agreed on the contributions that farm workers (up to 90% of whom are undocumented) make to their industry and to the American economy, as well as the importance of allowing these hardworking immigrants to step out of the shadows and achieve the American dream. House Immigration Subcommittee Hearing on “How E-Verify Works and How It Benefits American Employers and Workers” – The hearing made it apparent that expanding E-Verify is yet another proposal that would not work unless coupled with full immigration reform that includes a path to citizenship. Witnesses pointed out errors with the current system that leave American workers vulnerable to being blocked from getting a job without warning or explanation. It was clear that expanding E-Verify would only be worth the risks and vulnerability if coupled with reforms to the program, as well as the passage of comprehensive immigration reform that would allow 8 million undocumented workers to contribute to the economy. POTUS meeting with Senators McCain and Graham – On Tuesday, February 26th, Senators McCain and Graham met with President Obama to discuss immigration reform. All three elected officials came out of the meeting praising each other’s commitment to pass reform.
The meeting is just one more example of bipartisanship and growing momentum for immigration reform. Border Advocacy Day in Washington, DC – This week elected officials, faith and business leaders, as well as experts from both the northern and southern border, joined the Border Network for Human Rights (BNHR) in DC for an advocacy day. Their goal was to talk to Members of Congress and set the story straight about what is really going on at the border and the type of enforcement that is needed. As border residents know from personal experience, we don’t need more enforcement; we need smarter enforcement, meaningful oversight, real accountability, and a path to citizenship for immigrants without papers. At a press conference on Wednesday, leaders joined 200 people who live and work in border areas to explain the reality of life at the border and asked Congress to pass integrated immigration reform that addresses the issues in border communities with fairness and compassion. Keeping Family Together Bus Tour – The tour began with its national launch this past week. It consists of seven regional tours that will travel across the country telling their stories and elevating the importance of human dignity and family in the immigrant rights struggle. Advocates, families and allies will participate throughout the tour and ask Congress to pass an immigration reform that includes a path to citizenship and keeps the importance of family unity at its core. For those that can’t join the tour physically, FIRM has created an interactive online tour that allows participants to join the bus tour through actions. For more information on the Keeping Family Together Bus Tour, click here. ICE Release of Immigration Detainees – Earlier this week ICE released hundreds of people from immigration detention centers across the country. According to ICE officials, the people released were low priority, non-criminal cases. Officials from the White House explained later that they were not aware of ICE’s decision. This is just one more example of ICE’s rogue behavior and refusal to follow its own department’s directives. According to a DHS prosecutorial discretion memo, many of these immigrants with low priority cases shouldn’t have been detained in the first place. This week’s events highlight the lack of a functional immigration system and the need to pass immigration reform, not play politics with the issue. Momentum for immigration reform keeps growing. There is clear bipartisan consensus to pass accountable immigration reform that legalizes 11 million undocumented immigrants. Advocacy organizations are planning campaigns around the particular details that will be included in any immigration reform bill, and they continue to put pressure on elected officials to pass reform that addresses the issues, is based on facts and keeps families together. We’ll keep you informed through weekly briefings. For a recording of today’s call, click here.
Introduction: The issue of immigration reform has been a contentious one in the United States for decades. President Joe Biden campaigned on a promise to enact significant reforms to the nation’s immigration system. However, despite his promises and early executive actions, achieving comprehensive immigration reform still faces significant challenges. Biden’s Early Actions on Immigration President Biden took several executive actions aimed at reversing some of the Trump administration’s harsh immigration policies. These actions included ending the travel ban on several Muslim-majority countries, halting the construction of the border wall, and reinstating the Deferred Action for Childhood Arrivals (DACA) program. While these actions were a welcome relief for many immigrants and advocates, they were only the first step in what promises to be a long and difficult journey towards comprehensive immigration reform. Challenges to Immigration Reform Despite President Biden’s promises to enact significant immigration reforms, there are several significant challenges standing in the way of progress. One of the most significant obstacles is the deeply entrenched partisan divide on the issue. Republicans have long opposed any efforts to provide a pathway to citizenship for undocumented immigrants, while Democrats have pushed for more lenient policies. With the Senate currently split 50-50, passing any comprehensive immigration reform legislation would require bipartisan support, which has been difficult to achieve. The Role of the Courts Another challenge to achieving comprehensive immigration reform is the role of the courts. The Trump administration enacted several policies that were challenged in court, and many of those cases are still working their way through the legal system. While the Biden administration has reversed some of these policies, court challenges could still delay or derail any efforts to enact significant immigration reforms. The Impact of the Pandemic The COVID-19 pandemic has also complicated efforts to achieve comprehensive immigration reform. The pandemic has forced the Biden administration to focus on more immediate issues related to public health and economic recovery. Additionally, the pandemic has made it more difficult to process and screen immigrants at the border, leading to a backlog of cases and potential delays in processing. Conclusion: Despite President Biden’s promises to enact significant immigration reform, achieving comprehensive reform remains a daunting challenge. The deeply entrenched partisan divide, court challenges, and the ongoing impact of the pandemic all pose significant obstacles. However, the need for reform remains urgent, as millions of immigrants continue to live in uncertainty and fear. It will take continued efforts from advocates, lawmakers, and the public to push for meaningful change and a more just and compassionate immigration system.
https://www.iitsbusiness.com/2023/03/04/despite-biden-bidenoremus/
How Organizational Values Impact Business and Why It Matters Organizational values are one of those conceptual things that subtly influence every other element of the company. So, in a way, company values define decision-making and how the company handles itself externally and internally. Because of that, clearly defined organizational values are critical for maintaining an effective employer brand. In our previous articles, we have explained the importance of company culture and the ways mission, vision, and values define the company. This article will go into detail regarding company values and why they are critical for success. What are Organizational Values? In one way or another, organizational values (aka company values) refer to a set of guiding principles that provide a framework for realizing the company’s mission and vision. - On the one hand, company values describe the manner of interaction with clients; - On the other hand, values outline the ways employees handle themselves and treat each other. Both aspects combine when it comes to attracting candidates. As part of employer branding, company values provide a conceptual backbone for the brand presentation. Clearly defined, well-presented, and relatable organizational values are critical at the consideration and application stages. When done right, they provide a gateway for the candidate into the company’s mindset, and this impression weighs heavily in the candidate’s decision. What is the Real Purpose of Organizational Values? You can break down the purpose of organizational values into several integral elements. Let’s take a look at them one by one. Differentiate from the Competition Company values play an instrumental role in shaping a company’s value proposition for potential clients and employees. They make it tangible and relatable. Dealing with the company is not just about pragmatic value; it is also about being treated with respect in a mutually beneficial partnership. Organizational values inform and guide that aspect. - After all, any company can write about things in the abstract. For example, how they deliver high-quality service on time and bring the business to the next level. - Or in the case of recruitment: “the company is more than just a job. It is a place for self-realization”. These are good ideas, but to work, they need fleshing out. In contrast, you can emphasize practical things that make the company stand out from the competition. For example: - A can-do approach when it comes to handling challenges; - Team synergy in setting goals, breaking them down into objectives, and gradually achieving results; - Continuous learning – a mindset of taking something valuable from every experience and implementing it in subsequent work. This approach to organizational values is more descriptive from the outside. When you read it, you can understand what the company is about, decide whether you share such values, and determine whether you want to collaborate with them. Generate Candidate Engagement We’ve touched on this in the previous section, but it is important to reiterate. Employer branding defines the perception of the company as a workplace. This perception is critical for candidate attraction. You can’t just make people want to work for you because you offer a higher salary and better job security. - It matters, but that’s not the only reason candidates want to work for this company and not the other one.
- There are always bigger companies that can offer bigger and better compensation and benefits packages. - That's why emphasizing organizational values helps attract suitable candidates who share your values and can potentially fit into the company. Culture fit is one of the most critical aspects of establishing an effective recruitment process and guaranteeing hiring success. But you need an entry point, and company values play that role. That's where the employer value proposition kicks in. - EVP generates engagement while values perpetuate it. Here's an example: - The job posting attracts a candidate. He's considering applying but is hesitant before he knows more about the company. "What's in it for me?" he asks. - He looks through the website and reads the values section. It clicks, he shares some of the values, and now he thinks he would like to work for this company. So he clicks "apply." In this scenario, identifying with the organizational values moved the candidate from consideration to decision. Similarly, clearly defined company values simplify candidate assessment for culture fit. This approach saves time and effort in determining suitable candidates for the roles. Boost Employee Motivation Handling employee motivation is tricky. There is an entire cottage industry dedicated to various methods of managing motivation and recognition. But at its core, employee motivation comes down to the organizational values and how they perpetuate engagement and motivation on a conceptual level. Here's how it works: - Given that company values set guiding work principles, they also provide a foundation for the work drive that manifests the company's vision and mission. - In a nutshell: the reason for working determines the quality of work. That's why the practicality of organizational values matters. You can write about "changing the world" and "challenging yourself" all you want, but these are abstract things you can't really apply in the trenches. However, you can formulate values in more practical terms. For example: - Thinking outside the box – to encourage not sticking to the formula; - Can-do approach – to perpetuate initiative; - Synergy – to facilitate collaborative effort. This way, it is more tangible and directly refers to the day-to-day work employees are doing. Guide Decision-Making Organizational values provide a conceptual backbone that expresses the company's mission and vision in practical terms. As such, values directly manifest themselves in employees' decision-making. While values are by no means an instruction, they can suggest courses of action in various scenarios. For example, consider emphasizing rationality as a company value. - The decision-making process is based on logic and necessity, both in a particular task and in the general goals. - Obviously, decision-making needs to be rational, but it is also essential to accentuate it at the level of company values. Thus, its very presence provides a balancing act for decision-making. - So, if there is a stalemate or even a conflict in the back of your mind, you know you can roll back, look at things rationally, and determine a reasonable solution. In a way, values as decision-making tools are an expression of trust by the company. You can explain it with a sentence like this: - "Here are the ground rules. We trust your skills, experience, and expertise to make the right judgments within this framework." In conclusion Organizational values play an essential role in defining the company's perception both internally and externally.
Values illustrate the way the company handles things and moves towards its goals. As such, it is critical to put effort into expressing company values in a way that helps people understand what the company is about and why it is worth working with. - In our next article, we will showcase how to determine organizational values and make them truly effective. If you need help with fine-tuning your company's values or want to overhaul the way your organizational values are presented – our employer branding consultants can help.
https://cna-it.com/blog/organizational-values-explained/
Bar Harbor, ME I am interested in evaluating how genetic susceptibility and resilience affect regional and cell-type composition in relation to behavioral phenotypes of Alzheimer's disease model mice. Since graduating from Allegheny College in 2018 and entering the Kaczorowski Lab, I have had the opportunity to study the presence of genetic modifiers that influence the onset and progression of Alzheimer's disease. My work focuses on creating individual brain maps from a large panel of AD-BXD strains, which may elucidate significant contributing factors in determining individual risk or resilience to AD pathology, cognitive symptoms, and non-cognitive symptoms.
https://www.jax.org/people/brianna-gurdon
With the exhibition Thomas Ruff, the Kunstsammlung Nordrhein-Westfalen presents a comprehensive overview of one of the most important representatives of the Düsseldorf School of Photography. The exhibition ranges from series from the 1990s, which document Ruff's unique conceptual approach to photography, to a new series that is now being shown for the first time at K20: For Tableaux chinois, Ruff drew on Chinese propaganda photographs. Parallel to Thomas Ruff's exhibition, the Kunstsammlung Nordrhein-Westfalen is also presenting highlights from the collection at K20 under the title Technology Transformation. Photography and Video in the Kunstsammlung, which also deals with artistic photography and technical imaging processes in art. "With his manipulations of photographs from many different sources, Thomas Ruff comments in an incredibly clever way on how we see images in a digitalized world. Through his virtuoso handling of digital image processing, he confronts us with a critical examination of the image material he uses and its historical, political, and epistemological significance. Some of his most important series are represented in our collection, and we are very proud to dedicate a large-scale exhibition at K20 to this prominent representative of the Düsseldorf School of Photography," states Susanne Gaensheimer, Director of the Kunstsammlung Nordrhein-Westfalen. Thomas Ruff (b. 1958) is one of the most important international artists of his generation. Already as a student in the class of the photographers Bernd and Hilla Becher at the Düsseldorf Academy of Art in the early 1980s, he chose a conceptual approach to photography which is evident in all the groups of works within his multifaceted oeuvre and determines his approach to the most diverse pictorial genres and historical possibilities of photography. In order not to tie his investigations in the field of photography to the individual image found by chance, but rather to examine these in terms of image types and genres, Thomas Ruff works in series: "A photograph," Ruff explains, "is not only a photograph, but an assertion. In order to verify the correctness of this assertion, one photo is not enough; I have to verify it on several photos." The exhibition at K20 focuses on series of pictures from two decades in which the artist hardly ever used a camera himself. Instead, he appropriated existing photographic material from a wide variety of sources for his often large-format pictures. Thomas Ruff's contribution to contemporary photography thus consists in a special way in the development of a form of photography created without a camera. He uses images that have already been taken and that have already been disseminated in other, largely non-artistic contexts and optimized for specific purposes. The modus operandi and the origin of the material first became the subject of Ruff's own work in the series of newspaper photographs, which were produced as early as 1990. The exhibition focuses precisely on this central aspect of his work. The pictorial sources that Ruff has tapped for these series range from photographic experiments of the nineteenth century to photos taken by space probes. He has questioned the archive processes of large picture agencies and the pictorial politics of the People's Republic of China.
Documentations of museum exhibitions, as well as pornographic and catastrophic images from the Internet, are starting points for his own series of works, as are the product photographs of a Düsseldorf-based machine factory from the 1930s. They originate from newspapers, magazines, books, archives, and collections or were simply available to everyone on the Internet. In each series, Ruff explores the technical conditions of photography in the confrontation with these different pictorial worlds: the negative, digital image compression, and even rasterization in offset printing. At the same time, he also takes a look at the afterlife of images in publications, archives, databases, and on the Internet. For Tableaux chinois, the latest series, which is being shown for the first time at K20, Ruff drew on Chinese propaganda photographs: products of the Mao era driven to perfection, which he digitally processed. In his artistic treatment of this historical material, the analog and digital spheres overlap; and in this visible overlap, Ruff combines the image of today's highly digitalized China with the Chinese understanding of the state in the 1960s and its manipulative pictorial politics. From the ma.r.s. series created between 2010 and 2014, there are eight works on view that have never been shown before, for which Ruff used images of a NASA Mars probe. Viewed through 3D glasses, the rugged surface of the red planet folds into the space in front of and behind the surface of the large-format images. Moving through the exhibition space and comprehending how the illusion is broken and tilted, one is introduced to Ruff's concern to understand photography as a construction of reality that first and foremost represents a surface—a surface that is, however, set in a historical framework of technology, processing, optimization, transmission, and distribution. His hitherto oldest image sources are the paper negatives of Captain Linnaeus Tripe. When Tripe began taking photographs in South India and Burma, today's Myanmar, for the British East India Company in 1854, he provided the first images of a world that was, for the British public, both far away and unknown. Since then, the world has become a world that has always been photographed. It is this already photographed world that interests the artist Thomas Ruff and for which he has also been called a 'historian of the photographic' (Herta Wolf). The exhibition therefore not only provides an overview of Ruff's work over the past decades, but also highlights nearly 170 years of photographic history. In each series, Ruff formulates highly complex perspectives on the photographic medium and the world that has always been photographed. Further series in the exhibition are the two groups of works referring to press photography, Zeitungsfotos (1990/91) and press++ (since 2015), the series nudes (since 1999) and jpeg (since 2004), which refer to the distribution of photographs on the Internet, as well as Fotogramme (since 2012), Negatives (since 2014), Flower.s (since 2019), Maschinen (2003/04), m.n.o.p. (2013), and w.g.l. (2017)—and, with Retouching (1995), a rarely shown series of unique pieces. Thomas Ruff was born in Zell am Harmersbach in 1958 and studied with Bernd and Hilla Becher at the Düsseldorf Academy of Art from 1977 to 1985. From 2000 to 2005, he was himself Professor of Photography there.
He first received international attention in 1987 with his series of larger-than-life portraits of friends and acquaintances who, as in passport photographs, gazed apathetically into the camera. In 1995, he represented Germany at the 46th Venice Biennale, together with Katharina Fritsch and Martin Honert. His works are collected internationally and are represented in numerous institutional collections.
https://loeildelaphotographie.com/en/kunstsammlung-nordrhein-westfalen-thomas-ruff-df/
Amazing Arizonans: Faridodin Lajvardi, the teacher who found global acclaim but returned to the classroom On the stage of the Carl Hayden Community High School auditorium, the comedian George Lopez told students what it was like to play a character partly based on one of their teachers, Faridodin Lajvardi, who stood nearby looking bashful. "I wore Fredi's watch," Lopez said, using Lajvardi's nickname. "I tried to grow my facial hair like Fredi." But mainly, he said, he wanted the movie "Spare Parts," which was being premiered at the school in January, to capture the teacher's devotion to students in his robotics club. The film tells the story of how four Carl Hayden students won an underwater robotics competition against colleges, including the Massachusetts Institute of Technology. "One thing I did honor," Lopez said, "was the fact that Fredi loved these boys." Carl Hayden's 2004 victory seemed made for Hollywood. And in 2015, the fictional version of the tale played in movie theaters nationwide. The film, released by Lionsgate, starred Lopez, Marisa Tomei, Jamie Lee Curtis, Esai Morales and the married former teen stars Carlos and Alexa PenaVega. The DVD edition was released in May. The tale had received national attention before. It was the subject of an article in "Wired" magazine in 2005. That publication's readers donated money to help the four students, three of whom entered the country illegally from Mexico, attend college. The victory was also the subject of a 2014 documentary, "Underwater Dreams." A truncated version of that film aired on the MSNBC and Telemundo networks that year. But the sheen of seeing his story told in a major motion picture, even one that box office returns indicate was not widely seen, brought Lajvardi new attention. He spent the year on the lecture circuit, speaking at conferences and summits in Seattle, Las Vegas, Florida and other cities. He also attended a screening of "Underwater Dreams" at the White House in April. And he spoke at a TEDx conference in Hong Kong in October. "The kids I have after school (at the robotics club), that's all we do is inspire and motivate and push," Lajvardi said. "If you can inspire and motivate kids, the rest of the education problems will take care of themselves." Lajvardi said one of the key points he makes in his speeches is that teachers need to find a way to motivate students. Once a student wants to learn, they will. Lajvardi spent much of 2015 putting that theory into practice, just as he has most other years since he started the Carl Hayden robotics club with another teacher, Allan Cameron, who has since retired. In "Spare Parts," Lopez plays a character who is a combination of the two, named Fredi Cameron. At Carl Hayden, Lajvardi leads the robotics club before school and into the early evening after school, and on most weekend days. It is in the robot room where Lajvardi teaches through solving problems. Students build robots to perform various tasks — sometimes for competitions, sometimes for fun. The team still scrapes for funds, despite the Hollywood notoriety. Lajvardi, though, said he's OK with his students still needing to approach donors and still needing to get creative with supplies to stretch available funds. "To some degree, I think it's good the kids have to struggle a little," he said. In late December, Lajvardi was helping his students create an underwater robot that could be used to capture images while scuba diving.
Lajvardi exchanged Facebook messages with a former student, Eddie Fernandez, who was giving advice on reducing signal loss. Fernandez, who graduated from Arizona State University with an engineering degree in December, said Lajvardi's robotics program prepared him for the rigors of college. "During his program I learned how to learn," Fernandez said. "He made obstacles transparent." Upon graduation, Fernandez was hired by ASU Polytechnic to do research in the lab. It is success stories like Fernandez's that motivate Lajvardi. He has a collection of photos of former students at his desk and can rattle off the colleges they have attended: Stanford University, New York University, Willamette University, the Air Force Academy. Lajvardi also discusses current events with the club as they tinker with robots, which is why he brought up the idea expressed by Supreme Court Justice Antonin Scalia in December that minority students might do better at colleges that don't demand as much. Lajvardi said students in the robot room — there voluntarily to apply science, math and engineering to real-world problems — laughed at the notion. So did Lajvardi, who has shown that students can rise to the challenges he presents. "I think you're only limited by whether you think you can do something or not," he said. Lajvardi said that even if Scalia's argument had merit, it should be based on socioeconomic status, not race. But at his school, in inner-city Phoenix, with most kids on free or reduced-price lunch and low standardized test scores, he finds success.
https://www.azcentral.com/story/news/local/best-reads/2015/12/30/arizona-teacher-faridodin-lajvardi-big-screen-humble-classroom-motivate-students/78045498/
Join Us or Support Us! This year, we will be participating in the Multiple Myeloma Research Foundation (MMRF) Team for Cures 5K Walk/Run for our 10th year! Team Noreen has committed to raising awareness and funds to accelerate finding a cure for multiple myeloma, which is the second most common blood cancer and is incurable. Your generosity truly makes a difference, as funds raised at this event have helped to: - Nearly triple patient survival - Deliver ten new treatments in a decade - Launch over 80 new clinical trials At this year's Tri-State race our mom will be honored. The Multiple Myeloma Research Foundation (MMRF) is delighted to recognize Noreen Keating as the MMRF Spirit of Hope Honoree at the 2019 MMRF Team for Cures: Tri-State 5K Walk/Run. This award is presented at every 5K Walk/Run to a patient, caregiver, or family who inspires hope through their resilience, perseverance, and dedication to the MMRF and its mission. We hope you can be there to cheer for Noreen or support Team Noreen in our effort to raise money for the MMRF, an organization that has helped bring about the drugs that have helped save our mom.
https://walkrun.themmrf.org/tri-state/Team/View/106008/Team-Noreen
Northwestern enters into many types of research agreements with sponsors, including the federal government, foundations, other universities, and industry. These relationships are governed by agreements between Northwestern and the sponsor that define each party's rights and obligations in the relationship. Sponsored research agreements, including incoming subcontracts, usually require negotiation between the University and the sponsor, and always require the approval and signature of Sponsored Research. Non-Funded Agreements Researchers often collaborate on research or share research tools with other scientists or institutions without receiving funding. For many non-funded (or unfunded) collaborations, a written agreement is beneficial or necessary. Non-funded agreements set out expectations, terms, and requirements that protect the interests of the investigators and the participating organizations. Often these agreements are incorporated within the funded agreements described above. It is important to note that a non-funded agreement may involve the provision or exchange of something of value, and the relevant department will need to determine whether what Northwestern provides under the agreement is commensurate in value with what we are receiving. Further, non-funded agreements sometimes contain restrictive language that may conflict with basic academic rights, intellectual property rights, and/or other terms that must be negotiated by Sponsored Research. Frequently Used Agreements Information about the most commonly used types of agreements can be found on the pages listed below and in the navigation at left. Funded and non-funded agreements not listed below are described in a summary of other agreements.
https://osr.northwestern.edu/agreements/types
Funders and the wider research community must avoid the temptation to reduce impact to just things that can be measured, says Liz Allen of the Wellcome Trust. Measurement should not be for measuring's sake; it must be about contributing to learning. Qualitative descriptors of progress and impact alongside quantitative measurements are essential in order to evaluate whether the research is actually making a difference. Learning about what works and what doesn't is an important part of being a science funder. Take the Wellcome Trust, for example: we have multiple grant types spread over many different funding schemes and need to make sure our money is well spent. Evaluating what we fund helps us to learn about successes and failures and to identify the impacts of our funded research. Progress reporting while a grant is underway is a core component of the evaluation process. Funders are increasingly taking this reporting to online platforms, which come with the ability to easily quantify outputs, compare funded work, and identify trends. As useful as these systems are, they come with an inherent danger: oversimplifying research impact and reducing it to things we can count. At the Trust, our attitude recalls a quote attributed to Albert Einstein: "Everything that can be counted does not necessarily count and everything that counts cannot necessarily be counted." Including qualitative descriptors of progress and impact, alongside quantitative data, is integral to our auditing and reporting. Some quantifiable indicators, such as bibliometrics, do help to tell us whether research is moving in the right direction; the production of knowledge and its use in future research, policy, and practice is important. However, it's not about how many papers have been published or how many patents have been filed – it's about what's been discovered. Without narrative and context alongside 'lists' of outputs, how can you know whether research is making a difference? While different funders place their emphasis on different things when deciding which research to fund, as a sector we need to be responsible and avoid the creation of perverse incentives that distort the system. If funders send out the message that what's important is a lot of papers or collaborations, then those seeking our funding will tell us that they've produced a lot of papers or collaborations. The research community needs to be pragmatic in moving the field of impact tracking and evaluation forward. We need to develop better qualitative tools to complement more established indicators of impact – traditional bibliometric indicators, such as citations, can now be complemented with more qualitative tools such as those provided by F1000 Prime. The Trust is also exploring the value that altmetrics can bring. Other channels for the dissemination of research, such as Twitter, are becoming increasingly popular among researchers, and it's important that we understand their role. Most of all, we should not forget why we fund research: to make a difference. In gathering reports from those we fund, we should encourage openness and the sharing of successes and failures alongside the products and outputs of research. At the core of evaluation is learning: how and when does research lead to impact, and how might we use our funds more effectively? A research impact agenda that encourages funders to merely count is clearly at odds with that. This article was first published on the Wellcome Trust blog.
Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. A social scientist by training, Liz Allen leads the Evaluation team at the Wellcome Trust. At Wellcome, Liz is responsible for developing methodologies and implementing approaches to support the monitoring and evaluation of the impact of research and funding initiatives. Liz is also a member of the Board of Directors of the ORCID initiative (Open Researcher and Contributor ID), www.orcid.org – a not-for-profit initiative which intends to create an international and open system of unique and persistent researcher IDs.
https://blogs.lse.ac.uk/impactofsocialsciences/2013/02/14/measure-for-measurings-sake/
...markets and facilitate capital formation" (SEC. The Investor's Advocate: How the SEC Protects Investors, Maintains Market Integrity, and Facilitates Capital Formation. From http://www.sec.gov/about/whatwedo.shtml#create). Currently, the SEC is responsible for administering the Securities Act of 1933, the Securities Exchange Act of 1934, the Trust Indenture Act of 1939, the Investment Company Act of 1940, the Investment Advisers Act of 1940, the Sarbanes-Oxley Act of 2002, and the Credit Rating Agency Reform Act of 2006. Congress has allowed the SEC to bring civil enforcement actions against individuals or companies in violation of securities law. It is the primary body regulating public companies and their activities in the United States. The SEC has enacted various rules, regulations, and releases to pursue its investor protection mission. Among its numerous provisions, Section 10A – Audit Requirements – is a fundamental one. "In 1995, with little fanfare, the SEC added a powerful new weapon to its enforcement arsenal, specifically directed at independent auditors ... Section 10A of the Securities Exchange Act of 1934, as amended. Although, to date, the SEC has only made limited use of Section 10A, the recent spate of false financial disclosures ensures more extensive use of this powerful new weapon" (Hecht, Charles. The SEC's New Weapon: Section 10A. Retrieved July 2002 from http://accounting.smartpros.com/x34666.xml). Section 10A has three... Words: 1584 - Pages: 7 ...(ROSC) Cambodia ACCOUNTING AND AUDITING May 15, 2007 Contents Executive Summary Preface Abbreviations and Acronyms I. Introduction II. Institutional Framework III. Accounting Standards as Designed and as Practiced IV. Auditing Standards as Designed and as Practiced V. Perception of the Quality of Financial Reporting VI. Policy Recommendations EXECUTIVE SUMMARY This report provides an assessment of accounting and auditing practices within the corporate sector in Cambodia with reference to the International Financial Reporting Standards (IFRS) issued by the International Accounting Standards Board (IASB), and the International Standards on Auditing (ISA) issued by the International Federation of Accountants (IFAC). This assessment is positioned within the broader context of Cambodia's institutional framework and the capacity needed to ensure the quality of corporate financial reporting. Cambodia is putting in place an institutional framework with regard to accounting, auditing, and financial reporting practices. However, institutional weaknesses in regulation, compliance, and enforcement of standards and rules still exist. The accounting and auditing statutory framework suffers from inconsistencies among different laws. Although the national accounting standards and auditing standards are based on IFRS and ISA, respectively, they appear outmoded and have gaps in comparison with the international equivalents. There are varying compliance gaps in both accounting and auditing...... Words: 17152 - Pages: 69 ...investment dealings. b. What are the five divisions of the SEC? Briefly describe the purpose of each. The five divisions of the SEC are corporate finance, enforcement, economic and risk analysis, investment management, and trading and markets. The corporate finance division ensures that investors are provided with up-to-date and accurate financial reporting data of market resources. The enforcement division investigates suspected violations of any activities pertaining to the other four divisions.
The economic and risk analysis division analyzes aspects of the market and all divisions of the SEC to mitigate risk in major market shifts. The investment management division has oversight of internal corporate investment plans such as mutual funds and exchange-traded funds. The trading and markets division ensures that all aspects of the market are fair and orderly. c. What are the responsibilities of the chief accountant? The chief accountant is responsible for establishing and enforcing accounting and audit policies to ensure reporting is conducted accurately and fairly. The chief accountant also maintains the standards used for accounting to ensure transparency and accuracy. 2 – Financial Accounting Standards Board (source: BYP7-5 Kimmel textbook). The FASB is a U.S. private organization established to improve accounting standards and financial reporting. The FASB conducts extensive... Words: 649 - Pages: 3 ...Deloitte AAER-3428 Case Analysis AAER No. 3428 is a report of an enforcement action against Deloitte & Touche (South Africa) in regards to violations of auditor independence and improper professional conduct. Certain names were not disclosed in this case and will be referred to as "Director" and "Company A." The key players involved in this case are Deloitte & Touche South Africa ("DT-SA"), their wholly owned consulting affiliate Deloitte Consulting (Pty) Ltd. ("DC-SA"), DC-SA's contracted consultant ("Director"), and DT-SA's auditing client ("Company A"). In April 2006, Director was hired by DC-SA as an independent consultant to provide assistance in the energy industry. There were no business conflicts until September 1, 2007, when Director joined the board of directors of Company A. Because DC-SA is owned by DT-SA, Director's employment with DC-SA became a prohibited business relationship that impaired auditor independence between DT-SA and their client, Company A. Because of an absence of controls in place for DC-SA, DT-SA was unaware of this prohibited relationship until August 11, 2008. After further review, Director's employment was effectively terminated on September 30, 2008. DT-SA's lack of internal controls and continued employment of Director for over a year caused them to violate auditor independence and engage in improper professional conduct. The particular rules that were violated in this case were rules 210.2-01 and 210.2-02(b) of Regulation... Words: 865 - Pages: 4 ...but banks and companies were still loaning money and investing in the market, leading people to believe everything was okay. Investors were now unable to invest confidently, and banks had no regulations on their financial statements. The accounting profession was pressured to establish more uniform accounting standards after the stock market crash of 1929. Some people felt that misleading, or incomplete if you will, financial statement information inflated stock prices, contributing to the stock market crash and the depression that followed. The Securities Act of 1933 and the Securities Exchange Act of 1934 were designed to restore that confidence in the investor. The 1933 act sets accounting and disclosure requirements for initial stocks and bonds, while the 1934 act applies to secondary transactions and mandates reporting requirements for companies whose securities are publicly traded. The 1934 act also created the Securities and Exchange Commission (SEC).
The 1934 act gave the SEC both the power and the responsibility, delegated by Congress, for setting accounting and reporting standards for companies whose securities are publicly traded. (highered.mcgraw-hill.com) However, the SEC has delegated the primary responsibility for setting accounting standards to the private sector. The standards for publicly traded companies are now being written by the PCAOB. The SEC delegated the responsibility, but... Words: 1869 - Pages: 8 ...of a new director of corporate enforcement. • Implementation of new SEC independence rules. • Emergence of a more co-ordinated international approach to audit regulation. On the 3rd of August the Auditing Practices Board issued a draft practice note providing up-to-date guidance to auditors of banks in Ireland. The Institute of Chartered Accountants in Ireland led the project group which drafted the practice note on behalf of the Auditing Practices Board, and there was extensive consultation with a number of government agencies, in particular with the Central Bank. The report is quite comprehensive and deals with a number of issues raised by the review group on auditing. In outline, the issues addressed are as follows: • Guidance in relation to non-audit work carried... Words: 854 - Pages: 4 ...the Public Company Accounting Oversight Board (PCAOB) and Auditing Standard 5 (AS 5). Due to the increased demand for oversight in auditing standards, this paper also examines the impact of Sarbanes-Oxley (SOX) and the reasons for the creation of the PCAOB, as well as the implementation of the rules and regulations. Additionally, this paper examines the impact of AS 5. Keywords: audit, AS 5, financial statements, PCAOB, SEC, SOX Table of Contents: Introduction; Scandals; PCAOB Mission and Vision; Structure; PCAOB's Objective; Duties; Standard Setting; Inspection; Enforcement; AS 5; Conclusion; References; History of PCAOB. Introduction Sarbanes-Oxley (SOX) was passed in 2002 and as a result brought numerous changes to auditing. Sarbanes-Oxley was passed in direct response to business failures, allegations of corporate improprieties, and financial statement restatements. Prior to the passage of SOX, auditors used a risk-based approach to perform audits of a company's...... Words: 4474 - Pages: 18 ...MICHAEL C. KNAPP SEVENTH EDITION MAKE IT YOURS! SELECT JUST THE CASES YOU NEED Through Cengage Learning's Make It Yours, you can — simply, quickly, and affordably — create a quality auditing text that is tailored to your course. • Pick your coverage and only pay for the cases you use. • Add cases from a prior edition of Knapp's Contemporary Auditing. • Add your course materials and assignments. • Pick your own unique cover design. We recognize that not every program covers the same cases and topics in your auditing course. Chris Knapp wrote his case book for people to use either as a core book or as a supplement to an existing book.
If you would like to use a custom auditing case book or supplement the South-Western accounting book you are currently using, simply check the cases you want to include, indicate if there are other course materials you would like to add, and click submit. A Cengage Learning representative will contact you to review and confirm your order. GET STARTED Visit www.custom.cengage.com/makeityours/knapp7e to make your selections and provide details on anything else you would like to include. Prefer to use pen and paper? No problem. Fill out questions 1-4 and fax this form to 1.800.270.3310. A Custom Solutions editor will contact you within 2-3 business days to discuss the options you have selected. 1. Which of the following cases would you like to include? Section 1: Comprehensive...... Words: 20989 - Pages: 84 ...Sarbanes-Oxley Act of 2002 Bus 102 – Dr. Sean D. Jasso John Chi 12/9/2010 Table of Contents: Introduction; History of the Act; Implementation; Impact on Business; Policy Analysis; Conclusion; Appendix; References. Introduction Corporate scandals are business scandals that originate from the misstatement of financial reporting by executives of public companies, the very people trusted to run these organizations. Corporate scandals arise in many ways, and these misrepresentations happen through overstating revenues and understating expenses, overstating assets and understating liabilities, and the use of fictitious and fraudulent transactions that give a misleading impression of the company's financial status. A few corporate scandals took place in the last decade that forever changed investment policies in corporate America. The companies most commonly known for these scandals are Enron, Adelphia, and WorldCom. These companies had hidden their true financial status from creditors and shareholders until they were unable to meet their financial commitments, which forced them to reveal massive losses instead of the earnings they had implied. The ultimate result cost investors billions of dollars when the share prices of the affected companies collapsed. According to Hopwood, Leiner & Young (2002), pg. 130, "the public outcry from the corporate scandals were...... Words: 4118 - Pages: 17 ...each successive year during this period. In fact, in both 1980 and 1981 the company's actual net income eclipsed the figure reported by the company. In 1980, Oak's top executives became concerned that the company could not indefinitely sustain its impressive growth rate in annual profits. To help the company maintain this trend, the executives began creating reserves that could be used to boost reported profits in later years. To report a smooth upward earnings trend and to provide a "cushion" of profits to be used in periods of lower actual earnings, Oak implemented a policy during 1980 and 1981 of establishing unneeded reserves to be released (reversed) in later periods, if needed.[1] [1] Securities and Exchange Commission, Accounting and Auditing Enforcement Release No. 63, 25 June 1985. EXHIBIT 1: Oak Industries, Inc., Selected Financial Data, 1978-1981. These "rainy day reserves" included overstatements of the company's... Words: 1830 - Pages: 8
Fraud - A generic term that embraces all the multifarious means that human ingenuity can devise, which are resorted to by one individual, to get an advantage over another by false representation. No definite and invariable rule can be laid down as a general proposition in defining fraud, as it includes surprise, trickery, cunning, and unfair ways by that another is cheated. The only boundaries defining it are those that limit human knavery. Financial Statement Fraud - The intentional misstatement of financial statements through omission of critical facts or disclosures, misstatement of amounts, or misapplication of accepted accounting principles. ======================================================================= Types of occupational fraud and abuse: 1. Asset misappropriation (91.5%) - theft or misuse mostly committed by employees where cash is the most targeted asset 2. Corruption (30.8%) 3. Fraudulent statements (10.6%) Six Types of Fraud: 1. Employee Embezzlement (most common, taking company assets) (creating dummy companies and have employers pay for the goods that are not received) a. direct - no middleman (steal cash, inventory, tools, supplies, etc. b. indirect - usually outside of org (ex vendor) (taking bribes from vendors, customers & non-delivery of goods) 2. Management Fraud - top executives manipulating financial...... Words: 3246 - Pages: 13 ...Generally Accepted Accounting Standards Paper Prithvi Shenoy ACC491 October 14, 2013 Michael Milkonian Abstract The importance of Auditing has gained considerable attention ever since its introduction in the mid-1800s. According to FASB, Statement of Financial Accounting Concepts No. 2, relevance and reliability are central to making accounting information useful for decision makers. To achieve this, auditors are required to obtain “reasonable assurance that those financial statement are presented fairly in all material respects” (Boynton & Johnson, 2006). This paper aims to explain the nature and types of auditing which is governed by the Generally Accepted Auditing Standards (GAAS). The paper then discusses the effects of Sarbanes-Oxley (SOX) Act 2002 and the Public Company Accounting Oversight Board (PCAOB) on publicly traded companies (issuers). Finally, the changes in the auditing environment with respect to the additional responsibilities placed on auditors and PCAOB as a result of the enforcement of SOX Act 2002 is discussed in the paper. Generally Accepted Accounting Standards Paper The elements of GAAS In response to the McKesson & Robbins’ accounting scandal in 1938, the AICPA introduced the 10 Generally Accepted Auditing Standards (GAAS) in 1939 to provide “guidance on the conduct of an audit” as well as an “overview of the timing of the different phases of an audit engagement” (Louwers and Ramsay et al., 2007). The 10 basic standards can be... Words: 1276 - Pages: 6 ...Abstract This research paper explores the creation of the Sarbanes-Oxley Act (SOX) and the role Enron played in its enactment. Specifically, this paper will explore and discuss the Enron crisis, emphasizing the legal and ethical accounting breaches committed by the company. The purpose of SOX and the methods used to address those breaches. 
A discussion of the major provisions of the act including: (1) establishment of the Oversight Board, commonly referred to as the Public Company Accounting Oversight Board (PCAOB); (2) restrictions on non-audit services; (3) rotation of audit partners; (4) auditor reports to audit committees; (5) conflicts of interest; (6) CEO and CFO certification of annual and quarterly reports; and (7) internal control report and auditor attestation. The necessary requirements concerning internal control for public companies. A discussion of the types of services considered unlawful if provided to a publicly held company by its auditor. A discussion of the broader impact of the act on auditors. Lastly, a discussion from the legal and ethical viewpoint of the level of success the act has had in preventing cases such as Enron. The Sarbanes-Oxley Act and Enron In any contemporary discussion of corporate governance and the erosion of trust in business, one name is unavoidable: Enron. Enron has become an icon for corporate fraud on a massive scale, going to the top of the corporate hierarchy. In any attempt to restore trust, two points will have to be...... Words: 2205 - Pages: 9 ...is this Board? Enforcement Trends The Public Company Accounting Oversight Board, otherwise known as the PCAOB, was created by the Sarbanes-Oxley Act of 2002. This board was created to make sure that all CPA firms are in compliance with the standards that have been set by SOX. The PCAOB has developed throughout the years, and its primary focus is on high-risk clients and detecting fraud. When the board decides to investigate a firm, it has two types of investigations to choose from: informal and formal investigations. Generally, an informal investigation starts when there is a complaint alleging or indicating that a violation is taking place. If enough information and evidence is obtained, this leads to a formal investigation. After the investigation, the PCAOB has to determine whether a disciplinary proceeding is warranted. This results when there is a violation of any rule of the board, any provision of the Sarbanes-Oxley Act, the provisions of the securities laws, or any professional standard. The PCAOB has had a very positive effect on the financial world since being established after huge fiscal scandals. It has been found that the PCAOB inspection process causes an improvement in the quality of audits, along with a significant reduction in abnormal accruals. This board has a strong enforcement process when dealing with violations. It is also safe to say that the PCAOB is an effective part of proper accounting and honest...... Words: 500 - Pages: 2 ...considered by Deloitte and other audit firms when assessing engagement risk? How, if at all, are auditors' professional responsibilities affected when a client proposes a higher than normal degree of engagement risk? I believe that the term "engagement risk" implies that inherent client-specific risks face an auditor throughout the course of an audit, thus creating a risk that the auditor will be unable to successfully assess and manage these risks in the performance of the engagement and properly issue an appropriate opinion. The auditor must understand these client-specific risks, which include, but are not limited to, significant events that affect the operations of the client, business risks facing the client, high-risk areas that require complex or subjective accounting treatments, and timely completion of the audit.
(Louwers 112) When a client proposes a higher than normal degree of engagement risk, the focus on the auditors' professional responsibilities becomes even more imperative, as it is critical that the auditor perform at the highest level to provide the greatest possible assurance that the financial statements are presented fairly, in all material respects. What quality control mechanisms should major accounting firms have in place to ensure that audit partners have the proper training and experience to supervise audit engagements? In any major accounting firm, ensuring that audit partners are qualified to supervise an audit engagement begins at the......
https://www.termpaperwarehouse.com/essay-on/Accounting-and-Audit-Enforcement/477698
Overview of Vitiligo Vitiligo is a chronic (long-lasting) autoimmune disorder that causes patches of skin to lose pigment or color. This happens when melanocytes – skin cells that make pigment – are attacked and destroyed, causing the skin to turn a milky-white color. In vitiligo, the white patches usually appear symmetrically on both sides of your body, such as on both hands or both knees. Sometimes there is a rapid loss of color or pigment, and the patches can even cover a large area. The segmental subtype of vitiligo is much less common and happens when the white patches are only on one segment or side of your body, such as a leg, an arm, or one side of the face. This type of vitiligo often begins at an early age, progresses for 6 to 12 months, and then usually stops. Vitiligo is an autoimmune disease. Normally, the immune system works throughout your body to fight off and defend your body against viruses, bacteria, and infection. In people with autoimmune diseases, the immune cells attack the body's own healthy tissues by mistake. People with vitiligo may be more likely to develop other autoimmune disorders as well. A person with vitiligo occasionally may have family members who also have the disease. Although there is no cure for vitiligo, treatments can be very effective at stopping the progression and reversing its effects, which may help skin tone appear more even. Who Gets Vitiligo? Anyone can get vitiligo, and it can develop at any age. However, for many people with vitiligo, the white patches begin to appear before age 20, and can start in early childhood. Vitiligo seems to be more common in people who have a family history of the disorder or who have certain autoimmune diseases, including: - Addison's disease. - Pernicious anemia. - Psoriasis. - Rheumatoid arthritis. - Systemic lupus erythematosus. - Thyroid disease. - Type 1 diabetes. Symptoms of Vitiligo The main symptom of vitiligo is loss of natural color or pigment, called depigmentation. The depigmented patches can appear anywhere on your body and can affect: - Skin, which develops milky-white patches, often on the hands, feet, arms, and face. However, the patches can appear anywhere. - Hair, which can turn white in areas where the skin is losing pigment. This can happen on the scalp, eyebrow, eyelash, beard, and body hair. - Mucous membranes, such as the inside of your mouth or nose. People with vitiligo can also develop: - Low self-esteem or a poor self-image from concerns about appearance, which can affect quality of life. - Uveitis, a general term that describes inflammation or swelling in the eye. - Inflammation in the ear. Causes of Vitiligo Scientists believe that vitiligo is an autoimmune disease in which the body's immune system attacks and destroys the melanocytes. In addition, researchers continue to study how family history and genes may play a role in causing vitiligo. Sometimes an event – such as a sunburn, emotional distress, or exposure to a chemical – can trigger vitiligo or make it worse.
https://www.niams.nih.gov/health-topics/vitiligo
When I was young, I always questioned my father on why I needed to go to school when nobody teaches the sun to rise every morning. Nobody teaches the tree to grow, but it still grows, and its fruit feeds the hungry all over the world. Who teaches them to do things when they do not go to school? My father's response to these questions was that our environment and surroundings are the best teachers in the world. Language is not something that teachers teach and students learn; it is a natural development at every age. As long as English is a second language, it is an option for people to learn the language and effectively use it. It does not mean a student who "learns" the language will be able to "use" the language efficiently. An experiment was done in which a baby was left with a herd of goats, with a nanny goat as its surrogate mother. Throughout the child's growth, researchers realised that the boy was bleating and communicating with the goats. That is how language is picked up. This is how "natural" language is picked up. There is no way to "make" the process natural. The mere fact that you are "making" something makes it unnatural altogether. The environment has to change for language proficiency to change in Malaysia. The first environmental change that needs to take place is for English to no longer be treated as a second language in Malaysia. English should be the medium or language of communication, with Bahasa Malaysia being our mother tongue. Then, we can have Mandarin, Tamil and other languages as second languages or cultural languages. This makes it a natural process for everyone to use these languages at home. The natural process of using English effectively is also deterred when learning it is merely a choice. A natural learning curve starts with the question "why". Children start early by irritating parents with the question "Why do I have to go to school?" They are looking for reasoning. How many students know the reason for learning the language? By students not knowing this, the natural learning process is disturbed. Why should I do something when I do not know what I am doing? Language comes with culture, history, fun and change. And this can be achieved through a language and literature integrated curriculum. This is because the literature component answers the why, and the language part gets all the technical aspects of the language sorted. When students are exposed to this balance, proficiency in English will naturally increase because they know why they are doing something. Not everyone can be a teacher and not everyone can teach. It is not the training institutes that need to be checked. It is the fact that in Malaysia, everyone can become a teacher. We have engineers, biotechnicians, doctors, etc, becoming teachers. It doesn't mean that just because you speak English, you can teach the language. Why is it that English teachers cannot become engineers or doctors? The pride of the teaching profession has decreased because people are not doing things that come naturally to them. So, when you have an English language teacher who was a trained physician, how is the natural process of learning going to happen for students? When the deterioration of language happens, it comes with the deterioration of culture, morals, ethics and character, because these are the elements that come with a language. It was never a standalone subject to be learnt.
Other unnatural elements need to change to regain normality. Improve the language, and everything cultural, moral, and ethical will naturally change with it.
https://kheru2006.livejournal.com/1701734.html
The Bible ascribes the characteristics of deity to Jesus Christ. He is described as eternal, omnipresent, omniscient, omnipotent and immutable. Jesus Christ is equal with God the Father. He is worshiped as God. Who is God according to the Bible? God in Christianity is the eternal Supreme Being who created and preserves all things. Who is Jesus and who is God? In Christianity, Jesus is the Son of God and in many mainstream Christian denominations he is God the Son, the second Person in the Trinity. He is believed to be the Jewish messiah (Christ) who is prophesied in the Hebrew Bible, which is called the Old Testament in Christianity. Who is God in Christianity? Christianity Beliefs Christians are monotheistic, i.e., they believe there's only one God, and he created the heavens and the earth. This divine Godhead consists of three parts: the father (God himself), the son (Jesus Christ) and the Holy Spirit. Who is God in simple words? The definition of a god is an image, person or thing that is worshiped, honored or believed to be all-powerful or the creator and ruler of the universe. An example of a god is Ganesha, a Hindu deity. Who is Jesus the son of God? Jesus is called the "son of God," and followers of Jesus are called "sons of God." As applied to Jesus, the term is a reference to his role as the Messiah, or Christ, the King chosen by God (Matthew 26:63). Who was Jesus' father? Summary of Jesus' life He was born to Joseph and Mary sometime between 6 BCE and shortly before the death of Herod the Great (Matthew 2; Luke 1:5) in 4 BCE. According to Matthew and Luke, however, Joseph was only legally his father. When did Jesus become God? Christians debated different views for centuries, finally settling, by the middle of the 5th century at the Council of Ephesus, on the idea that he was both fully human and fully divine. Who created God? We ask, "If all things have a creator, then who created God?" Actually, only created things have a creator, so it's improper to lump God with his creation. God has revealed himself to us in the Bible as having always existed. Atheists counter that there is no reason to assume the universe was created. Should we pray to God or Jesus? We don't go wrong when we pray directly to God the Father. He is our Creator and the one we should worship. Through Jesus, we have direct access to God. He's not limited to just priests and prophets, but is accessible to each of us. Who is the only true God? Jesus upheld Biblical monotheism when he prayed to his Father, "This is eternal life, that they may know you, the only true God, and Jesus Christ whom you have sent." What is God's real name? Yahweh, name for the God of the Israelites, representing the biblical pronunciation of "YHWH," the Hebrew name revealed to Moses in the book of Exodus. The name YHWH, consisting of the sequence of consonants Yod, Heh, Waw, and Heh, is known as the tetragrammaton. Is God a woman? Others interpret God as neither male nor female. The Catechism of the Catholic Church, paragraph 239, states that God is called "Father", while his love for man may also be depicted as motherhood. However, God ultimately transcends the human concept of sex, and "is neither man nor woman: he is God." What makes a God a God? In monotheistic thought, God is conceived of as the supreme being, creator, and principal object of faith. God is usually conceived of as being omnipotent, omniscient, omnipresent and omnibenevolent, as well as having an eternal and necessary existence. How do you define Jesus?
(dʒiːzəs) 1. proper noun. Jesus or Jesus Christ is the name of the man who Christians believe was the son of God, and whose teachings are the basis of Christianity. What does Jesus literally mean?
https://allsaintsvabeach.org/catholicism/best-answer-who-is-god-according-to-jesus.html
Services provided to contracted districts may include: - Regional professional development opportunities with a focus on equity, cultural competence, and social justice. - Supporting district efforts for closing achievement gaps and addressing inequity for a diverse student population: homelessness, race, gender, poverty, LGBTQ identity, and native language. - Providing support to districts in federal equity audits and analysis. - Resources to support reducing bias and discrimination. - Consultant support to help with emerging equity and social justice issues. Educational equity creates a culture of fairness and social justice for all students regarding opportunity, access, and respect for differences. The CESA #11 Promoting Equity for Every Student service engages school personnel in building cultural competency so every student has the means to achieve at high levels and is afforded an equitable educational experience. The service will assist districts in disaggregating student data to target key strategies and interventions that will increase student performance and close achievement gaps. AMERICAN INDIAN STUDIES: IMPLEMENTING WISCONSIN ACT 31 March 10, 2023 9:00 AM - 3:00 PM | @ CESA 11 EQUITY & SOCIAL JUSTICE INSTITUTE April 24, 2023 9:00 AM - 3:00 PM | @ CESA 11 STRANDS of SUPPORT The equity leadership strand includes support with examining, monitoring, and celebrating staff, student, and community equity work. Focus will be on monitoring stakeholder engagement, ensuring deliberate awareness of whose story is being included (and how it is being included) in all education aspects, creating a system of belonging for everyone, and communicating effectively. The equity classroom strand includes support with examining curriculum, assessment, materials, and teaching strategies to provide appropriate opportunities and cultural knowledge for all students. Focus will be on assisting districts with building the capacity of teachers and support staff with the needed cultural knowledge, ensuring a safe and equitable environment, and a willingness and ability to put these into deliberate teaching and learning practices. The equity systems strand includes support for school and district leaders in examining policies/procedures, systems of support, data collection and use for program decisions, professional learning, and necessary protocols for all district staff (leaders, specialists, teachers, paraprofessionals, office personnel, food service workers, building maintenance, transportation workers, school board members, etc.). Focus will be on conducting a district equity audit that can provide a road map for technical and adaptive systems change and sustained learning for all stakeholders. "Inclusion is a mindset. It is a way of thinking. It is not a program that we run or a classroom in our school or a favor we do for someone. Inclusion is who we are. It is who we must strive to be." ~Lisa Friedman, Removing the Stumbling Block For more information, contact us:
https://www.cesa11.k12.wi.us/promoting-equity-for-every-student
Were any of you surprised? When I read that the University had been misrepresenting its “need-blind” policy for years, I surely wasn’t. It was the same kind of hat-in-hand story we’ve heard before: GW had, in fact, been taking into account financial need when looking at the bottom 10 percent of its qualified applicants. This was a blatant case of false advertising. But all I could do was sigh and shake my head as though my dog had just pissed on the carpet again. The University and its inability to tell the truth doesn’t surprise me anymore. It shouldn’t lie to students, but time after time, it does. It wasn’t just one person. Kathryn Napper, the former admissions dean, insisted that GW was need-blind for years. And everyone from top administrators to admissions officers stood by her, repeating the “need-blind” lie at information sessions and maintaining that information on the admissions website. It’s disappointing. But we should come to expect it. After manipulating its U.S. News & World Report ranking data last year. After University President Knapp told students a housing mandate had no financial motivation. After four deans were either fired or left abruptly over the past two years. Clearly, there are huge communication and accountability issues within the administration. There’s an overall lack of transparency among those who make decisions at this nearly $2 billion-a-year institution. We don’t expect administrators to be perfect, but we expect them to be the kind of people who tell students, faculty and alumni the truth. Those are the stakeholders. The ones who fall in love with GW and apply. The ones who spend careers here. At the very least, it’s reassuring to see honesty from Laurie Koehler, the University’s newly hired associate provost for enrollment management, who is in the process of reshaping GW’s admissions policies. She was the one who confirmed last week that financial need has been and will be a determining factor at the end of the admissions process. But it’s unbelievable that the University didn’t think that calling GW “need-blind” was “intentionally misleading,” as spokeswoman Candace Smith told The Hatchet. The University is saying that it didn’t steal the cookies, and all the while its mouth is covered in crumbs. Those kinds of statements make the lying worse. When you look at the actual issue – why GW needs to be need-aware – it starts to make sense. People will understand. So why lie? GW’s endowment of $1.3 billion hardly rivals those of need-blind schools such as Northwestern and New York universities, whose endowments stand at about $7 billion and $3 billion respectively. Administrators are searching for ways to bring in as much money as they can, and being need-aware is an obvious way to increase tuition revenue. But administrators shouldn’t be lying about our admissions policies. If the administration is going to assign more importance to its financial bottom line than to its repeated commitments to creating the most socioeconomically diverse and intellectually elite class of students, then it should own up to it. I am not surprised that the University lied to me once again. But I am surprised that it is lying just to fill its pockets. That false advertising hurts us all.
https://www.gwhatchet.com/2013/10/22/jacob-garber-a-disturbing-pattern-of-false-advertising/
After tooth extraction, recovery of the extraction socket starts immediately. Bleeding occurs in the socket and nourishes it. To control the bleeding, simple pressure is applied with gauze. This also helps in the formation of a blood clot in the socket, and the blood clot promotes the healing process. After 1-2 days, the socket is filled by soft tissue. Growth of the bone surrounding the socket occurs later and the socket is filled completely. In cases of simple tooth extraction, recovery occurs in 7-10 days, whereas in cases of surgical extraction it may take 3 weeks to 3 months for recovery to take place, depending on the degree of damage to the dental tissues. In cases of simple extraction, healing after 7-10 days is good enough that a person can eat hard and crusted food without any pain or discomfort. Healing in the oral cavity is faster compared to the skin because of the rich blood supply of the area. If there is a cut on the skin, it takes a longer time to heal than a cut in the oral cavity. Healing of the extraction socket There are 5 stages of healing of the extraction socket: Immediately after the extraction, bleeding occurs in the extraction socket and there is clot formation inside the socket. A clot is a thick, viscous, coagulated mass of blood. There is vasodilatation of the blood vessels of the periodontal ligament and migration of leucocytes in and around the clot. As the clot contracts, the gum tissue, which is unsupported after the extraction, covers and holds the clot in position. The hours after the tooth extraction are critical, for if the blood clot is dislodged, recovery may be greatly delayed and may be extremely painful. Within a week, granulation tissue is seen around the clot and there is proliferation of cells around the socket. There is organization of the clot and the alveolar socket margins are resorbed. Healing is the body's response to injury in an attempt to restore normal structure and function. Healing occurs in 2 ways: In healing by primary intention, there is not much loss of cells and tissues. The ends of the flap approximate in some time and tooth extraction recovery follows, whereas in healing by secondary intention, there is extensive loss of cells and tissues. The ends of the flap don't approximate, and healing occurs from the bottom to the top and from the margins inwards. Healing by secondary intention is slow compared to the faster healing by primary intention. Healed Extraction Socket Dry socket: This is a post-operative socket which lacks the physiological clot, both in quality and quantity, in which the blood clot disintegrates, exposing an infected necrotic socket wall. The condition derives its name from the fact that after the clot is lost the socket has a dry appearance because of exposed bone. It is also called Alveolitis sicca dolorosa, Alveolalgia, Alveolar osteitis, Post-operative osteitis, or Localized acute alveolar osteomyelitis. It may occur due to frequent and forceful spitting after extraction, smoking, or an excessively traumatic extraction. Disintegration of the clot may be due to infection of the wound: the bacterial enzymes hyaluronidase and fibrinolysin cause lysis of the clot. The bone of the socket becomes necrosed, grayish bone is seen in the socket, a bad odor is present at the socket, and pus is minimal or absent. For the treatment of dry socket, a dressing of zinc oxide eugenol is placed in the socket and repeated after a few days.
Antibiotics and analgesics are not effective if used alone because of the poor vascularity of the necrosed bone. Fibrous healing of an extraction wound is an uncommon complication, usually following a difficult, complicated or surgical extraction of a tooth. It is found commonly when there is loss of periosteum along with loss of the labial, buccal and lingual cortical plates. This loss of cortical periosteum causes improper healing, and scar tissue forms at the site. This fibrous connective tissue may ossify a little or not at all. For treatment, excision of the lesion for the purpose of establishing a diagnosis will sometimes result in normal healing and subsequent bony repair of the fibrous defect. Smoking: Smoking slows extraction socket wound recovery. It decreases the blood supply to the area and brings toxic products to it. Due to the negative pressure created by smoking, the clot may get dislodged and dry socket may occur. That is why it is advised to avoid smoking for a few days after the tooth extraction. Alcohol consumption: Alcohol causes delay in the healing process. Alcohol consumption should be avoided by the patient for a few days after the tooth removal. Diet of the patient: Protein, vitamin and mineral deficiencies slow down the healing process. General health of the patient: In patients with diabetes, anemia, ischemia etc., healing of the extraction socket will take place more slowly than in a normal healthy person. Age: Healing is faster in the young but is normal in old age unless associated with diabetes or ischemia. Use of birth control pills: If a woman is taking birth control pills and gets her tooth extracted, the chances of dry socket are higher due to the high level of estrogens. Dry socket delays the healing process. Infection: In cases of infection, like that in dry socket, delayed secondary healing occurs and it takes a longer time to heal than a normal extraction socket. Length of the surgery: The longer the surgery, the more the irritation to the gums and the surrounding tissues, and the longer the healing time. Expertise of the surgeon: The aim of the dentist should be to cause as little laceration of the gums as possible. There should be a minimum of trauma to the gums during tooth removal; the more the trauma, the more time the socket will take to recover after tooth extraction. Oral hygiene maintenance: After the tooth extraction, the socket area should be kept clean. If there are food deposits around the socket, it will take a longer time to heal. It is advised to maintain good oral hygiene after the tooth removal, eat from the other side of the mouth and keep the socket clean. Medicaments: Certain medicaments like corticosteroids delay the healing process of the socket. To promote faster healing and to avoid any complications, the patient should follow the instructions given by the dentist or the oral surgeon. Pain and discomfort occur while the mouth heals. Following the instructions given by the dentist is all that is needed. The dentist should be informed if there is excess bleeding, swelling, or persistent and severe pain, or if there is any reaction to the medications given by the dentist. The dentist should schedule a follow-up examination to ensure that the socket is healing properly.
I have had a bottom back molar removed and now have dry socket. I am in excruciating pain after 3 days. My dentist put zinc packing in and was great, but after 4 hours it came out. Now in agony again. I had a molar extracted about 1 week ago and although there isn't a whole lot of pain there is like a black or dark red substance inside the extraction site. Is this a blood clot or is this something else like food? Also, when is it possible to go back to eating regular food without having to worry about the blood clot becoming dislodged? 5 weeks after extraction, although I have no pain, it seems like there is a bulbous skin flap in the socket which moves about with the tongue or when eating food. It seems only to be anchored to the base of the socket, not to the sides. I am a type 2 diabetic. Should I be concerned or will it gradually disappear? GREAT ARTICLE, INCLUDING ALL THE REQUIRED CRITERIA TO BE TAKEN INTO CONSIDERATION FOR EXTRACTION. Dear KATE, If the problem isn't subsiding then immediately report to the doctor who did your extraction and also consult a physician, as delayed healing can be due to certain metabolic disorders. And you need to be extra-precautious regarding your oral habits in case of infection post extraction.
https://www.identalhub.com/blog/874/healing-of-the-extraction-socket
Future of Open Banking: Evolution, Not Revolution During the Digital Freedom Festival, a discussion about the future of Open Banking took place, gathering experts from the banking, fintech, payments and policy sectors. Eleven months have passed since the introduction of PSD2 (the revised Payment Services Directive) that promised a revolution in the financial sector – what has really changed and what can we expect in the future? “A lot of sensational headlines in the media talk about the revolution of finance and that everything will change, but we have not seen drastic changes in the way we manage our finances. There is a reason why PSD2 is called an invisible reform of finance, but it does not mean that there is nothing happening to shape the future of banking,” said Mārīte Aleksandra Silava, Fintech Community Manager at Swedbank, in her opening remarks. Many Initiatives on the Policy Side The European Commission (EC) has set up an internal task force on financial technology that has been listening to relevant stakeholders and considering the case for a coordinated European response to enable financial technologies and innovations in the European Union (EU), informed Pēteris Zilgalvis, Head of Unit, Digital Innovation & Blockchain; Co-Chair, Fintech at the European Commission. “The EC is tech neutral in the sense that we try to make sure that our strength in the deep-tech areas is built upon in Europe. Industrial policy has to push the innovation,” said Pēteris Zilgalvis. He also noted that research and innovation in AI, cloud and blockchain are supported by the EC, which is trying to make regulatory frameworks fit for their take-up. A standardised API interface is planned to be adopted to continue the smooth implementation of PSD2. The EU infrastructure for blockchain, however, is planned to enter implementation mode already in 2019. The EU countries and Norway hope to make cross-border services, including those regarding regulatory reporting and logistics, more efficient and safer. Evolution of Open Banking and APIs Oleg Marofejev, Head of Open Banking in the Nordic and Baltic region at Swedbank, noted during the discussion that Open Banking is an important social and legal change. Before, you had to have a contract to interact with the bank. It liberalises financial data and also opens up a legal way to access the bank and to get the data with the customer's consent. He also noted that as digital citizens, we are entering a very interesting phase of the digital world. Financial data is among the most sensitive data we have, and as it becomes available to a wider group, it ignites innovation and disrupts the monolithic business models of the banking industry. Fintechs have the possibility to jump into that value chain, take a customer, innovate and create niche products or services. “Three years ago when we started to speak about PSD2, it seemed like a paradise. Those who really work with Open Banking and APIs know that the devil is in the details and there are so many details. This policy does not have standards. We are trying to standardise, but there are so many different legacies in Europe. Standardisation is a huge challenge for Europe and it will definitely take a couple of years,” said Oleg Marofejev. He noted that there are different ways for fintechs to access the bank.
One is illegal, so-called screen scraping; the other is the legal way – the API, a more structured approach that allows the customer to understand what the data will be used for. Consumer and Industry Responsibility “It is important to act with our data in a very responsible way. It is crucial for the customer to understand which company they give their credentials to. It is not always easy, as the company may be found in the regulator’s register under one name, but its service operates under another,” said Marofejev during the discussion. “The world is changing and becoming more digitalised. There are two things we have to think about when driving these changes. One is the balance between convenience and security. We cannot negotiate on security when we are trying to make things more convenient. And the other is the question of social inclusion – to take care of the consumer segments who are not really into digitalisation. Industry also has a responsibility to take care of these people as well,” said Majda Nogo, Head of Business Development and Baltics at VISA Europe. Fintechs – a Lot of Things are Happening Behind the Scenes Mārtiņš Šulte, CEO and Co-Founder of Mintos, said that Open Banking means that trusted third parties will have access to customer data. Customers are the ones who will definitely benefit from Open Banking. At the moment a lot of things are happening behind the scenes and many banks are very keen on Open Banking. Some of them already have real projects; however, it will take some time for several reasons, from security issues and API development to changes of mindset. “In the future financial services will not be associated only with banks. There are definitely some challenges for all sides of the players – banks, fintechs and also regulators. Banks have to change their mindset: they don’t own the customers’ data and they have to share it. Banks are getting there and putting their infrastructure in place, but it takes time. The fintechs have to see the APIs and how they work. Regulators have to have the confidence that there is enough security,” said Mārtiņš Šulte. Evolution of Open Banking and the Road Ahead APIs and PSD2 will continue to dominate the banking agenda in 2019 and beyond. This will increase competition and, therefore, offer greater consumer choice. The new sharing culture in the financial sector is on its way, with Open Banking standardisation opening the doors to innovation. ________________________________________________________________ About the Digital Freedom Festival The Digital Freedom Festival is a global technology, startup, policy and lifestyle festival, which will be held on November 30th and December 1st in Riga. It will gather technology and startup entrepreneurs, experts, policymakers, investors, journalists and motivational speakers from all over the world to look for answers to how entrepreneurs, society and policymakers can better cooperate and use the benefits provided by technology. In 2017 the Digital Freedom Festival gathered 1300 technology and startup entrepreneurs, experts, policymakers, investors, journalists and students from 36 countries. The conference livestream was viewed more than 9000 times. The conference is organised by the Digital Freedom Festival group, the DFF NGO, as well as the Lejiņa & Šleiers reputation management agency. Festival official bank – PNB Banka; startup competition partner – 500 Startups; other partners include Rockstart, the European Commission, the USA Embassy, the Dutch Embassy etc.
Media partners – Delfi, Dienas Bizness, LETA, Diena, SestDiena, Kapitāls, The Baltic Times, LabsofLatvia, bel.biz, BerlinBalticNordic.net, la.lv, inbox.lv etc. Two-day festival tickets are available online. Follow the latest Digital Freedom Festival news on Facebook, Twitter and Instagram accounts (#DFFestival).
https://www.digitalfreedomfestival.com/news/future-of-open-banking-evolution-not-revolution/
Saffron Couscous with Roasted Almonds Couscous is a relatively new addition to Western diets, but it has been a culinary staple in other parts of the world for more than a millennium. Originating in North Africa as early as the 7th century, couscous is made by rolling semolina with flour and just enough water to allow small clumps to form. These tiny balls are cooked until they are light and fluffy, then loaded with vegetables, meat, or any number of other tasty toppings. There has been some debate in the culinary world as to whether couscous is a grain or a type of pasta. The answer may lie somewhere in the middle: it is not purely a grain, because there is a process involved in making couscous. Yet although it is made from semolina, a true dough is never formed, meaning it’s not quite a pasta either. Regardless of how you classify it, couscous is an exceptionally versatile food that loves to absorb the flavor of whatever it’s cooked with. One of our favorite preparations is couscous with saffron and roasted almonds. The earthy flavors of the saffron mingle with garlic and onions to create a fragrant and delicious dish that’s elegantly exotic, but easy enough to throw together for a weeknight dinner. Saffron Couscous with Roasted Almonds Recipe Yields 3-4 servings Ingredients: - 10 ounces couscous - 2 cups boiling water - 2 tablespoons olive oil - ¼ teaspoon red pepper flakes - 2 cloves garlic, minced - ½ onion, diced - 9-12 threads saffron - ½ cup toasted sliced almonds - Salt Directions: Add couscous to boiling water, remove from heat and cover. Let stand 5 minutes. Heat olive oil in a large sauté pan. Add red pepper flakes, garlic, onions, and salt. Cook until tender, about 5 minutes. Remove from the heat. Add contents of pan, along with saffron and almonds, to couscous, fluffing with a fork.
https://www.spicejungle.com/the-jungle/saffron-couscous-roasted-almonds
Are you planning to travel to Ecuador? You will surely miss out on incredible experiences and things to do in Quito if you skip this gem of a city. Source: 19 Excellent Things To Do in Quito, Ecuador on Vacation “If the place you live in doesn’t delight and amaze you every day, you’re doing it wrong. And that’s how I feel about Cuenca,” says Saralee Squires. Source: This Ecuadorean city in the Andes has perfect weather – and you can retire there for as little as $1,500 a month The endangered Galapagos penguin is the only penguin species found north of the equator. Learn about the threats faced by these small flightless birds. Source: Why Is the Galapagos Penguin Endangered? Threats and How You Can Help The Galapagos Islands, a UNESCO World Heritage Site 600 miles from mainland Ecuador, are home to many rare species, such as marine iguanas, giant tortoises and Darwin’s finches, and top the bucket lists of many travelers. Source: Ecuador expands ‘underwater superhighway’ for migrating species in the Galapagos – The Points Guy New “ocean highway” extends through Colombia and Panama to Costa Rica… Source: Ecuador Extends Protected Galapagos Marine Reserve An open-air museum whose walls and narrow streets have witnessed countless historical events and developments dating back centuries, the Ecuadorian capital’s Old Town boasts a range of architectural, archaeological and artistic treasures among its tourist offerings. Those riches are apparent in the buildings of that colonial center but also in its museums, some located within imposing complexes and others in more modest structures whose small doors are gateways to more unknown aspects of its past. Source: Quito’s historic center: Open-air museum home to host of treasures – La Prensa Latina Media Abundant sea life and giant tortoises are the stars of the Galapagos Islands, a volcanic archipelago off the coast of Ecuador. Source: Discover the Natural Marvels of the Galapagos Islands for Yourself Many animals in the Galapagos Islands are endemic only to the islands as part of a unique ecosystem. Enjoy these Galapagos animals in photos… Source: Unique Galapagos Islands Animals in Photos | The Planet D A Galapagos itinerary from Jordan Harvey, a member of Travel + Leisure’s A-List, a collection of the top travel advisors in the world. Source: Jordan Harvey’s Ultimate 7-Day Galapagos Islands Itinerary With as many activities available as islets to explore, the Galapagos Islands offer a trip you won’t forget anytime soon. Renowned diving sites, brilliant hiking, utterly unique flora and fauna and scenic bays offer just a few reasons for you to spend a week or two chartering through the islands, with onshore fine dining offering you the opportunity to mingle with locals and other adventurous souls – the Galapagos really has it all. The Galapagos Islands are home to incredible wildlife, tropical beaches, green highlands, and tons of attractions. It does not come as a surprise that these UNESCO World Heritage Site destinations have continued to welcome innumerable tourists, scientists, birdwatchers and wildlife lovers every day of the year. Source: The Ultimate Guide to Exploring the Galapagos Islands Planning a trip to the Galapagos Islands? Read on to find out when to visit, where to stay, what to do, and much more to ace your vacation.
Ecuador’s capital blends its colonialist history with modern flourishes. Source: 19 things to do in Quito if you love architecture and design A UNESCO World Heritage Site, the Ecuadorian capital offers rich history and a budding food scene, pretty parks and plazas, and a mother lode of history and culture. Source: Great Escapes: Quito, a Colorful, Cultured City in the Heart of the Andes From neo-gothic architecture to artisanal ice cream, there are tons of reasons to take a four-day Quito vacation in Ecuador. Source: Church Hopping, Volcano Hikes, and Equatorial Experiments: The 4-Day Weekend in Quito It’s no wonder then that over 5,000 expats call Cuenca home. And while many of them are happy to fully embrace their overseas retirement, plenty more are taking the opportunity to explore their entrepreneurial side. Source: Tap Into the Growing “Expat Economy” in Cuenca, Ecuador – International Living The Galápagos Islands promote sustainable tourism by sourcing local guides, managing species’ populations, and limiting tourists. Welcome tourists! In this edition, Tom Sawyer will talk about the capital of the South American country that is the main exporter of bananas in the world, besides being one of the smallest countries on the American continent. Tom Sawyer is talking about San Francisco de Quito, Ecuador. Its formal foundation dates to 1534 and it has a population of 2.7 million people, while its metropolitan area exceeds 3 million people. Source: Wonderful World: Discover Quito – Dos Mundos Bilingual Newspaper Cuenca, Ecuador… Full information on retiring, cost of living and real estate options for US citizens. Cuenca, Ecuador’s third-largest city.
https://allaboutworldheritage.com/category/ecuador/
[Quality of life with intensive insulin therapy: a prospective comparison of insulin pen and pump]. The purpose of the following study was to identify aspects of quality of life that are particularly affected by the mode of insulin therapy. 55 patients with insulin-dependent diabetes mellitus, who volunteered for a change from intensive insulin therapy with pen injections to continuous subcutaneous insulin infusion (CSII), were studied 1 month before, and 6 months after, changing to CSII. The DCCT questionnaire was applied, measuring quality of life on 4 subscales: satisfaction, impact, social/vocational worries, and diabetes-related worries. The results demonstrate that the "satisfaction" subscale was scored significantly higher (p < 0.02), and the "impact" subscale was scored lower (p < 0.02), with CSII therapy. Single items showed that this was due to greater flexibility with leisure-time activities and with diet, and to significantly fewer problems with hypoglycaemia. The subscales "social/vocational worries" and "diabetes-related worries" were scored unchanged; HbA1c changed only slightly, from 7.5% (SD 1.2) to 6.9% (SD 0.9) (p < 0.05). It is concluded that disease-related deficiencies in quality of life (satisfaction, impact) improve considerably in insulin-dependent diabetic patients after changing voluntarily from intensive insulin therapy with pen injections to continuous subcutaneous insulin infusion.
Open Forum: Build new housing where jobs are, not just near transit A sign for affordable housing is seen outside Google’s headquarters in Mountain View, Calif. on Wednesday, June 6, 2018. Building housing near jobs makes more sense than requiring communities to build dense housing near transit centers. Photo: James Tensuan / Special to The Chronicle 2018 The Legislature’s ham-handed, one-size-fits-all approach to the housing crisis would end up doing the Bay Area more harm than good. The Legislature’s approach is flawed in several fundamental ways: It is planning to remove zoning authority from local communities, thereby irreparably weakening the ability of local leaders to help shape the environments of their communities in accordance with the wishes of their constituencies. Instead, it would transfer zoning authority to a powerful regional agency and then tap municipal resources to help pay for a new program to incentivize more housing and more affordable housing. It is fixated on making certain that every city bear its share of the needed new housing, including in areas where the cost of new housing would be exorbitant. A better and more practical approach would be to add housing near the region’s recently expanded job centers and in places where real estate and development costs are modest. In all cases, the new residential communities should have easy access to commercial and institutional facilities, job centers and public transit. Making the new communities compact and relatively self-contained would reduce the need to drive long distances, thereby addressing both the region’s housing deficit and its transportation malaise. There are still parts of the inner Bay Area that could accommodate relatively low-cost, infill housing. The advent of electric bicycles, electric scooters, and transportation services such as Uber and Lyft would make it easier to travel short distances without the need of a private automobile. The San Francisco Bay Area’s growing housing and transportation agonies have stemmed, in large part, from the inflow of highly paid, or soon to be highly paid, high-tech and other skilled workers. This problem has been brought about by the benefiting Bay Area companies recruiting workers, and by certain towns and cities interested in attracting low-cost, high-tax-paying commercial facilities but not interested in encouraging new residential construction. Little or no thought has been given to either where the newcomers were going to live or how they were going to get to work. As a result, the region’s housing and transportation systems are overwhelmed and the needs of those systems are not being met. The region’s housing stock has fallen far behind need and the cost of housing has skyrocketed, thereby forcing many residents to relocate to lower-cost areas, often many miles from their places of employment. The poorest residents have ended up living on the streets. Now there are indications that the expanding companies are aware of the damage they are causing. Some have already stepped up to the plate and pitched in. Salesforce CEO Marc Benioff has become a generous benefactor of San Francisco’s homeless programs. Facebook has underwritten a $1 million study of the feasibility of restoring a long-needed passenger rail link between Redwood City and Union City, and is expected to play an even stronger role in the later phases of the project. Other opportunities for private leadership abound.
For example, downtown San Francisco firms needing better passenger rail access from the South Bay could help get the long-stalled Caltrain extension project off dead center. An efficient and well-run regional bus system operating on high-occupancy-vehicle lanes in areas not serviced by BART would provide attractive transit choices to harried commuters now trapped in perpetual highway gridlock. Also needed is a faster and better rail link between San Joaquin County and Silicon Valley. With fresh corporate leadership and selective pump-priming, private companies could help their employees, the region and themselves. Pushed by eager residential developers desiring free rein, the CASA Compact (an advisory, compromise proposal to address the Bay Area’s housing crunch) and state Senate Bill 50 are taking over the long-held functions of local government in order to impose high-density housing on established neighborhoods. Instead, the Legislature should look to the companies and the municipalities that have caused the crisis. Instead of squandering public funds by jamming housing units into established residential areas, the Legislature should look to areas near burgeoning job centers and to places where complete and self-contained communities could be created at moderate cost. Gerald Cauthen, a professional engineer, is president of the Bay Area Transportation Working Group (https://batwgblog.com/).
"Station artwork is not identified with labels or other interpretive signage, which contributes to the lack of public awareness and appreciation of the art program," they report. "Artwork is often damaged due to surface penetration by ads, kiosks, conduit, and other objects; damage also occurs when cleaning equipment, waste receptacles, tools, bike racks, and other objects are placed against art surfaces." It had been decided, early on, that BART stations would have a varied look. "Diversity of design will be a key feature of the Bay Area's Rapid Transit system," the San Francisco Chamber of Commerce's magazine headlined in 1968. "They were meant to be distinctive and expressive of their communities," Radulovich says. Stations would be parceled out, individually or in small groups, to architectural teams. It was the job of Tallie Maule, the chief architect for BART, to oversee the individual design architects. Several architects who designed the stations—Joe Esherick, Gerald McCue, Ernest Born, and Wurster, Bernardi & Emmons—were Bay Tradition modernists who designed post-and-beam or other open-plan houses not unlike Eichlers. BART's art program began in late 1966, after the district's general manager B.R. Stokes visited the new rapid rail in Montreal, which added art to its stations funded by major corporations. "The inclusion of such art could add a new dimension of visual excitement to our underground spaces," Stokes wrote in a memo. The Art on BART program "wasn't unusual. It was an era when public art was just emerging nationally," says Almaguer. The art "must be enthusiastically desired by the architect of the station," one BART executive wrote. The architect "must have a hand in the selection of the artist but not the exclusive hand." BART appointed an art committee—it included chief curators from three local museums, an art critic, a planner, and an architect—but it's unclear what they did. They may have planned what sort of art would go into each station.
https://www.eichlernetwork.com/article/how-bart-got-art?page=0,2
It was a disturbing and horrifying sight: Louisville's Kevin Ware suffering a broken leg during the NCAA Midwest Regional Final. It happened in front of tens of thousands of fans in the arena, and millions more watching the live broadcast on CBS. Ware's leg visibly shattered, broken in two places, his teammates in tears, and fans clearly upset at the unfortunate scene they were witnessing. Watching the broadcast, a fan's immediate reaction may be sadness and concern for the player, while in the broadcast truck, the question becomes: "How do we cover this?" Such injuries create ethical dilemmas for sports broadcasters, whose job it is to bring the action to you in your living room in an immediate, entertaining fashion. That often includes several replays, up close and at various speeds. So, the same technology that provides multiple angles of a possible fumble on the football field can also capture gruesome injuries in graphically slow motion on the basketball court. With a national media now at the mercy of the 24-hour news cycle, where breaking news is repeated ad nauseam, it would be easy to fall into the common practice of playing and re-playing a major story, which, in this case, was a defining moment in the NCAA tournament. It brought to mind the famously awful Joe Theismann leg injury on Monday Night Football in 1985. That situation is somewhat different in that it happened in professional athletics during a sporting event that is inherently physical and even violent. During Sunday's broadcast, CBS wisely and compassionately chose not to broadcast the fluke basketball injury over and over. While medical personnel attended to Ware, the story on the floor quickly became the star guard's motivating message to his teammates, encouraging them to win the game, even as he was being carted off the floor. Of course, the Cardinals did win, defeating Duke, 85-63, to earn a spot in the Final Four, which coincidentally will take place in Ware's hometown of Atlanta. During that Final Four homecoming, we can expect more emotional coverage of the Ware injury and its impact on the Louisville program, but hopefully that coverage won't include the video of the actual fracture. Beyond its graphic nature, it's an injury that will mean a great deal in the life of a young student athlete, impacting not only his immediate health, but also his career, his family, his team and his future. But in that one shining moment, Kevin Ware turned the attention from his personal situation to his team. He spurred his comrades to a dramatic win, encouraging and inspiring them in spite of his own disappointment. What CBS captured was the agony of injury, the inspiration of unselfish leadership and the thrill of victory, all in a family-friendly format. Kellie Goodman Shaffer can be reached at [email protected]. Her column appears on Tuesdays.
http://www.altoonamirror.com/page/content.detail/id/570265/Drawing-inspiration-from-adversity.html?nav=752
What is Circular Breathing? Circular breathing is the ability to maintain a sound for long periods of time by filling your cheeks with air when you start to run low on the air in your lungs. It differs from connected breathing or conscious connected breathing, which is used in transformational breathwork sessions for emotional release/catharsis and altered states of consciousness. When you circular breathe, you use the air in your cheeks to power whatever sound-generating source you have, while you inhale through your nose and at least partially refill your lungs to maintain a constant sound. Players of wind instruments such as the saxophone, trumpet, and didgeridoo use it. Circular Breathing is an Ancient Art Glassblowers used a very similar method for centuries; they couldn't stop and inhale, so they used air from their cheeks to keep the glass bubble at a constant pressure while they inhaled through the nose. Inhaling While You're Exhaling It's a really strange feeling, inhaling while you're exhaling. You're not really exhaling, you're using the air in your cheeks, but the biggest obstacle is just achieving the feel of what it takes to inhale while you're forcing air out. You almost have to divorce the front of your mouth from the back of your mouth. Once you get the feeling, it's actually quite easy to do. I read about a man who was very effective in teaching young students to circular breathe. He told them to puff their cheeks and maintain their sound. While they were puffing their cheeks he would yell, "Inhale now!" and he would squeeze their cheeks with his hand in order to force the air out while they were inhaling. So there are a number of ways to get that feeling initially, but that's the biggest hurdle to overcome. The other is that it's a different set of muscles you're using to maintain the sound, and you find that you have to fine-tune those muscles. To begin with, you really have to understand the four distinct steps of circular breathing: 4 Distinct Steps of Circular Breathing - As your lungs begin to lose air, you puff your cheeks. - Air from the cheeks is pushed with the cheek muscles through the instrument and used to maintain the sound while you inhale through your nose. - As sufficient air is brought in, you begin exhaling through the lungs again. - The cheeks are brought back to their normal position. You have to understand that this isn't something that happens right away. What I find, particularly with older students, is that they want it to develop immediately because they're quite advanced in other areas. But the reality is that it takes a while for all that to develop, just like it took a while for them to learn to play at the beginning. Practicing Circular Breathing Those who are looking to improve their breathing should focus on improving the volume, and that includes musicians. Breathing techniques and support from equipment like an oxygen concentrator help you achieve just that, without any circus tricks or gimmickry! If you must practice circular breathing you need to make sure you have an adequate, healthy microbiome (try Prescript Assist) to help offset the stress to your gut and nervous system and to help recover what you lost. Developing Circular Breathing Skills - Try This Exercise: - Puff your cheeks out and breathe normally in and out through your nose. - Do the same thing again but create a very small hole in your lips. As you breathe in and out through your nose, allow the air to escape through your lips. The first time it all goes out at once.
Then you learn to hold back and let it escape a little bit at a time. - Get a straw and squeeze it almost all the way shut. While your cheeks are puffed, work the straw into your mouth and put the other end in a glass of water. While breathing in and out of your nose like before, see if you can get bubbles to come out of the end of the straw in the water. The reason for squeezing the straw is that the first time you inhale through your nose, the natural thing is to inhale through your mouth, and this way you're not going to drown. - As you're breathing in and out through your nose, once you get a big breath, force yourself to exhale through your mouth, and that's circular breathing. Fundamentally, you're doing it at this point. - Actually get involved with your instrument as soon as possible. A lot of people end up being able to do it with a straw but they can't do it any other way because they've neglected to spend the time doing it with the instrument in their mouth. The whole notion of having something going on besides just blowing bubbles through a McDonald's straw really does stop a lot of people. Mask "The Bump" There's always a little bump when you switch from the air in your cheeks back to the air in your lungs. Everybody wants to get rid of it, but there's always going to be a slight hitch or bump. Always. What you need to do is find exercises that will help mask the bump as much as possible. What I have read is that with most students the bump isn't noticeable if you're wiggling your fingers or doing some kind of technical pattern while you're going through the circular breathing. The concentration for listeners will then be on the notes being changed as opposed to the variation in sound. One has to practice circular breathing every day. Just include it as part of your warm-up. It's something you have to do on a daily basis, otherwise it just doesn't develop. What Does Circular Breathing Have to Do with Natural Breathing? Circular breathing may strengthen the diaphragm somewhat, but other than that, not much. The primary key to breathing is the size of the diaphragm - not its strength. Circular breathing is mostly a cute trick with severe limitations for meaningful wind instrument expression or performance. Most people want to breathe better due to the desire or need to take in more oxygen. An oxygen concentrator comes only second to breathing development when it comes to creating more oxygen. With a large reservoir bag connected to the oxygen concentrator, one can breathe in many times more oxygen (perhaps 100 times more, depending on how fast one is breathing) than anyone who breathes without one, regardless of how well developed their breathing is - circular or otherwise. Recent Comment from a Saxophone Player I view circular breathing as a circus trick, quite divorced from the art of wind instruments. In my view, the most profound thing about wind instruments is how they give the listener a deep tour of the emotional life of the artist, through a raw and completely naked revelation of the artist's breathing: diaphragm, posture, ribs...and every little emotional nuance along each millimeter of the exhale and inhale. Circular breathing completely circumvents this connection between the listener and the artist's emotional/breathing life, allowing the listener into the artist's body only as deep as the cheeks....a very superficial tour of the artist's inner life.
Furthermore, the artist's muse (his "inner composer") is highly sensitive to the freedoms and limitations of the artist's breathing mechanism....and will "serve up" musical phrases to the artist in accordance with these freedoms and limitations. In other words, if the artist has a profoundly coordinated and supple breathing mechanism, the artist's muse will create phrases to take advantage of this. This connection is automatic, and out of the artist's control. Therefore, in my view, an artist who disconnects their wind instrument playing from their diaphragm, by circular breathing and consequently providing air only from the cheeks, will inadvertently signal the inner muse to provide phrases limited to revealing the limited expressiveness of cheek breathing. I should add that great wind art reveals the entire "story" of each exhale, with the shifting emotions and sensations that correspond with each fraction of the exhale....e.g., there is a very different emotional feeling to the beginning of the exhale than there is to the end of the exhale. Always. Circular breathing is designed to circumvent that "story"...a strategy that I personally have no interest in participating in as an artist nor as a listener. While I can't prove any of the above, I note that _none_ of the great jazz artists employed circular breathing. Not one. In short: circular breathing is based on the incorrect belief that the age-old "story of the exhale" (a story that has moved the human spirit deeply for millions of years) has run its course, and is now an artistic limitation....i.e., that great wind art is defined by the duration of the exhale, rather than how naked and alive the exhale is. - Paul A. From Mike: What about Dizzy Gillespie with his huge jowls? From Paul: ...it's difficult to discuss him with others rationally because he's taken on mythical proportions. However, in my view and to my ears, and this is a view that contradicts popular wisdom, Dizzy's main fault is indeed his cheek breathing, and the depth of his sound and "exhale story" suffer tremendously because of it. From Mike: Any thoughts on that? There is a little quickie breath involving the lower breath that singers can develop that is hardly noticeable. From Paul: I don't have a clear theory about that. Again, what I wrote to you is off the top of my head. But I gather that the "quickie breath" feeds a little more air to the diaphragm mechanism in a moment of need. That's entirely different than circular breathing, which feeds air to the cheek breathing, a hopelessly superficial and emotionally disconnected breathing style, regardless of whether the fuel for it comes in big inhales or in quickie breaths. From Mike: Does Dizzy have a rep for slightly longer passages? From Paul: Considering my background, I should know the answer to that, but I don't. But even if he does have such a rep, I wouldn't count on it being deserved. One should not lose sight of the fact that the true diaphragm breathers achieved _remarkably_ long phrases, but all perfectly shaped to tell a profound emotional story. In other words, circular breathing may attract a "fast food" crowd of students that don't want to do what it would take (both mechanically and characterologically) to truly expand the breath. And more importantly, there may be some confusion over priorities....do the great diaphragm breathers have the capability of playing tremendously long phrases because they got connected with the suppleness of their breath or vice versa?
I suspect the former, that "long exhale" is a side-effect (not a cause) of great and alive exhaling...but I'll bet many of the circular breathers would guess the latter and believe that if one can lengthen the breath, a side-effect of that would be great art. Rahsaan Roland Kirk and Kenny G. are also purported to do this. Conclusion? Circular breathing can be dangerous. The lack of a feeling of security, when aiming for a continuous flow of sound, can easily lead you to breathe very quickly and with shallow breaths. Thus you hyperventilate and become dizzy. There have been instances where people have fainted. I read of an extreme instance where someone once died of hyperventilation when circular breathing! Another disadvantage of circular breathing is that it enables you to play continuously, with no phrase endings or spaces in your music. The music has no full stops or commas. It has no punctuation, no place to breathe; hence commas are marked on sheets of lyrics. Music needs these breaks. Why spend hours perfecting a skill which enables you to spoil the music?
https://breathing.com/blogs/breathing-methods-and-breathing-work/circular-breathing?rfsn=2542488.21660d
Integrated risk reduction framework to improve railway hazardous materials transportation safety. Rail transportation plays a critical role in transporting hazardous materials safely and efficiently. A number of strategies have been implemented or are being developed to reduce the risk of hazardous materials release from train accidents. Each of these risk reduction strategies has its safety benefit and corresponding implementation cost. However, the cost effectiveness of integrating different risk reduction strategies is not well understood. Meanwhile, there has been growing interest in the U.S. rail industry and government in how best to allocate resources for improving hazardous materials transportation safety. This paper presents an optimization model that considers the combination of two types of risk reduction strategies, broken rail prevention and tank car safety design enhancement. A Pareto-optimality technique is used to maximize risk reduction at a given level of investment. The framework presented in this paper can be adapted to address a broader set of risk reduction strategies and is intended to assist decision makers with local, regional and system-wide risk management of rail hazardous materials transportation.
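To illustrate the kind of Pareto-optimality analysis the abstract describes, here is a minimal, self-contained sketch. All option costs and risk-reduction figures below are invented for illustration, and the assumption that residual risks multiply across independent strategies is a simplification, not the paper's actual model:

```python
from itertools import product

# Illustrative (invented) option sets: each entry is (cost in $M, risk reduction in %).
broken_rail_options = [(0, 0.0), (10, 12.0), (25, 22.0), (60, 30.0)]
tank_car_options = [(0, 0.0), (15, 18.0), (40, 28.0)]

# Enumerate every combination of the two strategy types. We assume costs add
# and residual risks multiply, i.e. reductions combine as 1 - (1 - a)(1 - b);
# this independence assumption is purely for the sketch.
portfolios = []
for (c1, r1), (c2, r2) in product(broken_rail_options, tank_car_options):
    cost = c1 + c2
    reduction = 100 * (1 - (1 - r1 / 100) * (1 - r2 / 100))
    portfolios.append((cost, reduction))

# A portfolio is Pareto-optimal if no alternative is at least as cheap
# while delivering strictly more risk reduction.
pareto = [p for p in portfolios
          if not any(q[0] <= p[0] and q[1] > p[1] for q in portfolios)]

for cost, reduction in sorted(pareto):
    print(f"cost ${cost}M -> {reduction:.1f}% risk reduction")
```

Sorting the surviving portfolios by cost traces out the efficient frontier: for any budget, the best achievable risk reduction is read off the frontier, which is exactly the decision-support use the abstract envisions.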
Oneka is 12 years old. He is also a former child soldier, forced to kill and fight by rebels who came upon Oneka and his grandmother on a walk one day, brutally killed her and kidnapped the young boy. Oneka manages to escape and returns to his village. But he has been changed by his time as a child soldier, when he learned to deaden himself inside to commit barbaric atrocities. He wrestles with the trauma of his past, retreating into dreams and fantasies of a normal, more comfortable life. But his fantasies begin to turn into nightmares, as his trauma bleeds into them. His dreamscapes become increasingly disturbing — and soon twist back into reality, as the rebels come back to his village and force him to stand in front of his parents and sister, offering him a choice between life and death. Inspired by the true stories of former child soldiers, writer-director Yangzom Brauen’s powerful drama has as much in common with fantasy and horror films as it does with social and political dramas. Its form and style play with the malleability of time and place, blending disturbing naturalistic scenes of violence with flights of grounded fantasy, rendered with luminous images and lyrical camerawork. There is also the harrowing emotional intensity inherent to horror — only here, the horror is based on the real-life experiences of child soldiers in Africa. Many of these experiences are not visually dramatized in the film, but they’re depicted in Oneka’s monologue, which runs as a voiceover throughout the film. Voiceover can be a narrative crutch, but here it lays out just how Oneka was inculcated and brainwashed into a traumatic, horrific culture of violence. Many of the atrocities mentioned by Oneka are mercifully not visualized, but their details bleed into the fantasies he escapes into, whether blood running out of a faucet or the severed limbs of people he was forced to kill. The “intrusions” of Oneka’s violent past increase, and young actor Roland Kilumbu captures Oneka’s haunted heaviness with gravity well beyond his years. As his past catches up to Oneka, the suspense and pacing intensify, until fantasy collides with reality in a climax devastating in its starkness and brutality. “Born In Battle” is about an almost unbearable subject matter, one that violates all norms of human decency. Yet child soldiers deserve to have their stories witnessed and heard, often for their healing. The storytelling takes us inside the emotional and imaginative landscape of a child, the place where children often naturally retreat, even in the best of circumstances. But even in this sacred space, trauma and violence find Oneka, showing just how damaged he is from having innocence wrenched away from him, only to be thrown into the darkest impulses of humanity. The final images of the film offer some consolation in their cinematic transcendence, but any resolution won — by Oneka and by the viewer — is as fragile as peace.
http://m.omeleto.com/254845/
Modern buildings are expected to be not only energy efficient but also energy flexible, to facilitate reliable integration of intermittent renewable energy sources into smart grids. Estimating the aggregate energy-flexibility potential at the cluster level plays a key role in assessing the financial benefits and service area of energy-flexibility services at the design stage and in determining real-time pricing at the operating stage. However, most existing studies have focused on the energy flexibility of individual buildings rather than building clusters. In addition, due to the intrinsic uncertainty in building envelope parameters, the performance of building energy systems, and occupancy and occupant behavior, it is necessary to quantify the uncertainty in aggregate energy flexibility. In this study, we developed an approach to quantifying the uncertainty in the aggregate energy flexibility of residential building clusters using a data-driven stochastic occupancy model that can capture the stochasticity of occupancy patterns. A questionnaire survey was carried out to collect occupancy time-series data in Hong Kong for occupancy model identification. Aggregation analysis was conducted considering various building archetypes and occupancy patterns. The uncertainty in aggregate energy flexibility was then quantified, based on the proposed performance indices, using a Monte Carlo technique. As the building clusters scaled up, the estimated energy-flexibility potential became steady and the weekly energy flexibility stayed around 12.40%. However, the weekly uncertainty of aggregate energy flexibility decreased exponentially, from 19.12% for 8 households to 0.74% for 5,120 households, which means that the estimate of a building cluster’s energy flexibility is more reliable than that of a single building. Related publication: Maomao Hu, Fu Xiao (2020). Quantifying uncertainty in the aggregate energy flexibility of high-rise residential building clusters considering stochastic occupancy and occupant behavior. Energy.
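The aggregation effect reported above (relative uncertainty shrinking as the cluster grows) can be reproduced in spirit with a toy Monte Carlo experiment. This is a minimal sketch, not the study's model: the per-household flexibility distribution (mean 12.4%, standard deviation 5%) and the equal-demand weighting are invented assumptions, used purely to show why pooling many stochastic households stabilizes the aggregate estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_uncertainty(n_households: int, n_trials: int = 2000) -> float:
    """Monte Carlo spread of a cluster's weekly energy flexibility.

    Each household's flexible share of weekly demand is drawn from an
    assumed normal distribution (mean 12.4%, s.d. 5% -- invented numbers),
    truncated at zero since flexibility cannot be negative.
    """
    shares = rng.normal(loc=0.124, scale=0.05, size=(n_trials, n_households))
    shares = shares.clip(min=0.0)
    cluster_mean = shares.mean(axis=1)   # equal demand weights assumed
    return cluster_mean.std() / cluster_mean.mean()

for n in [8, 80, 640, 5120]:
    print(f"{n:5d} households: relative uncertainty {100 * relative_uncertainty(n):.2f}%")
```

For independent households the relative spread falls roughly as 1/sqrt(N), consistent with the steep decrease the abstract reports between 8 and 5,120 households; the study's actual figures come from its identified occupancy model, not from this toy distribution.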
https://maomaohu.net/project/2_uncertainty/
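To make the Monte Carlo aggregation idea in the abstract above concrete, here is a minimal illustrative sketch. It is not the authors' model: the demand ranges, the occupancy-driven spread of each household's flexible fraction, and the 12.4% mean are invented assumptions chosen only to echo the reported figures. It shows why the relative uncertainty of a cluster's aggregate flexibility shrinks as the number of households grows (roughly as 1/sqrt(N) for independent households), which is the trend the study reports from 8 to 5,120 households.

```python
import numpy as np

rng = np.random.default_rng(42)

def weekly_flexibility_samples(n_households, n_runs=500):
    """Monte Carlo samples of a cluster's weekly energy flexibility (%).

    Hypothetical inputs: each household's weekly demand (kWh) and its
    flexible fraction, whose spread stands in for stochastic occupancy.
    The aggregate flexibility per run is the demand-weighted average.
    """
    demand = rng.uniform(50, 150, size=(n_runs, n_households))
    flex_fraction = rng.normal(0.124, 0.05, size=(n_runs, n_households)).clip(0, 1)
    flexible_energy = demand * flex_fraction
    return 100 * flexible_energy.sum(axis=1) / demand.sum(axis=1)

for n in (8, 80, 5120):
    s = weekly_flexibility_samples(n)
    # Relative uncertainty = std/mean; it shrinks roughly as 1/sqrt(n)
    print(f"{n:5d} households: mean {s.mean():5.2f}%, "
          f"relative uncertainty {100 * s.std() / s.mean():5.2f}%")
```

Under these assumptions the mean stays near 12.4% at every scale while the relative uncertainty collapses with cluster size, mirroring the paper's qualitative conclusion that cluster-level estimates are more reliable than single-building ones.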
(Following the announcement made by the Prime Minister, YAB Tan Sri Muhyiddin Mohd Yassin, on 25 March 2020, in relation to the extension of the Movement Control Order to 14 April 2020 in order to contain the COVID-19 outbreak, public listed companies have postponed their general meetings to a later date. Hence, there will be no Weekly Watch featured in this newsletter). At this trying time, let us all continue to stay home, stay safe and stay healthy! Be patient, as better days are on their way! 03.04.2020 Desperate Times Need Desperate Measures There is a triple whammy affecting global capital markets: the trade wars, the oil prices and the COVID-19 pandemic. Recently, Bursa Malaysia and the Securities Commission made certain decisions regarding short selling, margin financing rules and continuous trading. The rule changes and decisions are due to the desperate times that investors, brokers and the Exchange are facing. After all, desperate times need desperate measures. Short-selling In a pronounced continuous downspin in the market, it is wise to stop short selling temporarily (despite there being an uptick rule). Our market saw wild continuous downward pressure recently; at times our FBMKLCI index was down over 90 points during the day. Short selling creates both actual and psychological downward pressure on the market in such a scenario. The actual downward pressure arises from the huge short-selling sell orders that tend to drive the price downwards. This is exacerbated by the psychological (or even real) fear among other investors, who jump onto the bandwagon and also dump their shares. Thus, there is a downward spiral in the share prices and the indices. Short selling was suspended to help mitigate the downward spiral in share prices. Margin Accounts The downward spiral in share prices results in the forced-selling of margin accounts kicking in, and this adds further fuel to the downward pressure on share prices. Margin accounts may provide useful leverage in a bull market but can become a nightmare in a bear market. Investors who utilize margin accounts must maintain a certain level of collateral against their margin loans. Under the Rules of the Exchange, once the collateral drops to a certain percentage of the loan (150% of the loan), the margin-loan provider will call on the margin account holder to restore the collateral level to above 150%. However, if the collateral value drops to a lower prescribed level (130% of the loan), the loan provider, which often is the stockbroker, can under the rules of the Exchange force-sell the collateral shares; this in turn creates further downward pressure in an already downward-trending market, further fuelling other instances of margin force-selling – a painfully vicious cycle. It must be understood that the stockbroker is required under the rules of the Exchange to force-sell the collateral shares when the collateral falls below 130%; otherwise, the stockbroker will end up holding receivables (margin loans given out) that exceed the value of the collateral, and this will in turn create a systemic risk among the stockbrokers. Margin Accounts – Abandoning the Rules The above rules, which seek to centralise risk management at the Exchange level, were abandoned on 26 March as additional relief measures to alleviate the impact of COVID-19 on capital market players.
The Exchange decided to give more flexibility and discretion to brokers by removing the requirement to automatically liquidate a client's margin account if the equity in the account falls below 130% of the outstanding balance. Brokers will also not be required to make additional margin calls or impose haircuts on any collateral and securities purchased and carried in margin accounts due to an unusually volatile market. What the Exchange is saying is that it is suspending the abovementioned rules on the premise that the brokers know their customers best, thus leaving it to the brokers to manage the margin risks. The question, then, is whether this approach should not also apply in good times, i.e. that the brokers know their customers best. This relaxation of rules may also seem somewhat paradoxical: in good times, the Exchange wants to manage margin risks, but in bad times, when arguably the Exchange should be managing the risks, the brokers are asked to manage their own margin risks. Another change in the Rules is that the Exchange will allow brokers to accept other collateral, such as bonds, collective investment schemes, unit trusts, gold and immovable properties, for the purposes of maintaining their clients' margin accounts, if such collateral is valued as per the broker's credit policy. This is a relaxation, as the Exchange previously prescribed the kinds of collateral that are acceptable and the haircuts, if any, for the various types of collateral. Now, it is up to the broker, both as to the type of acceptable collateral and the haircuts that need to be applied. Another risk is systemic risk to the capital market, as some brokers may be 'gung-ho' and end up with substantial margin loans backed by inadequate collateral. These brokers may end up posing a systemic risk, as they may become bankrupt. Desperate times need desperate measures; and once we are beyond the COVID-19 pandemic, we need to evaluate our approach to rule making and implementation. We seem to be making centralised rules to be abided by in good times and abandoning these centralised rules in bad times, when arguably it should be the other way around. Continuous Trading There was a call to suspend the stock exchange (as was done in the Philippines). But this would be akin to imposing 'capital controls' on investors in the stock market, as they would not be able to liquidate (cash out) by selling their shares. Reportedly, the Philippine Stock Exchange (PSE) Index plunged as much as 24% on 19 March when it resumed trading after an unprecedented two-day closure over fears of COVID-19. The PSE Index ended the day with a 13% decline. Analysts have suggested the sell-down was due to a loss of confidence in the PSE, resulting in a rush to safer assets. It is the role of an exchange to provide continuous trading to enable investors to manage their risks (which can change in seconds) in today's globalized world, and to capitalize on opportunities. Suspending the exchange is akin to changing the goalposts in the middle of the game. If an exchange suspends the market, investors will not be able to sell their shares and get the cash (by T+2). The provision of continuous trading is the responsibility of a responsible stock exchange.
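For readers unfamiliar with how the (now suspended) thresholds operate in practice, here is a minimal sketch of the 150% margin-call and 130% force-sell levels described above. The function, account figures and messages are purely illustrative of the mechanics as summarised in this newsletter, not the Exchange's actual rule text or any broker's system.

```python
def margin_status(collateral_value: float, loan_outstanding: float) -> str:
    """Classify a margin account against the thresholds described above:
    a margin call below 150% of the loan, force-selling below 130%."""
    ratio = collateral_value / loan_outstanding
    if ratio < 1.30:
        return "force-sell collateral"   # broker must liquidate under the old rules
    if ratio < 1.50:
        return "margin call"             # client must top collateral back above 150%
    return "in good standing"

# Illustrative example: an RM100,000 margin loan as collateral value falls
for collateral in (160_000, 140_000, 125_000):
    print(f"collateral RM{collateral:,}: {margin_status(collateral, 100_000)}")
```

The vicious cycle the article describes follows directly from this logic: falling prices push the ratio below 1.30, forced sales push prices lower still, and more accounts cross the threshold.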
Devanesan Evanson Chief Executive Officer MSWG TEAM Devanesan Evanson, Chief Executive Officer, ([email protected]) Linnert Hoo, Head, Research & Development, ([email protected]) Norhisam Sidek, Manager, Corporate Monitoring, ([email protected]) Lee Chee Meng, Manager, Corporate Monitoring, ([email protected]) Elaine Choo Yi Ling, Manager, Corporate Monitoring, ([email protected]) Lim Cian Yai, Manager, Corporate Monitoring, ([email protected]) Nor Khalidah Mohd Khalil, Executive, Corporate Monitoring, ([email protected]) DISCLOSURE OF INTERESTS •With regard to the companies mentioned, MSWG holds a minimum number of shares in all these companies covered in this newsletter. DISCLAIMER This newsletter and the contents thereof and all rights relating thereto including all copyright is owned by the Badan Pengawas Pemegang Saham Minoriti Berhad, also known as the Minority Shareholders Watch Group (MSWG). The contents and the opinions expressed in this newsletter are based on information in the public domain and are intended to provide the user with general information and for reference only. Best efforts have been made to ensure that the information contained in this newsletter is accurate and current as at the date of publication. However, MSWG makes no express or implied warranty as to the accuracy or completeness of any such information and opinions contained in this newsletter. No information in this newsletter is intended to be or should be construed as a recommendation to buy or sell or an invitation to subscribe for any, of the subject securities, related investments or other financial instruments thereof. MSWG must be acknowledged for any part of this newsletter which is reproduced. MSWG bears no responsibility or liability for any reliance on any information or comments appearing herein or for reproduction of the same by third parties. All readers or investors are advised to obtain legal or other professional advice before taking any action based on this newsletter.
https://www.mswg.org.my/newsletters/mswg-weekly-newsletter-03-april-2020-english
Safety culture has been described as a critical factor in establishing the importance of safety within an organization. The prevention of work-related accidents is an important measure taken by any company to assure the well-being of its employees and maintain a good working environment. Therefore, health, safety, and (work) environment (HSE) security is a relevant issue to be discussed in terms of employee retention. The objective of this study is to determine the effect of HSE security on employee retention, with pay satisfaction as the mediator. This study adopted a survey research design, and a sample of 200 employees of Asian Supply Base Sdn. Bhd. participated as respondents. The collected data were analysed using Smart-PLS. The results showed that HSE security positively affects employee retention, and that pay satisfaction has a mediating effect on the relationship between HSE security and employee retention.
https://ir.uitm.edu.my/id/eprint/56453/
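The study itself used Smart-PLS, a PLS-SEM tool; purely to illustrate what a "mediating effect" means in this abstract, here is a minimal regression-based sketch on simulated data. Everything below (the path coefficients, noise levels and variable names) is an invented assumption for illustration, not the study's data or its actual analysis: the indirect effect is estimated as the product of the HSE-to-pay-satisfaction path (a) and the pay-satisfaction-to-retention path (b), alongside the direct effect (c').

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # matches the study's sample size; the data here are simulated

# Simulated standardized constructs: HSE security -> pay satisfaction -> retention
hse = rng.normal(size=n)
pay = 0.5 * hse + rng.normal(scale=0.8, size=n)                    # a-path
retention = 0.3 * hse + 0.4 * pay + rng.normal(scale=0.8, size=n)  # c'- and b-paths

def slope(x, y):
    """OLS slope of y on a single centered predictor x."""
    xc = x - x.mean()
    return xc @ (y - y.mean()) / (xc @ xc)

def partial_slopes(x, m, y):
    """OLS slopes of y regressed on [1, x, m]; returns (c', b)."""
    X = np.column_stack([np.ones_like(x), x, m])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1], beta[2]

a = slope(hse, pay)                      # effect of HSE security on pay satisfaction
c_prime, b = partial_slopes(hse, pay, retention)
print(f"indirect (mediated) effect a*b = {a * b:.3f}")
print(f"direct effect c' = {c_prime:.3f}")
```

A nonzero a*b alongside a reduced direct effect c' is the partial-mediation pattern the abstract reports for pay satisfaction.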
For the past number of years, Tree Week has been going from strength to strength in the County of Cork, supported by Cork County Council. National Tree Week is an initiative of the Tree Council of Ireland with the support of Coillte, and this year takes place from March 21st to 27th. It is a week to be proud of our trees; to learn about their folklore and their practical applications, and to appreciate how important a healthy and diverse tree stock is as we peer ever deeper into the impact that Climate Change may have. To be part of National Tree Week, community groups and organisations, schools and people everywhere are invited to organise or participate in one or more events for the week. As well as tree planting ceremonies, the range of events can include forest and woodland walks, nature trails, workshops, woodturning displays, and listening to the trees and what lives in them. Talks, tree climbing, broadcasts, launches, poetry readings, exhibitions and dramas, and other similar ideas and events are all welcome. Thanks to the support of the Tree Council of Ireland, Cork County Council, through its Environmental Awareness Office and Heritage Unit, will have a number of native trees to give out to local schools, community groups and organisations on a first come, first served basis, which can be planted during local Tree Week events. Cork County Council takes pride in the natural heritage of the County and has been supporting National Tree Week for a number of years. All proposed Tree Week events can be registered on the Tree Council's website https://treecouncil.ie/initiatives/tree-week/ and, to ensure maximum exposure and promotion, it is advised that event details are registered on the website as early as possible. Over 30 groups from throughout County Cork were allocated trees as part of Tree Week 2019 by Cork County Council, and interest in Tree Week 2020 suggests it will be the benchmark for years going forward. For groups looking to avail of trees for planting, or simply to find out more about our trees and Tree Week itself, email [email protected] or phone 021 4285905.
https://www.thecork.ie/2020/03/05/get-involved-in-national-tree-week-march-21st-to-27th-2020/
Paul Holthus is the Founding President & CEO of the World Ocean Council (WOC), a non-profit organisation responsible for industry leadership and collaboration in ocean sustainable development, science and stewardship. Paul's experience ranges from working with global industry associations and directors of UN agencies to working with fishermen in small island villages, and he has been involved in coastal and marine resource sustainable development and conservation work in over 30 countries. Members benefit from information, analysis and intelligence on ocean industry challenges, and from the potential to shape the agenda, develop synergies, and create economies of scale in reducing risk and accessing opportunities. Related to this, we have launched the Young Ocean Professionals initiative, which is bringing together the up-and-coming generation of ocean industry leaders from around the world to focus on sustainable development. What would you say is a competitive advantage of WOC? Our competitive advantage is, first, that the WOC is a unique, unprecedented high-level and cross-sectoral forum for ocean sustainable development, science and stewardship. Secondly, we facilitate the interaction of this collective, cross-sectoral ocean business community with other ocean industry stakeholders at the highest levels, e.g. at the UN, with the science community, and with the environmental NGO community. We are creating new and innovative ways to address sustainability challenges. For example, the WOC Ocean Investment Platform is bringing together investors with the maritime industry and the entrepreneurs who are developing technology solutions. What three changes would you recommend in shipping that would help improve the marine environment? The shipping industry is making considerable effort and progress in addressing its marine environmental footprint. Moving forward, there are several areas where attention can well be focused. What emerging issues are on the horizon over the next 12-18 months? One of the key issues right now and in the next few years will be the development of a new legally binding instrument to expand the Law of the Sea to regulate the conservation and sustainable use of marine biodiversity in areas beyond national jurisdiction (BBNJ). This agreement will have significant implications for shipping and other industries operating in international waters, for example through the requirements for environmental impact assessments. The final round of preparatory discussions is taking place in 2017. The WOC has been providing the only consistent ocean industry presence in this process over many years, working closely with the International Chamber of Shipping. In 2018, the draft treaty will be put to the UN General Assembly for formal negotiations. It is critical that the shipping industry engages in this process. The WOC will continue to develop and lead a coalition of industry leadership companies and organisations in participating in the development of the new legally binding BBNJ agreement. What is the Blue-Action project about? The WOC has been selected as the only international business organisation to participate in the European project Blue-Action. The project aims to improve understanding of the processes and impacts of climate change in the Arctic and to construct better long-term forecast systems for the increasingly extreme weather of the Arctic and the wider northern hemisphere.
Blue-Action is a four-year research and innovation project funded by the European Union's Horizon 2020 programme. It brings together 116 experts from 40 organisations in 17 countries on three continents, working in academia, local authorities and industry. The project will work to: improve long-range forecast skill for hazardous weather and climate events; enhance ocean predictive capacity in the Arctic and the northern hemisphere; quantify the impact of recent rapid changes in the Arctic on northern hemisphere climate and weather extremes; optimise observation of Arctic conditions and trends; reduce uncertainty in prediction; foster the capacity of key stakeholders to adapt and respond to climate change, boosting their economic growth; and transfer knowledge to a wide range of interested key stakeholders. What are the common challenges relating to Marine Spatial Planning (MSP)? MSP is designed to create a framework for the multiple use of marine areas, including uses associated with environment and conservation. However, the MSP process is largely defined, driven and implemented by governments, with a lot of input from the ocean environment and science community. It is critical that shipping and other ocean economic activities participate actively in the design of the MSP process where it is proposed, and in the implementation of the planning. Without comprehensive industry input, the probability is reduced that MSP will result in plans that integrate the current and future needs for responsible, critical economic activity in the ocean. Please tell us more about the "Sustainable Ocean Summit 2017" scheduled to take place later this year. Since 2010, the WOC Sustainable Ocean Summit (SOS) has been the unique gathering of the world's ocean industries focused on sustainable development, science and stewardship of the global ocean. The international ocean business community will gather again this year to advance leadership and collaboration in developing industry-driven solutions to ocean sustainability challenges. The SOS 2017 (Halifax, 29 Nov-1 Dec) will focus on ocean business community leadership in achieving the UN "Ocean" Sustainable Development Goal (SDG 14), develop business growth and investment opportunities in ocean sustainable development, and ensure continuity and follow-through with the themes, discussions, and outputs from previous SOS events. The SOS 2017 theme recognises the growth of the ocean economy and its contribution to food, energy, transport, communications and other needs of society as part of the UN SDG process/Agenda 2030, and the role of the ocean business community over the next 15 years, and beyond, in ensuring ocean sustainable development. Can you name a memorable shipping experience and your favourite ship? I grew up on several navy bases, mostly overseas, and have quite a few great memories of running around the ships when they came into port at Subic Bay in the sixties. That being said, my favourite ship memory would have to be when I was young and we sailed on the SS Lurline when we moved from Hawaii to California in the late sixties; we had a wonderful time on that trip. Paul González-Morgan, Editor, Marine Strategy. Disclaimer: The Editor is not responsible for the opinions expressed by interviewees. Interviews are pre-approved by the interviewee before public release.
http://www.marinestrategy.com/key-interviews-english/category/woc-paul-holthus
Boracite (redirected from Stassfurtite) boracite [′bȯr·ə‚sīt] (mineralogy) Mg3B7O13Cl A white, yellow, green, or blue orthorhombic borate mineral occurring in crystals which appear isometric in external form; it is strongly pyroelectric, has a hardness of 7 on the Mohs scale, and a specific gravity of 2.9. McGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc. The following article is from The Great Soviet Encyclopedia (1979). It might be outdated or ideologically biased. Boracite: a mineral from the group of anhydrous borates. Chemical composition Mg3[B7O12]OCl. It contains 62.5 percent B2O3. It crystallizes into a rhombic (pseudocubic) system. The structure is of the complex shell type and contains [BO3]3- and [BO4]5- groups. Part of the Mg may be replaced by Fe2+. Boracite forms shiny crystals of cubic, tetrahedral, or dodecahedral appearance. The color is bluish-gray; frequently, it is colorless. Its hardness on the mineralogical scale is 7.5; its density is 2,930–2,950 kg/m3. Boracite is rare. It is formed in sedimentary deposits of gypsum, anhydrite, and potassium and rock salts. The Great Soviet Encyclopedia, 3rd Edition (1970-1979). © 2010 The Gale Group, Inc. All rights reserved.
https://encyclopedia2.thefreedictionary.com/Stassfurtite
Guests to my home, the one hubby and I have carefully decorated together, invariably make one of two comments: a) they find it very relaxing, and b) it reminds them of Indiana Jones' house in the famous movies. Both of those reactions are exactly what we were going for. When we bought our house early in our marriage, neither of us had a really strong sense of style. The house is a pretty standard 1960s raised bungalow; what we liked about it was all the large windows and flowing spaces that give it a feeling of airiness. But how to put our mark on it? After months of waffling, I decided to cut photos of rooms that I liked out of decorating magazines, using only my gut response without analysis. When I'd assembled enough of them, I could see that they all had one thing in common: they were all decorated in earthy tones with natural textures. There were two other influences after that. The first was a visit to the home of friends of a friend. It came about when hubby and I were deciding where to travel to celebrate our 10th wedding anniversary. My long-standing dream was to visit Egypt, which became possible that year while the political situation there was relatively quiet. Hubby was kind of on board, but still had some reservations, so a good friend of ours suggested we go and talk to good friends of his, who'd not only been to Egypt but had travelled to many countries and could give us a broad perspective. Their house was wonderful, full of artifacts from their travels. Walking inside it immediately made one want to pack bags and set off on an adventure; we loved it so much that we decided to bring the same feeling to our own home. After that visit, we did book a tour of Egypt and had a sensational time. But it was Indy's house in the third movie, Indiana Jones and the Last Crusade, that put the final touches on the feel of our home. We buy a piece of artwork in every place that we visit – a brass hookah in a market in Cairo, a heavy bell from a small antique shop in Bangkok, a carved mask in Bali, a hand-woven basket in Botswana – and although these pieces don't have the archeological weight of Indy's collection, when visitors to our home make those comments, we feel we've achieved what we set out to. I bring this up because just this morning I read an article about how we use more than just five senses when we react to different environments. In "5 senses? In fact, architects say there are 7 ways we perceive our environments", we learn that architects design buildings that appeal to more than just sight, sound, smell, taste and feel. They also take into account our unconscious response to a place's environment – its setting (wide open, as in a desert landscape, or tucked away inside, say, a forest) and ambience. Small spaces with lower ceilings tend to feel cozy, for example, while cavernous spaces can be overwhelming. On a personal level, I find very noisy, busy spaces really tiring. Here's an example from several years ago that struck me on the spot. Hubby and I were Christmas shopping at our large local mall, which was full of people bumping into each other and a lot of general hubbub. We stuck it out to get the last of our gifts, but on the way home we decided to stop at a Harvey's joint and pick up some hamburgers.
There was hardly anyone in there (probably all at the mall!), so it was nice and quiet, and the interior was quite cozy on a cold December night, with lower ceilings and a few holiday decorations, and I noticed how quickly I relaxed inside – so much so that it felt like the perfect soft wrap-up to a hard, crazy day. I use the words 'soft' and 'hard' deliberately. Have you ever noticed how very soft clothing, like a cozy sweater or hoodie, can instantly relax you, as compared to something stiff or scratchy? My home is decorated with furniture and colours that make me feel the same as putting on a soft sweater. It seems to resonate with our guests as well. On our travels, hubby and I have encountered all kinds of 'spaces', some that are awe-inspiring, some that are soothing, and everything in between. We had the great fortune to be able to stay in an over-water bungalow in Tahiti several years ago. Air Tahiti Nui was offering a fantastic promotion, with flights to Tahiti and New Zealand as well as three free nights' accommodation in Tahiti, and for a fairly low price I was able to upgrade us to a hotel with those bungalows you see in exotic photos. It was a remarkable experience. The sound of water gently lapping against the pylons supporting the bungalow was so soothing, we'd shut off the air and open the windows at night, and just drift off into the best sleeps we've ever had. British pubs are the epitome of coziness, with lots of wood, homey decor, and often fireplaces that burn warmly during chilly weather. The food is always comforting, the beer and tea always hit the spot, and the ambience is always welcoming when you need to rest your weary feet after several hours of touring. For a sense of awe, it's hard to beat Victoria Falls, on the border between Zambia and Zimbabwe, during high water season. Since we live within an easy drive of Niagara Falls, to be honest I was wondering how impressed we'd be with Vic Falls, but it's famous and we went to see it. You can hear Mosi-oa-Tunya, the 'Smoke that Thunders' in the local language, well before you can see it, but as we walked along the stone-paved pathway to the Falls and got our first sight of them, my jaw quite literally fell open, just like you read about. We were there in April, right after the rainy season, when every second millions of gallons of the Zambezi River cascade 330 feet down into a snaking chasm, sending a thick mist over 1,000 feet into the air and making so much noise you can't hear each other speak. Recently, we were awed by several places in New Mexico – the Big Room in Carlsbad Caverns, the striated rock walls in the wide-open desert landscape, the massive and spiritual ruins of Pueblo Bonito in Chaco Culture National Park, and the enormous radio telescopes of the Very Large Array (you may have seen those in the movie Contact, with Jodie Foster). Being in nature tends to be very soothing and refreshing. There are numerous theories why, but as I've mentioned in other posts, whenever I need to decompress I go for a walk in one of our local gardens or wooded areas, and I'm certainly not alone in doing that. The architecture article even mentions our perceptions of time as being a factor. Driving across wide-open spaces tends to feel longer because our destination always seems to be so far away (or while flying across an ocean, I'd add), while crossing denser spaces feels shorter, presumably because we have frames of reference that indicate movement.
That may also be why rooms crowded with stuff feel smaller and less relaxing than rooms with less clutter. It's a fascinating perspective on how and why different people and cultures, now and far back in time, live the way that they do. I remember visiting a church in Austria that was so crusted with gold inside that it felt anti-spiritual, more about the excess of money thrown at it than the religious experience. Hubby and I drive past the mammoth, ostentatious homes built along the Niagara River that are clearly more about showing off than living comfortably. Next time you go out and about, notice your reactions; they may guide you in making your home a sanctuary from the chaos of our modern times.
https://liontailmagic.com/tag/victoria-falls/
In this reporting period, USM focused on the development of awareness-raising materials for local communities in the MY3 pilot site, depicting the importance and conservation of dugongs and seagrass. T-shirts, posters and A4-size stickers were designed, with support from volunteers. The text for the posters and the A4-size stickers was also translated into Malay. The text for the information sheets was prepared; however, it needs approval from the Department of Fisheries Malaysia. The designs of the dugong T-shirt, posters, and A4 stickers underwent peer review by colleagues at USM and James Cook University. Preparation for the first part of the education programme, including English learning on a marine conservation topic (dugong and seagrass protection), was completed with the development and introduction to the community of an environmental dugong storybook entitled 'The Adventures of Karum the Dugong'. The syllabus for teaching English and conservation using the storybook was prepared by the MY3 Partner. USM also drafted Guidelines for Good Practices for dugongs and seagrasses in Tinggi and Sibu Island, Johor. The Guidelines document provides recommendations on preventing dugong injury and mortality and seagrass bed degradation; it contains a set of actions on how to react when a dugong is found alive or dead and how to give first aid, if possible.
https://www.dugongconservation.org/news/learning-english-adventures-karum-dugong-malaysia/