FIELD OF THE INVENTION The present invention relates to a compressor and a hydrogen station including the compressor. BACKGROUND ART Conventionally, a reciprocating compressor configured to reciprocate a piston in a cylinder to compress gas in a compression chamber of the cylinder has been known. In this compressor, a plurality of piston rings is installed on a piston outer peripheral surface such that the piston rings are aligned in an axial direction of the cylinder. This prevents the compressed gas obtained in the compression chamber from leaking through a gap between the piston outer peripheral surface and the cylinder inner peripheral surface. For example, in a reciprocating compressor disclosed in Japanese Patent No. 5435245, a large number of piston rings divided into two piston ring groups are installed on a piston outer peripheral surface. The compressor disclosed in Japanese Patent No. 5435245 is provided with a gas introduction unit connected to an intermediate part between the two piston ring groups to introduce gas. By this gas introduction unit, a gas having predetermined pressure is introduced into a gap between the piston outer peripheral surface and the cylinder inner peripheral surface. Japanese Patent No. 5435245 has a structure to allow gas to escape from the intermediate part of the cylinder in order to extend the life of the piston rings. It is considered that damage to the piston rings is caused by an increase in a load applied to the piston rings. That is, every time the gas that has passed through the first piston ring flows downstream and passes through each piston ring, the pressure of the gas decreases. Accordingly, the volume of the gas expands and the flow speed increases. This increases the load applied to the piston rings and causes damage.
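The load mechanism described above can be illustrated with a rough isothermal ideal-gas estimate. In the sketch below, the pressures, mass flow, and clearance area are all hypothetical illustration values (not taken from the patent); it only shows that, for a fixed mass flow of leaked gas, the mean velocity through a ring clearance scales inversely with the local pressure, so the downstream rings of a group see the fastest flow.

```python
# Hedged sketch: velocity of leaked hydrogen through successive ring
# clearances under isothermal ideal-gas assumptions. All numbers are
# hypothetical illustration values, not from the patent.
R = 4124.0          # J/(kg*K), specific gas constant of hydrogen
T = 300.0           # K, assumed constant gas temperature
m_dot = 1e-5        # kg/s, assumed leakage mass flow
area = 2e-6         # m^2, assumed clearance flow area per ring

# Pressure downstream of each successive piston ring (hypothetical).
pressures_mpa = [40.0, 30.0, 20.0, 10.0, 5.0]

for i, p_mpa in enumerate(pressures_mpa, start=1):
    rho = (p_mpa * 1e6) / (R * T)        # kg/m^3, ideal gas density
    v = m_dot / (rho * area)             # m/s, mean velocity = m_dot/(rho*A)
    print(f"after ring {i}: P = {p_mpa:5.1f} MPa, v = {v:7.3f} m/s")
```

With these assumed values the velocity grows as the pressure falls from ring to ring, which is the load increase that the leak structure is meant to relieve.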
Therefore, by allowing the gas that has passed through the upper piston ring group to escape through a leak line provided in the cylinder intermediate part, the flow speed through the lower piston ring group is reduced and the lower rings are protected. In the compressor disclosed in Japanese Patent No. 5435245, the flow speed of the leaked gas fluctuates with the reciprocating sliding of the piston. Along with this fluctuation, the load on the piston ring groups may increase and wear of the piston rings may be accelerated. SUMMARY OF THE INVENTION An object of the present invention is to inhibit wear of piston rings in a piston provided with two piston ring groups. A compressor according to one aspect of the present invention is a compressor for compressing a hydrogen gas, and includes: a plurality of compression stages; and a drive mechanism configured to drive the plurality of compression stages. At least one compression stage out of the plurality of compression stages includes: a cylinder; a piston inserted into the cylinder; a first piston ring group installed on the piston; and a second piston ring group installed on the piston closer to the drive mechanism than the first piston ring group. The cylinder is provided with: a first cooling channel through which a cooling fluid for absorbing heat generated between the cylinder and the first piston ring group flows; a second cooling channel through which the cooling fluid for absorbing heat generated between the cylinder and the second piston ring group flows; and a through hole penetrating a side wall of the cylinder from an inner surface to an outer surface of the cylinder in an intermediate part between the first cooling channel and the second cooling channel. The compressor further includes a leak line connected to the through hole.
The hydrogen gas leaks from a compression chamber in the cylinder into the intermediate part through the gap between the cylinder and the piston, and is then guided into the leak line through the through hole. The leak line includes a piping part and a volume expansion unit whose volume within a predetermined distance range is larger than the volume of the piping part over the same distance. A hydrogen station according to another aspect of the present invention includes: the compressor; an accumulator for storing the hydrogen gas discharged from the compressor; and a dispenser for receiving supply of the hydrogen gas from the accumulator. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a diagram schematically showing a configuration of a hydrogen station according to a first embodiment; FIG. 2 is a diagram schematically showing appearance of a compressor according to the first embodiment; FIG. 3 is a diagram schematically showing a configuration of a first block part in the above compressor; FIG. 4 is a diagram schematically showing a configuration of a fifth compression stage in the above compressor; FIG. 5 is a diagram schematically showing a configuration of a volume expansion unit in a second embodiment; FIG. 6 is a diagram schematically showing a configuration of a volume expansion unit in a third embodiment; FIG. 7 is a diagram schematically showing a configuration of a volume expansion unit in a fourth embodiment; FIG. 8 is a diagram schematically showing part of a compressor of a fifth embodiment; FIG. 9 is a diagram schematically showing part of a compressor of a sixth embodiment; FIG. 10 is a diagram schematically showing part of a compressor of a seventh embodiment; and FIG. 11 is a diagram schematically showing a compressor of an eighth embodiment.
DESCRIPTION OF EMBODIMENTS A compressor and a hydrogen station according to embodiments of the present invention will be described in detail below with reference to the drawings.
First Embodiment To begin with, a configuration of a hydrogen station 100 according to a first embodiment will be described with reference to FIG. 1. The hydrogen station 100 is a facility for replenishing a fuel cell vehicle (FCV) with a hydrogen gas as fuel. As shown in FIG. 1, the hydrogen station 100 includes a compressor 1 for compressing a hydrogen gas, an accumulator 2 for storing the high-pressure hydrogen gas compressed by the compressor 1 and then discharged from the compressor 1, and a dispenser 3 for receiving supply of the high-pressure hydrogen gas from the accumulator 2 and supplying the high-pressure hydrogen gas to a demand destination such as a fuel cell vehicle.
Next, a configuration of the compressor 1 will be described with reference to FIGS. 2 to 4. As shown in FIG. 2, the compressor 1 includes a plurality of compression stages (first to fifth compression stages 11 to 15) and a drive mechanism 5 that drives the plurality of compression stages 11 to 15. Each of the five compression stages 11 to 15 sequentially compresses and delivers a hydrogen gas. Of the five compression stages, the first compression stage 11, the third compression stage 13, and the fifth compression stage 15 constitute a first block part 6. The second compression stage 12 and the fourth compression stage 14 are coupled with each other and constitute a second block part 7 provided separately from the first block part 6.
In the first block part 6, the third compression stage 13 is placed on the first compression stage 11, and the fifth compression stage 15 is placed on the third compression stage 13.
Meanwhile, in the second block part 7, the fourth compression stage 14 is placed on the second compression stage 12. The first block part 6 and the second block part 7 are placed on the drive mechanism 5. Rotation of a crankshaft (not shown) of the drive mechanism 5 causes compression of the hydrogen gas in each of the compression stages 11 to 15. In each of the first block part 6 and the second block part 7, a so-called tandem structure compressor is constructed in which a plurality of pistons is connected in series to one piston rod.
The compressor 1 includes a gas introduction pipe 9a, a first connection pipe 9b, a second connection pipe 9c, a third connection pipe 9d, a fourth connection pipe 9e, and a gas discharge pipe 9f. The gas introduction pipe 9a is connected to a suction port of the first compression stage 11. The first connection pipe 9b connects the first compression stage 11 to the second compression stage 12. The second connection pipe 9c connects the second compression stage 12 to the third compression stage 13. The third connection pipe 9d connects the third compression stage 13 to the fourth compression stage 14. The fourth connection pipe 9e connects the fourth compression stage 14 to the fifth compression stage 15. The gas discharge pipe 9f is connected to a discharge port of the fifth compression stage 15. The gas introduction pipe 9a, the first connection pipe 9b to the fourth connection pipe 9e, and the gas discharge pipe 9f form a channel for flowing a hydrogen gas.
FIG. 3 shows the third compression stage 13 and the fifth compression stage 15 in a simplified manner. The third compression stage 13 includes a third cylinder 23 and a third piston 33 inserted into the third cylinder 23. The fifth compression stage 15 includes a fifth cylinder 25 placed on the third cylinder 23 and a fifth piston 35 inserted in the fifth cylinder 25.
The third compression stage 13 is a compression stage preceding the fifth compression stage 15. The third cylinder 23 is a cylinder on a low pressure side of the fifth cylinder 25. The third piston 33 is a piston on a low pressure side of the fifth piston 35.
Inside the third cylinder 23, a third compression chamber S23 is formed by the third cylinder 23 and the third piston 33. Inside the fifth cylinder 25, a fifth compression chamber S25 is formed by the fifth cylinder 25 and the fifth piston 35. A diameter of the third piston 33 is larger than a diameter of the fifth piston 35. The third piston 33 and the fifth piston 35 are connected to each other by a connecting rod 37.
A plurality of piston rings is installed on the outer peripheral surface of the fifth piston 35. The plurality of piston rings constitutes a first piston ring group 41 and a second piston ring group 42. The second piston ring group 42 is installed on the outer peripheral surface of the fifth piston 35 on the drive mechanism 5 side of the first piston ring group 41. That is, the first piston ring group 41 and the second piston ring group 42 are disposed at a distance larger than the distance between adjacent piston rings. A plurality of piston rings is installed on the outer peripheral surface of the third piston 33. The plurality of piston rings constitutes a third piston ring group 43.
Although not shown, the first compression stage 11 includes a first cylinder and a first piston inserted into the first cylinder. The third cylinder 23 is placed on the first cylinder. The first piston and the third piston 33 are connected to each other by a connecting rod, and a piston rod is connected to the first piston. The piston rod converts rotational motion of the crankshaft of the drive mechanism 5 into reciprocating motion of the first piston via a crosshead.
Furthermore, the second compression stage 12 and the fourth compression stage 14 have a configuration in which a piston is disposed inside a cylinder, and the fourth cylinder is placed on the second cylinder.
FIG. 4 is a cross-sectional view schematically showing the fifth compression stage 15. FIG. 4 illustrates the fifth compression stage 15 in more detail than FIG. 3. The fifth cylinder 25 of the fifth compression stage 15 includes a cylinder body 51, a cylinder head 52, a suction side joint member 53, a discharge side joint member 54, an upper jacket member 55, and a lower jacket member 56.
The cylinder body 51 has a long shape in one direction (vertical direction in the illustrated example). A columnar space 51a extending in the one direction is formed in the center thereof. The columnar space 51a penetrates the cylinder body 51 in the vertical direction. An opening 51b is formed on the upper surface of the cylinder body 51.
The cylinder body 51 includes a body head 61, an upper tube part 62, an intermediate part 63, and a lower tube part 64. Note that in the cylinder body 51, these parts 61 to 64 are integrally formed. The body head 61 is located at the upper end of the cylinder body 51 and protrudes laterally (direction orthogonal to the one direction) from the upper tube part 62. On the upper surface of the body head 61, an upper surface recess 61a, which has a circular shape when viewed from above, shares a center point with the opening 51b, and has an outer diameter larger than the outer diameter of the opening 51b, is formed to be recessed downward.
A suction hole 61b and a discharge hole 61c are formed in the body head 61. The suction hole 61b is a space communicating with the columnar space 51a and extending in a direction orthogonal to the one direction, and is open on a side surface of the body head 61.
The discharge hole 61c is a space communicating with the columnar space 51a and extending from the columnar space 51a toward the opposite side of the suction hole 61b. The discharge hole 61c opens on the side surface of the body head 61 opposite to the opening of the suction hole 61b.
The upper tube part 62 has a tubular shape extending in the vertical direction, and is a portion having a constant outer diameter and disposed under the body head 61. The outer diameter of the upper tube part 62 is smaller than the outer diameter of the body head 61 and the intermediate part 63. The intermediate part 63 is disposed under the upper tube part 62. Therefore, the lower surface of the body head 61, the outer peripheral surface of the upper tube part 62, and the upper surface of the intermediate part 63 form an upper recess 51c in the cylinder body 51. That is, the upper recess 51c is formed in a ring shape to surround the outer peripheral surface of the upper tube part 62. The upper recess 51c is covered with the upper jacket member 55.
The lower tube part 64 has a tubular shape extending in the vertical direction, and is a portion having a constant outer diameter and disposed under the intermediate part 63. The outer diameter of the lower tube part 64 is smaller than the outer diameter of the intermediate part 63. Note that the lower end of the cylinder body 51 located under the lower tube part 64 also has the same outer diameter as the intermediate part 63. Therefore, the lower surface of the intermediate part 63, the outer peripheral surface of the lower tube part 64, and the upper surface at the lower end of the cylinder body 51 form a lower recess 51d in the cylinder body 51. That is, the lower recess 51d is formed in a ring shape to surround the outer peripheral surface of the lower tube part 64. The lower recess 51d is covered with the lower jacket member 56.
The cylinder head 52 includes a cylinder head body 52a and a protrusion 52b protruding downward from the lower surface of the cylinder head body 52a. The cylinder head 52 is disposed on the upper surface of the body head 61 with the protrusion 52b fitted to the upper surface recess 61a of the cylinder body 51.
The suction side joint member 53 is used to hold a check valve (not shown) provided in the suction hole 61b of the body head 61. The suction side joint member 53 is attached to the body head 61 to close the opening of the suction hole 61b.
The discharge side joint member 54 is used to hold a check valve (not shown) provided in the discharge hole 61c of the body head 61. The discharge side joint member 54 is attached to the body head 61 to close the opening of the discharge hole 61c.
A through hole that allows the suction hole 61b to communicate with the outside is formed in the suction side joint member 53. The fourth connection pipe 9e is inserted into the through hole. The fourth connection pipe 9e and the suction hole 61b function as a suction-side channel of the fifth cylinder 25 that leads to the columnar space 51a in the fifth cylinder 25 and causes the fifth compression chamber S25, which will be described later, to suction a hydrogen gas.
A through hole that allows the discharge hole 61c to communicate with the outside is formed in the discharge side joint member 54. The gas discharge pipe 9f is inserted into the through hole. The gas discharge pipe 9f and the discharge hole 61c function as a discharge channel of the fifth cylinder 25 that leads to the columnar space 51a in the fifth cylinder 25 and discharges a hydrogen gas from the fifth compression chamber S25 described later.
The upper jacket member 55 is disposed to cover the upper recess 51c. With this configuration, a closed space is formed between the upper jacket member 55 and the outer peripheral surface of the upper tube part 62.
This space functions as a first cooling channel 71 for cooling the first piston ring group 41. The first cooling channel 71 has a size that covers the range in which the first piston ring group 41 reciprocates. A cooling fluid for absorbing heat generated between the fifth cylinder 25 (inner surface of the cylinder body 51) and the first piston ring group 41 flows through the first cooling channel 71.
The lower jacket member 56 is disposed to cover the lower recess 51d. With this configuration, a closed space is formed between the lower jacket member 56 and the outer peripheral surface of the lower tube part 64. This space functions as a second cooling channel 72 for cooling the second piston ring group 42. The second cooling channel 72 has a size that covers the range in which the second piston ring group 42 reciprocates. A cooling fluid for absorbing heat generated between the fifth cylinder 25 (inner surface of the cylinder body 51) and the second piston ring group 42 flows through the second cooling channel 72.
The upper jacket member 55 is provided with an introduction part 57 for introducing the cooling fluid into the first cooling channel 71. The lower jacket member 56 is provided with a discharge part 58 for discharging the cooling fluid from the second cooling channel 72. Note that the introduction part 57 and the discharge part 58 are not limited to the positions shown in FIG. 4. For example, the introduction part 57 may be provided in the lower jacket member 56, and the discharge part 58 may be provided in the upper jacket member 55.
The fifth piston 35 has a long cylindrical shape in one direction (vertical direction in the illustrated example), and is vertically slidably disposed in the columnar space 51a of the cylinder body 51.
The tip surface (upper surface) of the fifth piston 35, the inner peripheral surface of the cylinder body 51, and the lower surface of the protrusion 52b of the cylinder head 52 define the fifth compression chamber S25. A micro gap C1 is formed between the outer peripheral surface of the fifth piston 35 and the inner peripheral surface of the cylinder body 51 (fifth cylinder 25).
The intermediate part 63 is located between the first cooling channel 71 and the second cooling channel 72 in the vertical direction in which the fifth piston 35 reciprocates. A through hole 63a that allows the columnar space 51a (micro gap C1) to communicate with the outside is formed in the intermediate part 63. One end of the through hole 63a opens to the micro gap C1 and the other end opens to the outer peripheral surface of the intermediate part 63. That is, the through hole 63a penetrates the side wall of the fifth cylinder 25 from the inner surface to the outer surface of the fifth cylinder 25.
The first cooling channel 71 and the second cooling channel 72 communicate with each other through a communication passage 63b formed to penetrate the intermediate part 63. That is, a channel for flowing the cooling fluid is formed by the introduction part 57, the first cooling channel 71, the communication passage 63b, the second cooling channel 72, and the discharge part 58. By allowing the first cooling channel 71 to communicate with the second cooling channel 72, the cooling structure in the fifth cylinder 25 can be simplified.
The communication passage 63b is provided at a position different from the through hole 63a in the circumferential direction of the intermediate part 63. Specifically, in the present embodiment, the communication passage 63b is provided on the side opposite to the through hole 63a in the circumferential direction of the intermediate part 63.
Note that the first cooling channel 71 and the second cooling channel 72 do not have to communicate with each other through the communication passage 63b. In that case, the first cooling channel 71 and the second cooling channel 72 are each configured as an independent channel. The introduction part 57 for introducing the cooling fluid and the discharge part 58 for discharging the cooling fluid are provided in each of the first cooling channel 71 and the second cooling channel 72.
A leak line 81 is connected to the through hole 63a provided in the intermediate part 63 of the cylinder body 51. The leak line 81 is a part that guides, to the outside of the fifth cylinder 25, the hydrogen gas that has leaked into the intermediate part 63 from the fifth compression chamber S25 through the micro gap C1. As shown in FIG. 4, in the present embodiment, one end of the leak line 81 is connected to the through hole 63a of the intermediate part 63, and the other end is connected to the fourth connection pipe 9e (channel on the suction side of the fifth compression stage 15). Therefore, the hydrogen gas flowing out from the micro gap C1 to the outside of the fifth cylinder 25 can be returned to the fourth connection pipe 9e. Note that the other end of the leak line 81 is not always connected to the fourth connection pipe 9e. For example, the other end of the leak line 81 may be connected to the third connection pipe 9d or the second connection pipe 9c. The leak line 81 may be connected to a low pressure tank instead of being connected to a channel through which a hydrogen gas introduced in the fifth compression stage 15 flows.
The leak line 81 includes a piping part 82 and a volume expansion unit 83 whose volume existing in a predetermined distance range is larger than the volume of the piping part 82 in the same distance range.
In the present embodiment, the cross-sectional area of the channel part where a hydrogen gas flows in the volume expansion unit 83 is larger than the cross-sectional area of the part where a hydrogen gas flows in the piping part 82. Therefore, the volume of the volume expansion unit 83 existing in the predetermined distance range is larger than the volume of the linear piping part 82 in the same distance range. That is, when the volume expansion unit 83 is compared with the piping part 82, the volume expansion unit 83 is larger than the piping part 82 in volume included in the predetermined length range. For example, the volume expansion unit 83 is thicker (wider) than the piping part 82. The volume expansion unit 83 of the present embodiment is configured as a hollow filter connected to the piping part 82, and is configured to remove, from a hydrogen gas, metal powder derived from the cylinder, resin powder derived from the piston rings, and the like. Note that the volume expansion unit 83 may be formed not as a hollow filter but as a hollow tank.
The piping part 82 includes a first piping part 82a and a second piping part 82b. The volume expansion unit 83 is disposed between the first piping part 82a and the second piping part 82b. One end of the first piping part 82a is connected to the through hole 63a. One end of the second piping part 82b is connected to the fourth connection pipe 9e. The volume expansion unit 83 (filter) is provided with two circulating ports. The other end of the first piping part 82a and the other end of the second piping part 82b are connected to the circulating ports. Note that the length of the first piping part 82a connecting the fifth cylinder 25 to the volume expansion unit 83 is shorter than the length of the second piping part 82b connecting the fourth connection pipe 9e to the volume expansion unit 83.
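Since the first embodiment realizes the volume expansion unit by a larger flow cross-section, the volume comparison over the same distance range reduces to a comparison of cross-sectional areas. A minimal sketch, assuming hypothetical bore diameters (4 mm for the piping part, 20 mm for the filter body) that are not taken from the patent:

```python
import math

# Hedged sketch: internal volume over the same distance range for the
# narrow piping part and the wider volume expansion unit (filter/tank).
# Both bore diameters are hypothetical illustration values.
def volume_over_distance(inner_diameter_m: float, distance_m: float) -> float:
    """Internal volume of a straight circular channel over a given length."""
    return math.pi * (inner_diameter_m / 2) ** 2 * distance_m

distance = 0.10  # m, the "predetermined distance range"
v_pipe = volume_over_distance(0.004, distance)       # 4 mm bore piping part
v_expansion = volume_over_distance(0.020, distance)  # 20 mm bore filter body

print(f"piping part:           {v_pipe * 1e6:.2f} cm^3")
print(f"volume expansion unit: {v_expansion * 1e6:.2f} cm^3")
```

Because volume scales with the square of the bore, a 5x wider section holds 25x the gas over the same distance, which is what lets it damp flow-speed fluctuations.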
As a material for the piping part 82 in the leak line 81, austenitic stainless steel having excellent corrosion resistance is preferably used. Examples of the material include Japanese Industrial Standards austenitic stainless steel pipes (JIS-G3459) SUS316LTP or SUS316TP, the American Society of Mechanical Engineers austenitic stainless steel standard (ASME-Section 2 PART-A 1998 SA-479) XM-19, and the American Society of Mechanical Engineers austenitic stainless steel pipes standard (ASME-Section 2 PART-A 1998 SA-312) TPXM-19. Using the above-described materials for the leak line 81 provides sufficient strength even in an environment where a high-pressure hydrogen gas flows, and hydrogen embrittlement is unlikely to occur.
The volume of the leak line 81 is preferably larger than the volume of the micro gap C1 in the section corresponding to the first piston ring group 41 when the fifth piston 35 is stationary. By giving the leak line 81 predetermined volume in this way, the leak line 81 can easily function as a buffer space for inhibiting pressure fluctuations of a leaked gas.
As described above, the compressor 1 according to the present embodiment cools, by the first cooling channel 71, the leaked gas that leaks while expanding in volume and increasing in flow speed in the first piston ring group 41. This makes it possible to inhibit the expansion of the volume and the increase in the flow speed of the leaked gas, and to inhibit the wear of each piston ring of the first piston ring group 41 more than when the leaked gas is not cooled. Then, the gas leaking into the intermediate part 63 from the fifth compression chamber S25 through the micro gap C1 is guided to the leak line 81 through the through hole 63a.
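The buffer-volume guideline stated above (leak line volume larger than the micro gap volume over the first piston ring group section) can be checked with a simple annulus calculation. All dimensions below are hypothetical illustration values, not from the embodiment:

```python
import math

# Hedged sketch: annular micro-gap volume vs. leak line volume.
# Every dimension here is a hypothetical illustration value.
bore = 0.050            # m, cylinder inner diameter (assumed)
piston = 0.04990        # m, piston outer diameter -> 50 um radial gap (assumed)
ring_group_len = 0.080  # m, axial span of the first piston ring group (assumed)

# Volume of the annular clearance over the ring-group section.
gap_volume = math.pi / 4 * (bore**2 - piston**2) * ring_group_len

# Leak line modeled as a straight pipe plus a hollow filter volume.
pipe_volume = math.pi / 4 * 0.004**2 * 1.0  # 4 mm bore, 1 m long (assumed)
filter_volume = 50e-6                       # 50 cm^3 hollow filter (assumed)
leak_line_volume = pipe_volume + filter_volume

print(f"micro gap volume: {gap_volume * 1e6:.3f} cm^3")
print(f"leak line volume: {leak_line_volume * 1e6:.3f} cm^3")
```

With these assumed numbers the leak line holds roughly two orders of magnitude more gas than the clearance, comfortably satisfying the buffer condition.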
The leak line 81 includes the piping part 82 and the volume expansion unit 83 (filter) in which the volume existing in a predetermined distance range is larger than the volume of the piping part 82 in the same range. For this reason, the fluctuation of the flow speed of the leaked gas in the intermediate part 63 is inhibited during reciprocating sliding of the fifth piston 35. Therefore, the load on the second piston ring group 42 by the leaked gas is reduced, and the wear of the second piston ring group 42 can be inhibited.
Next, a compressor 1 according to a second embodiment will be described. The compressor according to the second embodiment is basically similar to the compressor according to the first embodiment, but differs in a configuration of a volume expansion unit. Only differences from the first embodiment will be described below.
FIG. 5 is a diagram schematically showing a configuration of a leak line 81 in the second embodiment. As shown in FIG. 5, the leak line 81 includes a volume expansion unit 83a connected to a piping part 82, and the volume expansion unit 83a includes a meandering pipe. The volume expansion unit 83a has a length, within a range of a predetermined distance 84, longer than the length of the linear piping part 82 in the same distance range 84. Therefore, the volume in the range 84 is larger than the volume of the piping part 82 in the same range 84. That is, when the volume expansion unit 83a is compared with the piping part 82, the volume expansion unit 83a is larger than the piping part 82 in volume included in the predetermined length range 84. A first piping part 82a is connected to one end of the volume expansion unit 83a, and a second piping part 82b is connected to the other end of the volume expansion unit 83a.
Next, a compressor 1 according to a third embodiment will be described.
The compressor according to the third embodiment is basically similar to the compressor according to the first embodiment, but differs in a configuration of a volume expansion unit. Only differences from the first embodiment will be described below.
FIG. 6 is a diagram schematically showing a configuration of a leak line 81 in the third embodiment. As shown in FIG. 6, the leak line 81 includes a volume expansion unit 83b connected to a piping part 82, and the volume expansion unit 83b includes a pipe formed in a spiral shape. The volume expansion unit 83b has a length, within a range of a predetermined distance 84, longer than the length of the linear piping part 82 in the same distance range 84. Therefore, the volume in the range 84 is larger than the volume of the piping part 82 in the same range 84. A first piping part 82a is connected to one end of the volume expansion unit 83b, and a second piping part 82b is connected to the other end of the volume expansion unit 83b.
Next, a compressor 1 according to a fourth embodiment will be described. The compressor according to the fourth embodiment is basically similar to the compressor according to the first embodiment, but differs in a configuration of a volume expansion unit. Only differences from the first embodiment will be described below.
FIG. 7 is a diagram schematically showing a configuration of a leak line 81 in the fourth embodiment. As shown in FIG. 7, the leak line 81 includes a volume expansion unit 83c connected to a piping part 82, and the volume expansion unit 83c includes a pipe formed in a helical shape. The volume expansion unit 83c has a length, within a range of a predetermined distance 84, longer than the length of the linear piping part 82 in the same distance range 84. Therefore, the volume in the range 84 is larger than the volume of the piping part 82 in the same range 84.
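The second to fourth embodiments all gain volume by packing extra pipe length into the same straight-line distance range. For a helical pipe, the gain can be sketched by unrolling the helix; the coil diameter and pitch below are hypothetical illustration values, not from the embodiment:

```python
import math

# Hedged sketch: length of a helical pipe spanning the same axial distance
# as a straight pipe. Coil diameter and pitch are hypothetical values.
def helix_length(coil_diameter_m: float, pitch_m: float, axial_span_m: float) -> float:
    """Unrolled length of a helix: each turn is the hypotenuse of a
    rectangle with sides (pi * coil diameter) and (pitch)."""
    turns = axial_span_m / pitch_m
    length_per_turn = math.hypot(math.pi * coil_diameter_m, pitch_m)
    return turns * length_per_turn

span = 0.10                                      # m, the distance range
straight_length = span                           # the linear piping part
coiled_length = helix_length(0.05, 0.01, span)   # 50 mm coil, 10 mm pitch

print(f"straight pipe length: {straight_length:.3f} m")
print(f"helical pipe length:  {coiled_length:.3f} m")
```

With this assumed geometry the helix packs about 15x the pipe length (and hence, for the same bore, about 15x the volume) into the same axial distance.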
A first piping part 82a is connected to one end of the volume expansion unit 83c, and a second piping part 82b is connected to the other end of the volume expansion unit 83c.
Next, a compressor according to a fifth embodiment will be described. As shown in FIG. 8, the compressor according to the fifth embodiment differs from the compressor according to the first embodiment in that a fifth compression stage 15 includes a distance piece 8. The distance piece 8 is adjacently disposed under a fifth cylinder 25. In the distance piece 8, a penetrating part 8a through which a connecting rod 37 connected to a fifth piston 35 passes is formed. In the distance piece 8, a space 8b is formed to accommodate a leaked gas leaking through a micro gap C1 corresponding to a first piston ring group 41 and a second piston ring group 42. Note that the distance piece 8 may be coupled with a third cylinder 23, or may be coupled with a drive mechanism 5.
When the compressor 1 is driven, part of the leaked gas leaking from a fifth compression chamber S25 through the micro gap C1 is returned to a fourth connection pipe 9e through a leak line 81. Therefore, the amount of leaked gas leaking from the fifth cylinder 25 to the distance piece 8 can be reduced. Note that while descriptions of other configurations, actions, and effects will be omitted, the descriptions of the first to fourth embodiments can be incorporated into the fifth embodiment.
Next, a compressor according to a sixth embodiment will be described. The compressor according to the sixth embodiment differs from the compressor according to the first embodiment in that a gas cooler 85 is provided on a fourth connection pipe 9e as shown in FIG. 9.
A high-temperature, high-pressure hydrogen gas discharged from a fourth compression stage 14 is cooled by the gas cooler 85 and then introduced into a fifth compression stage 15.
At this time, the gas cooler is disposed downstream of a connection portion of a leak line in the fourth connection pipe . That is, the leak line is connected to a portion upstream of the gas cooler in the fourth connection pipe . Therefore, the hydrogen gas returned from the leak line to the fourth connection pipe joins the hydrogen gas before being cooled by the gas cooler . Therefore, the high-temperature leaked gas flowing from the leak line to the fourth connection pipe can be cooled by the gas cooler . This makes it possible to prevent the hydrogen gas cooled by the gas cooler from being heated by the leaked gas. Note that while the description of other configurations, actions, and effects will be omitted, the description of the first to fifth embodiments can be incorporated into the sixth embodiment. 86 81 FIG. 10 Next, a compressor according to a seventh embodiment will be described. The compressor according to the seventh embodiment differs from the compressor according to the first embodiment in that a check valve is provided on a leak line as shown in . 86 63 9 86 9 63 86 83 25 81 e e While the check valve allows a hydrogen gas to flow from within an intermediate part to a fourth connection pipe , the check valve blocks the flow of a hydrogen gas from the fourth connection pipe to the intermediate part . The check valve is disposed downstream of a volume expansion unit (that is, on the side far from a fifth cylinder ) in the leak line . 1 9 63 86 81 9 63 83 86 9 1 25 9 83 83 9 e e e e e. When a compressor is driven, pressure of a hydrogen gas in the fourth connection pipe may be higher than pressure of a hydrogen gas in the intermediate part . Even in this case, since the check valve is provided in the leak line , it is possible to prevent the inflow of a hydrogen gas from the fourth connection pipe into the intermediate part . 
Moreover, since the volume expansion unit is disposed upstream of the check valve , even if the pressure in the fourth connection pipe is higher than pressure in a micro gap C in the fifth cylinder , the pressure in the fourth connection pipe is unlikely to affect the volume expansion unit . Therefore, the volume expansion unit is unlikely to be affected by pressure fluctuation in the fourth connection pipe Note that while the description of other configurations, actions, and effects will be omitted, the description of the first to sixth embodiments can be incorporated into the seventh embodiment. FIG. 11 87 81 81 9 e Next, a compressor according to an eighth embodiment will be described. The compressor according to the eighth embodiment differs from the compressor according to the first embodiment as shown in in that a pressure reducing valve is provided on a leak line , and the leak line is connected to a channel with lower pressure than a fourth connection pipe (channel on a suction side). 81 63 9 81 9 9 9 a d c b. For example, one end of the leak line is connected to an intermediate part , and the other end is connected to a gas introduction pipe . Note that the other end of the leak line may be connected to a third connection pipe , a second connection pipe , or a first connection pipe 87 81 63 9 87 83 25 81 a The pressure reducing valve provided on the leak line reduces pressure of a hydrogen gas on the intermediate part side (or on a high pressure side) to predetermined pressure and flow the hydrogen gas to the gas introduction pipe side that is on a low pressure side. The pressure reducing valve is disposed downstream of a volume expansion unit (that is, on the side far from a fifth cylinder ) in the leak line . 1 63 9 87 63 9 83 87 81 83 a a When the compressor is driven, the pressure of a hydrogen gas in the intermediate part may be significantly higher than the pressure of the gas in the gas introduction pipe . 
However, since the pressure reducing valve is provided, it is possible to prevent the hydrogen gas from excessively flowing from the intermediate part into the gas introduction pipe . Moreover, since the volume expansion unit is disposed upstream of the pressure reducing valve in the leak line , the pressure change in the volume expansion unit can be inhibited. 81 15 81 63 14 9 9 9 81 63 13 9 9 c b a b a. Note that the leak line is not always connected to a fifth compression stage . For example, one end of the leak line may be connected to the intermediate part of a fourth compression stage . In this case, the other end may be connected to the second connection pipe , the first connection pipe , or the gas introduction pipe . Furthermore, one end of the leak line may be connected to the intermediate part of a third compression stage . In this case, the other end may be connected to the first connection pipe or the gas introduction pipe 11 13 15 11 13 15 A first compression stage , the third compression stage , and the fifth compression stage do not have to be configured as a tandem structure. The first compression stage , the third compression stage , and the fifth compression stage may be configured as separate bodies. Note that while the description of other configurations, actions, and effects will be omitted, the description of the first to seventh embodiments can be incorporated into the eighth embodiment. It should be understood that the embodiments disclosed this time are in all respects illustrative and not restrictive. The scope of the present invention is indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and scope of the claims and equivalents are therefore intended to be embraced therein. Therefore, the following embodiments are also included in the scope of the present invention. 
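As a rough numerical aside on the spiral and helical volume expansion units described above: over the same axial span, the developed length of a coiled pipe, and hence its internal volume, exceeds that of a straight pipe of equal bore. The following sketch uses hypothetical dimensions (0.5 m axial span, 10 mm bore, 80 mm coil diameter, 20 mm pitch) that are not taken from the embodiments.

```python
import math

def pipe_volume(length, inner_diameter):
    """Internal volume of a pipe of a given developed length."""
    return math.pi * (inner_diameter / 2) ** 2 * length

def helix_length(axial_span, coil_diameter, pitch):
    """Developed length of a helix covering a given axial span.
    Each turn advances by `pitch` and unrolls to the hypotenuse of
    (circumference, pitch)."""
    turns = axial_span / pitch
    per_turn = math.hypot(math.pi * coil_diameter, pitch)
    return turns * per_turn

# Hypothetical dimensions in metres.
straight = pipe_volume(0.5, 0.010)
helical = pipe_volume(helix_length(0.5, 0.080, 0.020), 0.010)
print(helical / straight)  # the helical run holds roughly 12-13x the volume
```

With these assumed dimensions the coiled section holds about an order of magnitude more gas than a straight run over the same distance range, which is the property the embodiments exploit to damp flow-speed fluctuation.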
For example, the configuration in which the leak line 81 is connected to the through hole 63a of the intermediate part 63 of the cylinder body 51 may be applied to the second to fourth compression stages 12 to 14.

In the first embodiment, the fifth compression stage 15 may be configured, for example, as a tandem structure with the fourth compression stage that is the compression stage preceding the fifth compression stage 15.

The first compression stage 11, the third compression stage 13, and the fifth compression stage 15 do not have to be configured as a tandem structure. In this case, the first compression stage 11, the third compression stage 13, and the fifth compression stage 15 may be configured as separate bodies. Similarly, the second compression stage 12 and the fourth compression stage 14 do not have to be configured as a tandem structure. In this case, the second compression stage 12 and the fourth compression stage 14 may be configured as separate bodies.

Here, the above-described embodiments will be outlined.

A compressor according to one aspect of the present invention is a compressor for compressing a hydrogen gas, and includes: a plurality of compression stages; and a drive mechanism configured to drive the plurality of compression stages. At least one compression stage out of the plurality of compression stages includes: a cylinder; a piston inserted into the cylinder; a first piston ring group installed on the piston; and a second piston ring group installed on the piston on the drive mechanism side of the first piston ring group.
The cylinder is provided with: a first cooling channel through which a cooling fluid for absorbing heat generated between the cylinder and the first piston ring group flows; a second cooling channel through which the cooling fluid for absorbing heat generated between the cylinder and the second piston ring group flows; and a through hole penetrating a side wall of the cylinder from an inner surface to an outer surface of the cylinder in an intermediate part between the first cooling channel and the second cooling channel. The compressor further includes a leak line connected to the through hole. The hydrogen gas leaking from a compression chamber in the cylinder into the intermediate part through the gap between the cylinder and the piston is then guided into the leak line through the through hole. The leak line includes a piping part and a volume expansion unit in which the volume within a predetermined distance range is larger than the volume of the piping part over the same distance range.

With this configuration, the leaked gas, whose volume expands and whose flow speed increases as it leaks through the first piston ring group, is cooled by the first cooling channel. This makes it possible to inhibit the expansion of the volume of the leaked gas and the increase in the flow speed, and to inhibit the wear of each piston ring of the first piston ring group more than when the leaked gas is not cooled. Then, the gas leaking into the intermediate part through the gap between the cylinder and the piston is guided to the leak line through the through hole. This leak line includes a piping part and a volume expansion unit in which the volume within the predetermined distance range is larger than the volume of the piping part over the same distance range. For this reason, the fluctuation of the flow speed of the leaked gas in the intermediate part is inhibited during reciprocating sliding of the piston.
Therefore, the load on the second piston ring group by the leaked gas is reduced, and the wear of the second piston ring group can be inhibited. (2) In the compressor, the leak line may be connected to a suction side channel of the at least one compression stage, or a channel having pressure lower than the suction side channel. With this configuration, the leaked gas flowing into the leak line can be recovered. (3) In the compressor, the volume expansion unit may be a hollow tubular filter connected to the piping part. An inner diameter of a channel part of the hydrogen gas in the filter may be larger than an inner diameter of the piping part. With this configuration, by using a filter having a large inner diameter as the volume expansion unit, it is possible to reduce the cost more than when the volume expansion unit is separately provided in addition to the filter. (4) In the compressor, the volume expansion unit may include a pipe formed in a meandering shape, a spiral shape, or a helical shape. With this configuration, the volume of the leak line can be secured by increasing the pipe length. (5) A hydrogen station according to the embodiment includes: the compressor; an accumulator for storing the hydrogen gas discharged from the compressor; and a dispenser for receiving supply of the hydrogen gas from the accumulator. As described above, wear of the piston rings can be inhibited in the piston in which two piston ring groups are provided. This application is based on Japanese Patent Application No. 2020-176173 filed on Oct. 20, 2020, the contents of which are hereby incorporated by reference. Although the present invention has been fully described by way of example with reference to the accompanying drawings, it is to be understood that various changes and modifications will be apparent to those skilled in the art. 
Therefore, unless such changes and modifications depart from the scope of the present invention hereinafter defined, they should be construed as being included therein.
Proper fractions are fractions whose absolute value is less than one; for example, 1/2, 2/3, and 3/4 are all proper fractions. In a proper fraction, the numerator is less than the denominator. Improper fractions are fractions whose value is 1 or greater, or -1 or less; for example, 5/4, 7/3, and 9/2 are all improper fractions. Mixed numbers contain a combination of an integer and a proper fraction; for example, 1 1/2, 2 3/4, and 5 1/3 are all mixed numbers. Every mixed number can be written as an improper fraction, and every improper fraction may be written either as an integer or as a mixed number. To write a mixed number as an improper fraction, multiply the whole number by the denominator, add the numerator, and place the sum over the denominator. Of course, you should always make sure that your fraction is reduced. To write an improper fraction as a mixed number, divide the numerator by the denominator: the quotient gives you the whole number part of your answer, and you place the remainder over the denominator. For example, to write 3 2/5 as an improper fraction: 3 × 5 + 2 = 17, so 3 2/5 = 17/5.
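As a quick way to check conversions like these (an illustrative sketch, not part of the original review), Python's `fractions` module follows exactly the same rules:

```python
from fractions import Fraction

def mixed_to_improper(whole, num, den):
    """Multiply the whole number by the denominator, add the numerator,
    and place the sum over the denominator."""
    return Fraction(whole * den + num, den)

def improper_to_mixed(frac):
    """Divide the numerator by the denominator: the quotient is the
    whole part, and the remainder goes over the denominator."""
    whole, rem = divmod(frac.numerator, frac.denominator)
    return whole, Fraction(rem, frac.denominator)

print(mixed_to_improper(3, 2, 5))          # 3 2/5 -> 17/5
print(improper_to_mixed(Fraction(17, 5)))  # 17/5 -> (3, Fraction(2, 5))
```

`Fraction` also reduces automatically, so `mixed_to_improper(1, 2, 4)` comes back already simplified.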
http://sites.austincc.edu/tsiprep/math-review/fractions/mixed-and-improper-fractions/
On the territory to the East and Northeast of the Forum (Agora) of Philippopolis, neighborhoods formed in the years of Early Christianity where several Christian churches were built. In the same area were also found the ruins of a synagogue, a unique building from that period. The ruins of the Small Early Christian basilica were found during the construction works of "Maria Louisa" Blvd. The Small basilica is situated on the eastern outskirts of the Ancient city, next to the fortification wall with a tower from the 2nd - 4th century AD. The basilica has a central nave flanked by two aisles, a single apse and a narthex. A small chapel was built to the South, and a baptistery was erected just next to the Northern aisle. The overall length of the basilica, including the apse, is 20 meters, and the width is 13 meters. The basilica was built in the second half of the 5th century AD with rich architectural decoration: marble colonnades separating the aisles, a marble altar wall, a pulpit and a synthronos. The floors were covered with rich multicolour mosaics with geometrical motifs. The mosaic includes a panel with a donor inscription. Remnants of an altar table were found. After the building had burnt down, it was reconstructed and renovated. The outer dimensions of the building were not changed, but the floor level was raised by approximately 0.70 m. The new flooring was made of bricks. The layout of the narthex, the altar wall and the pulpit was changed, and a baptistery was added next to the Northern aisle. The baptistery had a square plan, a cross-shaped pool and polychrome mosaic flooring on which Christian symbols - deer, pigeons and others - were depicted. The basilica functioned until the end of the 6th century AD. Two donor inscriptions were found during the excavation works. One of them was carved on the marble lining slab from the altar of the basilica. The other was shaped in the mosaic in red tesserae on a white background, just opposite the altar apse.
It mentions a "patrician" and "winner", but the name itself was erased. It was probably the name of Basiliscus, Byzantine Emperor in 475-476 AD, and the erasure appears to have been made intentionally after his dethronement. Day for free visit: every first Thursday of the month for students and retired people.
http://www.visitplovdiv.com/en/node/675
Q: Help with a MILP formulation for service scheduling

I'm trying to formulate a MILP for scheduling service jobs for multiple devices. Let's assume that each device $i$ has a life $\ell_i$ and that I have $n$ total service visits to allocate at times $\{M_1,\ldots,M_n\}$. Each device must be serviced. I would like to maximize the average time of service per device -- understanding that some devices will share a service (think 2 service visits for 5 devices). So far, I have

Objective $$\max \sum_j \sum_i M_j x_{ij}\tag1$$

Constraints

$\sum_j x_{ij} = 1$, $\forall i$ (every device is serviced once) (2)

$\sum_i x_{ij} \geq 1$, $\forall j$ (every job services one or more devices) (3)

$M_j x_{ij} \leq \ell_i$, $\forall i,j$ (each device has a service before the end of life) (4)

Variables $M_j \geq 0$ $\forall j$, $x_{ij} \in \{0,1\}$ $\forall i,j$

I realize that $M_j x_{ij}$ is nonlinear. I can rearrange (4) to make it linear ($M_j \leq \ell_i (1+c(1-x_{ij}))$ where $c$ is very large), but I'm still stuck with the nonlinear term in the objective. Is there a way to restate the objective, so I can use a MILP solver? Or perhaps a better way to formulate the entire problem? I don't have a background in integer programming, so even suitable problem classes are a bit of a mystery to me.

A: There is a way to formulate the problem with everything linear. (You can decide if it is better.) Set a time horizon $T$ (could be the last end-of-life epoch among the devices). Let $x_{it}$ be 1 if device $i$ is serviced at time $t\in \lbrace 1,\dots, T\rbrace$, 0 if not. Let $y_t$ be 1 if there is a service visit at time $t$, 0 if not. Consider the following model: \begin{align} \max&\quad\sum_{i}\sum_{t}tx_{it}\\ \textrm{s.t. }&\quad\sum_{t}x_{it}=1&\forall i\\ &\quad\sum_{t}tx_{it}\le\ell_{i}&\forall i\\ &\quad x_{it}\le y_{t}&\forall i,t\\ &\quad y_{t}\le\sum_{i}x_{it}&\forall t\\ &\quad \sum_{t}y_{t}=n. \end{align} The objective maximizes the sum of the service dates.
The first constraint ensures every device is serviced once. The second constraint requires each device be serviced before it dies. The third constraint ensures that devices are only serviced during visits, the fourth prevents idle visits, and the fifth forces exactly $n$ visits. (You might want to consider making the last constraint an inequality, in case not all $n$ visits are needed.) This assumes that there is no limit on how many devices can be serviced at any one time.
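To sanity-check the time-indexed model on a toy instance, you can brute-force it in a few lines (this is an enumeration sketch for verification, not a MILP solver; the instance data below are made up):

```python
from itertools import combinations, product

def best_schedule(lives, n):
    """Brute-force the time-indexed model: choose n visit epochs from
    {1..T}, give every device one epoch no later than its life, require
    every chosen epoch to serve at least one device, and maximize the
    sum of service epochs."""
    T = max(lives)
    best_value, best_plan = None, None
    for visits in combinations(range(1, T + 1), n):
        for assign in product(visits, repeat=len(lives)):
            if any(t > life for t, life in zip(assign, lives)):
                continue  # violates sum_t t*x_it <= l_i
            if set(assign) != set(visits):
                continue  # an idle visit: y_t = 1 but no x_it
            value = sum(assign)  # objective sum_i sum_t t*x_it
            if best_value is None or value > best_value:
                best_value, best_plan = value, (visits, assign)
    return best_value, best_plan

# Toy instance: device lives 2, 3 and 5, with n = 2 visits.
print(best_schedule([2, 3, 5], 2))  # (9, ((2, 5), (2, 2, 5)))
```

Here the optimum holds visits at epochs 2 and 5: the two short-lived devices share the visit at time 2, and the long-lived device waits until time 5, matching what the MILP would return.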
CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority under 35 USC 119 to German Patent Application No. 10 2010 010 435.3, filed on Feb. 26, 2010, the entire disclosure of which is incorporated herein by reference.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The invention relates to a drive system and to a method for operating such a drive system, in particular for a motor vehicle.

2. Description of the Related Art

The invention can be applied to any vehicle, but the invention and the problems on which the invention is based will be explained in more detail with respect to a passenger car. A hybrid vehicle generally is considered a vehicle with a drive system that uses a plurality of drive units, such as an internal combustion engine and an electric motor. Parallel hybrid drives are usually used to achieve the highest possible energetic efficiency. A parallel hybrid drive permits the electric motor and the internal combustion engine to apply torque to a transmission either alternatively or cumulatively. Furthermore, the electric motor can be used as a generator. For example, brake energy is present in the form of kinetic energy when the vehicle is braked and can be recovered and used, for example, to charge electric energy stores. Frequent starting and acceleration that occur, for example, in road traffic preferably are carried out or assisted by the electric motor in a hybrid motor vehicle, because the operation of the internal combustion engine with frequent load changes results in increased fuel consumption and emissions of pollutants. In contrast to an internal combustion engine, an electric motor already has a high torque at low engine speeds, practically from the stationary state. As a result, an electric motor is suited particularly well for starting and acceleration processes.
In contrast, an internal combustion engine can be operated with high efficiency only at its rated rotational speed, for example in the case of constant fast travel. To combine the advantages of an internal combustion engine with the advantages of an electric motor, it is therefore necessary to configure the drive system structurally in such a way that both the power of the internal combustion engine and the power of the electric motor can be input into the drive system. U.S. Pat. No. 7,611,433 B2 describes a hybrid drive system for a motor vehicle having a double clutch transmission that has two component transmissions and one output shaft connected to the component transmissions via gearwheel stages. An electric machine arranged axially with respect to the output shaft can be connected to one end of the output shaft via an additional clutch. WO 2008/046185 A1 describes a further structural design of a hybrid drive system with an internal combustion engine and an electric motor. The electric motor is arranged axially with respect to an output shaft and can be connected to the output shaft via a clutch. These two designs require a large amount of axial space due to the arrangement of the electric motor. Furthermore, when the clutch assigned to the electric motor closes, the electric motor is connected directly to the drive axle of the vehicle via the output shaft. The drive axle rotates as soon as the electric motor is activated. It therefore is not possible to activate the electric motor in the stationary state of the vehicle, for example to start the internal combustion engine. US 2003/0069103 A1 discloses a hybrid drive system for a motor vehicle with an automated conventional transmission, an internal combustion engine and an electric machine. The transmission has two component transmissions that optionally transmit the torque of the internal combustion engine to an output shaft. 
The electric machine is coupled to the output shaft of the transmission via a gear stage. This arrangement permits activation of the electric motor in the stationary state of the vehicle.

SUMMARY OF THE INVENTION

An object of the invention is to provide an improved drive system and an improved method for operating a drive system to overcome the above-mentioned disadvantages. The invention relates to a drive system for a motor vehicle. The drive system has a double clutch transmission with two component transmissions, a double clutch and an output shaft. An internal combustion engine optionally can be connected operatively to one of the component transmissions via the double clutch to drive the output shaft. A gear drive is arranged rotatably on the output shaft. An electric machine can be connected operatively via the gear drive to a fixed gear of one of the component transmissions to drive the output shaft and/or to recover kinetic energy from the drive system. The invention also relates to a method for operating a drive system that has a double clutch transmission with two component transmissions, a double clutch and an output shaft, and particularly a drive system for a motor vehicle. The method includes optionally operatively connecting an internal combustion engine to one of the component transmissions via the double clutch to drive the output shaft; and operatively connecting an electric machine to at least one fixed gear of one of the component transmissions via a gear drive that is arranged rotatably on the output shaft to drive the output shaft and/or to recover kinetic energy from the drive system. The electric machine is connected operatively, via a gear drive that is mounted rotatably on the output shaft, to one of the component transmissions of the double clutch transmission to transmit a torque or to recover braking energy. In this context, the output shaft, which is present in any case, is used as a bearing point for the additional gear drive.
As a result, additional components, such as a separate axle for support, are not needed. An operative connection to an existing fixed gear of a component transmission can be provided via the gear drive, but an additional fixed gear is also possible. The invention has an advantage over the above-mentioned prior art approaches in that a structurally simple coupling of the electric motor to one of the component transmissions of the double clutch transmission is implemented. Furthermore, the drive system of the invention enables the electric motor to be activated when the vehicle is stationary, for example to start the internal combustion engine, without the drive axle of the motor vehicle being moved. Furthermore, the electric machine advantageously can be operated as a generator by the internal combustion engine to charge an energy store, for example during a waiting phase at a traffic light. The electric machine preferably is arranged parallel to the output shaft. This arrangement reduces the axial installation space of the drive system and widens the field of application of the drive system. One of the component transmissions preferably has gearwheels for uneven-numbered gear speeds, and the other component transmission preferably has gearwheels for even-numbered gear speeds of the double clutch transmission. This makes it possible to pre-select, in the component transmission that is not currently connected to the internal combustion engine, the gear speed that will follow the currently shifted gear speed. Thus, a particularly fast shifting process is ensured. The electric machine preferably can be connected operatively to the component transmission with the even-numbered gear speeds, for example to the fixed gear of a second gear speed.
As a result, the first gear speed can be pre-selected in the stationary state of the motor vehicle, and the electric machine that is being operated as a generator can be driven by the internal combustion engine, for example, to charge a battery. As a result, the motor vehicle starts with no delay and accelerates quickly. The gear drive may have a spur gear. As a result, a desired transmission ratio can be implemented in a structurally simple way. The gear drive may have a planetary gear mechanism. Thus, an advantageous transmission ratio can be implemented with a low axial space requirement. The planetary gear mechanism preferably is arranged coaxially with respect to an output shaft of the electric machine or coaxially with respect to the output shaft of the double clutch transmission. As a result a further reduction in the axial space requirement is achieved, which widens the field of application of the drive system. The electric machine can be connected operatively to the internal combustion engine and can be used to start the internal combustion engine. Thus, a separate starter advantageously can be dispensed with, which entails advantages in terms of weight. A clutch device preferably is provided for decoupling the electric machine from the drive system. Thus, the electric machine will not influence the synchronization of the second component transmission, which advantageously produces relatively short shifting times. The electric machine preferably is connected operatively to a fixed gear of the component transmission which has even-numbered gear speeds of the double clutch transmission. The fixed gear may be embodied as a fixed gear of a second gear speed of the component transmission. As a result, it is possible to pre-select the first gear speed in the stationary state of the motor vehicle, and at the same time to drive the electric machine which is operated as a generator with the internal combustion engine, for example to charge a battery. 
An acceleration advantage therefore is obtained when the motor vehicle starts in the first gear speed of the double clutch transmission. The electric machine may be connected operatively to the component transmission with the even-numbered gear speeds, to the fixed gear of a second gear speed. Therefore, no deceleration occurs when the motor vehicle starts, and as a result an acceleration advantage is obtained. The electric machine may be connected operatively to the fixed gear via a spur gear and/or via a planetary gear mechanism. Thus, a desired transmission ratio between the electric machine and the component transmission can be attained in a convenient fashion.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

FIG. 1 is a schematic view of a drive system 1 having a double clutch transmission 2, an internal combustion engine 7 and an electric machine 9. The electric machine 9 can be operated either as an electric motor or as a generator. In the present embodiment, the double clutch transmission 2 has seven forward gear speeds and one reverse gear speed. However, it is also possible to use any other desired number of gear speeds.

In the text which follows, a freely moving gear is understood to be a gearwheel that is mounted rotatably on a shaft, and a fixed gear is understood to be a gearwheel that is secured on the shaft in a frictionally locking or positively locking fashion. A freely moving gear may be connected, for example, via a shifting sleeve to a shaft in a frictionally locking fashion. In the case of rotating shafts, adaptation of the rotational speed of the freely moving gear to the shaft is carried out by synchronization.

The internal combustion engine 7 is connected in a frictionally locking fashion to a clutch housing 36 of a double clutch 5 via an output shaft 10. The double clutch 5 has a first clutch 11 and a second clutch 12 that are connected to the output shaft 10 of the internal combustion engine 7 via the clutch housing 36. The clutches 11, 12 are embodied here as wet-running multi-disk clutches. The output shaft 10 of the internal combustion engine 7 can be connected via the first clutch 11 to a solid shaft 13 of a first component transmission 3 of the double clutch transmission 2 in a frictionally locking fashion. The first component transmission 3 is assigned a group of transmission gear speeds, namely the uneven-numbered forward gears one, three, five and seven as well as the reverse gear speed. The reverse gear speed is not illustrated in FIG. 1. In the present example, the first gear speed is implemented with the gearwheel 17 and the third gear speed is implemented with the gearwheel 16 as fixed gears. On the other hand, the fifth gear speed is implemented with the gearwheel 14 and the seventh gear speed is implemented with the gearwheel 15 as freely moving gears on the solid shaft 13. One of the gearwheels 14 or 15 optionally can be secured on the solid shaft 13 of the first component transmission 3 via a shifting kinematic system (not illustrated) with a shifting sleeve and a synchronization means. The solid shaft 13 is mounted in a transmission housing (not illustrated).

A second component transmission 4 has a hollow shaft 18 that surrounds the solid shaft 13 of the first component transmission 3 and also is mounted in the transmission housing of the double clutch transmission 2. The hollow shaft 18 can be connected to the output shaft 10 of the internal combustion engine 7 in a frictionally locking fashion via the second clutch 12 of the double clutch 5. The double clutch 5 is embodied so that the first clutch 11, the second clutch 12, or neither of the two clutches 11, 12 optionally is closed. The first or the second component transmission 3, 4, or neither of the component transmissions 3, 4, therefore can be connected operatively to the internal combustion engine 7. In FIG. 1, neither of the two clutches 11, 12 is closed.

The hollow shaft 18 of the second component transmission 4 is assigned a second group of transmission gear speeds, namely the even-numbered gear speeds: two with the gearwheel 21, four with the gearwheel 19 and six with the gearwheel 20. The gearwheel 21 is embodied as a fixed gear, and the gearwheels 19 and 20 are embodied as freely moving gears on the hollow shaft 18. One of the gearwheels 19 or 20 optionally can be connected to the hollow shaft 18 in a frictionally locking fashion via a shifting kinematic system (not illustrated) with a shifting sleeve and a synchronization means.

A main shaft 22 of the double clutch transmission 2 is parallel to the solid and hollow shafts 13, 18, and is mounted in the transmission housing of the double clutch transmission 2. The main shaft 22 has gearwheels 23 to 29 engaged with the corresponding gearwheels 19 to 21 and 14 to 17 of the second and first component transmissions 4, 3 for implementing the desired transmission ratio. In this context, the gearwheels 23, 24, 28 and 29 of the fourth, sixth, seventh and fifth gear speeds are fixed gears on the main shaft 22, and each is assigned to a freely moving gear 14, 15 on the solid shaft 13 of the first component transmission 3 or a freely moving gear 19, 20 of the hollow shaft 18 of the second component transmission 4. The gearwheels 27, 26, 25 of the third, first and second gear speeds are freely moving gears and engage the corresponding fixed gears 16, 17 of the solid shaft 13 of the component transmission 3 or the corresponding fixed gear 21 of the hollow shaft 18 of the component transmission 4. The freely moving gears 25-27 can be secured on the main shaft 22 in a frictionally locking fashion by shifting sleeves. Which of the shafts 13, 18, 22 carries the freely moving gear of a pair of gearwheels is determined by the installation space available for the double clutch transmission 2. Variants of the structural and functional arrangement presented above are conceivable.
An output shaft is arranged parallel to the main shaft and is mounted in the transmission housing of the double clutch transmission. The output shaft is connected operatively to the main shaft via a pair of gearwheels, which are embodied as fixed gears on the corresponding shafts: one gearwheel is assigned to the main shaft and the other to the output shaft. The transmission ratio of this pair of gearwheels is approximately 1 here. In the embodiment of FIG. 1, these gearwheels are disposed axially between the gearwheels of the sixth and second gear speeds. Alternatively, they can be arranged at a different location, depending on the available installation space in the axial direction of the main shaft. The output shaft is located in front of a plane spanned by the main shaft and the shafts of the first and second component transmissions, and hence is offset laterally with respect to those shafts. For the sake of simplified illustration, the output shaft is folded into the plane of the shafts in FIG. 1. The output shaft is connected operatively, for example via an obliquely toothed beveled gearwheel, to a drive axle (not illustrated) of the motor vehicle.

A gear drive is arranged rotatably on the output shaft and is engaged with the gearwheel of the second component transmission. Alternatively, the gear drive can be engaged with the gearwheels of one of the component transmissions. In the illustrated embodiment, the gear drive comprises a gearwheel, which is embodied as a freely moving gear on the output shaft. A further gearwheel is arranged on an output shaft of the electric machine and is engaged with that gearwheel.
A clutch device is arranged between the gearwheel and the electric machine so that the electric machine can be decoupled; it is embodied, for example, as a dog clutch. The gearwheels of the double clutch transmission preferably are spur gears, and particularly spur gears with oblique toothing. Alternatively, other types of toothing can be used.

An alternative embodiment of the gear drive is illustrated in FIG. 2. The gear drive is embodied here as a planetary gear mechanism and is arranged coaxially with respect to the output shaft. The planetary gear mechanism also can be arranged coaxially with respect to the output shaft of the electric machine. The planetary gear mechanism permits a large range of transmission ratios to be covered with a very small installation space. The clutch device is provided for decoupling the electric machine. In this embodiment, the clutch device can be embodied so that a ring gear of the planetary gear mechanism can be secured by a dog clutch for decoupling the electric machine.

FIG. 3 shows the drive system in a schematic side view in the plane of the gearwheels and looking away from the internal combustion engine. For simplicity, the electric machine and the other gearwheels are not shown. The shafts of the first and second component transmissions span a plane with the main shaft. An axial distance a between these shafts is determined by the desired transmission ratio of the gear speeds and the available installation space. The output shaft is offset laterally from the plane spanned by these shafts. The output shaft is arranged so that the pair of gearwheels (not shown in FIG. 3), the gear drive and the gearwheel of the second component transmission are in engagement.

The method of functioning of the drive system is described briefly below.
The output shaft of the internal combustion engine transmits the torque of the internal combustion engine to the clutch housing of the double clutch and to the first and second clutches. When the first clutch closes, power flux is produced from the internal combustion engine to the solid shaft of the first component transmission. Depending on the gear speed selected at the first component transmission, which will be the first gear speed when the motor vehicle starts, the torque of the internal combustion engine is transmitted from the solid shaft to the drive axle, and thus to at least one drive wheel of the motor vehicle, via the first-gear gearwheel, the gearwheel which in this shifted position is secured on the main shaft so as to rotate with it, the main shaft, the pair of gearwheels and the output shaft.

In the meantime, the next desired gear speed is pre-selected in the second component transmission, in this case gear speed two with its gearwheel. That gearwheel is secured on the main shaft by a corresponding shifting sleeve after synchronization of its rotational speed with that of the hollow shaft. The electric machine advantageously can assist the synchronization here. However, there still is no torque transmitted from the clutch to the hollow shaft, since the double clutch permits only optional closing of the first clutch or the second clutch. In the event of a change of gear speed, for example from the first gear speed into the second gear speed, the first clutch opens while the second clutch closes. The power flux from the clutch housing of the double clutch to the hollow shaft of the second component transmission is produced. Shifting therefore is possible without an interruption of tractive force.
The electric machine is connected operatively to the gearwheel of the second gear speed of the second component transmission, irrespective of the position of the double clutch, via the gear drive, which is arranged rotatably on the output shaft. The gearwheel of the gear drive is engaged with the fixed gear, and hence the torque of the electric machine can be transmitted to the hollow shaft. Given a corresponding shift position, a parallel operating mode of the electric machine and the internal combustion engine therefore is possible. The torque of the electric machine can then be transmitted to the main shaft via that gearwheel or via the gearwheel stages.

An operative connection of the electric machine to the fixed gears of the first component transmission also is possible via the freely moving gears, but the transmission ratios in this case are less advantageous. Furthermore, a connection to the second component transmission is advantageous in that the first gear speed, which generally is used to start the vehicle, can be pre-selected without deactivating the electric machine. Thus, a speed advantage is obtained when starting the motor vehicle. The gear drive is arranged on the output shaft, which is present in any case. The gearwheel of the second gear speed is structurally difficult to reach because of its small external diameter, but can be reached comfortably via the gearwheel which is engaged with it. The transmission ratio of the electric machine with respect to the second component transmission is defined by the ratio of the maximum rotational speed of the internal combustion engine with respect to the maximum rotational speed of the electric machine. The gear drive can have a planetary gear mechanism arranged coaxially with either the output shaft of the electric machine or the output shaft. Thus, a very large range of transmission ratios can be achieved with a small axial installation space.
FIG. 4 shows a first exemplary operating state of the drive system. The first clutch is closed and the third gear speed in the first component transmission is engaged. For this purpose, the corresponding gearwheel is secured on the main shaft via the corresponding shifting sleeve. The second gear speed in the second component transmission is engaged and its gearwheel is secured on the main shaft. A torque applied by the internal combustion engine is indicated by the thick line and is transmitted to a drive shaft of the motor vehicle, and therefore to at least one vehicle wheel, from the output shaft of the internal combustion engine via the clutch housing, the first clutch, the solid shaft, the third-gear gearwheel pair, the main shaft, the pair of gearwheels, the output shaft and the bevel gear. Parallel to this, a torque of the electric machine, which is operated as an electric motor, is transmitted to the main shaft via the output shaft of the electric machine, the gearwheel arranged on it, the gear drive and the gearwheel of the second gear speed. Thus, the internal combustion engine and the electric machine operate cumulatively. Alternatively, the electric machine also can operate as a generator in the operating state shown, for example, to charge an electric energy store.

FIG. 5 shows a second operating state of the drive system. No gear speed of the double clutch transmission is engaged, i.e. no torque is being applied to the output shaft. The second clutch is closed and the electric machine operates in the generator mode. A torque (illustrated by the thick line) is transmitted to the output shaft of the electric machine from the output shaft of the internal combustion engine via the clutch housing, the second clutch, the hollow shaft, the engaged gearwheels, the gear drive and the gearwheel on the output shaft of the electric machine.
The electric machine therefore is connected operatively to the internal combustion engine without setting in motion a vehicle equipped with the drive system. The electric machine generates electrical energy that can be stored in an energy store. This operating state is assumed, for example, to charge the electric energy store during a waiting phase of the motor vehicle at a traffic light. Alternatively, the internal combustion engine can be started by the electric machine in this operating state when the electric machine is operated as an electric motor, and therefore when there is a reversed torque profile. An additional starter for the internal combustion engine therefore advantageously can be avoided.

FIG. 6 shows a third operating state of the drive system, where both clutches are opened so that the internal combustion engine is decoupled from the double clutch transmission. In the second component transmission, for example, the sixth gear speed is shifted. Alternatively, the gear speeds two or four can be shifted. For this purpose, the gearwheel of the sixth gear speed is secured on the hollow shaft via the associated shifting sleeve. A torque (illustrated by the thick line) is transmitted to the output shaft of the electric machine from the bevel gear via the output shaft, the pair of gearwheels, the main shaft, the gearwheel pair of the sixth gear speed, the hollow shaft, the gearwheel pair of the second gear speed, the gear drive and the gearwheel on the output shaft of the electric machine. The electric machine operates here in the generator mode to recover braking energy of the vehicle. In this context, kinetic energy of the vehicle first is converted into kinetic rotational energy of the rotating parts of the double clutch transmission via the vehicle wheels and the drive shaft, and then into electrical energy via the electric machine. The vehicle is braked by means of the electric machine operating as a generator.
One of the gear speeds two, four or six of the second component transmission is selected depending on the velocity of the vehicle and the desired transmission ratio.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be explained in more detail below on the basis of exemplary embodiments and with reference to the accompanying schematic figures of the drawing.

FIG. 1 is a plan view of a drive system according to a preferred embodiment of the invention.

FIG. 2 is a plan view of a drive system according to a further preferred embodiment of the invention.

FIG. 3 is a side view of the drive system according to FIGS. 1 and 2.

FIG. 4 shows a first exemplary operating state of the drive system of FIG. 1.

FIG. 5 shows a second operating state of the drive system of FIG. 1.

FIG. 6 shows a third operating state of the drive system of FIG. 1.
We sent our staff writer/photographer Connor Feimster to cover the 2014 Skate and Surf Festival in Asbury Park, NJ. After 1000+ initial shots, Connor weeded out a bunch and was left with 18 albums, separated by each band he saw at the festival. Check out his recap of the weekend and be sure to click on each photo to be linked to their individual album! After witnessing the mess that was the 2013 Skate and Surf Festival (AKA: The Great Bamboozle Decline), I was wildly hesitant to see what 2014 had to offer; the lineup was really small, the venue was moved, ticket prices were pretty high, and one of the biggest names had backed out of the festival altogether (we missed you, DMX). It looked like it would be the turnout alone that would make or break the Skate and Surf name and legacy for next year and beyond. So, because of this, I gladly took charge to cover the festival. I had to see what could possibly happen this year. The first day began at 2:00pm and the gates were even open early. That gave me more than enough time to plan out my day. Having nobody to see until about the 4:00 hour gave me a chance to check out new music and just get some shooting in for whomever I was drawn towards. That being said, I simply walked forward and checked out The Moms, a rowdy rock band from New Jersey who are signed to Paper + Plastick (home of Pentimento). They kept me incredibly entertained and quite pleased with my blind decision. Following The Moms’ performance, I realized I still had time to kill, so I wandered around the grounds and scoped out vendor tents and band merch and the like. I then decided to head over to the main stage to check out a band called Wyland, who had won the Break Contest to play there. I had no idea what to expect for never having heard of them until five minutes prior. What I was given was a wonderful folk rock experience which reminded me of River City Extension, a big favorite of mine. 
Heading back to my regularly-scheduled programming, I was beyond excited to catch NGHBRS, a band I’ve enjoyed for years but have never had the chance to see live. Due to some stages running behind schedule and others running ahead of schedule, I missed the beginning of NGHBRS’ set, but was still able to get in there to shoot. They played a number of songs from their debut record Twenty One Rooms and absolutely killed it with their infectious hooks and funky rock in tow. Next on my agenda was pop-punk’s newest glory children Knuckle Puck. The Illinois natives had one of the biggest turnouts of the smaller stages and delivered to the masses. This band is most certainly going to blow up in the near future, especially given their place on the biggest pop-punk tour of the year alongside Man Overboard and Transit, which is on the road right now. I traveled back to the main stage to see What’s Eating Gilbert, the side project from New Found Glory mainstay Chad Gilbert. Prior to this performance, I had only heard Gilbert’s solo music in passing, as a result of flagrant ADD that mostly breaks out once I’m on the internet. What I ended up seeing was a band full of well-dressed folks, Gilbert fronting with a sharp jacket and black bow tie combo. This was one of the more entertaining sets of the day, which made for a fun crowd and a happy Chad. I went back to the side stages one last time to catch Citizen’s set before I was to set up camp at the main stage for the rest of the evening. I had seen Citizen twice before, but was stuck in the far back for both performances. I was excited to actually see this band and more excited to just hear songs from Youth again. They brought their all, even through the exceedingly loud speaker setup on the Loud Stage (was it an inside pun?). It came time to say adios to the side stages and stand my ground at the main stage.
This was the chunk of time I was most looking forward to, because the next three bands were some of my most hotly-anticipated of the weekend. The remainder of the night started off with The Early November and their wide song catalog. This band simply cannot play a bad show. It’s pretty expected to be wowed by TEN and they kept that expectation and made it reality once more. The main highlight of my day (and possibly weekend) followed TEN’s set, and that was none other than Saosin, reunited with original frontman Anthony Green. Anyone reading this probably knows how much Green and his musical endeavors mean to me, so you can only imagine how amazed I was during this performance. Having played all of Translating the Name, “Mookie’s Last Christmas”, and “I Can Tell There Was An Accident Here Earlier”, it’s safe to say that diehard fans of early Saosin were beyond pleased. The only thing that left me confused was why the band didn’t play “You’re Not Alone” or “Voices” or one of their bigger hits from ex-frontman Cove Reber’s era. Speculation suggests the “back-to-basics” ideology, which is completely fine with me. The last band of the day was, without a doubt, the biggest deal of the festival. If you had told me that I’d be seeing, let alone shooting a Midtown set, I’d have called you a liar and maybe cried because of the impossibility. That has all since changed because I got to witness some of the happiest fans and one of the biggest performances in recent memory. They played a wide range of songs from each of their records and tried their best to make everyone happy. And let’s just say that it felt miiiighty good to see Gabe Saporta hold a bass. Day two was underway and was the day that held most of the bands I was excited to see later in the day. As the gates opened, people flocked over to the main stage to wait for We Came As Romans while I perused the merch yet again. I decided to saunter over to the stage to see them. 
I was a big fan of To Plant A Seed back in high school, so I was curious to see how the band had come along in recent years. While they seem to fall under the category of metalcore bands that need to look showy (why does that need to happen?), they still have the knack for doing what they do, which made for an incredibly fun set. I headed back to the side stages to check out United Nations, the powerviolence band comprised of Thursday‘s Geoff Rickly and David Haik and Zach Sewell of Pianos Become the Teeth. I last saw United Nations at a venue within an Atlantic City casino so I don’t feel as if I had received everything that the band had to offer at that time. I was excited to catch them again, especially given that I’m now a stronger fan of Pianos these days. Even though they were cursed with technical difficulties, they still made themselves loud and clear and made the earth (and political agenda) quake. After realizing that I had made X amount of laps around the entire grounds in the last day and a half, I made my way back to the main stage to bear witness to Hidden in Plain View, who were continuing their reunion following a wildly successful show at Philadelphia’s TLA a few months prior. I wasn’t lucky enough to see that show, so I was pretty excited to see what they were bringing to the festival. Frontman Joseph Reo has some of the best stage presence I’ve seen in the past few years and it only made me think of how wild it must have been back in 2005. As soon as I was finished shooting HIPV, I hotfooted it back to the side stages to catch my friends in Tiny Moving Parts. The guys, who are one of my current favorite bands, are recent signees to Triple Crown Records, which made landing Skate and Surf a huge deal to them. It made me wildly happy to see even a handful of people screaming their lyrics back to them and their performance was outstanding, as usual. Be on the lookout for their TCR debut! I found myself with time to kill after TMP finished their set. 
I was standing amidst the side stages looking around and wondering where to go next until I heard what sounded like little kids at a microphone. I turned to the stage and, sure enough, saw three young boys about to start playing. Unlocking the Truth are a three-piece metal band comprised of seventh-grade boys and they absolutely shred. They were welcomed to the festival with open arms and left people totally speechless. Upon asking the guys when they’re next playing Philadelphia, they told me “…with a band called Queens of the Stone Age”…my jaw dropped. After trying to remember my vocabulary aside from the word “wow”, I walked a few stages over to catch Canada’s own Fucked Up. Their previous record David Comes To Life is still talked about for being extremely extensive and epic, so I had to see what their show was all about. Skate and Surf had a general rule for photographers that they were to leave the photo pit area after the second song, but that didn’t mean jack for Fucked Up. Frontman Damian Abraham spent his time, after disrobing to nothing but his shorts, on the grass with the crowd and various passersby, whether they were fans of his band or not. Looks of confusion quickly turned into big smiles when he interacted with anyone and everyone he could, all the while not missing a lyric. Fucked Up definitely held the most memorable set of Skate and Surf for me. Next up was one of my current favorite bands. Having seen a portion of them play a set earlier in the day, I knew that Pianos Become the Teeth would still bring their all. I never expect them to be anything short of amazing and find it hard to think they can get any better because they’re that good. I initially was annoyed to find them playing around the time that the sun was setting because it would make for a challenge with shooting, but there was just something about their music and the sunset that simply went together perfectly.
It was an ideal setting for their powerfully emotional songs and I’m glad to say that PBTT gave me my favorite photos from the entire weekend. I sadly had to leave Pianos’ set early to make sure I could shoot my absolute favorite band in the entire world. Circa Survive mean the world to me and it had been a handful of months since I’d last seen them perform. Hot off of Saosin’s performance the night before, Anthony Green was ready to return to his purest element. He expectedly spent most of his time on top of and within the crowd, but his trademark dancing on stage was very present. I honestly spent most of my time gawking at Steve Clifford’s beautiful gold drum kit, but I still walked away with the same euphoria I get after seeing Circa. Their fans are their family and it was a big ol’ reunion. The last band I shot had set up over at one of the side stages and people began flocking right as I arrived. CHON are a very impressive technical instrumental band that have a huge, almost cult-like following in the math rock scene. I had never really devoted enough time to get to know them prior to Skate and Surf, but as soon as they began to play, I was floored. There were such awe-inspiring sounds coming from their instruments that I had to make sure it was really happening. Absolutely do not sleep on this band; they’ll melt your face off. I closed out my festival experience with an amazing set by New Found Glory, but I sadly didn’t find any redeemable photos on my camera after having it die halfway through their first song. The set, however, was incredible. In their usual fashion, they turned everything into a party. In fact, right alongside the main stage, in the Berkeley Hotel, was a wedding reception. After watching the show for a while, I noticed a flash of white run by me along the side of the stage. The beautiful bride and her childishly excited husband were standing on the stairs leading up to the main stage, talking to festival founder John D.
After talking with him for a while, the happy couple got the okay to run onstage and interact with the crowd and NFG. Then, once their next song began, the groom stage dove into a sea of welcoming hands, making for a memorable wedding and a beautiful ending to a great weekend. Skate and Surf has redeemed itself after having a tumultuous time trying to breathe in 2013. There was hardly anything worth complaining about, which is unheard of even within bigger festivals like Coachella or Bonnaroo. I can only hope for bigger and better things in 2015, if it ends up happening. Were you at Skate and Surf too? What were some of your favorite moments? Let us know in the comments! My name is Connor and I've been writing for Mind Equals Blown since September of 2013. I've been photographing bands since 2007 but didn't decide to make it more of a profession until 2012. I live in the Philadelphia area and am a graduate of Arcadia University with a Bachelor of Fine Arts degree in acting. When I'm not shooting shows, I'm acting in a different kind of show (that's theatre), writing, or seeing new films.
http://mindequalsblown.net/photos/skate-and-surf-festival-2014
Published by Geoffrey Johns. Modified over 4 years ago.

Slide 1 – Community Health Board Name: health departments within the CHB.
Slide 2 – Overview (optional): picture of locations and/or counties; could add text about the agency/agencies – location, county population, how many health departments, etc.
Slide 3 – Improvement Team Members: who are your team members? Names, titles, and roles; team photo (optional, but photos are nice – MDH has the team photos originally sent and would be happy to send them).
Slide 4 – PLAN: problem statement; aim statement.
Slide 5 – PLAN, Current Process: process map or list of the process; information you have about the process; any data collection you did about the current process – check sheet examples, charts, surveys, etc.
Slide 6 – PLAN, Root Cause Analysis: fishbone diagram, 5 whys, etc.
Slide 7 – PLAN, Improvement Theories: brainstormed list of change ideas; prioritization matrix of solutions, if you have improvement theories generated.
Slide 8 – DO, Tests of Change: list any PDSA cycles you implemented; data collected during DO.
Slide 9 – STUDY: what you found as a result of the PDSA cycles – any findings, including charts, graphs, check sheets, etc.
Slide 10 – ACT, Adapt, Adopt or Abandon: what changes have you made as a result of your findings? What did you adapt, adopt or abandon? List or provide pictures of the key improvements your team made and of the revised tools, forms, etc. that you created; include before-and-after photos and quotes from clients/staff about the improvements. What measurable results have you seen? List them or show the graphs, charts, etc.
Slide 11 – Key Learnings: lessons your team learned by doing the QI process – aha moments, challenges you overcame, advice you would give to others.
Slide 12 – Future Plans: what does your team plan to do next to spread and sustain QI?
– Examples: additional changes, ways to involve more staff, new QI projects, training, performance management development, etc.
https://slideplayer.com/slide/4551885/
Male Cat aged Young Adult. Starburst is a neutered longer-haired male with gray and white markings, likely born the summer of 2020. He is a very sweet and talkative cat who is warming up to the good life after some months of hardscrabble existence outside. He enjoys being petted and will purr and flop over after a few minutes of petting. He also likes to play and loves treats. Starburst has been altered, vaccinated and dewormed and is ready for his next adventure with a forever family. For more information please fill out a pre-adoption application and contact foster caregiver Kelsey at 513-709-0807.
https://us14b.sheltermanager.com/service?method=animal_view&animalid=1731&account=vn1916
Duration and Geography Nights are shorter than days on average due to two factors. Firstly, the sun is not a point, but has an apparent size of about 32 arc minutes. Secondly, the atmosphere refracts sunlight so that some of it reaches the ground when the sun is below the horizon by about 34 arc minutes. The combination of these two factors means that light reaches the ground when the center of the sun is below the horizon by about 50 arc minutes. Without these effects, day and night would be the same length at the autumnal (autumn/fall) and vernal (spring) equinoxes, the moments when the sun passes over the equator. In reality, around the equinoxes the day is almost 14 minutes longer than the night at the equator, and even more towards the poles. The summer and winter solstices mark the shortest and the longest night, respectively. The closer a location is to either the North Pole or the South Pole, the larger the range of variation in the night's length. Although equinoxes occur with a day and night close to equal length, before and after an equinox the ratio of night to day changes more rapidly in high latitude locations than in low latitude locations. In the Northern Hemisphere, Denmark has shorter nights in June than India has. In the Southern Hemisphere, Antarctica has longer nights in June than Chile has. The Northern and Southern Hemispheres of the world experience the same patterns of night length at the same latitudes, but the cycles are 6 months apart so that one hemisphere experiences long nights (winter) while the other is experiencing short nights (summer). Between the pole and the polar circle, the variation in daylight hours is so extreme that for a portion of the summer, there is no longer an intervening night between consecutive days and in the winter there is a period that there is no intervening day between consecutive nights.
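The 50-arc-minute figure plugs directly into the standard sunrise equation: counting the sun as "up" until its center is 50 arc minutes below the horizon makes the equatorial equinox day about 12 minutes longer than 12 hours, matching the near-14-minute day/night gap described above. A rough sketch (the function name and defaults are illustrative):

```python
import math

def day_length_hours(lat_deg, dec_deg, h0_deg=-50 / 60):
    """Day length from the sunrise equation, treating the sun as up
    until its center is h0_deg below the horizon (apparent solar
    radius ~16' plus ~34' of refraction gives the -50' default)."""
    lat, dec, h0 = (math.radians(v) for v in (lat_deg, dec_deg, h0_deg))
    cos_h = (math.sin(h0) - math.sin(lat) * math.sin(dec)) / (
        math.cos(lat) * math.cos(dec))
    # Clamp for polar day (cos_h < -1) and polar night (cos_h > 1).
    cos_h = max(-1.0, min(1.0, cos_h))
    # Hour angle at sunrise/sunset, at 15 degrees per hour, both sides of noon.
    return 2 * math.degrees(math.acos(cos_h)) / 15

equinox_day = day_length_hours(0, 0)  # ≈ 12.11 h at the equator
```

With h0_deg set to 0 the same function returns exactly 12 hours at the equator, which is the idealized equinox case the text contrasts against.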
https://www.primidi.com/night/duration_and_geography
RELATED APPLICATIONS BACKGROUND OF THE INVENTION SUMMARY OF THE INVENTION DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION This application claims the benefit of U.S. Provisional Patent Application No. 60/381,071 filed May 17, 2002, incorporated herein by reference. 1. Field of the Invention This invention relates to carpentry tools. More particularly, the invention relates to a method and apparatus for smoothing unfinished wood or plastic surfaces and edges that are of curved other complex shapes. 2. General Background and State of the Art Many individuals engage in building products from wood or plastic, either professionally or as a hobby. Enhancement of a person's home or office, for example, is possible by utilizing the tools and techniques known or taught through experience or through researching books or trade journals on the subject. A wide variety of tools are available to the builder to achieve his or her goals in any building project. Tools vary from the small and simple (pliers, hammers, screwdrivers, to name a few) to the large and complex (lathes, for example). Some tools are powered by hand while others may be powered by electricity. Tools can be used to cut and form simple straight lines or level surfaces. Specialty tools, such as routers, also are available to cut more complex shapes, allowing the individual to form creative and fanciful pieces for purposes of function and/or enhancing the appearance of the finished project. The process of building a finished product of wood or plastic requires a number of steps. It may be required by the particular project to prepare a piece in some fashion before it is cut. After the piece is cut into its desired configuration, other steps remain, which may include smoothing, polishing, assembling and finishing. Smoothing operations are generally achieved by sanding. Currently, all powered sanding tools are only for flat surfaces. 
Sanding and polishing of more complex surfaces is achieved through time-consuming and painstaking manual sanding and polishing. Some smoothing and polishing of complex surfaces may be attempted by using a router, but there is a risk that the router bit could splinter the piece, thereby ruining the product. There is no sander currently available that will sand the custom curves and shapes that a router makes on the edges of various projects. There is no sander currently available that will allow free style sanding with the sanding tubes that are currently widely available. There is no device currently known that is versatile and will allow the smoothing and polishing of a variety of shapes with easily changeable parts.

SUMMARY OF THE INVENTION

Accordingly, it is an object of the invention to provide a multipurpose machine and method that can be used to meet the requirements of virtually any sanding project that involves complex shapes and surfaces. Another object of the invention is to provide a sanding tool that can be configured into almost any shape to meet the needs of such a project. A further object of the invention is to allow the easy interchange of parts so that a sanding project can proceed efficiently with little interruption regardless of the variety of complex curves and surfaces encountered. An additional object of the invention is to enable the use of removable sanding heads that could match the various shapes currently available in standard router heads. It is yet another object of the present invention to provide a multipurpose machine that could be used in conjunction with standard sanding tubes and paint rollers to allow for freehand sanding and/or polishing over various surfaces and curves. These and other objectives are achieved by the present invention, which, in a broad aspect, provides a high degree of flexibility and efficiency in allowing the user to sand and polish a wide range of complex surfaces and curves in wood or plastic products.
An apparatus according to one embodiment of the invention utilizes part of an existing rotational variable speed power tool having a driving mandrel extending from the power unit of the tool. The driving head of the mandrel has locking receptacles in the driving head surface that faces away from the power unit. A sanding head having locking flanges at one of its ends is fitted over the mandrel and is fixed to the driving head by inserting the locking flanges of the sanding head into the locking receptacles of the driving head. The sanding head is further secured to the mandrel by means of a threaded fastener that fits into a threaded recess in the end of the mandrel. A shaped abrasive material is adhered to the outside of the sanding head. The abrasive material can be formed into virtually any shape that will conform to the surface or curve that is to be sanded. When it is desired to use the tool to smooth a different shape, the sanding head can be removed and exchanged with another sanding head that is nearly identical in configuration to the first head, except that on the second sanding head the abrasive material is shaped to conform to the next surface that is to be smoothed. It is possible to build a number of interchangeable sanding heads, each having an abrasive layer in a unique configuration, that will allow virtually any complex shape or curve to be smoothed by the tool. Another embodiment of the invention allows a standard sanding tube to be fixed to the driving head of the mandrel by attaching an extension to the threaded recess in the end of the drive shaft of the mandrel so that the combination of mandrel and extension is sized to accommodate the length of a standard sanding tube. The sanding tube can then be fitted over the drive shaft and extension and fixed at the two ends in a manner similar to the way in which the sanding head is fixed to the mandrel.
If so desired, a handle may be secured to the end of the extension instead of a threaded fastener. This application allows for free style sanding of the surface that is to be sanded. The sanding surface can be lengthened to accommodate more than one sanding tube by the addition of one or more additional extensions, thus extending the sanding surface available to the user. If larger diameter sanding tubes are needed for certain applications, rubber sleeves can be fixed over the drive shaft and the extension(s) prior to fixing the sanding tube(s) to the drive shaft and extension(s). In another embodiment of the invention, a standard paint roller can be used in much the same way as the sanding tube for free style polishing of surfaces such as surfboards or skis. In a further embodiment of the invention, parts of a rotational variable speed tool will be utilized that will enable the use of removable sanding heads that would match the shapes of currently available router heads and also allow the use of standard router bearings. Further objects and advantages of this invention will become more apparent from the following description of the preferred embodiments, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the principles of the invention.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION

In the following description of the present invention, reference is made to the accompanying drawings, which form a part thereof, and in which are shown, by way of illustration, exemplary embodiments illustrating the principles of the present invention and how it may be practiced. It is to be understood that other embodiments may be utilized to practice the present invention and structural and functional changes may be made thereto without departing from the scope of the present invention.

A method and apparatus for smoothing unfinished surfaces, according to the preferred embodiment of the invention, is embodied in a power tool generally referred to by the reference numeral 10 (FIGS. 1, 2A and 2B). A power unit 12 secures arbor 14, collet 16 and mandrel 18, which extend outwardly from power unit 12. While a variety of existing power tools may be suitable for the present application, I have found that a Spiral Saw made by the Roto Zip Tool Corporation of Cross Plains, Wis. is well suited for the present application.

Mandrel 18 is illustrated in greater detail in FIG. 4. Mandrel head 20 includes a threaded portion, which is secured to collet 16. Drive plate 22 of mandrel 18 includes opposing first surface 23 and second surface 24. Embedded in second surface 24 are locking receptacles 26 for securing sanding heads to second surface 24. Although a variety of configurations of locking receptacles 26 may be utilized, in the preferred embodiment of the invention a pair of horizontal grooves extending radially from the center of second surface 24 have been found to be suitable for use in the present application. Those skilled in the art will appreciate that other configurations of locking receptacles in shape, orientation and number may achieve the identical function as the configuration I have chosen. Mandrel 18 also includes drive shaft 28 extending away from second surface 24 of drive plate 22. Drive shaft 28 further includes threaded recess 46 in end wall 30.

To utilize tool 10 in accordance with the present invention, a sanding head 32 is mounted on drive shaft 28 of mandrel 18. Various configurations of sanding head 32 are further illustrated in FIGS. 3A, B, C and D. Sanding head 32 may be constructed of various materials including pressed cardboard, hardboard, plastic, or metal, among others.
Sanding head 32 is at least partially covered with an abrasive material 44, which is adhered to outer surface 40 and, in some cases, to raised surface 43 of outer surface 40 (best illustrated in FIG. 2B). Abrasive material 44 may be one of a variety of known abrasives such as grit, surface flutes, diamond, or carbide. Abrasive material 44 may be formed into a wide variety of shapes that can be made to match virtually any surface that is encountered in a sanding project. FIG. 3C illustrates a sanding head where the abrasive material 44 is shaped into a substantially uniform layer along and around outer surface 40. FIG. 3A illustrates a sanding head where the abrasive material 44 is formed into a shape converging into a sharp-edged face along and around raised surface 43. FIG. 3B illustrates a sanding head where the abrasive material 44 is formed into a shape converging into a substantially rounded face along and around raised surface 43. While FIGS. 3A, B, and C are illustrative of some specific shapes, it will be appreciated by one of ordinary skill in the art that there are a virtually infinite number of shapes into which abrasive material 44 may be formed, depending upon the surfaces that are to be smoothed.

Sanding head 32 further includes drive locks 50 mounted on first end 34 (FIG. 3D). In the preferred embodiment of the invention, drive locks 50 are in the shape of raised flanges that conform to and fit into locking receptacles 26. It will be appreciated that drive locks 50 can take on any number of configurations depending upon the configuration of locking receptacles 26. Sanding head 32 also includes through hole 38 extending axially through the body of the sanding head. Through hole 38 is sized so that sanding head 32 fits drive shaft 28 and drive locks 50 can be fitted into locking receptacles 26. Sanding head 32 is further secured to mandrel 18 by means of washer 47 and threaded fastener 48 that fastens to threaded recess 46 in second end 36 of drive shaft 28.
In practice, the assembled tool 10 is used by applying the abrasive material 44 on sanding head 32 to the curve or shape that requires smoothing. Sanding heads having different shapes formed by raised surface 43 and abrasive material 44 can be easily interchanged as the requirements of any sanding project dictate by removing threaded fastener 48 from threaded recess 46, releasing drive locks 50 from locking receptacles 26, and sliding the sanding head 32 from the tool 10 and replacing it with a similar sanding head having the appropriately shaped abrasive material 44 on its outer surface 40. While the present invention allows the flexibility of using one tool for smoothing a wide variety of surfaces, an additional advantage of the present invention is that power unit 12 is a variable speed unit, thus allowing very precise control of the smoothing operation.

FIG. 5 illustrates a second embodiment of the present invention. While similar in many respects to the preferred embodiment of the invention described above, I have adapted a different product for use so that the sanding heads can be made to match the size and shape of the standard ¼-inch router heads currently available on the market. It also uses the standard router guide bearing 56, which would allow it to be used to surface the area that a router has previously cut. In addition, it could be used to round the corners of a small box or soft wood that a normal router head would splinter. Guide bearing 56 is mounted to the unit at second end wall 30 of mandrel 18 and secured to the unit by fastener 48. The sanding heads can be formed into shapes that match the compound curves of some of the router heads, which would then allow the cleaning of an area that a router might have missed or that has become uneven. Abrasive material 44 on sanding head 32 shown in FIG. 5 would be formed in the multiple shapes desired to match the router heads.
In addition, the use of vertical depth guide 52 would allow the depth of the sanding head 32 to be controlled in order to further match the curve and depth of the routed surface. Guide 52 slips over the power unit 12 and set screw 54 adjusts guide 52 to the desired depth. Guide 52 can be positioned so that it almost reaches abrasive material 44. For this embodiment of the invention, I have found that the Multipro, a product made by Dremel of Racine, Wis., provides a suitable base on which to build the finished unit.

Further embodiments of the invention are made possible by extending mandrel 18 to accommodate smoothing fixtures other than sanding heads. In FIG. 6A, sanding head 32 has been removed and extension 58 has been secured to drive shaft 28 by inserting threaded projection 64 on first edge 60 of extension 58 into threaded recess 46. Extension 58, as illustrated in FIG. 6B, includes a threaded receiving port 66 in second edge 62 into which may be mounted an additional extension if so desired. It will be appreciated by those skilled in the art that use of one or more extensions can provide the unit with a variety of lengths that would be suited to the desired application.

A third embodiment of the present invention is illustrated in FIG. 7, which enables the use of standard sanding tubes currently available on the market. In this embodiment, the extension 58 is sized so that the overall length of drive shaft 28 and extension 58 totals 4½ inches with a diameter of ½ inch, which is the length and diameter required to accommodate a standard sanding tube. FIG. 7 shows sanding tube 68 in place. Sanding tube 68 may be secured to drive shaft 28 and extension 58 by using an appropriately sized washer (not shown) and inserting a threaded connector (not shown) into receiving port 66.

If a user so desires, hollow handle 72 may be secured to extension 58 by threading handle fastener 76 into threaded receiving port 66.
Thrust bearing 74 is embedded in handle 72 to allow the handle to turn freely during operation. Two sanding tubes 68 may be used in this application by attaching additional extensions as needed to make the overall length of the drive shaft 9 inches. By using two sanding tubes 68 in tandem, the present invention allows sanding over a 9-inch surface or into the complex curves of carvings. If a user desires to use larger size standard sanding tubes (they are available in sizes up to 1½ inches in diameter), a rubber sleeve 70 is placed onto the drive shaft 28/extension 58 combination prior to placing the sanding tube 68 on the apparatus. Rubber sleeves of 4½ inch length are available from the Ridge Tool Company of Elyria, Ohio.

A fourth embodiment of the present invention is illustrated in FIG. 8A, which utilizes a paint roller 78 mounted to drive shaft 28. This embodiment of the invention is ideal for polishing surfaces such as skis, snowboards, and hand rails, among other applications. In order to secure paint roller 78 to drive shaft 28, plastic end caps 80 and 90 are inserted in opposing ends of the paint roller. FIG. 8B illustrates the configuration of the two end caps. Flanged end cap 80 is constructed with locking flanges 86 on first side 82 to secure paint roller 78 to locking receptacles 26 in drive plate 22. Opening 88 in flanged end cap 80 allows paint roller 78 to slide over drive shaft 28. Outer cap 90 is inserted in the opposite end of paint roller 78. Outer cap 90 includes orifice 98, which is sized to slide over drive shaft 28. Roller handle 100 is secured to drive shaft 28 by inserting roller handle screw 102 into threaded receiving port 66. Use of the roller handle 100 in conjunction with paint roller 78 allows the user to better control the polishing task.
An alternative way of mounting paint roller 78 to drive shaft 28, in order to ensure a suitable fit between paint roller 78 and drive shaft 28, is through the use of plastic insert 104, which is fitted onto drive shaft 28 after flanged end cap 80 is secured to drive plate 22, as illustrated in FIG. 8C. Paint roller 78 is then mounted on insert 104. The assembled parts are further secured by inserting roller handle screw 102 into threaded receiving port 66.

The foregoing descriptions of exemplary embodiments of the present invention have been presented for purposes of enablement, illustration, and description. They are not intended to be exhaustive of or to limit the present invention to the precise forms discussed. There are, however, other configurations for apparatuses and methods for smoothing unfinished surfaces not specifically described herein, but with which the present invention is applicable. The present invention should therefore not be seen as limited to the particular embodiments described herein; rather, it should be understood that the present invention has wide applicability with respect to sanding and polishing projects. Such other configurations can be achieved by those skilled in the art in view of the descriptions herein. Accordingly, the scope of the invention is defined by the following claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates an exploded perspective view of an exemplary apparatus according to the present invention.

FIG. 2A illustrates a side view of an exemplary apparatus according to the present invention.

FIG. 2B illustrates a sectional side view of an exemplary apparatus according to the present invention.

FIG. 3A illustrates a side view of the sanding head with an abrasive formed into a shape having a sharp edge adhered to the outside surface of the sanding head.

FIG. 3B illustrates a side view of the sanding head with an abrasive formed into a shape having a rounded edge adhered to the outside surface of the sanding head.

FIG. 3C illustrates a side view of the sanding head with an abrasive formed into a uniform layer adhered to the outside surface of the sanding head.

FIG. 3D illustrates an end view of the sanding head.

FIG. 4 illustrates a side view of the drive mandrel.

FIG. 5 illustrates a side view of a second embodiment according to the present invention.

FIG. 6A illustrates a side view of the apparatus without a sanding head in place and with an extension attached to the end of the mandrel.

FIG. 6B illustrates a side view of the extension.

FIG. 7 illustrates a cross-sectional side view of a third embodiment according to the present invention, using a sanding tube in place of a sanding head.

FIG. 8A illustrates a side view of a fourth embodiment according to the present invention, using a paint roller in place of a sanding head.

FIG. 8B illustrates a side view of the end caps used in conjunction with the fourth embodiment according to the present invention.

FIG. 8C illustrates a cross-sectional side view of the paint roller and paint roller sleeve used in conjunction with the fourth embodiment according to the present invention.
The invention belongs to the technical field of coating preparation, and particularly relates to a primer for a color steel tile and a preparation method thereof. The primer for the color steel tile is composed of a component A and a component B, wherein the component A comprises the following components in parts by weight: 50-60 parts of a water-based acrylic emulsion, 0.2-0.4 part of a dispersing agent, 0.1-0.2 part of a defoaming agent, 0.2-0.4 part of a wetting agent, 2-5 parts of a filler, 5-8 parts of deionized water, 0.1-2 parts of a pH regulator, 0.5-1.5 parts of mineral short fibers, and 2-5 parts of microcapsule particles; the capsule core of the microcapsule particles contains dry short-oil alkyd resin; the component B is a water-based curing agent; and the mass ratio of the component A to the component B is (6-8):1. The primer for the color steel tile has repairing and antirust functions, and the adhesive force of the repaired paint film is good.
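The (6-8):1 mass ratio is simple parts-by-weight arithmetic when scaling a batch. As a hypothetical illustration (the 40 kg batch size and the mid-range 7:1 ratio are my assumptions, not values from the abstract):

```python
def batch_masses(total_kg, ratio_a=7.0, ratio_b=1.0):
    """Split a total batch mass into component A and component B masses
    for a given A:B mass ratio. The abstract allows (6-8):1; the 7:1
    default here is an illustrative mid-range value, not a prescribed one."""
    parts = ratio_a + ratio_b
    return total_kg * ratio_a / parts, total_kg * ratio_b / parts

a_kg, b_kg = batch_masses(40.0)        # hypothetical 40 kg batch at 7:1 -> 35 kg A, 5 kg B
lo_a, lo_b = batch_masses(40.0, 6.0)   # lower bound of the allowed ratio, 6:1
```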
I’ve been waiting for weeks to grill some chicken on the MAK. I wanted a nice summer day. Today was that day. First day of 80° this summer; about a month later than normal. Jeff and Sarah came over for lunch today; it was nice to meet her. No way was I going to bust out the camera though. After they left, I put the chairs on the deck and cleaned them up. Took a seat and listened to the Dodgers on the radio. My routine: read, gaze at the sky, doze; repeat. Dodgers beat the Reds 11 – 8, scoring 5 runs in the 7th inning to tie it at 7, then scoring 4 in the 11th for the win. I was excited to do the brined grilled chicken today; I got to use my Grill Grates for the first time. They fit real nice in the MAK. Early this morning I brined the chicken in a solution of 4 quarts water, 1/2 cup salt, and 1/2 cup sugar. After a walk with Carla, I took them out of the brine and set the grill to 400°.
- 1/4 cup lime juice
- 2 tablespoons fish sauce
- 2 garlic cloves
- 3 tablespoons finely chopped fresh mint
- 1 teaspoon red pepper flakes
- 1/2 cup vegetable oil
DNA hybridization analysis of genomic DNA with ADK probes. Arabidopsis genomic DNA (8 μg/lane) was digested with either HindIII (lanes 2, 7, and 12), EcoRI (lanes 3, 8, and 13), EcoRV (lanes 4, 9, and 14), or XbaI (lanes 5, 10, and 15), and the products were separated by electrophoresis through a 1% (w/v) agarose gel. The DNA blots were hybridized with a radiolabeled full-length ADK1 cDNA in a hybridization solution containing either 30% (A) or 50% (B) (v/v) formamide, or with the ADK2 cDNA in a 50% (v/v) formamide hybridization buffer, and washed with 1× SSC at 42°C. The partial ADK1 cDNA (1 ng) was used as a positive control (lanes 1 and 6) and as a test of hybridization specificity of the ADK2 probe (lane 11). Positions of the λ HindIII fragments are shown on the left in kb.

Isolation of ADK fusion proteins and ADK antibodies. A, Each ADK cDNA was expressed as a His-tagged fusion protein in E. coli and purified by nickel affinity chromatography. Overexpressed ADK recombinant proteins in E. coli were analyzed by SDS-PAGE and Coomassie Blue staining. Lane 1, molecular mass markers in kD (from largest to smallest: 97.4, 66.2, 45, 31, 21.5, and 14.4); lane 2, uninduced culture; lanes 3 and 4, induced ADK1 and ADK2, respectively; lanes 5 and 6, purified ADK1 and ADK2, respectively; lane 7, 10 μg leaf crude extract. B, Proteins from a replicate of the gel shown in A were transferred to PVDF and reacted with ADK antiserum. ADK breakdown products were detected in the ADK-overexpressing cultures, and a 38-kD peptide was detected in the leaf tissue. C, The ADK antiserum was titered on slot blots of purified ADK1 and ADK2 containing the indicated amount of each protein.

Kinetic analysis of ADK1 and ADK2. Results are based on the radiochemical assay using purified His-tagged ADKs as outlined in "Materials and Methods." In all panels ADK1 and ADK2 are represented by ● and ▴, respectively. A, Determination of the optimal ATP:MgCl2 ratio by varying the ratio from 0.8:1 to 5:1 while maintaining 4 mM ATP.
B, ADK activity in the presence of 1 to 8 mM ATP while maintaining the 4:1 ATP:MgCl2 ratio. C, Ado concentration was varied from 0.22 to 5 μM. D, Using the optimal assay conditions, the concentration of Pi was increased from 0 to 50 mM. Activity is expressed as the percentage of the activity in the absence of added Pi.

Northern analysis of ADK transcript levels in different organs. RNA was extracted and analyzed by northern hybridization using radiolabeled gene-specific probes for ADK1 (A) and ADK2 (B) as described in "Materials and Methods." Ethidium bromide-stained ribosomal RNA is shown below each panel. Samples were isolated from leaf (lane 1), flower (lane 2), stem (lane 3), and root (lane 4).

Analysis of ADK protein levels in various organs of Arabidopsis. A, Ten micrograms of total protein from crude extracts prepared from leaves of 3- and 6-week-old plants (lanes 1 and 2), flowers of 4- and 6-week-old plants (lanes 3 and 4), roots (lane 5), siliques (lane 6), stems (lane 7), and dry seeds (lane 8) was separated by SDS-PAGE. Each sample was prepared from organs collected from a pool of 10 plants. ADK was detected using a fluorescent substrate, quantified, and expressed as a percentage of the amount detected in stems, as indicated below each lane. B, The same extracts analyzed in A were desalted and assayed for ADK activity using the radiochemical assay described in "Materials and Methods." Activity is expressed as a percentage of total ADK activity in stem tissue (18.9 nmol mg−1 min−1).

Comparison of amino acid sequences of ADKs from various sources. Pairwise Clustal analysis of representative ADK sequences from other organisms versus the conceptual translations of the ADK1 and ADK2 cDNAs.
Genbank accession numbers are given in "Materials and Methods."

Kinetic analysis of ADK1 and ADK2. Purified recombinant ADK1 and ADK2 were used to determine both Km and Vmax for three substrates of adenosine kinase. Vmax/Km is presented here as a measure of overall enzyme efficiency for each substrate. Assays were as described in "Materials and Methods."
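The Vmax/Km efficiency measure quoted above is typically obtained by fitting rate data to the Michaelis-Menten equation. A minimal sketch using the classic double-reciprocal (Lineweaver-Burk) fit; the rates are synthetic and noise-free, and the Vmax and Km values are illustrative assumptions, not the paper's measured constants:

```python
def michaelis_menten(s, vmax, km):
    """Michaelis-Menten rate law: v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

def fit_lineweaver_burk(s_vals, v_vals):
    """Estimate (Vmax, Km) from a double-reciprocal fit:
    1/v = (Km/Vmax) * (1/s) + 1/Vmax, solved by ordinary least squares."""
    xs = [1.0 / s for s in s_vals]
    ys = [1.0 / v for v in v_vals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    vmax = 1.0 / intercept
    return vmax, slope * vmax  # (Vmax, Km)

# Noise-free synthetic rates over the Ado range quoted in the legend
# (0.22-5 uM); Vmax = 10 and Km = 0.5 are made-up illustrative constants.
s = [0.22, 0.5, 1.0, 2.0, 5.0]
v = [michaelis_menten(x, vmax=10.0, km=0.5) for x in s]
vmax_est, km_est = fit_lineweaver_burk(s, v)
efficiency = vmax_est / km_est  # the Vmax/Km comparison used in the table
```

With noise-free data the fit recovers the generating constants exactly; with real assay data a nonlinear fit is usually preferred, since the reciprocal transform amplifies error at low substrate concentrations.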
Technical Field

The present invention relates to a method and apparatus for generating a downlink signal in a cellular system, and more particularly, to a method of searching a downlink cell in an orthogonal frequency division multiplexing (OFDM)-based cellular system.

Background Art

In a cellular system, for initial synchronization, a terminal should acquire timing synchronization and frequency synchronization on the basis of signals transmitted from a base station, and perform a cell search. After the initial synchronization, the terminal should track the timing and frequency, and perform the timing and frequency synchronization between adjacent cells and the cell search in order to support handover. In a synchronous cellular system, all base stations can perform frame synchronization using common time information from an external system. However, the cellular system that has been developed by the 3GPP (3rd Generation Partnership Project) is an asynchronous system in which the frame timings of all base stations are independent. The asynchronous cellular system needs to perform a cell search process, unlike the synchronous cellular system. Therefore, a method of acquiring synchronization using a separate preamble and searching a cell has been proposed. However, this method cannot be applied to a system without the preamble. In addition, a method of acquiring synchronization and searching a cell using pilot symbols disposed at the start and end points of a sub-frame has been proposed. However, this method has a problem in that a large number of pilots should be used.

Technical Problem

The present invention has been made in an effort to provide a cell searching method and apparatus that are capable of forming a plurality of synchronization channels in one frame to effectively acquire synchronization and search a cell in an OFDM-based cellular system.
In order to achieve the object, according to an exemplary embodiment of the present invention, there is provided an apparatus for generating a downlink signal in an orthogonal frequency division multiplexing (OFDM)-based cellular system. The downlink signal generating apparatus includes a pattern generator and a time-frequency mapping unit. The pattern generator generates synchronization patterns for a plurality of synchronization blocks forming one frame of the downlink signal, and the synchronization blocks each have a continuous series of sub-frames. The synchronization pattern includes a cell group number and information on a start point of the frame. The time-frequency mapping unit maps the synchronization patterns to a time-frequency domain to generate the downlink signal. According to another exemplary embodiment of the present invention, there is provided an apparatus for searching a cell including a terminal in an orthogonal frequency division multiplexing (OFDM)-based cellular system. The cell searching apparatus includes a receiver and first to third estimators. The receiver receives one frame of synchronization blocks. Each of the synchronization blocks has a plurality of adjacent sub-frames, and a plurality of OFDM symbols of the synchronization block each have a synchronization pattern that is composed of a combination of a cell group identification code for identifying a cell group and a frame synchronization identification code for indicating a frame start point. The combination of the cell group identification code and the frame synchronization identification code is referred to as a combination of codes. The first estimator estimates a start point of the synchronization block from the synchronization pattern. The second estimator estimates the frame start point and a cell group number of the cell group to which the cell including the terminal belongs, using the start point of the synchronization block. 
The third estimator estimates a cell number of the cell including the terminal, using a cell identification scrambling code included in a pilot symbol of the frame. According to still another exemplary embodiment of the invention, there is provided a method of searching a cell including a terminal in an orthogonal frequency division multiplexing (OFDM)-based cellular system. First, a downlink frame including a plurality of synchronization blocks, each having a synchronization pattern that is composed of a combination of a cell group identification code for identifying a cell group including the terminal and a frame synchronization identification code for indicating a start portion of the frame (a combination of codes), is received, and a start point of the synchronization block is estimated in the received downlink frame. Then, a cell group number and frame synchronization are acquired from the estimated start point of the synchronization block and the synchronization pattern, and a cell number is acquired from a cell identification scrambling code included in the downlink frame.

Brief Description of the Drawings

FIG. 1 is a block diagram schematically illustrating an apparatus for generating a downlink signal in a cellular system according to an exemplary embodiment of the present invention.

FIG. 2 is a diagram illustrating the configuration of a downlink frame of the cellular system according to the exemplary embodiment of the present invention.

FIG. 3 is a diagram illustrating the detailed configuration of the downlink frame shown in FIG. 2.

FIG. 4 is a diagram illustrating a signal waveform obtained by converting the downlink frame shown in FIG. 3 into a time domain.

FIG. 5 is a diagram illustrating the bandwidth scalability of the downlink frame according to the exemplary embodiment of the present invention.

FIG. 6 is a diagram illustrating the bandwidth scalability of a downlink frame according to another exemplary embodiment of the present invention.

FIG. 7 is a block diagram schematically illustrating a cell searching apparatus according to an exemplary embodiment of the present invention.

FIG. 8 is a flowchart illustrating a cell searching method according to an exemplary embodiment of the present invention.

FIG. 9 is a block diagram schematically illustrating the configuration of a synchronization estimator according to an exemplary embodiment of the present invention.

FIG. 10 is a diagram illustrating a method of allocating a cell group identification code and a frame synchronization identification code according to an exemplary embodiment of the present invention.

FIG. 11 is a diagram illustrating a method of allocating a cell group identification code and a frame synchronization identification code according to another exemplary embodiment of the present invention.

FIG. 12 is a block diagram schematically illustrating the configuration of a cell group estimator according to an exemplary embodiment of the present invention.

Mode for the Invention

In the following detailed description, only certain exemplary embodiments of the present invention have been shown and described, simply by way of illustration. However, the present invention is not limited to the following exemplary embodiments, and various modifications and changes can be made to the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not restrictive. Like reference numerals designate like elements throughout the specification. It will be understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Hereinafter, a method and apparatus for generating a downlink signal and a method and apparatus for searching a cell in a cellular system according to exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. FIG. 1 is a block diagram schematically illustrating an apparatus for generating a downlink signal in a cellular system according to an exemplary embodiment of the present invention, and FIG. 2 is a diagram illustrating a downlink frame structure of a cellular system according to an exemplary embodiment of the present invention. As shown in FIG. 1, a downlink signal generating apparatus 100 according to an exemplary embodiment of the present invention includes a pattern generator 110, a code generator 120, a time-frequency mapping unit 130, an OFDM transmitter 141, and a transmitting antenna 142, and is provided in a base station (not shown) of the cellular system. As shown in FIG. 2, the downlink signal generated by the downlink signal generating apparatus 100 according to the exemplary embodiment of the present invention includes a plurality of synchronization blocks 210, and each of the synchronization blocks 210 includes a plurality of sub-frames 220. Information for identifying a cell group and information for estimating frame synchronization are allocated to first symbol durations 230a and 230b of each synchronization block 210. In addition, different frame synchronization identification codes are allocated to the synchronization blocks 210. The pattern generator 110 generates a synchronization pattern and a pilot pattern of the downlink signal using a set of orthogonal codes indicating cell number information, cell group information, and information for identifying frame synchronization. The pattern generator 110 allocates a series of orthogonal codes to a cell group number for identifying a cell group, and uses the series of orthogonal codes to recognize a frame start point.
Hereinafter, for better comprehension and ease of description, the orthogonal codes allocated to the cell group numbers are referred to as "cell group identification codes," and the orthogonal codes used to recognize the frame start points are referred to as "frame synchronization identification codes." The pattern generator 110 matches the cell group identification codes with the frame synchronization identification codes to generate a set of codes, and allocates the set of codes to a frequency domain of a synchronization channel symbol duration of the downlink signal to generate a synchronization pattern of the downlink signal. The pattern generator 110 allocates to a pilot channel symbol duration a unique scrambling code that is allocated to each cell in order to encode a common pilot symbol and a data symbol in the cellular system, thereby generating a pilot pattern of the downlink signal. The code generator 120 generates orthogonal code sets that are used as the cell group identification codes and the frame synchronization identification codes, and transmits the generated orthogonal code sets to the pattern generator 110. Then, the pattern generator 110 uses the orthogonal code sets to generate a synchronization pattern and a pilot pattern. The time-frequency mapping unit 130 maps data to a time-frequency domain, using the synchronization pattern information and the pilot pattern information generated by the pattern generator 110, and frame structure information and transmission traffic data that are transmitted from the outside, to form a frame of downlink signals (reference numeral 200 in FIG. 2). Then, the OFDM transmitter 141 receives the downlink signal from the time-frequency mapping unit 130, and transmits the signal through the transmitting antenna 142. Referring to FIG. 2, one frame 200 of downlink signals in a cellular system according to an exemplary embodiment of the present invention is composed of N_sync synchronization blocks 210, and each of the synchronization blocks 210 includes N_sub sub-frames 220. An OFDM symbol duration 230a of the downlink signal uses N_t subcarriers each having a frequency range of Δf. Pilot symbol durations 240a to 240e, each having pilot data therein, are formed in the headers of the sub-frames 220 forming one synchronization block 210. A first sub-frame of the synchronization block 210 is provided with synchronization symbol durations 230a and 230b each having data including a cell group identification code and a frame synchronization identification code arranged therein. The synchronization symbol durations 230a and 230b may be disposed in a first OFDM symbol duration of the first sub-frame or the last OFDM symbol duration of the first sub-frame. Each of the synchronization symbol durations 230a and 230b is divided into two frequency bands 250 and 260 in the frequency domain, and each of the frequency bands 250 and 260 has the cell group identification code and the synchronization identification code inserted therein. As shown in FIG. 2, the pattern generator 110 does not form a synchronization pattern in the entire frequency domain of each of the symbol durations 230a and 230b, but allocates codes to only a central portion of the frequency bandwidth except a DC subcarrier to form the synchronization pattern in the central portion. In a 3GPP system, the downlink frame 200 includes 20 sub-frames 220, and one sub-frame 220 corresponds to a time of 0.5 msec. In the case of unicast transmission, one sub-frame 220 includes 7 OFDM symbol durations, and in the case of multicast transmission, one sub-frame 220 includes 6 OFDM symbol durations. In the downlink frame of the 3GPP system, as an example, the synchronization block 210 may include 5 sub-frames 220.
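As a rough check of the timing figures above, the frame arithmetic can be sketched in a few lines of Python; the constant names are illustrative assumptions, not patent terminology:

```python
# Frame arithmetic for the 3GPP example described above.
# Constant names are illustrative assumptions, not patent terminology.

SUBFRAMES_PER_FRAME = 20        # downlink frame 200 has 20 sub-frames 220
SUBFRAME_MS = 0.5               # one sub-frame 220 corresponds to 0.5 msec
SUBFRAMES_PER_SYNC_BLOCK = 5    # example: synchronization block 210 spans 5 sub-frames

# Each synchronization block carries one synchronization channel symbol
# duration in its first sub-frame.
sync_blocks_per_frame = SUBFRAMES_PER_FRAME // SUBFRAMES_PER_SYNC_BLOCK
sync_symbol_durations_per_frame = sync_blocks_per_frame
frame_duration_ms = SUBFRAMES_PER_FRAME * SUBFRAME_MS

print(sync_symbol_durations_per_frame)  # 4
print(frame_duration_ms)                # 10.0
```

With five sub-frames per synchronization block, a 20-sub-frame frame thus carries four synchronization channel symbol durations, as the text states next.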
In this case, one frame includes four synchronization channel symbol durations. Next, the generation of the synchronization pattern and the pilot pattern by the pattern generator 110 shown in FIG. 1 will be described in detail with reference to FIGS. 3 and 4. FIG. 3 is a diagram illustrating the OFDM symbols in the synchronization channel symbol duration in which the synchronization pattern is formed, and FIG. 4 is a diagram illustrating a signal waveform when the synchronization channel symbol duration shown in FIG. 3 is converted into a time domain. As shown in FIG. 3, the pattern generator 110 divides a predetermined bandwidth into a frequency band 250 for inserting the cell group identification code and a frequency band 260 for inserting the frame synchronization identification code on the basis of a central subcarrier in the entire frequency bandwidth of the channel symbol duration 230a, and sequentially inserts orthogonal codes into the divided frequency bands to form the synchronization pattern. The pattern generator 110 allocates to the frequency bands 250 and 260 the orthogonal codes in two independent orthogonal code sets transmitted from the code generator 120. Referring to FIG. 3, the pattern generator 110 allocates an orthogonal code set of C^(k) = (c_0^(k), c_1^(k), c_2^(k), ..., c_{N_G-1}^(k)) and an orthogonal code set of C^(u) = (c_0^(u), c_1^(u), c_2^(u), ..., c_{N_F-1}^(u)) to the frequency band 250 for identifying a cell group and the frequency band 260 for identifying frame synchronization, respectively, to form the synchronization pattern. In this case, "k" indicates a cell group number, "u" indicates a frame synchronization identification code number, "N_G" indicates the length of the cell group identification code, and "N_F" indicates the length of the frame synchronization identification code. The pattern generator 110 according to the exemplary embodiment of the present invention may use GCL (generalized chirp-like) codes as the cell group identification code and the frame synchronization identification code, and these codes can be expressed by the following Equations 1 and 2:

c_n^(k) = exp{-j 2πk [n(n+1) / (2 N_G)]},  n = 0, 1, ..., N_G - 1   (Equation 1)

c_n^(u) = exp{-j 2πu [n(n+1) / (2 N_F)]},  n = 0, 1, ..., N_F - 1   (Equation 2)

The orthogonal codes expressed by Equations 1 and 2 are allocated to the positions shown in FIG. 3 to generate the synchronization pattern. That is, the pattern generator 110 does not sequentially allocate the orthogonal codes obtained by Equations 1 and 2 to adjacent subcarriers, but allocates them to even-numbered subcarriers or odd-numbered subcarriers in the frequency bands 250 and 260. Subcarriers between the subcarriers having the orthogonal codes allocated thereto are used as nulling subcarriers to which no sequence is allocated. Therefore, the subcarriers including the nulling carriers that are arranged in the synchronization channel symbol duration for forming the pattern occupy substantially 2[(N_G + N_F) + N_B] (hereinafter, referred to as N_S) subcarrier bands. In this case, "N_B" indicates the number of subcarriers in a guard band. When the synchronization pattern is converted into a time domain, the signal waveform shown in FIG. 4 is obtained. FIG. 4 shows the signal waveform of the OFDM symbol except a cyclic prefix. As can be seen from FIG. 4, two repeated patterns are generated in the time domain due to the two kinds of inserted orthogonal codes. As shown in FIG. 3, the downlink signal generating apparatus 100 according to the exemplary embodiment of the present invention forms a synchronization pattern such that one nulling subcarrier exists between the subcarriers to which sequences are allocated over the frequency domain of the synchronization channel symbol duration in which the cell group identification code and the synchronization identification code are allocated, thereby generating signals. Therefore, the generated signal has the repeated pattern shown in FIG. 4, and a terminal having received the downlink frame acquires initial symbol synchronization and estimates a frequency offset using the signal pattern shown in FIG. 4. The lengths N_G and N_F of the cell group identification code and the synchronization identification code inserted into each of the synchronization channel symbol durations of the downlink frame may be different from each other, and information on the lengths of these identification codes and information on the synchronization patterns thereof are shared by a terminal and a base station. The terminal having received the downlink frame 200 having the synchronization pattern shown in FIG. 3 demodulates the two frequency bands 250 and 260 for each synchronization block to obtain information on the cell group number and the frame start point, which makes it possible to rapidly and effectively search the cells.
In addition, the frequency domain of the channel symbol duration is divided into two frequency bands, and the same sequence or different types of sequences are allocated to the two divided frequency bands, which makes it possible to prevent the lowering of a correlation performance due to the selective fading of frequencies. In the exemplary embodiment of the present invention, the cell group identification code is inserted before the frame synchronization identification code on a frequency axis of the synchronization channel symbol duration, but the invention is not limited thereto. For example, the cell group identification code may be inserted after the frame synchronization identification code to form the synchronization pattern. Further, in the exemplary embodiment of the present invention, the same type of orthogonal code is used as the cell group identification code and the frame synchronization identification code, but the invention is not limited thereto. For example, different types of orthogonal codes may be used as the cell group identification code and the frame synchronization identification code. In this case, general orthogonal codes, such as a Hadamard code, a CAZAC code, a Gold code, a Golay code, and a pseudo-noise (PN) code, may be used as the identification codes. FIG. 5 is a diagram illustrating the bandwidth scalability of a downlink frame according to an exemplary embodiment of the present invention, and FIG. 6 is a diagram illustrating the bandwidth scalability of a downlink frame according to another exemplary embodiment of the present invention. FIGS. 5 and 6 show a comparison of the bandwidth of the synchronization channel symbol duration shown in FIGS. 2 and 3 with the entire bandwidth supported by the cellular system.
As shown in FIGS. 5 and 6, the downlink signal generating apparatus 100 according to the exemplary embodiment of the present invention inserts orthogonal codes into the center of the frequency bandwidth to generate a synchronization pattern. In the cellular system, since the terminals have different supportable bandwidths according to their levels, it is possible to support the bandwidth scalability of the terminals through the frame structure. FIG. 5 shows a synchronization pattern allocated to a 1.25 MHz band within the frequency bandwidth. Traffic data cannot be allocated to an OFDM symbol without a synchronization pattern in the channel symbol duration, and transmitted thereto. FIG. 6 shows a synchronization pattern allocated to a 1.25 MHz band or a 5 MHz band within the frequency bandwidth. A terminal supporting a 5 MHz band or more can receive all of the transmitted synchronization patterns, but terminals supporting a 1.25 MHz band and a 2.5 MHz band can receive only some of the synchronization patterns that are arranged in the center of the frequency bandwidth. According to the exemplary embodiment of the present invention, it is possible to extract the cell group number and information on the synchronization start point from the downlink frame using only some of the received synchronization patterns, and thus support the bandwidth scalability. Next, a method of allowing a terminal to search a cell using the downlink signal will be described in detail below with reference to FIGS. 7 and 8. FIG. 7 is a block diagram schematically illustrating a cell searching apparatus according to an exemplary embodiment of the present invention, and FIG. 8 is a flowchart illustrating a cell searching method according to an exemplary embodiment of the present invention. Referring to FIG. 7, a cell searching apparatus 400 according to an exemplary embodiment of the present invention includes a receiver 410, a symbol synchronization estimator 420, a Fourier transformer 430, a cell group estimator 440, and a cell number estimator 450. The Fourier transformer 430 can perform fast Fourier transform (FFT). As shown in FIG. 8, the receiver 410 receives signals transmitted from a base station. The symbol synchronization estimator 420 filters the received signal within the bandwidth allocated to a synchronization channel, removes a guard interval, performs differential correlation to acquire symbol synchronization or sub-frame synchronization, and estimates a frequency offset (S110). Then, the Fourier transformer 430 performs Fourier transform on the received signal on the basis of the symbol synchronization estimated by the symbol synchronization estimator 420 (S120). The cell group estimator 440 estimates a frame start point from the sequence of the synchronization channel symbol duration included in the received signal that has been subjected to Fourier transform, acquires frame synchronization, and estimates the cell group number (S130). The cell number estimator 450 estimates the cell number using scrambling code information included in the pilot symbol duration (S140). Next, the acquisition of sub-frame synchronization and the estimation of a frequency offset by the symbol synchronization estimator 420 will be described in detail with reference to FIG. 9. FIG. 9 is a block diagram schematically illustrating the structure of the symbol synchronization estimator 420 according to an exemplary embodiment of the present invention. Referring to FIG. 9, the symbol synchronization estimator 420 according to the exemplary embodiment of the present invention includes a filter 421, a delay unit 422, a correlator 423, a power detector 424, a comparator 425, and a frequency offset detector 426. The symbol synchronization estimator 420 estimates sub-frame synchronization and a frequency offset from a received signal having the time domain signal waveform shown in FIG. 4 in the synchronization channel symbol duration. The symbol synchronization estimator 420 may estimate the last OFDM symbol duration of the sub-frame where the synchronization pattern is formed and a frequency offset in the last OFDM symbol duration. The filter 421 filters the time domain signal within a bandwidth allocated to the synchronization channel and removes a guard interval to extract signals y(n+l) in the N_S subcarrier bands, which are the central subcarrier bands in which the synchronization patterns are formed in the entire frequency band corresponding to the synchronization channel symbol duration. The filter 421 can perform bandpass filtering. The length of the signal y(n+l) output from the filter 421 corresponds to N_S. The delay unit 422 delays the filtered signal y(n+l) by a time corresponding to half the effective symbol length N_S. The correlator 423 performs differential correlation on the input signal y(n+l) and an output signal y(n+l+N_S/2) of the delay unit 422 in a sample duration corresponding to half the effective symbol length. The differential correlation performed by the correlator 423 can be expressed by Equation 3 given below:

Y = Σ_{l=0}^{(1/2)N_S - 1} y(n+l) y*(n + l + (1/2)N_S)   (Equation 3)

The power detector 424 having received the correlation result Y calculated by Equation 3 calculates a differential correlation value of the received signal, that is, the power of the received signal. The comparator 425 selects the time when the power detector 424 outputs a maximum value according to Equation 4 given below, and sets the selected time as an initial symbol synchronization time:

τ̂ = max_l {|Y|²}   (Equation 4)

The frequency offset detector 426 estimates an initial frequency offset. In this exemplary embodiment of the present invention, differential correlation is performed on only the time domain signals corresponding to one synchronization channel symbol duration to detect the initial symbol synchronization and the frequency offset, but the invention is not limited thereto. For example, the time domain signals in a different synchronization channel symbol duration in one downlink frame may be accumulated, and the differential correlation may be performed on the accumulated signals. In addition, in order to improve an estimating performance, data obtained from synchronization patterns of a plurality of frames may be accumulated, and the differential correlation may be performed on the accumulated data. The Fourier transformer 430 performs Fourier transform on the received signal on the basis of sub-frame synchronization estimated by the symbol synchronization estimator 420. The estimation of the frame synchronization and the cell group number by the cell group estimator 440 from the synchronization pattern of the signal that has been subjected to Fourier transform will be described in detail below with reference to FIGS. 10 to 12.
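The delay-and-correlate estimator of Equations 3 and 4 can be sketched as follows; the signal lengths, noise levels, and names are illustrative assumptions, with a synthetic two-half sync symbol standing in for the received waveform of FIG. 4:

```python
import numpy as np

rng = np.random.default_rng(7)
N_S = 128                 # assumed sample length of the filtered sync symbol y(n+l)
half = N_S // 2

# Synthesize a received signal: low-power noise, then a sync symbol made of
# two identical halves (the repeated pattern of FIG. 4), then more noise.
pattern = rng.standard_normal(half) + 1j * rng.standard_normal(half)
noise = lambda n: 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
offset = 37
rx = np.concatenate([noise(offset), pattern, pattern, noise(50)])

# Equation 3: Y(n) = sum_{l=0}^{N_S/2-1} y(n+l) * conj(y(n+l+N_S/2))
def diff_corr(y: np.ndarray, n: int) -> complex:
    return np.sum(y[n:n + half] * np.conj(y[n + half:n + N_S]))

# Equation 4: choose the candidate time with maximum |Y|^2.
tau_hat = max(range(len(rx) - N_S), key=lambda n: abs(diff_corr(rx, n)) ** 2)
print(tau_hat)   # close to the true symbol start, 37
```

Because the two symbol halves are identical, |Y|² peaks where the correlation window is fully aligned with the sync symbol; the phase of Y at that peak is what the frequency offset detector 426 would then use to estimate the initial frequency offset.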
First, referring to FIGS. 10 and 11, a method of generating the synchronization pattern of the downlink frame and estimating the cell group number and frame synchronization from the generated synchronization pattern will be described. FIGS. 10 and 11 are diagrams illustrating a method of allocating the synchronization pattern shown in FIG. 3. The downlink signal generating apparatus according to the exemplary embodiment of the present invention combines a cell group identification code C^(k) with a frame synchronization identification code C^(u) to generate a synchronization pattern. FIGS. 10 and 11 show combinations of the cell group identification codes and the frame synchronization identification codes in the form of (k, u) (A in FIG. 10 and A' in FIG. 11). In FIGS. 10 and 11, it is assumed that a frame 200 of downlink signals includes 4 synchronization blocks 210. FIG. 10 shows a synchronization pattern generated by combining orthogonal codes using only frame synchronization identification codes C^(1), C^(2), C^(3), and C^(4) that are common to all cell groups in the cellular system. In FIG. 10, cell No. 1 to cell No. 4 form cell group No. 1, cell No. 5 to cell No. 8 form cell group No. 2, and cell No. 9 to cell No. 12 form cell group No. 3. FIG. 10 shows a combination of codes when C^(k) (k is the cell group number, k = 1, 2, 3, ...) is used as the cell group identification code. When the synchronization pattern is formed as shown in FIG. 10, the same frame synchronization identification code is transmitted from all cells. Therefore, it is possible to obtain a macro diversity gain. That is, the terminal having received the downlink frame performs correlation on a synchronization channel symbol duration to detect a frame synchronization identification code, in order to acquire the frame synchronization.
In this case, since the same code is used for all cells, a correlation characteristic is improved, and thus a frame synchronization acquiring performance can be improved. In this case, the number of cell groups that can be divided may be set to be equal to the length of the code that is set to identify the cell groups, and the length of the frame synchronization identification code may be smaller than the length of the cell group identification code due to the diversity gain. FIG. 11 shows the formation of a frame 200 of downlink signals using a combination of codes that is formed by allocating different frame synchronization identification codes to the cell groups. In this case, the number of frame synchronization identification codes that are available in the cellular system is equal to the length of the codes. When the synchronization pattern is formed as shown in FIG. 11, the number of combinations of the cell group numbers and the frame synchronization identification codes increases since various frame synchronization identification codes are used. Therefore, as compared with the synchronization pattern shown in FIG. 10, it is possible to increase the number of cell groups that can be identified. A base station and terminals share information on the combination of codes according to the exemplary embodiment of the present invention, and the terminals use the information to search cells. FIG. 12 is a block diagram schematically illustrating the cell group estimator 440 according to the exemplary embodiment of the present invention. As shown in FIG. 12, the cell group estimator 440 according to the exemplary embodiment of the present invention includes a code storage unit 441, a correlator 442, an inverse Fourier transformer 443, and a comparator 444.
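The gain in identification capacity between the two schemes can be illustrated with a simple count; the concrete code lengths below are assumptions chosen only for the example:

```python
# Illustrative counting of identifiable combinations under the schemes of
# FIG. 10 and FIG. 11. The concrete numbers are assumptions for the example.

N_G = 64      # cell group identification code length = max distinguishable groups
N_F = 4       # number of distinct frame synchronization identification codes

# FIG. 10 scheme: every cell group uses the same common sequence of frame
# synchronization codes, so only the cell group code distinguishes groups.
combinations_common = N_G * 1

# FIG. 11 scheme: different frame synchronization codes may be allocated to
# the cell groups, so each (cell group code, frame sync code) pair is a
# distinct combination.
combinations_distinct = N_G * N_F

print(combinations_common, combinations_distinct)   # 64 256
```

Under these assumed lengths, allocating distinct frame synchronization codes multiplies the number of usable (k, u) combinations, which is the capacity increase the paragraph above describes.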
The code storage unit 441 stores orthogonal codes that are used as the cell group identification codes and the frame synchronization identification codes allocated to the synchronization channel symbol duration, and also stores information on the combination of codes forming the synchronization pattern. Meanwhile, when information on the cell including a terminal therein and peripheral cells (information on the cell number and the cell group) is known beforehand (that is, when the terminal is busy or in a standby state), the code storage unit 441 can extract a candidate combination of codes, and use the extracted combination of codes to search cells. The correlator 442 receives the signals in the synchronization channel symbol duration that have been subjected to Fourier transform, and multiplies the signals by the conjugates of the orthogonal codes included in a combination of codes stored in the code storage unit 441. That is, when the correlator 442 sequentially performs a conjugate operation on sequences in the synchronization channel section of the received downlink frame over the frequency domain, an operation for identifying a cell group and an operation for estimating frame synchronization are sequentially performed, which makes it possible to shorten the time to search cells. The inverse Fourier transformer 443 performs inverse Fourier transform on a cell group identifying band and a frame synchronization identifying band among the signals output from the correlator 442 to generate time domain signals. In this case, the inverse Fourier transformer 443 may perform inverse fast Fourier transform (IFFT).
The comparator 444 selects the maximum value from the time domain signals output from the inverse Fourier transformer 443, and extracts information on the combination of codes having the maximum value from the code storage unit 441, thereby identifying the cell group number and the frame synchronization. As can be seen from FIG. 10, as an example, when the information on a combination of codes extracted by the comparator 444 is (1, 2), the current cell belongs to cell group No. 1, and the terminal starts estimating the frame synchronization in the second synchronization block of the downlink frame. In this way, it is possible to estimate a frame start point. Finally, the terminal estimates the cell number using scrambling information included in the pilot symbol duration. Since the terminal knows the cell group information, the terminal estimates the cell number on the basis of the scrambling information of the cells belonging to the corresponding cell group. In this case, a general estimating method, such as a method of using the sum of powers of a set of subcarriers of the pilot symbol, may be used to estimate the cell number. In this exemplary embodiment of the present invention, the cell number is estimated from the scrambling information of the pilot symbol duration, but the invention is not limited thereto. For example, the cell number may be estimated by using symbols in a common channel section including system information of a base station. In addition, in this exemplary embodiment of the present invention, the cell group identification code is allocated to the synchronization pattern, but the invention is not limited thereto. Instead of the cell group identification code, a cell identification code may be allocated to one of the two bands of the synchronization symbol duration to generate a downlink frame. In this case, the estimation of the cell number using the scrambling code may be used to verify the cell number information obtained from the synchronization pattern.
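The correlate-IFFT-and-peak-pick flow of the cell group estimator 440 can be sketched as follows; the GCL code family, the code length, the candidate range, and the use of a noiseless received band in place of the FFT output are all assumptions made for illustration:

```python
import numpy as np

def gcl_code(k: int, length: int) -> np.ndarray:
    # GCL sequence as in Equations 1 and 2 (assumed code family).
    n = np.arange(length)
    return np.exp(-1j * 2 * np.pi * k * n * (n + 1) / (2 * length))

N_G = 32                               # assumed cell group code length
true_k = 3
received_band = gcl_code(true_k, N_G)  # idealized FFT output of band 250

peaks = {}
for k in range(1, 9):                  # candidate cell group numbers
    # Correlator 442: multiply by the conjugate of the stored candidate code.
    product = received_band * np.conj(gcl_code(k, N_G))
    # Inverse Fourier transformer 443 and comparator 444: IFFT, peak power.
    peaks[k] = np.max(np.abs(np.fft.ifft(product)) ** 2)

k_hat = max(peaks, key=peaks.get)
print(k_hat)   # 3
```

When the candidate matches the transmitted code, the conjugate product is a constant-phase sequence whose IFFT collapses to a single impulse; any mismatched candidate leaves a residual chirp whose energy stays spread in the time domain, so the comparator's peak test singles out the correct (cell group, frame synchronization) combination.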
The constituent elements according to the exemplary embodiment of the present invention may be implemented by at least one hardware component composed of a programmable logic element, such as a DSP (digital signal processing) processor, a controller, an ASIC (application specific integrated circuit), or an FPGA (field programmable gate array), other electronic devices, or a combination thereof. In addition, at least a portion of the function or procedure according to the exemplary embodiment of the present invention may be executed by software, and the software may be recorded on a recording medium. Further, the constituent elements, the function, and the procedure according to the exemplary embodiment of the present invention may be implemented by a combination of hardware and software. While this invention has been described in connection with what is presently considered to be practical exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. As described above, according to the exemplary embodiment of the present invention, it is possible to use a plurality of synchronization patterns formed in one frame to search a cell group and to estimate frame synchronization. In addition, it is possible to use the synchronization patterns to estimate sub-frame synchronization.
Questions tagged [high-altitude]
Regions on or above the earth's surface located at least 2,400 meters (8,000 ft) above sea level. 66 questions

- Could the Perlan II glider be used for nearspace tourism? (1 vote, 1 answer, 84 views): I asked the same question about the SR-71 Blackbird, and I'm also curious about the Perlan II sailplane which travelled above 70,000 ft, and is planned to be flown to 90,000 ft, if it could be used ...
- How high can manned subsonic planes go? (1 vote, 1 answer, 120 views): What is the maximum altitude a manned plane that flies subsonically can reach? Subsonic airplanes can't go as high as super- and hypersonic ones of course, so record holders are the latter ones. But ...
- Could the SR-71 Blackbird be used for nearspace tourism? (21 votes, 3 answers, 6k views): The Blackbird had two seats onboard. Could it be reactivated for nearspace tourism, placing a paying tourist in the other seat? If not, why not, or why isn't it being proposed? In level flight, the ...
- What limits high altitude powered paragliders other than the engine? (2 votes, 1 answer, 103 views): Due to their extremely low wing loading, powered paragliders stall at very low indicated airspeeds compared to other airplanes. A typical paraglider can continue flying all the way down to 10-15 ...
- What's the highest altitude ever achieved by a (rocket-?)plane for level flight? (0 votes, 0 answers, 143 views): As probably most of us know, the SR-71 Blackbird holds the altitude record for level flight for (manned) jet planes (and ground-launched planes) at 90,000 ft MSL. However, the very altitude record ...
- Did every X-15 pilot become weightless when flying into the upper strato- and mesosphere? (2 votes, 1 answer, 162 views): The North American X-15 was aimed at breaking airspeed and altitude records. The flights that sought to achieve new airspeed records (Mach 5+) were rather level, but the flights into the upper ...
- Are there jet aircraft for whose flights astrodynamics must be taken into account? [duplicate] (-1 votes, 1 answer, 162 views): Civilian jet airliners don't need to push their yokes down to follow the curvature of the Earth because within the atmosphere the curvature is accounted for by automatic systems, right? I wonder ...
- Could one see stars from the Concorde in daytime? (13 votes, 1 answer, 306 views): The Concorde flew to an altitude of 60,000 ft (18.3 km), where stars should be visible at noon, shouldn't they? This question asks how high stars become visible, and it is said Blackbird pilots could ...
- How is altitude reached by aircraft flying above the stratosphere measured? (32 votes, 2 answers, 3k views): Aircraft flying above the stratosphere (above 30 km) are rare, but there are still the X-15 and SpaceShipTwo. For those aircraft, maximal altitude is measured to within 500 ft. As described in this ...
- What is the maximal altitude at which an emergency slide can be deployed? (9 votes, 2 answers, 307 views): If I understand correctly, the mechanism to inflate an emergency slide relies on a tank of inert gas and on an "aspirator" that sucks in ambient air. This aspirator works thanks to depression ...
- How does 'standalone' propeller efficiency change with altitude? (0 votes, 1 answer, 174 views): I've seen questions regarding propeller aircraft and altitude and their interaction. But I'm wondering what the relationship between just the propeller and altitude (changing air density) is. How does ...
- What happens in an F-16 once the aircraft exceeds its service ceiling? (-1 votes, 1 answer, 254 views): What exactly is the sequence of events that would unfold as an F-16 attempts to exceed its service ceiling? The scenario is simply full afterburner and attempting to climb higher and higher (at ...
- What happens if an aircraft goes above maximum altitude? [duplicate] (0 votes, 3 answers, 203 views): We all know what happens when a plane goes overspeed... but what happens when a plane goes above its maximum altitude? And why are planes certified to maximum altitudes anyway? Is it because there won't ...
- How to adjust time of useful consciousness for pilots acclimatised to high altitudes? (2 votes, 2 answers, 161 views): The time of useful consciousness (TUC) is the length of time an average pilot, breathing ambient-pressure air without supplemental oxygen, is capable of functioning usefully at a given altitude. For ...
- Does high density altitude affect your landing speed? (5 votes, 1 answer, 445 views): So I had this question asked in my recent commercial-pilot checkride: the DPE asked me about high-density-altitude airports; he specifically asked me if I should add airspeed when I'm on landing ...
- Which aircraft can fly at FL600 or above? (10 votes, 2 answers, 4k views): I would like to know some examples of aircraft (specific models) which can fly at FL600 or higher. I am interested in aircraft which can or could fly at that altitude on a regular basis (service ...
- How does latitude affect the air temperature at 8 km altitude where planes fly? (9 votes, 3 answers, 3k views): Is the air temperature (at 8 km) above the North or South Poles colder than at the same altitude above the equator? Why? Ground-level air temperature difference between the coldest and warmest places on ...
- Do the Mt. Everest rescue helicopters have modified engines to operate at high altitudes? (36 votes, 1 answer, 7k views): There are so many climbers on Mt. Everest today that there is now a helicopter rescue service there. They operate rescues up to 21,000+ ft. Because this is essentially stationary service, it seems that ...
- Where and when would a 737-700 be taking off at an altitude of 19 kft? (1 vote, 1 answer, 251 views): Another question triggered by reading the 737 technical site; according to this section of the page on the 737's pressurisation system, the 737-700 comes with an option that allows it to safely take ...
- What kind of drone might be able to fly so high that it "operate(s) outside national boundaries"? (4 votes, 2 answers, 650 views): In the question What is a propellant burner trailer? I mention that CNN and NPR have pointed out that the US president recently tweeted a surveillance photo of an explosion of a rocket on a launch ...
- Why do private jets such as Gulfstream fly higher than other civilian jets? (36 votes, 4 answers, 16k views): I heard in a TV show that private jets such as Gulfstream can fly at about 50,000 ft, higher than other civil jets. Is it an aerodynamic reason (lighter aircraft to be sustained in less dense air) ...
- Could a hybrid electric steam plane be more efficient at altitude than at sea level, and how to optimise it for frigid air? (1 vote, 4 answers, 200 views): In the thirties a steam-powered plane was built (video). Could a hybrid electric steam plane be more efficient at altitude than at sea level, and how could such a steam engine be optimised to benefit ...
- How do carbureted and fuel injected engines compare at high altitude? (5 votes, 3 answers, 2k views): I couldn't find enough information on the internet on this topic. What I found can be paraphrased as: carburetor means easy engine start, while fuel injected means manual fuel start-up and ...
- What lights to use for a high-altitude weather balloon? (1 vote, 0 answers, 159 views): I am designing a high-altitude weather balloon for a college project. The balloon may fly at night, which means that, per FAA regulations, I need a light that is visible for 5 miles. Does anyone ...
- Proportionally, what is more affected by a hot and high takeoff - an electrically driven propeller or a turbofan? (1 vote, 1 answer, 70 views): My assumption would be that the electrically driven propeller would be less affected compared to turbofans at high altitudes and takeoff, as although the air would be less dense at these ambient conditions, ...
- How do planes know what altitude they're cruising at? (14 votes, 4 answers, 6k views): I know that when planes enter the aerodrome containing the airfield of destination, the ATIS will tell them an altimeter setting so the system knows how to calculate their altitude above the field; ...
- What will happen to a modern high bypass engine at 80,000 feet with a cruise speed of Mach 0.8? (4 votes, 0 answers, 302 views): While keeping the typical cruise speed of Mach 0.8, will the engine shut down due to "air starvation" at such high altitude? Is it possible to design a turbofan engine which is able to provide thrust, ...
- Has fuel boiling ever been an issue for high altitude aircraft? (6 votes, 1 answer, 425 views): Are special measures taken to prevent fuel from boiling inside the fuel systems at high altitudes? If so, when (with what aircraft) did this practice start? (Question pertains to air-...
- Difference between Mach buffet and shock stall (1 vote, 1 answer, 1k views): What is the difference between Mach buffet and shock stall?
- Why do suborbital planes feature H-tails and/or large wingtips? (3 votes, 4 answers, 418 views): Spaceplanes that are suborbital are mostly supersonic and feature an H-tail or rather large wingtips. This looks to be similar to an H-tail configuration, but I'm unsure about how it works, which is why I ...
- Why does aircraft stability increase when it transitions from subsonic to supersonic flight? (7 votes, 2 answers, 1k views): I am puzzled why the aircraft would be more stable in high speed flight at high altitudes, given that the centre of pressure moves gradually towards the 50% MAC region once past Mcrit. Has this got ...
- What forces act on an airplane's door from the outside in cruise? (3 votes, 2 answers, 878 views): The inside cabin pressure, from what I understand, is pressurized to the values found at 6,000-8,000 feet (11.34 psi at 7,000 feet), at which point we can find the amount of force acting ...
- Why do high altitude air-launch plane designers choose to use two fuselages? (2 votes, 1 answer, 152 views): We see double fuselages in both the huge Stratolaunch vehicle and in Virgin Galactic's high altitude launch vehicle. What about this design makes it desirable?
- If maximum speed was a priority for modern military fighter jets and bombers, approximately how fast would they likely be? (7 votes, 2 answers, 721 views): Burt Rutan talks rather passionately about the lack of innovation in space flight, but also mentions how fighter jet (maximum) speed performance has stalled: In fact, ...
- Is there a limit to the possible altitude for electric jets? (3 votes, 1 answer, 2k views): So I was thinking about Elon Musk's supersonic electric jet idea. Assuming we have sufficiently energy-dense batteries, where would the limits be in terms of speed and altitude? His logic seems to ...
- What special tyres (tires) are needed for high altitude takeoff and landing? (6 votes, 1 answer, 372 views): I just watched this video of AA922 taking off from El Alto (LPB) to Santa Cruz. The video's author mentions that the aircraft needs "... special tyres" due to the high altitude. While I do ...
- What is the highest operational ceiling for an air-breathing jet engine? (6 votes, 2 answers, 3k views): An air-breathing jet engine can only operate at relatively low altitude, where the atmosphere is dense enough. However, some jet-powered aircraft can operate at an altitude higher than 20 km (e.g., ...
- Is there a height limit to national airspace? (20 votes, 4 answers, 6k views): Given the following facts: satellites can operate at an altitude of 200 km without triggering a frontier violation incident; aircraft can trigger diplomatic protestations if they cross a frontier at an ...
- Could an airliner get better fuel efficiency at higher altitude? [duplicate] (2 votes, 2 answers, 550 views): By higher I mean like 60k feet instead of 30k. It seems like the rate-limiting step is engine performance at 60k feet. Are people working on engines that can operate at higher altitudes? How close are we ...
- Are artificial magnetic fields used, or proposed to be used, to shield high altitude pilots and astronauts from charged, high energy radiation? (2 votes, 1 answer, 177 views): If not for the earth's magnetic field, our atmosphere would be bombarded by high energy plasma (ionized gas) from the sun and other space sources. At very high altitudes and orbital altitudes this ...
- What are the limiting factors for high altitude planes (e.g. U-2 or SR-71) preventing them from going higher? (14 votes, 3 answers, 6k views): I'm curious as to why planes like the U-2 Dragon Lady and the SR-71 Blackbird couldn't fly higher. What physical constraint set their operational ceiling? Pilots wore spacesuits, so that wasn't the ...
- Why is the ratio of TSFC at altitude vs. sea level different for high-bypass vs. low-bypass turbofan engines? (1 vote, 1 answer, 2k views): I've read up on how the thrust-specific fuel consumption of turbofan/turbine engines increases with altitude, and the Georgia Institute of Technology plot in particular seems to indicate that, for ...
- Which small, light aircraft are suitable for operating at very high elevations? (2 votes, 1 answer, 717 views): I live in Jammu and Kashmir, in an area with an average elevation of 4,500 m / 14,700 ft MSL. I would like to know what would be a good and cheap light aircraft to carry 3-4 people for a 3-4 hour ...
- Why is tethered hover testing performed on helicopters? (11 votes, 1 answer, 967 views): The KGUC website notes that they have a tether for helicopter testing. From what I can tell this testing has something to do with hot and high conditions, but I don't know what the purpose is of ...
- What is a blimp called if it is designed not to be lighter than air? (-1 votes, 4 answers, 1k views): I have not found a name for this type of vessel; what class of vessel is it? Could it operate similar to a parasail if dropped from a high altitude or orbit, with air far thinner than operational ...
- What are the ground and flight requirements for a high performance endorsement? (3 votes, 2 answers, 11k views): I'm told ground is proficiency based, but that in the plane you have to fly up to 25,000 feet! I don't know anything short of a turboprop capable of doing that, let alone have the ability to rent it. I'...
- Can a light glider without thermal protection land from orbit, starting from orbital speed? (5 votes, 4 answers, 745 views): I ask about a glider without any special thermal protection (pilot in a space suit), so both answers to the other question do not cover the topic. When the glider enters the atmosphere, it starts ...
- Could a plane with an electric turbine engine generate enough lift to provide an electric aerodynamic lift and runway to space? (1 vote, 3 answers, 2k views): This question is similar to my previous ones, but the configuration is different and so is the question. Could a modified electric turbine engine (E-Fan) plane create enough of its own lift not to burden ...
- Is it possible for a modified hang glider to land safely coming from orbit? [closed] (4 votes, 4 answers, 859 views): Can a glider be dropped from geosynchronous or other orbits and safely land?
- Is there a maximum airfield elevation at which a Cessna 172P can operate? (6 votes, 2 answers, 2k views): The title says it all: is there a maximum airfield elevation at which a Cessna 172P can operate? Or a maximum density altitude at which takeoff shouldn't be considered?
https://aviation.stackexchange.com/questions/tagged/high-altitude
Is Geranium an Acidic Plant?

Though hardy geraniums (Geranium) adapt to many types of soil, they usually prefer acidic soil, meaning a pH below 7.0. Their adaptability makes them a prize for a beginning or hands-off gardener who wants showy blooms with minimal effort, but creating an acidic foundation for them is beneficial. Meanwhile, plants of the Pelargonium genus often carry the geranium name, even though they are no longer considered to be related to true geraniums. Before you plant your shrubs or flowers, determine which type of plant you have and prepare your soil accordingly.

Testing Your Soil - The optimal pH for geraniums varies by type, but before you begin adjusting your soil for your geraniums, you'll need to perform a soil test. You can purchase a home soil test kit from your local garden supply store. Draw soil samples from several places around your garden, preferably when the soil is dry and hasn't been fertilized recently. This helps ensure that you don't take an inaccurate sample, and it can lead you to a part of your garden that is well suited for geraniums, depending on your soil test results.

Hardy Geraniums - True geraniums, such as Geranium maderense and the spotted geranium (Geranium maculatum), prefer acidic soil. Hardy geraniums typically thrive somewhere within U.S. Department of Agriculture hardiness zones 3 through 8, depending on the variety. To increase the acidity of your soil, work sulfur into the top 6 inches of soil. If you're not in a hurry, you can lay the sulfur on top of the dirt and let it sink in over time instead. Pelletized sulfur is a safer choice than powdered sulfur because it won't create a sulfuric dust that can harm your health.

Pelargoniums - Members of the Pelargonium genus -- commonly called geraniums because they used to be considered such -- often grow best in neutral to slightly alkaline soil instead of acidic soil. An example is the zonal geranium (Pelargonium x hortorum). Hardy to USDA zones 10 and 11, it prefers rich, well-drained soil that's neutral to slightly alkaline. Work limestone into the dirt to make soil more alkaline. Dolomitic limestone, made of calcium carbonate and magnesium carbonate, can help neutralize acidic soil more quickly than other types of limestone.

Checking Soil pH - Changing the pH level of your soil takes time, so start the process weeks or even months before you plant your geraniums. You should also perform a soil test regularly -- such as once a year or every other year -- to make sure fertilizers and your natural environment aren't shifting the acidity of your soil too much. For instance, applying ammonium fertilizer for its nitrogen gradually acidifies the soil, which can limit the availability of nutrients to your geraniums in the long run -- even for geraniums that prefer slightly acidic soil.

References
- University of California Agriculture and Natural Resources: Geranium--Pelargonium Spp.
- University of California, Davis: The Madeira Island Geranium: Geranium Maderense
- Lady Bird Johnson Wildflower Center: Geranium Maculatum L.
https://homeguides.sfgate.com/geranium-acidic-plant-63169.html
Summary
It's 1948 and ten-year-old Fred has just watched her teacher leave -- another in a long line of teachers who have left the village because the smell of fish was too strong, the way of life too hard. Will another teacher come to the small Athabascan village on the Koyukuk River to teach Fred and her friends in the one-room schoolhouse? Will she stay, or will she hate the smell of fish, too? Fred doesn't know what to make of Miss Agnes Sutterfield. She sure is a strange one. No other teacher throws away old textbooks and reads Greek myths and Robin Hood. No other teacher plays opera recordings, talks about "hairyos," and Athabascan kids becoming doctors or scientists. No other teacher ever said Fred's deaf older sister should come to school, too. And no other teacher ever, ever told the kids they were each good at something. Maybe it's because Miss Agnes can't smell anything, let alone fish, that things seem to be all right. But then Miss Agnes says she's homesick and will go back to England at the end of the year. Fred knows what this is about: just when things seem to be good, things go back to being the same. How Fred and her friends grow with Miss Agnes is the heart of this story, told with much humor and warmth by Fred herself. This is a story about Alaska, about the old ways and the new, about pride. And it's a story about a great teacher who opens a door to the world -- where, once you go through, nothing is ever the same again.

Reviews

Booklist Review
Gr. 4-6. From the author of Winter Camp (1993) comes another moving novel about Athabascan life. But instead of a wilderness survival tale, this story is an uplifting portrait of a dedicated teacher, set mostly in a cozy village classroom in 1948. Fred, a ten-year-old girl, describes the year Miss Agnes takes over the one-room school.
Unlike the school's other teachers, none of whom have lasted, Miss Agnes encourages the children to explore art, literature, and their own potential. She also teaches basic subjects in relevant ways and shows sensitivity to the rhythms of village life and to each child. The students are devastated when it's time for Miss Agnes to leave, but the story ends with a happy surprise. Readers longing for action may resist the simple, subdued story. But Fred's plain, direct voice, sprinkled with regionalisms, will connect readers with the well-integrated cultural particulars, the poignant scenes of home life, and the joy Fred feels learning in the snug classroom, the snow falling outside. --Gillian Engberg

School Library Journal Review
Gr 2-5. Teaching the children in an Athabascan village in a one-room schoolhouse on the Alaskan frontier in 1948 is not every educator's dream. Then one day, tall, skinny Agnes Sutterfield arrives and life is never the same for the community. Frederika (Fred), the 10-year-old narrator, discovers that unlike previous teachers, Miss Agnes doesn't mind the smell of fish that the children bring for lunch each day. She also stokes the fire to warm the schoolhouse before the students' arrival each morning, wears pants, and speaks with a strange accent. Miss Agnes immediately packs away the old textbooks, hangs up the children's brightly colored artwork, plays opera music, and reads them Robin Hood and Greek myths. She teaches them about their land and their culture, tutors both students and parents in her cabin in the evening, and even learns sign language along with her students so that Fred's deaf sister can attend school. Hill has created more than just an appealing cast of characters; she introduces readers to a whole community and makes a long-ago and faraway place seem real and very much alive. This is an inspirational story about Alaska, the old and new ways, a very special teacher, and the influence that she has over everyone she meets.
A wonderful read-aloud to start off the school year. --Kit Vaughan, Midlothian Middle School, VA. (c) Copyright 2010. Library Journals LLC, a wholly owned subsidiary of Media Source, Inc. No redistribution permitted.
https://bepl.ent.sirsi.net/client/en_US/default/search/detailnonmodal/ent:$002f$002fSD_ILS$002f0$002fSD_ILS:1083063/ada
A charter captain who guided light-tackle legend Raleigh Werking to some of his four dozen IGFA world records set his own line-class mark this summer, completing a decade-long quest to catch a world-record Pacific snook (Centropomus spp.) on 6-pound line when he boated a 43.5 pounder in Costa Rica. Capt. George Hughes Beckwith Jr., who runs Down East Guide Service in Morehead City, North Carolina, was fishing in July off Quepos, Costa Rica, where he also owns a charter boat, The Dragin Fly. He went out with Capt. Roy Zapata, his go-to choice for snook trips. Zapata has guided several anglers to world-record snook, including the all-tackle record Pacific black snook, a 59.8 pounder caught off Quepos in 2014 by Ward Michaels. Beckwith says he often tags along in hopes of setting a new 6-pound record for snook. “I’ve caught a lot of big fish with Roy on 6-pound line over the past few years,” he notes, “but I never quite got that big snook bite.” That finally changed July 3. Using live sardines for bait, Beckwith and companions were slow-trolling tidelines parallel to the beach when one of his party caught a decent snook. Now confident that snook were nearby, they made another pass. Dangerous Trolling Technique The favored method for trolling snook in Costa Rica is to run parallel to the beach just outside the breaking surf. “It’s about getting as close in there as you can without getting swamped by these sets of crazy waves that can come out of nowhere,” Beckwith explains. “There’s a history of people losing boats fishing this way.” A friend of Beckwith’s always takes along a surfer when trolling for snook. “The surfer doesn’t fish; he just watches waves and says, ‘Hey, man, you guys might want to watch this set coming up.’ You look up and offshore there’s four or five waves curling up and you gotta get the hell out of there.” Beckwith hooked what felt like “a pretty good” fish, but he wasn’t certain, at first, that it was a snook. 
The fish made a strong run that took about 100 yards of line. “After that first long run, it jumped,” he recalls. “It was so big it couldn’t get out of the water. It got about halfway out and flopped back down. Then we all knew this was the one we’d been looking for. The record was 31 pounds, and this fish was well over 30.” About 12 minutes into what would turn out to be a 20-minute fight, Beckwith’s snook made another 100-yard run, this time heading straight toward the beach. Given the need to stay outside the breaking surf, he knew the boat would be unable to follow. He’d have to turn the fish, which he managed to do just in the nick of time. “These big giant rollers started curling up and the fish was right in there,” Beckwith recalls, “but fortunately I was able to stop it just this side of the break and get him going back toward deeper water. That was the first time I made the fish do something it didn’t want to do, and I felt pretty confident after that. Then it was just heavy heat, working the fish back to deeper water until I was able to get on top of it again.” The snook made another brief run, but Beckwith was able to get the leader through the first few rod guides and bully the big fish into the dip net wielded by mate Greg Gregory. Innovative Leader for Setting Records Beckwith started guiding in 1994, soon after he earned a degree in marine biology from the University of North Carolina Wilmington. While guiding Werking he learned a lot about light-line angling that he still uses today—including the terminal rig he used for landing his record snook. He’s dedicating the record to his friend, who died in 2017. “Part of what Raleigh taught me about light-line fishing is that it’s about staying connected until you get that leader,” Beckwith says. Using the maximum leader length allowed by IGFA rules, he ties a 12-inch Bimini twist in his 6-pound line and uses a double uni knot to tie that double line to 50-pound fluorocarbon. Then he flosses the knot. 
“I used waxed floss, like you’d use to rig a ballyhoo, and I’ll tie a series of half-hitches over that knot in order to give the knot some protection as it goes in and out of the guides,” Beckwith explains. About a foot above his 7/0 Eagle Claw 2004 EL circle hook he attaches a ½-ounce slip weight, fixed so that it won’t slide. The weight is a technique Werking developed for reducing mortality when hooking red drum on light line. “We found out that using a circle hook on a long leader with a slip weight we were still gut-hooking 20 percent of our fish, but if the weight were fixed within 6 inches of the hook, the weight pulls the hook to the corner of the fish’s mouth and your gut-hook rate drops from 20 percent to less than 4 percent,” Beckwith says. He likes the setup for snook because it creates a better presentation. “The sardine is always struggling and fighting against the weight,” he says, “and I think it gets the attention of the fish because it keeps the bait acting more lively.” The only downside to the record catch for Beckwith is that the fish was not strong enough to release after the weigh-in, but he hopes bumping the record so high will spare a lot of the 30-pound fish of the kind he’s been releasing for years. “It’s like Raleigh always said, you’re fishing for one fish, and every single fish you catch on light line is an accomplishment,” he says. “But this fish in particular was pretty special for me, because I’ve been chasing it for at least 10 years. It’s a cool thing.”

Atlantic Snook Record Broken
Beckwith’s catch, which IGFA certified in September, is the second snook record to fall recently. On July 28, John Kelly landed an 88-centimeter Atlantic snook while fishing Indian River Lagoon in Florida, setting an IGFA all-tackle length fly record for the species. Three nights later he surpassed that with a 91-centimeter snook that now holds the fly length record.
https://www.saltwatersportsman.com/news/igfa-record-pacific-atlantic-snook/
SakchiMemberMay 21, 2021 at 11:59 am:: Water is most important resource of our earth . Though more than half of the parts of earth are covered with water but it is unfit for drinking . Most of the water is sea water or you can say salty water that is unfit for drinking. We need to save water . These are the ways by which we can save water:- 1. Donot try to waste water. Take only that amount of water that much you can drink donot throw water . 2. Shut off the taps when they are not in use especially while washing face, brushing , hand washing etc. 3. Take shower instead of bucket bath this will save water . 4. Switch over to traditional methods of water harvesting. Store rain water in tankers or containers and later use it for planting or washing cars etc.. 5. After mopping use that water in planting. Donot throw it . 6. Create awareness about misuse of water in your locality. 7. Check the leakage and repair it as soon as possible. 8. After washing fruits and vegetables use that water for plantation or for washing cars etc. 9. Improve your drainage system too. 10. Donot release industrial waste in river water this will decrease the quality of river and also effect the aquatic life too. 11. Donot wash your clothes or bath in the river water. This will make river water polluting.. 12. Donot throw plastic bags in the water . These are the some basic steps for saving water. Water is our life our need . Save water! Save life! Thank you!! - KumariMemberMay 20, 2021 at 11:41 pm:: Water is one of the most important resource we have on our planet. All living beings such as animals, plants and human need water for the proper functioning on there systems. Water has various importance in everyone’s lives. Even though we have a lot of river and oceans but the water they have is very salty and cannot be consumed directly. That means only some amount of water is appropriate for intake. We should use the water effectively and make sure it is not wasted. 
We can reuse water in many ways possible. Some of them are:- • If you have a pet make him bath in the lawn so that the water is not wasted and the grasses are being watered from the same. • Try taking shower in place of bathing as it requires lesser amount of water. • Do rainwater harvesting where the water collected during raining is used for different day to day work. • Take the amount of water that is needed in the glass for drinking if water is still left,put it in the plants. • Drink water and put the left water in the glass in the plants. • Store water and then use it for cleaning utensils clothes and other stuffs. • Put the leftover ice which have been freezed long time before on the lawns as the melted ice will become water and will help in the growth of grasses of the lawn • Turn off the tap while shaving or brushing your teeth. • Water the lawn in the evening so the plants can take in the water before it gets absorbed by the sun. • Collect the overflowing water in the tub so that it can be later used in washing clothes utensils or watering plants. • Make sure to repair all the leaking taps and showers of the home. - MahimaMemberMay 21, 2021 at 12:40 am:: WAYS TO REUSE THE WATER: The water crisis in India is a fact that no person or organization should ignore. By 2050, India’s population is expected to hit 1.7 billion, putting enormous strain on the country’s groundwater levels. Currently, the nation uses about 250 cubic kilometers of water per year, with irrigation and domestic use accounting for 65 and 85 percent, respectively. Overdependence on rivers and groundwater has left all sources barren and polluted, resulting in significant water waste. Furthermore, contamination of water supplies by household and industrial waste has made water from many bodies of water unfit for human use. The growing population and urbanization have also contributed to the wastage of water, today only 62% of the population has access to tap water. 
If this water wastage will continue at this pace only then in no soon time our country will face a water crisis. There are several ways by which wastewater that is water that is already used in household activities can be used reused again for several purposes. 1.Usage of shower bucket: After a long and exhausting day, we all look forward to a wet and relaxing shower. When we turn on the tap, though, a lot of water is lost accidentally. When you turn on the shower, put an empty bucket under it to catch the water that drips out. The amount of wastewater collected would clearly show you how much water is lost in each shower. 2.Reusing water used for washing of fruits and vegetables: Frequently, the water used to wash vegetables ends up down the sink. Water is also used comfortably when washing vegetables or even boiling some edible products, so this activity is a complete waste of water. When you use piped water to wash your veggies or cook your noodles, the water can be safely reused in toilet flushes, room mopping, or garden watering. 3.Create a rainwater storage garden: A rain garden is a built environment that absorbs rainwater from roofs, pipes, and driveways, among other sources. During the monsoon, a huge amount of rainwater is wasted because most houses lack a rainwater collection system, and it is lost in drains. Washing vehicles, utensils, and other things that need water other than ingestion are common uses of wastewater from rain gardens. A rain garden also prevents further burden on the city’s sewer system by collecting rainwater rather than allowing it to flow into the wastewater system. 4.Collect overflowing water from the plants: Your love of gardening could be the source of your home’s excessive water use. You’ve already seen how some water drains from the bottom of pots into drainage holes. You should catch the extra water and reuse it to water your plants instead of making it go to waste. 
When watering larger plants, the wastewater collected from the drainage holes may be used to water the smaller ones.

5. Reuse excess drinking water: Many of us waste drinking water in our houses, even if unwittingly. We sometimes empty half-filled water bottles and glasses, just to refill them with fresh water. Several liters of water are lost over time as a result of this everyday habit. Instead, empty the half-filled bottle or glass over a plant, or use the water to clean utensils or other household products. It should become second nature not to waste any amount of water, no matter how small.
https://members.kidpid.com/ask/topic/reuse-the-water/
Pietermaritzburg Girls’ High School offers girls from Grade 8 to Grade 12 a dynamic learning environment, with diverse opportunities for the development of each individual. GHS is a proudly South African school, led by highly qualified and dedicated educators who guide our learners in maintaining a balance between their academic, sporting and cultural endeavours. EVENTS AND ALUMNI COORDINATOR The events coordinator is responsible for the strategic planning, organisation and execution of a wide range of events for the school in support of internal and external strategic goals, supporting both the retention of students and continued growth. He/she will need to collaborate with a wide range of stakeholders including, but not limited to, the school administration, teachers, facilities and logistics teams, and prospective and current parents. The primary focus of the role is to co-plan, coordinate and manage a calendar of external and internal events on and off campus, organising and running these events as well as supporting teachers and leadership in delivering them, and liaising closely with internal communications, marketing, facilities and the IT team. The Alumni Relations part of the role will focus on the general administration required to identify and keep records of our alumni. This person will be responsible for building and managing the school's alumni network and strengthening our contact with GHS past pupils now scattered across the world.
List of job roles and responsibilities

Events and Functions Coordination:
- Organise, coordinate, and attend meetings, as required, to plan, promote, and implement school events
- Work alongside the Marketing and Management team to provide strategic direction, organisation and execution of all events
- Plan, review, refine, and support the execution of the internal functions and events calendar (including all academic, cultural and sporting events/fixtures), working closely with the Marketing Manager and Management team
- Work collaboratively with staff across the school, and when appropriate students, providing strategic advice and key liaison with estates and logistics teams
- Contribute to budget management, ensuring effective use of related functions budgets (i.e., catering budget etc.)
- Purchase, store and distribute groceries for the School and Boarding Establishment (i.e., tea, coffee, cleaning agents, disposables etc.)

Alumni Coordination:
- Organise and support events and programs through the alumni relations office
- Handle the promotion of special events through social media and direct contact with alumni; plan and develop projects and oversee the actual events
- Identify alumni, and create and maintain alumni records using our existing online alumni administration programme
- Organise and coordinate alumni functions with special responsibility for one or more of the following: local, regional and national chapters, alumni publications, fundraising, recognition and awards, reunions, investments and other special events and services
- Establish a fundraising programme and drive to capacitate the GHS Trust with funds that can be used for sporting and academic scholarships
- Maintain regular communication in order to establish ongoing relationships and strengthen alumni connections to GHS
- Work with Marketing to generate materials and various communication items used for communication with alumni
- Act as the first point of contact for individuals seeking information about alumni activities
- Be part of the team responsible for the strategic development of our alumni relations
- Oversee the maintenance of the archives and the newly opened GHS museum

General Administration:
- Marketing department assistance
- Admissions assistance (EXPOs, talks, events etc.)
- Venue bookings, i.e., Department of Education, school functions etc.
- Perform additional general administrative duties such as filing, typing, photocopying, collating, etc.

Hours of work:
- Monday - Friday, 07:15 - 16:00
- After-hour and weekend commitments required as necessary
- Perform duties during the school holidays as required. The incumbent will benefit from a portion of each school holiday.

Necessary qualifications and skills would include:
- Bachelor's Degree in Business, Marketing or a related field, or related experience
- Experience with event planning and management
- Ability to establish and follow budgets
- Ability to work with others
- Highly motivated and able to take initiative
- Ability to work in a fast-paced, dynamic environment meeting multiple deadlines
- Strategic thinker with the ability to learn existing processes and improve them by setting up new procedures
- Exceptionally detailed in work
- Ability to communicate to a variety of audiences
- Strong organizational skills
- Desire to work collaboratively with colleagues
- Excellent written and verbal communication skills

ALL GHS staff are expected to:
- Perform all of their duties with integrity and diligence
- Project a professional demeanour and appearance at all times
- Develop positive relationships with members of the GHS school community, which includes parents, learners and colleagues
- Be prepared to support and uphold the ethos of the school as contained in the Mission Statement
NOTE: The responsibilities associated with this job will change from time to time in accordance with the School's needs. More specifically, the incumbent may be required to perform additional and/or different roles from those detailed in their Job Profile. The Job Profile is not intended to be an all-inclusive list of the roles and outcomes of the job described, nor is it intended to be a listing of the skills required to do the job. Rather, it is intended to describe the general nature of the job. All applications will be treated in strict confidence. The school reserves the right not to proceed with the filling of the post. An application in itself does not entitle the applicant to an interview. Under the Protection of Personal Information Act (POPIA), which came into effect on 1 July 2021, all organisations and schools alike have a legal obligation to manage the personal information they process appropriately, by applying specific principles and conditions. Our school is committed to ensuring the security and protection of your personal information and to providing a compliant and consistent approach to data protection.
https://www.oldschoolties.co.za/vacancies/item/3535-events-and-alumni-coordinator
8608 Queens Blvd, Elmhurst, NY 11373 - Retail Space 8608 Queens Blvd is located in the Elmhurst neighborhood of Elmhurst, NY 11373. The Retail building was completed in 1909 and features a total of 28,890 Sqft. There are 6 retail spaces for lease in the Elmhurst neighborhood, totaling 27,681 Sqft of available retail space. The retail space availability for the 11373 zip code is 27,681 Sqft, in 6 retail spaces. At zip code level, there are 9 commercial properties, of which 5 are retail buildings over 50,000 square feet.
https://www.commercialcafe.com/commercial-property/us/ny/elmhurst/8608-queens-blvd-1/
TECHNICAL FIELD The present disclosure relates to three-dimensional displacement measurement methods, and more particularly to a three-dimensional displacement measurement method for laser speckle images and a robotic arm positioning error compensation device operable by the same. RELATED ART The structure of a conventional robotic arm is based on an open loop formed of a series of links connected by rotational joints. The open-loop structure complicates kinematic and static analyses of the robotic arm. The positional relationship between two adjacent links in a coordinate system, the relationship between the posture of the end-effector and the joint variables, and the relationship between the force of the end-effector and the driving torque of each joint are very complicated and thus need to be simplified. On condition that speed and torque control are ignored, it is necessary to find the forward and inverse kinematics of the robotic arm first in order for the robotic arm to switch freely between the Cartesian coordinate system and the joint coordinate system. The forward kinematics uses the angle of rotation of each shaft of the robotic arm to estimate the position of a working point in three-dimensional space and the direction vector of the end-effector. The inverse kinematics, in turn, uses the three-dimensional spatial coordinates and directions to reversely estimate the rotation parameters of each shaft. Existing commercialized robotic arms are controlled by parametric kinematics. However, the errors of each component arising in the manufacture of the robotic arms are variable. Furthermore, errors arising from the assembly process of the robotic arms cannot be evaluated by any conventional measurement technique. Errors caused by these two factors lead to discrepancies between the position and direction of the predictive control of the end-effector of the robotic arm and the position and direction to which the robotic arm is actually driven.
Definition errors of a conventional robotic arm roughly fall into two categories: geometric errors and non-geometric errors. The forward kinematics of the robotic arm is a conversion relationship between the parameters of the shafts of the robotic arm and the position and direction of the end-effector of the robotic arm, and the conversion relationship is obtained by multiplying the homogeneous transformation matrices between each link and the coordinate system. Each homogeneous transformation matrix is usually expressed by Denavit-Hartenberg parameters. Causes of geometric errors include errors of the link parameters, the error between the reference coordinate system and the actual coordinate system, and the non-parallelism of the joint axes of the robotic arm. By contrast, non-geometric errors include gear backlash, bending and torsion of joints and links, thermal deformation and gearing errors. Therefore, it is important to enhance the positioning precision of the conventional robotic arm, regardless of whether a definition error of the conventional robotic arm is a geometric error or a non-geometric error. Since the errors of a robotic arm operating at different positions in its working area differ, simple compensation can only be achieved by performing absolute correction rather than by performing precision correction repeatedly. An absolute correction compensation method based on physical measurement entails compensating and correcting the absolute positioning of the end-effector of the robotic arm directly, so as to effectively improve the precision of the positioning of the end-effector of the robotic arm.
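The chain of homogeneous transformations described above can be sketched in code. Below is a minimal forward-kinematics example using the standard Denavit-Hartenberg convention; the two-link planar arm and all numeric values are hypothetical illustrations, not parameters taken from the disclosure:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """4x4 homogeneous transformation matrix for one link, built from
    standard Denavit-Hartenberg parameters (theta, d, a, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Multiply the per-link matrices to obtain the end-effector pose."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical two-link planar arm: both link lengths 0.5 m, both joints at 0 rad.
pose = forward_kinematics([(0.0, 0.0, 0.5, 0.0), (0.0, 0.0, 0.5, 0.0)])
print(pose[:3, 3])  # end-effector position in the base frame
```

Any manufacturing or assembly error in these link parameters propagates through every matrix product, which is why the disclosure turns to direct physical measurement of the end-effector instead of relying on the nominal parameters alone.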
US 2018/178339 refers to a measurement, calibration and compensation system for a machine tool that includes a first positioning base; two first speckle image sensors for sensing speckle positions of an object holding unit at a first XY plane and a first XZ plane of the first positioning base before and after the machine tool is started for machining; a second positioning base; and two second speckle image sensors for sensing speckle positions of a cutter holding unit at a second XY plane and a second YZ plane of the second positioning base before and after the machine tool is started for machining. Thus, the thermal expansion at all axes of the machine tool can be measured in a simplified and low-cost way, and the absolute positioning coordinates of all axes of the machine tool can be calibrated in real time to avoid reduced positioning accuracy due to the thermal expansion of the multi-axis machine tool. US 4,606,696 discloses a mechanism to determine the position and orientation in space of a robot linkage or the like. The mechanism typically includes a plurality of structural beams that form the linkage, and each structural beam has an associated measuring beam which, in the preferred embodiment, is housed within the structural beam in such a way that deflection of the structural beam does not impose loads on the measuring beam. Angular measuring devices and linear measuring devices serve to locate the endpoints of the measuring beam, and the information derived serves to locate the free end or endpoint (typically a gripper or the like) relative to an anchor to establish position and orientation at the endpoint.
US 2016/110628 discloses a precision calibration method applied in a high-precision rotary encoder system, wherein the primary technical feature of the precision calibration method is as follows: a laser speckle image capturing module captures N frames of laser speckle images from an optical position surface of a rotary encoding body, and image comparison libraries and particularly-designed mathematical equations are then used to calculate N image displacements, so as to eventually calculate N primary variation angles and sub variation angles corresponding to the N frames of laser speckle images. Therefore, after the rotary encoding body is rotated by an arbitrary angle, an immediate angle coordinate can be precisely positioned according to the primary variation angles, the secondary variation angles and the N image displacements. Ichirou Yamaguchi et al., "Linear and rotary encoders using electronic speckle correlation", Optical Engineering, Soc. of Photo-Optical Instrumentation Engineers, Bellingham, December 1991, vol. 30, no. 12, ISSN 0091-3286, pages 1862-1868, XP000240908, describes that speckle patterns can be used as natural surface markings that are imprinted by the high coherence of laser light. Speckle displacement caused by linear or rotary motion of a surface is detected with a linear image sensor whose output is analyzed by a real-time correlator. By using an image sensor, resolutions of the order of 1/100 deg have been attained. The main advantages of the encoders are their extreme simplicity in optical configuration and wide ranges of measurement. WO 91/19169 A1 discloses a system where beams of laser light are directed onto the surface of a rotary shaft at points spaced longitudinally along the shaft from one another. Light backscattered by the optically rough surface of the shaft is detected by means of detectors which produce output signals related to the detected intensity.
Signal-processing means associated with each beam include memory means for storing a reference waveform. The signal-processing means provide a signal indicative of the phase of the detected intensity relative to a reference waveform stored in the memory means. A comparator compares the output signals from the signal-processing means to provide an indication of torque transmitted through the shaft. Alternatively, a single beam and detector may be used to give an indication of angular position of the shaft. SUMMARY The present invention is defined by the independent claims. Preferred embodiments are subject-matter of the dependent claims. In view of the above disadvantages of the prior art, the main objective of the present disclosure is to provide a three-dimensional displacement measurement method for laser speckle images, comprising the steps of: (1) emitting a first coherent light and a second coherent light; (2) obtaining a first laser speckle image and a second laser speckle image, wherein the first coherent light is incident on a first surface of a working object and scatters to generate a first laser speckle and the first laser speckle is recorded to obtain the first laser speckle image, and the second coherent light is incident on a second surface of the working object that is perpendicular or adjacent to the first surface and scatters to generate a second laser speckle and the second laser speckle is recorded to obtain the second laser speckle image; (3) repeating the step (2) when the working object moves, wherein a third laser speckle image is obtained according to the first laser speckle and a fourth laser speckle image is obtained according to the second laser speckle, and the first laser speckle image is compared with the third laser speckle image to determine a shift direction and a shift distance of the first surface and the second laser speckle image is compared with the fourth laser speckle image to determine a shift direction and a shift distance 
of the second surface; (4) determining a three-dimensional displacement of the working object by measuring the shift direction and the shift distance of the first surface of the working object and the shift direction and the shift distance of the second surface of the working object. The three-dimensional displacement measurement method of the present disclosure provides an absolute correction compensation method based on physical measurement for correcting and compensating the absolute positioning of the end-effector of the robotic arm directly, in order to effectively improve the positioning precision of the end-effector of the robotic arm. In order to achieve the above objective, according to an aspect of the present disclosure, a three-dimensional displacement measurement method for laser speckle images is provided and comprises the steps of: (1) emitting a first coherent light and a second coherent light; (2) obtaining a first laser speckle image and a second laser speckle image, wherein the first coherent light is incident on a first surface of a working object and scatters to generate a first laser speckle and the first laser speckle is recorded to obtain the first laser speckle image, and the second coherent light is incident on a second surface of the working object that is perpendicular or adjacent to the first surface and scatters to generate a second laser speckle and the second laser speckle is recorded to obtain the second laser speckle image; (3) repeating the step (2) when the working object moves, wherein a third laser speckle image is obtained according to the first laser speckle and a fourth laser speckle image is obtained according to the second laser speckle, and the first laser speckle image is compared with the third laser speckle image to determine a shift direction and a shift distance of the first surface and the second laser speckle image is compared with the fourth laser speckle image to determine a shift direction and a shift distance of the
second surface; (4) determining a three-dimensional displacement of the working object by measuring the shift direction and the shift distance of the first surface of the working object and the shift direction and the shift distance of the second surface of the working object. Since a fully coherent light source (a light source with a single point and a single wavelength) is still not available, the coherent light mentioned in the present disclosure refers to a light source having high coherence. Therefore, the light source has a long coherence length, and the light source can be implemented by a Vertical Cavity Surface Emitting Laser (VCSEL), an Edge Emitting Laser (EEL), a high-coherence gas laser, a high-coherence solid-state laser or high-coherence laser diodes emitting narrow-band light. The first laser speckle is the laser speckle generated by the first coherent light that is incident on the first surface of the working object and scattered. The first laser speckle may also refer to a laser speckle that is scattered by the first coherent light incident on the first surface of the working object when the working object moves. Similarly, the second laser speckle may also refer to a laser speckle that is scattered by the second coherent light incident on the second surface of the working object when the working object moves. The three-dimensional displacement measurement method for laser speckle images of the present disclosure is applied to a robotic arm positioning error compensation device, and the device has a first robotic arm, two first light sources, two first image sensors and a first signal processing component. The first light sources emit the first coherent light and the second coherent light. The first coherent light and the second coherent light are incident on the first robotic arm to generate the first laser speckles and the second laser speckles.
The first image sensors record the first laser speckles and the second laser speckles to generate the first laser speckle images, the second laser speckle images, the third laser speckle images and the fourth laser speckle images. The first signal processing component processes the first laser speckle images, the second laser speckle images, the third laser speckle images and the fourth laser speckle images to determine the three-dimensional displacement of the first robotic arm. The robotic arm positioning error compensation device according to the present disclosure further has a first bracket module disposed around the first robotic arm; one end of the first bracket module is provided with the first light sources and the first image sensors, and the other end of the first bracket module is provided with two second light sources and two second image sensors. The second light sources emit the first coherent light and the second coherent light. The first coherent light and the second coherent light are incident on the first robotic arm to generate the first laser speckles and the second laser speckles. The second image sensors record the first laser speckles and the second laser speckles to generate the first laser speckle images, the second laser speckle images, the third laser speckle images and the fourth laser speckle images. The first signal processing component processes the first laser speckle images, the second laser speckle images, the third laser speckle images and the fourth laser speckle images to determine the three-dimensional displacement of the other end of the first robotic arm. Regarding the robotic arm positioning error compensation device according to the present disclosure, the material of the first bracket module is invar, super invar, a zero-thermal-expansion glass-ceramic or another material with a low thermal expansion coefficient.
Regarding the robotic arm positioning error compensation device according to the present disclosure, the first robotic arm is a cuboid, a cylinder or a semi-cylinder. According to one aspect of the present disclosure, a three-dimensional displacement measurement method for laser speckle images is provided, and the method comprises the steps of: (1) emitting a first coherent light and a second coherent light; (2) obtaining a first laser speckle image and a second laser speckle image, wherein the first coherent light is incident on a first surface of a working object and scatters to generate a first laser speckle and the first laser speckle is recorded to obtain the first laser speckle image, and the second coherent light is incident on a second surface of the working object that is perpendicular or adjacent to the first surface and scatters to generate a second laser speckle and the second laser speckle is recorded to obtain the second laser speckle image; (3) repeating the step (2) when the working object moves, wherein a third laser speckle image is obtained according to the first laser speckle and a fourth laser speckle image is obtained according to the second laser speckle, and the first laser speckle image is compared with the third laser speckle image to determine a shift direction and a shift distance of the first surface and the second laser speckle image is compared with the fourth laser speckle image to determine a shift direction and a shift distance of the second surface; (4) repeating the step (3) to generate a laser speckle group, and generating a laser speckle image database according to the laser speckle group and related position information of the first surface and the second surface; (5) repeating the step (2) and comparing the first laser speckle image and the second laser speckle image with the laser speckle image database to obtain the related position information of the first surface and the second surface for determining the three-dimensional position of
the working object. The three-dimensional displacement measurement method for laser speckle images of the present disclosure is applied to a robotic arm positioning error compensation device, and the device has a second robotic arm, two third light sources, two third image sensors and a second signal processing component. The third light sources emit the first coherent light and the second coherent light. The first coherent light and the second coherent light are incident on the second robotic arm to generate the first laser speckles and the second laser speckles. The third image sensors record the first laser speckles and the second laser speckles to generate the first laser speckle images, the second laser speckle images, the third laser speckle images and the fourth laser speckle images. The second signal processing component processes the first laser speckle images, the second laser speckle images, the third laser speckle images and the fourth laser speckle images, and moves the second robotic arm to create the laser speckle image database. After moving the second robotic arm, the second signal processing component compares the first laser speckle images and the second laser speckle images with the laser speckle image database for determining the three-dimensional position of the second robotic arm. The first laser speckle is the laser speckle generated by the first coherent light that is incident on the first surface of the working object and scattered. The first laser speckle may also refer to a laser speckle that is scattered by the first coherent light incident on the first surface of the working object when the working object moves. Similarly, the second laser speckle may also refer to a laser speckle that is scattered by the second coherent light incident on the second surface of the working object when the working object moves. A secondary robotic arm positioning error compensation device has a light source and an image sensor.
Therefore, a robotic arm positioning error compensation device can be implemented by two secondary robotic arm positioning error compensation devices and a signal processing component. The term "moving" or "move" as used in the present disclosure refers to changing the original position or direction. Therefore, the term may refer to the movement or rotation of the object, such as moving the robotic arm, rotating a shaft of the robotic arm, and the like. The robotic arm positioning error compensation device according to the present disclosure further has a second bracket module disposed around the second robotic arm. One end of the second bracket module is provided with the third light sources and the third image sensors, and the other end of the second bracket module is provided with two fourth light sources and two fourth image sensors. The fourth light sources emit the first coherent light and the second coherent light. The first coherent light and the second coherent light are incident on the second robotic arm to generate the first laser speckles and the second laser speckles. The fourth image sensors record the first laser speckles and the second laser speckles to generate the first laser speckle images, the second laser speckle images, the third laser speckle images and the fourth laser speckle images. The second signal processing component processes the first laser speckle images, the second laser speckle images, the third laser speckle images and the fourth laser speckle images and moves the second robotic arm to create the laser speckle image database. After the second robotic arm is moved, the second signal processing component compares the first laser speckle images and the second laser speckle images with the laser speckle image database for determining the three-dimensional position of the second robotic arm.
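The database-driven absolute positioning described above (record speckle images together with known positions, then match a new capture against the stored set) can be sketched as follows. This is a hedged illustration only: the `SpeckleDatabase` class, the normalized cross-correlation similarity measure and the synthetic images are assumptions for demonstration, not the disclosure's actual comparison algorithm.

```python
import numpy as np

def similarity(img_a, img_b):
    """Normalized cross-correlation coefficient of two equally sized images."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

class SpeckleDatabase:
    """Toy stand-in for the laser speckle image database of steps (4)-(5)."""

    def __init__(self):
        self.entries = []  # list of (image, position) pairs

    def add(self, image, position):
        self.entries.append((np.asarray(image, dtype=float), position))

    def locate(self, image):
        """Return the stored position whose image best matches `image`."""
        image = np.asarray(image, dtype=float)
        best = max(self.entries, key=lambda e: similarity(e[0], image))
        return best[1]

# Synthetic demonstration: three distinct random "speckle" patterns,
# each tagged with a hypothetical 3D position.
rng = np.random.default_rng(1)
db = SpeckleDatabase()
for pos in [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]:
    db.add(rng.random((32, 32)), pos)

# A noisy re-capture of the second stored image should map back to its position.
query = db.entries[1][0] + rng.normal(0.0, 0.05, (32, 32))
print(db.locate(query))
```

Because every undistorted speckle image of an unpolished surface is effectively unique, a best-match lookup of this kind can return an absolute position rather than only a relative displacement.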
Regarding the robotic arm positioning error compensation device according to the present disclosure, the material of the first bracket module is invar, super invar, a zero-thermal-expansion glass-ceramic or another material with a low thermal expansion coefficient. Regarding the robotic arm positioning error compensation device according to the present disclosure, the first robotic arm is a cuboid, a cylinder or a semi-cylinder. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a flow chart diagram of a three-dimensional displacement measurement method for laser speckle images according to an embodiment of the present disclosure; FIG. 2 is a schematic diagram of the robotic arm positioning error compensation device according to an embodiment of the present disclosure; FIG. 3 is a schematic diagram illustrative of a movement of a laser speckle of the three-dimensional displacement measurement method for laser speckle images according to the present disclosure; FIG. 4 is a schematic diagram of a robotic arm positioning error compensation device according to another embodiment of the present disclosure; FIG. 5 is a schematic diagram of a robotic arm positioning error compensation device according to another embodiment of the present disclosure; FIG. 6 is a flow chart diagram of the three-dimensional displacement measurement method for laser speckle images according to an embodiment of the present disclosure; FIG. 7 is a schematic diagram of a robotic arm positioning error compensation device according to another embodiment of the present disclosure; FIG. 8 is a partial enlarged diagram of a first shaft of the robotic arm positioning error compensation device of the embodiment according to the present disclosure; FIG. 9 is a partial enlarged diagram of a second shaft of the robotic arm positioning error compensation device of the embodiment according to the present disclosure; and FIG.
10 is a partial enlarged diagram of a first arm of the robotic arm positioning error compensation device of the embodiment according to the present disclosure. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS To make it easier for the examiner to understand the objectives, characteristics and effects of the present disclosure, embodiments, accompanying drawings and a detailed description of the present disclosure are herein provided. An undistorted laser speckle image capture device uses a non-specular-reflection two-dimensional laser speckle image capture device to measure the laser speckle at a position where the scattering angle and the reflection angle are about 10 degrees apart, and introduces an aperture to limit the incident angle of view of the scattered light of the illuminated surface into the two-dimensional image sensors (i.e., limits the imaging range of the object plane). When the parameters of the laser speckle size, the focal length of the imaging lens, the imaging angle and the object plane imaging range are properly integrated such that

(4δd/γ)cos³θ_A < λ/5,

the variation of the optical path difference corresponding to the laser speckle path is less than λ/5 when the laser speckle moves, where δ is the average laser speckle size, θ_A is the imaging angle, i.e. the included angle between the normal line of the object plane and the optical axis of the imaging lens, d is the object plane imaging range, γ is the vertical distance from the imaging lens to the object plane, and λ is the wavelength of the incident light. Therefore, a laser speckle image generated on a picture plane is unlikely to be deformed.
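As a worked illustration of the undistortion condition, the following sketch evaluates (4δd/γ)cos³θ_A against λ/5 for one hypothetical set of parameters. All numeric values are assumptions chosen for demonstration, not values given in the disclosure:

```python
import math

def optical_path_variation(delta, d, gamma, theta_a):
    """Left-hand side of the undistortion condition
    (4*delta*d/gamma) * cos^3(theta_A), all lengths in metres."""
    return (4.0 * delta * d / gamma) * math.cos(theta_a) ** 3

delta = 2e-6                   # average speckle size: 2 um (assumed)
d = 0.5e-3                     # object plane imaging range: 0.5 mm (assumed)
gamma = 50e-3                  # lens-to-object-plane distance: 50 mm (assumed)
theta_a = math.radians(10.0)   # imaging angle: 10 degrees (assumed)
wavelength = 650e-9            # incident wavelength: 650 nm (assumed)

lhs = optical_path_variation(delta, d, gamma, theta_a)
# For these values lhs is roughly 7.6e-8 m, below lambda/5 = 1.3e-7 m,
# so the captured speckle images should remain undistorted.
print(lhs < wavelength / 5)
```

Increasing the speckle size δ or the imaging range d, or shortening the working distance γ, pushes the left-hand side above λ/5 and the speckle images begin to deform as they move.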
An undistorted laser speckle is defined as follows: when the image capture device moves relative to the object plane, the laser speckle on the picture plane also moves, but its shape and intensity hardly change in the course from its appearance in to its disappearance from the imaging range of the two-dimensional image sensors. Since the laser speckle obtained by the laser speckle image capture device does not deform while moving, it is advantageous for precise laser speckle image recognition and positioning. In addition, every variation in three-dimensional texture of the surface of an unpolished object is unique, and thus every undistorted laser speckle image taken of the surface of the unpolished object is also unique. When an imaging area is large enough (for example, equal to or greater than a 0.5 mm × 0.5 mm imaging range), each of the laser speckle images is determined to be unique after tens of thousands of the laser speckle images are compared. Since each of the laser speckle images of the object plane is unique, each of the laser speckle images can provide an absolute positioning coordinate to a system after the laser speckle images are compared and positioned. Referring to FIG. 1, there is shown a flow chart diagram of a three-dimensional displacement measurement method for laser speckle images according to an embodiment of the present disclosure. The steps of the three-dimensional displacement measurement method are described below. In step (1) S110, a first coherent light and a second coherent light are emitted. In step (2) S120, a first laser speckle image and a second laser speckle image are obtained. The first coherent light is incident on a first surface of a working object and scatters to generate a first laser speckle. The first laser speckle is recorded to obtain a first laser speckle image.
The second coherent light is incident on a second surface of the working object that is perpendicular or adjacent to the first surface, and scatters to generate a second laser speckle, and the second laser speckle is recorded to obtain a second laser speckle image. In step (3) S130, step (2) is repeated when the working object moves. A third laser speckle image is generated by the first laser speckle, and a fourth laser speckle image is generated by the second laser speckle. The first laser speckle image is compared with the third laser speckle image to determine a shift direction and a shift distance of the first surface, and the second laser speckle image is compared with the fourth laser speckle image to determine the shift direction and the shift distance of the second surface. In step (4) S140, a three-dimensional displacement of an object is determined by measuring the shift direction and the shift distance of the first surface of the working object and the shift direction and the shift distance of the second surface of the working object. Referring to FIG. 2, there is shown a schematic diagram of the robotic arm positioning error compensation device according to an embodiment of the present disclosure. The robotic arm positioning error compensation device has a robotic arm (210), two light sources (221, 222) and two image sensors (231, 232). A first surface (211) of the robotic arm (210) is adjacent to a second surface (212) of the robotic arm (210). A first secondary light source (221) illuminates the first surface (211) of the robotic arm, and a first secondary image sensor (231) obtains the laser speckle image generated by the first secondary light source (221). Similarly, a second secondary light source (222) illuminates the second surface (212) of the robotic arm, and a second secondary image sensor (232) obtains the laser speckle image generated by the second secondary light source (222).
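Step (3) above compares an earlier speckle image with a later one to find the in-plane shift of a surface. A minimal sketch of such a comparison is given below, using a brute-force integer-pixel sum-of-absolute-differences (SAD) search; the disclosure later names sub-pixel alignment methods such as SIFT or NCC, so this function is only an assumed, simplified stand-in:

```python
import numpy as np

def estimate_shift(img_a, img_b, max_shift=4):
    """Return the integer shift (dx, dy) such that img_b best matches
    img_a displaced by (dx, dy); minimizes the sum of absolute
    differences over all candidate shifts within +/-max_shift pixels."""
    best_sad, best_shift = None, (0, 0)
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            # undo the candidate shift on img_b and compare with img_a
            shifted = np.roll(np.roll(img_b, -dx, axis=1), -dy, axis=0)
            sad = np.abs(img_a - shifted).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_shift = sad, (dx, dy)
    return best_shift
```

In practice a sub-pixel refinement around the best integer shift would follow; this sketch stops at whole pixels.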
In the embodiment, the robotic arm is a cuboid, but the present disclosure is not limited thereto. In a preferred embodiment, two adjacent surfaces of the robotic arm are configured on one side of the robotic arm and perpendicular to each other. In a broader embodiment, any robotic arm having two adjacent surfaces that are not parallel to each other and configured on one side of the robotic arm is suitable for the robotic arm positioning error compensation device according to the present disclosure. Referring to FIG. 3, there is shown a schematic diagram illustrative of a shift of the laser speckle of the three-dimensional displacement measurement method for the laser speckle image according to the present disclosure. FIG. 3 shows a working object, a first laser speckle image (321), a second laser speckle image (322), a third laser speckle image (331) and a fourth laser speckle image (332), and the working object has a first surface (311) and a second surface (312) that are perpendicular or adjacent to each other. At an initial time t1, images of the laser speckles from the first surface (311) of the coordinate Z = Z0 and the second surface (312) of the coordinate X = X0 are obtained and recorded. The first laser speckle image (321) is obtained from the first surface (311), and the second laser speckle image (322) is obtained from the second surface (312). At another time t2, the third laser speckle image (331) is obtained from the first surface (311) and the fourth laser speckle image (332) is obtained from the second surface (312). A first displacement (ΔX, ΔY) of the working object in a time interval Δt (t2 − t1) is obtained by comparing the position of the laser speckle in the first laser speckle image (321) with the position of the laser speckle in the third laser speckle image (331). The displacement is a relative displacement between the working object and an observer (usually the observer is an image sensor used to record the laser speckle).
Similarly, a second displacement (ΔY, ΔZ) of the working object in the time interval Δt (t2 − t1) is obtained by comparing the position of the laser speckle in the second laser speckle image (322) with the position of the laser speckle in the fourth laser speckle image (332). A three-dimensional displacement (ΔX, ΔY, ΔZ) of the working object is obtained by integrating the first displacement and the second displacement. Referring to FIG. 4, there is shown a schematic view of a robotic arm positioning error compensation device according to another embodiment of the present disclosure. The robotic arm positioning error compensation device has a robotic arm (410), two light sources (421, 422) and two image sensors (431, 432). The robotic arm (410) has a first surface (411) adjacent or perpendicular to a second surface (412) of the robotic arm (410). A first secondary light source (421) illuminates the first surface (411) of the robotic arm (410), and a first secondary image sensor (431) obtains a first laser speckle image (441) generated by the first secondary light source (421). Similarly, a second secondary light source (422) illuminates the second surface (412) of the robotic arm (410), and a second secondary image sensor (432) obtains a second laser speckle image (442) generated by the second secondary light source (422). In the embodiment, the robotic arm is a cylinder, and the measurement may use a cylindrical coordinate system or a Cartesian coordinate system. The cylindrical coordinate system (in which the height is the z axis, the r axis is perpendicular to the z axis, and the angle between the X axis and the r axis on the X-Y plane is θ degrees) is used as the positioning coordinate in the following descriptions. At an initial time t1, the first laser speckle image (441) is at a position (Z = h, r = R1, θ = θ1) of one side, and the second laser speckle image (442) is at a position (Z = 0, r = R2, θ = θ2) of a bottom.
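The integration of the two planar displacements described above (a first displacement (ΔX, ΔY) from one surface and a second displacement (ΔY, ΔZ) from the other combined into (ΔX, ΔY, ΔZ)) can be sketched as follows. The consistency check and the averaging of the shared ΔY component are illustrative choices of this sketch, not steps stated in the disclosure:

```python
def combine_displacements(first, second, tol=1e-6):
    """first: (dX, dY) measured on the first surface; second: (dY, dZ)
    measured on the second surface.  The Y component is observed by both
    sensors, so it is cross-checked and then averaged before the
    three-dimensional displacement (dX, dY, dZ) is assembled."""
    dx, dy_first = first
    dy_second, dz = second
    if abs(dy_first - dy_second) > tol:
        raise ValueError("the two sensors disagree on the shared Y displacement")
    return (dx, (dy_first + dy_second) / 2.0, dz)
```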
At another time t2, images of the first laser speckle image (441) and the second laser speckle image (442) are obtained and recorded as laser speckle images. A first displacement (Δθ, Δz) of the robotic arm in a time interval Δt (t2 − t1) is obtained by comparing the laser speckle image of the first laser speckle image (441) at time t2 with the laser speckle image of the first laser speckle image (441) at time t1. A second displacement (Δr, Δθ) of the robotic arm in the time interval Δt (t2 − t1) is obtained by comparing the laser speckle image of the second laser speckle image (442) at time t2 with the laser speckle image of the second laser speckle image (442) at time t1. A three-dimensional relative displacement (Δr, Δθ, Δz) of the robotic arm is obtained by combining the first displacement and the second displacement, and the three-dimensional relative displacement (Δr, Δθ, Δz) can be converted to a relative displacement (ΔX, ΔY, ΔZ) in the Cartesian coordinate system. Since displacements of the laser speckles along the direction perpendicular to the image capturing plane cannot be recorded by the sensor, the displacement component perpendicular to the sensor device cannot be sensed. Therefore, one sensor can only measure displacements along two axes. Referring to FIG. 5, there is shown a schematic view of a robotic arm positioning error compensation device according to another embodiment of the present disclosure. Because the robotic arm may be deformed due to excessive load or long-term wear, the deformation of the robotic arm must be measured in order for correction and compensation to be carried out. FIG. 5 shows a robotic arm positioning error compensation device according to another embodiment of the present disclosure. The robotic arm positioning error compensation device has a robotic arm (510), four light sources, four image sensors, a bracket module and a signal processing component.
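The conversion from the cylindrical relative displacement (Δr, Δθ, Δz) to the Cartesian relative displacement (ΔX, ΔY, ΔZ) mentioned above requires the absolute positions, since a change in θ moves a point along an arc. Below is a hedged sketch assuming the two absolute cylindrical positions are available; the function names are illustrative:

```python
import math

def cylindrical_to_cartesian(r, theta_deg, z):
    """Convert a cylindrical coordinate (r, theta in degrees, z) to a
    Cartesian coordinate (X, Y, Z)."""
    t = math.radians(theta_deg)
    return (r * math.cos(t), r * math.sin(t), z)

def displacement_cartesian(p_initial, p_final):
    """Cartesian relative displacement (dX, dY, dZ) between two cylindrical
    positions, each given as (r, theta_deg, z)."""
    x1, y1, z1 = cylindrical_to_cartesian(*p_initial)
    x2, y2, z2 = cylindrical_to_cartesian(*p_final)
    return (x2 - x1, y2 - y1, z2 - z1)
```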
The robotic arm (510) has a first surface (511) and a second surface (512). The first surface (511) is perpendicular or adjacent to the second surface (512). The bracket module has a first secondary bracket (521) and a second secondary bracket (522). The bracket module is connected to a base (530), but not to the robotic arm (510). The first secondary bracket (521) is disposed on the same side as the first surface (511). The second secondary bracket (522) is disposed on the same side as the second surface (512). A material of the bracket module is Invar (or a material such as super Invar or a zero-thermal-expansion glass-ceramics that has a low thermal expansion coefficient). The bracket module is fixed and strengthened to the bottom of the robotic arm, and each bracket carries no load other than two image sensors and two light sources. Since the load of each bracket is light, the first and second secondary brackets (521, 522) can be regarded as no-load devices, so that long-term use of the brackets is assured and the displacement of the relative position between the bracket and the bottom of the robotic arm remains less than the system requirements specification. The four light sources generate four laser speckle images (541, 542, 551, 552). The robotic arm (510) has a first surface (511), a second surface (512), a first end (513) and a second end (514), and the first surface (511) is perpendicular to or adjacent to the second surface (512). A first laser speckle image group (541, 542) is on the first end (513) and a second laser speckle image group (551, 552) is on the second end (514). A first laser speckle image (541) of the first laser speckle image group (541, 542) and a first laser speckle image (551) of the second laser speckle image group (551, 552) are on the first surface (511).
A second laser speckle image (542) of the first laser speckle image group (541, 542) and a second laser speckle image (552) of the second laser speckle image group (551, 552) are on the second surface (512). A three-dimensional relative displacement of the first end (513) is obtained by the first laser speckle image group (541, 542) and the three-dimensional displacement measurement method for the laser speckle images. Similarly, a three-dimensional relative displacement of the second end (514) is obtained by the second laser speckle image group (551, 552) and the three-dimensional displacement measurement method for the laser speckle images. Therefore, a degree of deformation of the robotic arm is obtained from the three-dimensional relative displacements between the two ends (513, 514) of the robotic arm. Referring to FIG. 6, there is shown a flow chart diagram of the three-dimensional displacement measurement method for the laser speckle image according to another embodiment of the present disclosure. The steps of the method are described below. In step (1) S610, a first coherent light and a second coherent light are emitted. In step (2) S620, a first laser speckle image and a second laser speckle image are obtained. The first coherent light is incident on a first surface of a working object and scatters to generate a first laser speckle. The first laser speckle is recorded to obtain a first laser speckle image. The second coherent light is incident on a second surface of the working object that is perpendicular or adjacent to the first surface and scatters to generate a second laser speckle, and the second laser speckle is recorded to obtain a second laser speckle image. In step (3) S630, step (2) is repeated when the working object moves. A third laser speckle image is generated by the first laser speckle, and a fourth laser speckle image is generated by the second laser speckle.
The first laser speckle image is compared with the third laser speckle image to determine a shift direction and a shift distance of the first surface, and the second laser speckle image is compared with the fourth laser speckle image to determine the shift direction and the shift distance of the second surface. In step (4) S640, step (3) is repeated to generate a laser speckle image group, and the related position information between the laser speckle image group and the first and second surfaces is made into a laser speckle image database. In step (5) S650, step (2) is repeated. The first laser speckle image and the second laser speckle image are compared with the laser speckle image database to obtain the related position information of the first surface and the second surface in order to determine a three-dimensional position of the working object. The three-dimensional positioning measurement method for the laser speckle images is also applicable to the robotic arm positioning error compensation device shown in FIG. 2 to FIG. 5. In the following descriptions, FIG. 3 illustrates an embodiment of the three-dimensional positioning measurement method for the laser speckle images. At an initial time t1, images of the laser speckles from the first surface (311) of the coordinate Z = Z0 and the second surface (312) of the coordinate X = X0 are obtained and recorded. The first laser speckle image (321) is obtained from the first surface (311), and the second laser speckle image (322) is obtained from the second surface (312). At another time t2, the third laser speckle image (331) is obtained from the first surface (311) and the fourth laser speckle image (332) is obtained from the second surface (312). A first displacement (ΔX, ΔY) of the working object in a time interval Δt (t2 − t1) is obtained by comparing the position of the laser speckle in the first laser speckle image (321) with the position of the laser speckle in the third laser speckle image (331).
The working object is repeatedly moved and the first displacement is repeatedly measured. By measuring the first displacement, a first secondary laser speckle image database is created from the laser speckle image group that corresponds to the two-dimensional position of the first surface (311). Similarly, a second secondary laser speckle image database is created from the laser speckle image group that corresponds to the two-dimensional position of the second surface (312). The first secondary laser speckle image database and the second secondary laser speckle image database are integrated into a laser speckle image database. After the laser speckle image database is created, it is only necessary to measure the first laser speckle image on the first surface and compare the first laser speckle image with the first secondary laser speckle image database, and then measure the second laser speckle image on the second surface and compare the second laser speckle image with the second secondary laser speckle image database, so as to obtain a three-dimensional instant coordinate of the working object. Therefore, by using the three-dimensional positioning measurement method for the laser speckle images, after the laser speckle image database is created, the movement of the working object does not need to be continuously measured, and the working object can be positioned with a single measurement. FIGS. 7-10 are schematic diagrams of the robotic arm positioning error compensation device according to another embodiment of the present disclosure. Generally, the multi-axis robotic arm determines the positioning coordinates of the end-effector by a rotation angle parameter of each shaft, but cannot take into account deformation variables between the transmission shafts. The deformation variables comprise thermal expansion, heat drift, gear backlash, torsional deformation, wear deformation, etc.
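The database-based absolute positioning described above (record speckle images keyed by position, then locate an instant image by finding its best match) can be sketched as follows. The data layout and the sum-of-absolute-differences matching score are assumed simplifications of the sub-pixel alignment the disclosure names elsewhere:

```python
import numpy as np

def build_database(samples):
    """samples: iterable of (coordinate, speckle_image) pairs recorded
    while the working object is stepped through its range of motion."""
    return [(coord, np.asarray(img, dtype=float)) for coord, img in samples]

def locate(database, instant_image):
    """Return the stored coordinate whose speckle image best matches the
    instant image (minimum sum of absolute differences)."""
    instant_image = np.asarray(instant_image, dtype=float)
    scores = [np.abs(ref - instant_image).sum() for _, ref in database]
    return database[int(np.argmin(scores))][0]
```

Because every speckle image of the surface is unique, the nearest match serves as an absolute coordinate rather than an incrementally accumulated one.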
In this embodiment, three-dimensional positioning variables between two adjacent transmission shafts and an absolute rotation angle of each shaft are measured by the robotic arm positioning error compensation device of the present disclosure, and are used to sequentially calibrate the absolute positioning coordinates of each transmission shaft corresponding to the base of the robotic arm and up to the positioning coordinate of the end-effector. Since the absolute positioning of actual motion of the laser speckle on the surface of the transmission shaft is utilized, it can be used to correct positioning errors of the original rotation angle parameters, thereby obtaining a positioning precision beyond the original specification. In the embodiment, the robotic arm is a multi-axis robotic arm (700) and has a base (710), a first shaft (721), a second shaft (722), a third shaft (723), a fourth shaft (724), a fifth shaft (725), a sixth shaft (726), a first arm (731), a second arm (732), a third arm (733), a fourth arm (734), an end-effector (735), a first robotic arm positioning error compensation device (741), a second robotic arm positioning error compensation device (742), a third robotic arm positioning error compensation device (743), a fourth robotic arm positioning error compensation device (744), a fifth robotic arm positioning error compensation device (745), a sixth robotic arm positioning error compensation device (746), a seventh robotic arm positioning error compensation device (747), an eighth robotic arm positioning error compensation device (748), a ninth robotic arm positioning error compensation device (749) and a tenth robotic arm positioning error compensation device (750).
The first shaft (721) is disposed on the base (710) and rotates the second shaft (722), the second shaft (722) rotates the first arm (731), the first arm (731) drives the third shaft (723), the third shaft (723) rotates the second arm (732), the second arm (732) drives the fourth shaft (724), the fourth shaft (724) rotates the third arm (733), the third arm (733) drives the fifth shaft (725), the fifth shaft (725) rotates the fourth arm (734), the fourth arm (734) drives the sixth shaft (726) and the sixth shaft (726) rotates the end-effector (735). The first robotic arm positioning error compensation device (741) measures the first shaft (721), the second robotic arm positioning error compensation device (742) measures the second shaft (722), the third robotic arm positioning error compensation device (743) measures the first arm (731), the fourth robotic arm positioning error compensation device (744) measures the third shaft (723), the fifth robotic arm positioning error compensation device (745) measures the second arm (732), the sixth robotic arm positioning error compensation device (746) measures the fourth shaft (724), the seventh robotic arm positioning error compensation device (747) measures the third arm (733), the eighth robotic arm positioning error compensation device (748) measures the fifth shaft (725), the ninth robotic arm positioning error compensation device (749) measures the fourth arm (734), and the tenth robotic arm positioning error compensation device (750) measures the sixth shaft (726). The following descriptions illustrate the operation method by partial enlarged diagrams of the multi-axis robotic arm. FIG. 8 is a partial enlarged diagram of the first shaft of the robotic arm positioning error compensation device of an embodiment according to the present disclosure.
The first robotic arm positioning error compensation device has a first secondary robotic arm positioning error compensation device (831) and a second secondary robotic arm positioning error compensation device (832) that are disposed on a first surface (821) of a first shaft and a second surface (822) of the first shaft respectively. First, the first shaft is rotated one turn to create a laser speckle image database covering the first surface (821) of the first shaft and the second surface (822) of the first shaft. Next, a shaft center point on a bottom surface of the base (710) is set as a reference coordinate origin to establish a Cartesian coordinate system and a cylindrical coordinate system. The first shaft is continuously rotated one turn at a small angle interval. Whenever the first shaft rotates by one interval, laser speckle images are captured and recorded at the positions of the first secondary robotic arm positioning error compensating device (831) and the second secondary robotic arm positioning error compensating device (832) respectively, and each pair of adjacent laser speckle images overlaps with an overlap area greater than 1/2 of the area of a laser speckle image. A relative displacement of the adjacent laser speckle images is measured by a laser speckle sub-pixel alignment method, such as Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD) or Normalized Cross-Correlation (NCC). The relative displacements of all adjacent laser speckle images are continuously compared, and all the displacements are summed and the sum is set as the laser speckle image circumference length. Each displacement is divided by the laser speckle image circumference length and multiplied by 360 degrees to obtain the angles between all adjacent laser speckle images.
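The angle-calibration rule just described (sum all adjacent displacements to obtain the speckle-image circumference, then scale each cumulative displacement by 360 degrees over the circumference) can be sketched as:

```python
def calibrate_angles(adjacent_displacements):
    """adjacent_displacements: arc-length shift between each pair of
    adjacent speckle images over one full turn of the shaft (the last
    entry closes the loop back to the first image).  Returns the
    coordinate angle, in degrees, of each image; the first image is the
    0-degree coordinate laser speckle image."""
    circumference = sum(adjacent_displacements)
    angles, running = [0.0], 0.0
    for d in adjacent_displacements[:-1]:
        running += d
        angles.append(running / circumference * 360.0)
    return angles
```

Because the circumference is measured from the same data as the individual shifts, a uniform scale error in the alignment method cancels out of the resulting angles.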
Assuming that the first laser speckle image is the 0-degree coordinate laser speckle image, the remaining adjacent laser speckle images sequentially calibrate the coordinate angles with the adjacent angles to form coordinate laser speckle images. Therefore, the laser speckle image database having coordinates is created by rotating the first shaft one turn at the two positions of the first secondary robotic arm positioning error compensating device (831) and the second secondary robotic arm positioning error compensating device (832). After the laser speckle image database is completed, a laser speckle image obtained at any other time is called an instant laser speckle image. All coordinate laser speckle images of the laser speckle image database and the instant laser speckle image are used to measure the absolute positioning coordinate of the instant laser speckle image by using the laser speckle sub-pixel alignment method (such as SIFT, SURF, SAD, SSD or NCC) to locate the three-dimensional position of the first shaft. Assume that the absolute positioning coordinate (XA, YA, hA) of the instant laser speckle image can be switched from a Cartesian coordinate to a cylindrical coordinate (rA, θA, hA); the measured value (the cylindrical coordinate (rA, θA, hA)) is compared with a rotation angle θset, a preset distance rset and a preset height Zset of forward movement or inverse movement, and the differences (Δr = rset − rA, Δθ = θset − θA, ΔZ = Zset − hA) between the actual rotation of the first shaft and the preset position parameters can be obtained. This three-dimensional difference can be instantly fed back to a servo motor system for immediate positioning correction compensation. FIG. 9 is a partial enlarged diagram of the second shaft of the robotic arm positioning error compensation device of the embodiment according to the present disclosure.
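The feedback computation above (convert the measured Cartesian coordinate to a cylindrical coordinate and subtract it from the preset parameters) can be sketched as follows. The subscripted names follow the reconstruction above, and the sign convention (preset minus measured) is an assumption of this sketch:

```python
import math

def positioning_error(instant_xyh, preset_rtz):
    """instant_xyh: measured absolute coordinate (X_A, Y_A, h_A) of the
    instant laser speckle image.  preset_rtz: preset parameters
    (r_set, theta_set in degrees, Z_set).  Returns the differences
    (dr, dtheta_deg, dZ) to feed back to the servo motor system."""
    x_a, y_a, h_a = instant_xyh
    r_a = math.hypot(x_a, y_a)                    # measured radius
    theta_a = math.degrees(math.atan2(y_a, x_a))  # measured angle
    r_set, theta_set, z_set = preset_rtz
    return (r_set - r_a, theta_set - theta_a, z_set - h_a)
```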
The second robotic arm positioning error compensation device has a third secondary robotic arm positioning error compensation device (931) and a fourth secondary robotic arm positioning error compensation device (932) that are disposed on a first surface (921) of a first arm (910) and a second surface (922) of the first arm (910) respectively. Since the first arm (910) can only be rotated to 180 degrees, the method (the 360-degree self-calibration method) as shown in FIG. 8 cannot be used to create the laser speckle image database. As a result, another calibration method must be used to create the laser speckle image database. A positioning disc is disposed at a center of the second shaft (722), and a laser speckle image database of the disc is created by the three-dimensional positioning measurement method for the laser speckle images of the present disclosure. Therefore, the precise angle of the disc can be obtained from the laser speckle image. A secondary robotic arm positioning error compensation device is mounted on the second surface (822) of the first shaft to obtain the laser speckle image of the disc. The laser speckle image of the disc is compared with the laser speckle image database of the disc, and the rotation angle of the second shaft is obtained. At a small angle interval, the first arm (910) is continuously rotated half a circle (or so as to cover a required range of work). Whenever the first arm (910) rotates by one interval, the laser speckle images are captured and recorded at the positions of the third secondary robotic arm positioning error compensating device (931) and the fourth secondary robotic arm positioning error compensating device (932) respectively, and each pair of adjacent laser speckle images overlaps with an overlap area greater than 1/2 of the area of a laser speckle image. At the same time, the rotation angle is read from the positioning disc, so that the first laser speckle image is the 0-degree coordinate laser speckle image.
Then the other adjacent laser speckle images can use the rotation angles read from the positioning disc to sequentially calibrate the coordinate angles to become coordinate laser speckle images. Therefore, the laser speckle image database having coordinates is created by rotating the first arm (910) half a circle (or so as to cover the required range of work) at the two positions of the third secondary robotic arm positioning error compensating device (931) and the fourth secondary robotic arm positioning error compensating device (932). After the laser speckle image database is created, a laser speckle image obtained at any other time is an instant laser speckle image. All coordinate laser speckle images of the laser speckle image database and the instant laser speckle image are used to measure the absolute positioning coordinate of the instant laser speckle image by using the laser speckle sub-pixel alignment method (SIFT, SURF, SAD, SSD, NCC, etc.) to locate the three-dimensional position of the first arm. Assume that the absolute positioning coordinate (XA, YA, hA) of the instant laser speckle image can be switched from a Cartesian coordinate to a cylindrical coordinate (rA, θA, hA); the measured value (the cylindrical coordinate (rA, θA, hA)) is compared with a rotation angle θset, a preset distance rset and a preset height Zset of forward movement or inverse movement, and the differences (Δr = rset − rA, Δθ = θset − θA, ΔZ = Zset − hA) between the actual rotation of the first arm and the preset position parameters can be obtained. The three-dimensional differences can be instantly fed back to the servo motor system for immediate positioning correction compensation. The third shaft, the fourth shaft, the fifth shaft, and the sixth shaft also operate on the same principle, that is, the shaft drives the arm to rotate. FIG. 10 is a partial enlarged diagram of the first arm of the robotic arm positioning error compensation device of the embodiment according to the present disclosure.
The third robotic arm positioning error compensation device (743) has a fifth secondary robotic arm positioning error compensation device (1031), a sixth secondary robotic arm positioning error compensation device (1032), a seventh secondary robotic arm positioning error compensation device (1033) and an eighth secondary robotic arm positioning error compensation device (1034), and the fifth secondary robotic arm positioning error compensation device (1031), the sixth secondary robotic arm positioning error compensation device (1032), the seventh secondary robotic arm positioning error compensation device (1033) and the eighth secondary robotic arm positioning error compensation device (1034) are disposed at the two ends of a first bracket (1040) respectively. The fifth secondary robotic arm positioning error compensation device (1031) and the sixth secondary robotic arm positioning error compensation device (1032) are fixed to a front end of the first bracket (1040) and measure a first surface (1021) of the first arm and a second surface (1022) of the first arm, respectively. The seventh secondary robotic arm positioning error compensation device (1033) and the eighth secondary robotic arm positioning error compensation device (1034) are fixed to a rear end of the first bracket (1040) and measure the first surface (1021) of the first arm and the second surface (1022) of the first arm, respectively. The front end of the first bracket (1040) refers to the end near the center of the second shaft (722). In this embodiment, the first bracket (1040) is fixed on the first arm (910) near one end of the second shaft (722) and maintains a certain distance from the surface of the first arm. The first bracket (1040) is made of Invar, super Invar, zero-thermal-expansion glass-ceramics or other low-thermal-expansion materials.
The first bracket (1040) is only configured with the third robotic arm positioning error compensation device (743), and has no other load, so the load of the first bracket (1040) is very light (and thus it is also known as a "no-load bracket"). This ensures that the displacement of the relative position of the first bracket (1040) remains small (necessarily smaller than what is required by the system specification) even after long use. When the first arm is assembled and corrected, the laser speckles at the four positions of the first arm are captured by the four secondary robotic arm positioning error compensation devices on the first bracket and recorded as coordinate laser speckle images, respectively. The coordinate laser speckle images are further combined with coordinates to create a laser speckle image database. When the robotic arm is in operation, the laser speckles at the four positions are read at any time, and all the coordinate laser speckle images of the database and the laser speckles are compared and positioned by using the laser speckle sub-pixel alignment method such as SIFT, SURF, SAD, SSD, NCC, and the like. The absolute coordinate displacements between the instant positions and the initial positions of the four positions on the surface of the first arm can be accurately obtained. It is assumed that the fifth secondary robotic arm positioning error compensation device (1031) measures a displacement (0, ΔY5, ΔZ5), the sixth secondary robotic arm positioning error compensation device (1032) measures a displacement (ΔX6, ΔY6, 0), the seventh secondary robotic arm positioning error compensation device (1033) measures a displacement (0, ΔY7, ΔZ7), and the eighth secondary robotic arm positioning error compensation device (1034) measures a displacement (ΔX8, ΔY8, 0).
The displacement of the fifth secondary robotic arm positioning error compensation device (1031) is subtracted from the displacement of the seventh secondary robotic arm positioning error compensation device (1033) (ΔY75 = ΔY7 - ΔY5, ΔZ75 = ΔZ7 - ΔZ5) to obtain a deformation displacement on the Y-Z plane (ΔY75, ΔZ75). The displacement of the sixth secondary robotic arm positioning error compensation device (1032) is subtracted from the displacement of the eighth secondary robotic arm positioning error compensation device (1034) (ΔX86 = ΔX8 - ΔX6, ΔY86 = ΔY8 - ΔY6) to obtain a deformation displacement on the X-Y plane (ΔX86, ΔY86). By integrating the two deformation displacements, the three-dimensional relative displacement (ΔX, ΔY, ΔZ) between the front end and the rear end of the first arm can be obtained, so that the body deformation of the first arm can be obtained. The second arm operates on the same principle; that is, its degree of deformation must likewise be measured. According to the present disclosure, no matter how many shafts and arms the robotic arm has, the three-dimensional positioning error and the deformation error of all the transmission components from the base of the robotic arm to the end-effector can be measured through the relevant error correction parameters and provided to the servo control system for instant absolute positioning compensation and correction. Because the image comparison uses laser speckle images and conventional image processing techniques, the positioning error can be smaller than 0.01 of the pixel size of the image sensor, so good positioning precision is provided. Moreover, because of the absolute positioning characteristics of the laser speckle image, there are no accumulated correlation errors, so the present disclosure can provide high positioning precision for a multi-axis robotic arm.
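The differencing described above amounts to simple vector arithmetic. The sketch below is a hypothetical illustration: the function name and sample values are mine, and the averaging of the doubly observed Y component is an assumption, since the disclosure only states that the two planar results are integrated.

```python
# Hypothetical sketch of the differencing step described above.

def arm_deformation(d5, d6, d7, d8):
    """Combine four per-sensor displacements into one 3-D deformation.

    d5, d7: (dY, dZ) displacements measured on the Y-Z plane (first surface)
    d6, d8: (dX, dY) displacements measured on the X-Y plane (second surface)
    Sensors 5 and 6 sit at the front end of the bracket, 7 and 8 at the rear.
    """
    # Y-Z plane: rear minus front gives the bending in Y and Z.
    dy_75 = d7[0] - d5[0]
    dz_75 = d7[1] - d5[1]
    # X-Y plane: rear minus front gives the bending in X (and Y again).
    dx_86 = d8[0] - d6[0]
    dy_86 = d8[1] - d6[1]
    # Integrate the two planar results into one (dX, dY, dZ) triple.
    # The Y component is observed on both planes; averaging it is an
    # assumption, not something the disclosure specifies.
    return (dx_86, (dy_75 + dy_86) / 2.0, dz_75)
```

With illustrative front/rear readings such as `d5=(1.0, 0.5)`, `d6=(0.2, 1.2)`, `d7=(3.0, 2.5)`, `d8=(0.6, 3.0)` (units arbitrary), the function returns the rear-minus-front deformation triple used for compensation.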
The three-dimensional displacement measurement method of the present disclosure provides an absolute correction and compensation method based on physical measurement, directly correcting and compensating the absolute positioning of the end-effector of the robotic arm in order to effectively improve the positioning precision of the end-effector.
The NORTHERN PLAINS BOTANIC GARDEN SOCIETY (NPBGS), in response to inquiries, and as a development tool, is creating opportunities for tributes, memorials, and gifts. As a Society in the infancy of the development of its fifty-five acre site, the Board of Directors desires to establish clear and appropriate guidelines for these memorials for the following reasons:

1. To create a formal agreement between donors and the NPBGS, thereby eliminating future misunderstandings and benefiting the progress of the site development.
2. To accept only memorials and objects that relate to the mission and the master plan of the NPBGS.
3. To accept only memorials and objects which are compatible with aesthetic and design criteria established by the Board.
4. To establish monetary amounts for donor opportunities that are sufficient to honor the subject, person, event, or situation, and where appropriate, an additional amount for maintenance (or replacement) of the memorial. The amount may be altered as conditions warrant.
5. To establish an appropriate period of time after which the naming rights, tributes or memorials expire, unless a renewed commitment is made, or provision is made for a renewed commitment.

Regulations:

1. The Board will not approve a memorial of a religious nature. The Board reserves the right to determine whether the content of a proposed memorial is religious in nature.
2. A Memorial Fund has been established. The records of the Memorial Fund will be made available for public viewing upon request.
3. Any signs or identification labels on or near the memorial article will only make reference to names or events, and will be of a size, design, and location as determined by the NPBGS.
4. Damage by natural causes shall not be the responsibility of the NPBGS. This shall not be cause for the NPBGS to replace the memorial.

GARDEN MEMORIAL POLICY

The Master Plan locates gardens throughout the site. The potential for a memorial by sponsoring a garden is available.
Each garden will be analyzed for content, cost and design if it is not part of the current planting plan. The garden plan will be reviewed with the prospective donor. All final decisions made in regard to the location and design will be the right of the Garden Committee of the NPBGS and the NPBGS Board of Directors, in conjunction with the Fargo Park District. The original design may change periodically due to weather, diseases, pests, etc., while it maintains its status as the memorial. In general, the cost associated with the intended garden will include materials and labor, and an appropriate amount for signage, maintenance and replacement, such as annual plantings. Total cost and size of the garden shall be considerations for determining the length of the memorial period. Donors may also adopt an existing garden as a memorial, and donor funds would be used for upkeep of an existing garden. Costs will depend on the size and type of the existing garden, and will again include signage, maintenance, plant replacement, etc. The period of time for donor recognition shall be negotiated. TREE MEMORIAL POLICY The Master Plan designates many areas for new tree planting. The potential for designating a memorial tree is available. Some of the plans are groups of trees for screening or mass effect. While these group plantings may not be as attractive to a potential donor as single ornamental trees, their function and purpose are no less important. Donors may consider sponsoring groves or groups of tree plantings. Individual tree memorial locations shall follow the Master Plan in location and species. The cost to the donor associated with the memorial tree is intended to furnish the Society with sufficient funds to purchase and maintain the tree(s), and possibly tree replacement. Trees have a relatively low initial cost, but have high maintenance costs over the life of the tree. Watering, pruning, and treating for diseases can consume a large part of the maintenance budget. 
Tree memorial amounts are based on trunk diameter (see the schedule below). The Arboretum, which is the designated 450 feet of the site south of 32nd Avenue, will be a collection of trees and shrubs with signage describing their nature and characteristics while creating an area of repose and relaxation in all seasons. The Arboretum offers memorial opportunities for trees and shrubs as well as benches, paths, landscaping, signage, and other hardscaping. All final decisions made in regard to the location, number, and species of trees will be the right of the Arboretum Committee of the NPBGS and the NPBGS Board of Directors, in conjunction with the Fargo Park District.

TREE MEMORIAL SCHEDULE

Listed below are suggested donations for tree memorials and commemorative trees, as determined by the tree trunk diameter at breast height (DBH), measured at 1.5 meters above ground level.

DBH         Donation
Up to 3"    $1,500
4-5"        $2,000
6-7"        $2,500
8-9"        $3,500
Over 9"     $5,000

BENCHES, PATHS, STRUCTURES, WALLS, AND OTHER HARDSCAPING POLICY

The Master Plan makes provisions for special items of construction throughout the development. To maintain the character of design in each area of the site, the NPBGS, within the guidelines of the standing agreement with the Fargo Park District, will make the final decisions as to need, location, design, materials, and appropriate nature. The construction could be a bench in a particular garden or along an open path, a Japanese Garden bridge, a winding path in a wooded picnic area, or a stone wall in a rock garden. The possibilities are so vast that it is not possible to describe all of the naming opportunities. It is suggested that a meeting be arranged to create an agreement. Because a bench is a popular memorial and relatively easy to define, a minimum sum of $5,000 is recommended. Bench style, materials and character will be determined.
SMALL MEMORIAL GIFTS

Small memorial gifts (less than $5,000) to the Northern Plains Botanic Garden Society that are not otherwise designated shall be put into the Memorial Fund for garden construction as determined by the Board.

NAMING POLICY
NAMING RIGHTS FOR BUILDINGS AND GARDENS

A four-season conservatory is planned, including adjunct spaces such as administrative offices, a gift shop, classroom/multi-purpose meeting spaces, and other supportive spaces, but construction will not occur for several years. Naming-rights donors nevertheless receive recognition immediately, the same as for any object that is placed or built at once. Other structures offering possible naming opportunities are the planned Japanese Garden Pavilion, Support Greenhouses, Maintenance Sheds, and Gated Entries. Donated money will be placed in escrow to gain interest until used for the intended purpose. The following are some examples of possible naming opportunities for rooms, equipment, and functional spaces: Gift Shop; Classrooms; Library; Exhibit Spaces; Conference Rooms; Display Lobby; Exhibit Wings; Entry Pond and Fountain; and Support Greenhouses. Donors considering naming opportunities should contact the NPBGS. This Memorial and Naming Policy was approved by the Northern Plains Botanic Garden Society Board of Directors on 16 June 2011.
https://www.npbgs.org/tributes-memorial-gifts
Unexpected results were attained in a simulation, and they point to a possible problem with the dataset. While writing about this in the discussion section, I'd like to elaborate on my explanation by presenting a simple calculation that reinforces my interpretation. Is it allowed to do this in a discussion section, or should I lay the foundation for it in the method section and then include it in the results?

Generally yes. The purpose of a discussion section is to interpret your results, both in light of what was previously known (hopefully described in the introduction) and with new supporting arguments or hypotheses. A calculation (possibly short or rough) is a good way of providing a supporting argument, and it's quite common to see one in the discussion section of a scientific paper. However, theses are often subject to local rules, some of which may be silly, so make sure to check your institution's style or thesis guide, or check with your advisor whether they have any preferences.

- This sounds about right. You should also consider whether you should say something in a Future Work section, as well. – Buffy Feb 16 at 15:52

I think it is fine and preferable. This is really more "analysis" than "method", at least in the context of the original study design. Also, for a reader it makes more sense in that order. In general, there is some looseness about the exact configuration of discussion and results. As long as you are organized and present an understandable narrative, I highly doubt you will get someone telling you "that doesn't go in that section". I can look at papers in my field and see differences in how the discussion was organized in any issue of the major journals, and it is fine; no heads turned.
https://academia.stackexchange.com/questions/125067/discussion-section-rough-calculation-to-explain-unexpected-results
Lesson: Representing Complex Numbers Using Argand Diagrams

Learn how to use an Argand diagram to represent a complex number in this lesson. Learn how to identify complex numbers that have been plotted on an Argand diagram and discover geometric properties of complex numbers.
Under the terms of the JV at Leonard Shultz, Ok Tedi is required to spend $US12 million ($A11.93 million) over six years to earn 58% on the exploration licence, then carry Frontier to completion of a bankable feasibility study. Similar terms are offered at Likuruanga, with Ok Tedi to earn 80.1% for the same outlay. Ok Tedi will also garner pro-rata repayments from Frontier from 50% of future metal sales from the two exploration licences. The JV will drill a total of 5000 metres at Likuruanga from late April and a total of 3000 metres at Leonard Shultz in October. Meanwhile, the results from Ok Tedi's aeromagnetic and radiometric geophysical surveys of the two areas are in, with major anomalous zones found in both areas. The survey will be enhanced with further geophysical ground data collection to help whittle down targets for the forthcoming drilling program. Frontier says Likuruanga is highly prospective for world-class porphyry copper-gold, high-grade gold-silver skarns and structurally controlled and epithermal gold deposits. Leonard Shultz is highly prospective for copper-gold-molybdenum.
https://www.pngreport.com/png/news/1104114/frontier-ok-tedi-jv-ready-to-drill
Sector Torus Cores
Started 01 Jun 012
By Newton E. Ball

Definitions:

Torus - Restricted here to a circular torus: the solid shape formed by the rotation of a circular area about an axis that is external to the circle.
Sector - The angle subtended at the axis by a partial torus, referred to as a Sector Torus.
Rod - A right circular cylinder of ferromagnetic material, with the same cross-section diameter as adjacent Sector Torus core pieces.
Core - A solid assembly of ferromagnetic material, forming a closed path for magnetic flux. The path is usually linked by turns of magnet wire, close to the core.
Winding - All of the Sector Torus and rod portions of each core are to be closely surrounded by layers of magnet wire, each turn of which links the magnetic flux path. All of the turns that are in series constitute a winding.
Pitch - The closely spaced turns of a winding form a spiral. The turn-to-turn distance is inversely quantified as pitch, in turns per inch or turns per meter.
Lay - The spiral winding of a layer has a right-handed or left-handed sense, corresponding to the sense of right- or left-handed threads. That is, the winding layer can be right lay or left lay.
Coherent Winding - If all of the layers of a winding have the same pitch and the same lay, then the winding is said to be coherent.
Gap - The portion of the magnetic path that is empty of ferromagnetic material is a gap. Gaps are used to store magnetic energy. Gaps usually have parallel walls, perpendicular to the direction of magnetic flux. Gaps are usually filled with solid material, such as plastic, that is not ferromagnetic.
Working Gap - In a motor, generator, or actuator, the portion of a gap that is filled or emptied of ferromagnetic material during torque or thrust generation is called the working gap.
Shape - Typical designs with sector angles of pi/2 [90 degrees] and 2pi/3 [120 degrees] are shown below.

Why coherent winding layers are preferred - The figure below illustrates windings of AWG 20 wire, for use with 1/4" rod or sector torus cores, wound on mandrels, with layer 2 then threaded onto layer 1, ready for insertion of rod cores or of sector torus cores. The left view is of layer 1. The center view is of layer 2. The right view shows the point in the assembly when layer 2 is threaded onto layer 1, ready for core insertion. As a sector torus core is inserted, the wire of the second layer is forced toward the inside of the sector turn, finding a place such that the composite winding is only one layer deep at the outside of the sector torus turn. If a rod core is inserted, then the relative position shown in the right view remains. After all of the core constituting the complete, closed magnetic path has been inserted, a partial turn at each end of each layer is unwound, stripped of coating, and formed for termination. The termination may be to a terminal, or directly to a printed circuit board, by surface mount or through-hole mounting. These terminations are typically the principal mounting for the entire magnetic assembly. Ends of core pieces may be held in alignment by a short piece of thin-wall shrink tubing or dilated plastic tubing. In the case where inductive energy storage is desired, circular plastic shims separate the core ends under the tubing. In the example winding shown, the layers are right lay, and both have a pitch slightly greater than the diameter of the AWG 20 magnet wire.

Use in Motors - Cores assembled from Sector Tori [plural of torus] and rods allow winding shapes to adapt to available space, while always surrounding a circular cross-section. The motor, shown in part below, has a specified case length and diameter. It is a unidirectional motor only: no reverse operation, no generator operation.
Motors of this sort need only two phases. A continuous winding surrounds the cores on opposite sides of the rotor.

Views - At the top left is a canted view of the magnetic core parts only. Top right adds rotor, bearings, and shaft. Below is an axial view, showing how each winding fits a quadrant of the motor.

Adapter, wound core to rotor - At each of the eight ends of the cores to be wound, shown above, is a core part that is not to be wound. These parts, called adapters, have constant cross-section area along their length, but the shape changes from circular at the wound core end to trapezoidal at the rotor end, so that the short dimension minimizes the angle from center on the rotor. The adapter is shown in bold lines below.

Two Phase Two Coil - In the motor shown above, a single winding surrounds two core sections on opposite sides of the rotor, with the winding crossing outside of the rotor in two places. The motor is said to have two coils, or two calipers. Caliper designates the stationary mounting of the core and coil, and forms a quadrant of the stator.

Two Phase Four Coil - Two more calipers, occupying the upper vacant stator quadrants, can be added to this motor design, doubling its torque. The coils of a phase may be connected in series or in parallel, depending on whether higher voltage or higher current is desired.

Three Phase - A motor-generator, or reversible motor or generator, or actuator, is usually equipped with three phases of drive. 120 degree Sector Torus shapes fit these configurations, with three or six caliper, coil and core sets.

Rotor - The rotor has a center layer, perforated in the pattern shown at the left, below. The rotor also has two identical outer layers, perforated in a different pattern and shown with the center layer at the right.

Use in Actuators - Linear electric actuators use the same sector torus and rod cores. They are arranged as shown here. Only one caliper, of two or three calipers, is shown.
Arrows indicate direction of travel. All of the sector torus and rod portions shown are to be wound with a continuous winding. Voids in the ferromagnetic armature are not shown.

Actuator Drive - Actuators are driven by the same circuit as the one shown for motors, except that a linear position encoder is substituted for the shaft encoder.
http://autodocbox.com/Auto_Parts/67263789-Shape-typical-designs-with-sector-angles-of-pi-2-90-degrees-and-2pi-3-120-degrees-are-shown-below.html
I got the curve editor basically working well, but how to actually travel along the curve? The thing about Bezier curves is that, as you evaluate them over a specific t∈[0,1], the change in distance travelled is not directly proportional to the change in t. Even worse, if you are joining multiple curves together, using time elapsed as the t value could give wild discontinuities in velocity as you cross the boundary between a long curve and a short curve. It's also very difficult to control acceleration at the beginning and end of travel along the curve. So, today has been about evaluating a Bezier curve in terms of distance travelled, rather than elapsed time. It appears that evaluating the length of a Bezier curve is fairly expensive as there is no closed form for computing it. You basically have to recursively divide it up into a bunch of line segments and sum up their lengths, with improved accuracy the smaller you make the segments. The error is just the difference between the sum of the lengths between the control points vs. the length between the first and last points - that is, keep subdividing until the segment is close enough to a straight line. This is fairly straightforward, so my current approach is: - The BezierPathComponent takes an associated BezierCurveComponent and breaks it down into a bunch of segments (storing length, t0 and t1 for each segment) - Store the sum of the length all of the segments for a total curve length - Set a constant target distance along curve, target (maximum) speed and acceleration value - Keep track of the total distance travelled and the current speed That should let us move at a controlled speed along a set of curves of varying shapes and lengths.
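The segment-table idea above can be sketched in a few lines. This is a minimal Python sketch under my own naming; the post doesn't show the actual BezierPathComponent internals, so everything here is illustrative:

```python
# Adaptively flatten a cubic Bezier into (length, t0, t1) segments, then
# map "distance travelled" back to a curve parameter t.

def _split(p, t=0.5):
    """de Casteljau split of cubic p=[p0,p1,p2,p3] at t; returns two cubics."""
    lerp = lambda a, b, u: (a[0] + (b[0] - a[0]) * u, a[1] + (b[1] - a[1]) * u)
    p01, p12, p23 = lerp(p[0], p[1], t), lerp(p[1], p[2], t), lerp(p[2], p[3], t)
    p012, p123 = lerp(p01, p12, t), lerp(p12, p23, t)
    mid = lerp(p012, p123, t)
    return [p[0], p01, p012, mid], [mid, p123, p23, p[3]]

def _dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def segments(p, t0=0.0, t1=1.0, tol=1e-4):
    """Return [(length, t0, t1), ...]: subdivide until the control-polygon
    length and the chord length agree within tol, i.e. the piece is
    nearly a straight line (the error criterion described in the post)."""
    chord = _dist(p[0], p[3])
    poly = _dist(p[0], p[1]) + _dist(p[1], p[2]) + _dist(p[2], p[3])
    if poly - chord < tol:
        return [((chord + poly) / 2, t0, t1)]
    left, right = _split(p)
    tm = (t0 + t1) / 2
    return segments(left, t0, tm, tol) + segments(right, tm, t1, tol)

def t_at_distance(segs, s):
    """Walk the segment table and interpolate t for arc distance s."""
    for length, t0, t1 in segs:
        if s <= length:
            return t0 + (t1 - t0) * (s / length if length else 0.0)
        s -= length
    return segs[-1][2]  # past the end: clamp to t=1

# Example: a cubic that closely approximates a quarter circle of radius 1.
curve = [(0, 0), (0.55, 0), (1, 0.45), (1, 1)]
segs = segments(curve)
total = sum(L for L, _, _ in segs)   # ~pi/2 for this curve
t_mid = t_at_distance(segs, total / 2)
```

Feeding `t_at_distance` a distance that advances by `speed * dt` each frame gives the constant-speed traversal described above, and summing the per-curve totals handles joined curves of different lengths without velocity discontinuities.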
http://blog.basemetalgames.com/2013/10/so-you-want-to-travel-along-bezier-curve.html
Sec. 14-1. Definitions. Vehicle. Every device in, upon, or by which any person or property is or may be transported or drawn upon a street or highway, except devices moved by human power or used exclusively upon stationary tracks; provided that, for the purposes of this chapter, bicycles shall be deemed vehicles and every rider of a bicycle upon a highway shall be subject to the provisions of this chapter applicable to the driver of a vehicle except those that by their nature can have no application.

Sec. 14-102. Riding on the handle bars prohibited. The operator of a motorcycle or bicycle when upon a street shall not carry any other person upon the handle bars, frame or tank of any such vehicle, nor shall any person so ride upon any vehicle.

Sec. 14-103. Brakes required. It shall be unlawful to operate a bicycle on a street, alley, sidewalk or public highway of the City, unless it is equipped with a braking system in sufficient working order to control and stop the movement of the bicycle.

Sec. 14-104. Lamps required if used at night. Every bicycle shall be equipped with a lamp on the front exhibiting a white light visible under normal atmospheric conditions from a distance of at least three hundred (300) feet to the front, and with a reflex mirror or lamp on the rear exhibiting a light visible under like conditions from a distance of two hundred (200) feet to the rear.

Sec. 14-107. Pedicabs. (Note: this section addresses the use of pedicabs, defining them as devices with three or more wheels, pedaled by one individual, and used for transporting passengers on seats or a platform. Since the code doesn't have a wide application, it will not be quoted here except to note that a paragraph does state that "It shall be unlawful to operate a pedicab upon public sidewalks in the city." It is stressed that bicycles may be operated on sidewalks in the city except where expressly prohibited by posted signs.)

PARKS APPLICATIONS

Sec. 15-134. Traffic.
No person shall:

(9) Ride a bicycle on other than the right-hand side of a paved vehicular road or path designated for that purpose; or fail to keep in single file when two (2) or more bicycles are operating in a group. A bicyclist shall be permitted to wheel or push a bicycle by hand over any grassy area or wooded trail or on any paved area for pedestrian use;

(10) Ride any person over the age of six (6) years on a single passenger bicycle in any park;

(11) Leave a bicycle unattended in a place other than a bicycle rack when such is provided and there is space available;

(12) Leave a bicycle lying on the ground or paving, or set against trees, or in any place or position where other persons may trip over or be injured by it;

(13) Ride a bicycle on any road between thirty (30) minutes after sunset and thirty (30) minutes before sunrise without an attached headlight plainly visible at least two hundred (200) feet from the front, and with a red tail light or red reflector plainly visible from at least one hundred (100) feet from the rear of such bicycle.

RAILROAD APPLICATION

Sec. 17-16. Bicycle riding, walking restricted. It shall be unlawful for any person to ride a bicycle or any other vehicle or to walk along the right-of-way of any railroad tracks at any time within the City.
https://charlottenc.gov/Transportation/Programs/Pages/BicycleLaws.aspx
FIELD OF THE INVENTION The present invention relates to tables made up of a number of parts, and more specially to such a table designed as a bench or table for use in trade or industry, having four table legs and a table top supported thereon. BACKGROUND OF THE INVENTION In civilized countries tables have come to be used on a very wide scale so that their design, function and different possible uses are quite well known. They are generally used as supports or rests for many different things and purposes and as a rule are made of such a height that a person in an upright or seated position may quickly and simply take up and put down the things on the table. The general design of a table is such that there is a table top and at least three or, as is most frequently the case, four table legs. There is a very wide range of different possible table sizes, the selection of the table size being dependent on the amount of free space in which the table may be put. For manufacturing a table, in the simplest case the first step is making a selection of the right size of table top and then fixing the legs thereto. But for some less common and somewhat complex designs of table, which make it possible for the table top size to be changed after the table design has been completed, the size of the table top is fixed once and for all when the different parts of a table are put together. However in the general run of things, it is a shortcoming that before the size of a table top may be changed in a factory producing tables, a more or less complex process of retooling of the production machines is needed, this being more specially the case if extruding or injection molding machines are being used. SUMMARY OF THE INVENTION It is for these reasons that one purpose of the present invention is that of designing a table whose table top may be changed in size.
A still further purpose of the invention is that of producing a table that is low in price and may be used as a general purpose table, more specially for the machining of wood and resin material in connection with different electric tools such as a saw placed under the top face of the table, saws that are guided by means placed over the table, overhead milling cutters with a copying pin placed at a lower level, overhead millers cutting into the lower side of the work, grinding and polishing machines (that is to say belt sanders, angle sanders, plate sanders, porcupines or roller grinding machines), planes, wood turning systems, etc. For effecting these and further purposes that will become clear on reading further parts of the instant account, the said top is formed by two spaced parallel first sections, running in a first coordinate direction, of the same size and having a rectangular area, two second spaced parallel sections of the same size and of equal rectangular area running in a second coordinate direction normal to the first sections between which they are placed, and a bridge support or flat bridge that is placed between the sections running in the two said directions and is bounded thereby, the said support being able to be removed or taken off so that a support of different size can be put in its place. One of the important useful effects of this table top of the present invention is that the table top may be changed in size without changing the cross sectional areas of the sections running in the first and in the second directions by simply changing the length of each of these parts to be in line with what is needed. To keep to the same size of area, all that is necessary, more specially in the case of the manufacturing processes in which the sections are made by drawing, as for example by extrusion or injection molding, is for the separate sections to be cut off to the desired length so as to be longer or shorter without any other changes in the design.
This being the case, it is best for the said sections running in the first and second directions to be made of extrusions. This goes for the bridge support as well if it is in the form of a section. If this is not to be the case, then the bridge support itself has to be changed in size, more specially in the case of certain forms of the invention of which more details will be given later herein. One material that has specially useful properties for extrusion is aluminum. This material is highly useful inasmuch as on one hand it is very simply machined while on the other hand the finished product is light in weight so that the table is more readily handled. As a further part of the general teaching of the present invention the bridge support is temporarily fixed in position so that it may be replaced by another one when desired. There are a number of different ways in the present invention for making this possible. One way, that is more specially of value, is one in which the sections running in the first direction of the table are molded on the inner side which faces the bridge support with a step running all the way along the length of the sections, so that the bridge support or support board may be locked in position quite simply by placing it on the lower part of the table or frame with the legs. Furthermore the bridge support may then be used for supporting heavier weights without going past the load carrying capacity of the material.
The fact that the bridge support is only temporarily fixed in place gives a further useful effect when it comes to using the table as a carpenter's or mechanic's bench for a number of different tools such as circular saws, keyhole saws, overhead millers, saws placed under the table, and planers, insofar as in such cases, as for example in the case of overhead millers, the supporting table top has to be free of openings, while on the other hand in the case of a circular or other form of saw under the table the bridge support has to have an opening or a slot, which is best placed running in the length direction of the table, for the tool. In this case for a changeover it is only necessary for the bridge support to be lifted up out of the way and a different size of bridge support to be put in its place. On using tools sticking through openings in the bridge support it is best for the opening in each case to be made as small as possible without however being so small as to be in the way of the tool. In keeping with a further useful development of the present invention the table or bench top is so designed that its top face is lined up with the faces of the sections running in the said first and second directions. The outcome is then a table top with a generally plane or flat and flush top face so that working with the table becomes very simple. One material that may be used for the table or bench top is steel sheet or injection cast material or extruded aluminum, such metal then more specially offering the highly useful effect, as noted hereinbefore, of making it simple for the bridge supports, like the sections running in the said first and second directions, to be simply produced to different sizes. As part of a still further outgrowth of the general teaching of the present invention, the sections running in the first and/or the second directions are locked to the table legs, that is to say joined therewith by positively interlocking parts.
As has been seen in the development of the present invention this may be done by having, for example, grooves molded on the table legs to take up pins on the sections running in the first and/or second directions. Such interfitting of the parts that are to be joined together makes the table very much stronger. The same sort of effect may be produced if the table legs are placed at an obtuse angle to the table top. In the present invention this angle is to be that angle which is between the table legs and the middle point of the bridge support so that in fact the table legs will be running outwards in a downward direction from the bridge support and the table then is more stable. As a further development of the invention the table legs have teeth for decreasing the danger of slipping and/or have hinge joints for making transporting and/or storing the table simpler. As a last point, a material that is specially low in price for making the table legs is pressure cast metal. In the account now to be presented a number of different forms of the invention will be detailed using the figures. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a plan view of a table or workbench embodying the present invention. FIG. 2 is a side view of a further table embodying the invention. FIG. 3 is a sectional view of the table of FIG. 1, the section being taken on the line III--III. FIG. 4 is a sectional view similar to FIG. 3 but of a structure with a different way of fixing the table legs in position. DETAILED DESCRIPTION In FIG. 1 the reader will see spaced sections 1 or bars having a rectangular shape and being the same in size, the sections 1 stretching in a first or lengthways direction and having their ends resting head on against the sides of two sections 2 or bars which each have a rectangular shape and which are the same in size, such second sections running in a second direction normal to the first direction.
Between the first sections 1 and the second sections 2 a cover or bridge support 3 is fitted in place, the two first sections 1 and the two second sections 2 together forming a rectangular and in fact almost square table top. In the middle the bridge support has a slot 4 running in the first direction to take up the blade of a tool such as the blade of a circular or keyhole saw. As is more specially to be seen from FIGS. 3 and 4 and as is marked in broken lines in FIGS. 3 and 4, the support 3 is supported on inwardly running ledges or steps 5 on inner sides of the first sections 1. The support plate 3 is then kept in position by its own weight or by screws and because of the generally large ledges or steps 5 is able to support great weights. At the corners of the table top which is formed by the first sections 1, the second sections 2 and the support 3, table legs 6 are fixed in position in such a way that they are at an obtuse angle to the table top. Having the table legs at a slope makes the table more stable so that it is not so likely to be pushed over by forces acting horizontally on the table top. In the present working example of the invention it will be seen that the table legs 6 extend out past the edge of the table top as formed by the first sections 1 and the second sections 2. Lastly it will be seen that on the lower side of the first sections 1 at the outer edge there is a downwardly projecting wall 7 stretching all the way along the length thereof, on which the table legs 6 are fixed at least in part. The present form of the invention has a great number of useful effects. On the one hand it is possible for tables of different sizes to be produced with only a little retooling, for example simply by using support plates that are different in breadth or length while using the first sections, the second sections and possibly the support plate with a different length, but without changing the cross section thereof.
It is more specially in the case of parts produced by drawing that a change in the length of the parts of the table does not make necessary any change in the production machines. There is as well a further useful effect to be had from the form of the invention inasfar as the parts may be simply lifted out of position and other ones put in their place for making changes in the form of the table. In FIG. 2 a working example on generally the same lines as in FIG. 1 is to be seen in a side view. In this case the greater part of the end face of the support plate is shown, so that the direction of viewing would be from the right or the left in a view similar to FIG. 1. In the side view of FIG. 2 the reader will for this reason be able to see one of the two first sections 1 whose ends butt against the two second sections 2, whose end faces only can be seen. A table leg 6 is fixed to these parts so as to be running downwards at a slope, that is to say so as to make an obtuse angle with the table top. In the present view one first section will be seen with its wall 7. In this respect the thickness of the first section 1, taking into account the wall 7, is equal to the breadth of the generally square-section second section 2. Its breadth is again, unlike the form to be seen in FIG. 1, generally equal to the breadth of a table leg 6. Near the top of one of the first sections 1 a broken line has been marked parallel to the edge to make clear the position of the bridge board 3 which is behind the section 1. FIG. 3 is a section taken on the line III--III of the form of the invention to be seen in FIG. 1, which will be seen to have two first sections 1, at whose inner edges there are steps or ledges 5. These are used for supporting a bridge board 3 of the right size for fitting in place. The thickness of the board is preferably such that its top face is fully on the same level and flush with the top faces of the first sections 1.
The first sections, namely the sections running in the first said direction or coordinate, have towards their lower face and on their outer edges a head 7 for connection with the table legs 6. In this case as well, unlike the form of FIG. 1, the thickness of the head 7 is made equal to the thickness of the table leg 6 so that the tops of the table legs are flush with the upper part of the table and are not sticking out beyond it as in FIG. 1. The table design to be seen in FIG. 4 is in many details the same as that of FIG. 3. However the important difference is to be seen in the way the first sections 1 are secured to the legs 6 of the table. In fact, to this end the first sections 1 and their heads 7 have a middle groove and a further groove 8 on the edge to take up pins 9 of opposite or mating form on the table legs 6. In this way one may be certain of a specially tightly fitting joint between the table legs 6 and the sections 1 running in the first direction, that is to say with the table top. Lastly the design is different inasfar as the table legs 6 each have one tooth 10 at their lower ends for stopping them from slipping on the floor. In the case of a further possible form of the table, not shown, there are pins on the sections running in the lengthways and/or transverse directions, such pins fitting into grooves in the legs of the table. It is possible for the table frame made up of two pairs of extrusions or sections 1 and 2 of aluminum for example to have side grooves possibly running all the way round the table frame. Such grooves 11 are best made with a section like that of a letter T or with a dovetail form so that they may be used for guiding and fixing stops and wings on the table for increasing the size thereof. Such grooves are produced in the sections when same are extruded.
Q: Thaw Sue's cold dog
Start with the phrase "Thaw Sue's cold dog" and perform the following steps in some order to find a famous question:
- Add one letter
- Add one letter
- Change one letter
- Rearrange the letters of one word
- Rearrange the letters of one word
- Rearrange the words
- Remove one letter
Note: You can ignore punctuation, and add/remove punctuation as required, at any point. Thus, Sue's is one word.

A: Start with the phrase "Thaw Sue's cold dog" and perform the following steps:
- Add one letter: Thaw Sue's could dog
- Add one letter: Thaw jSue's could dog
- Change one letter: Thaw jSue's would dog
- Rearrange the letters of one word: What jSue's would dog
- Rearrange the letters of one word: What Jesu's would dog (Note: adding/removing punctuation is not counted as a step → What Jesus would dog?)
- Rearrange the words: What would Jesus dog?
- Remove one letter: What would Jesus do?

A: Original: Thaw Sue's cold dog
- Add one letter: Thaw Sue's could dog
- Change one letter: Thow Sue's could dog
- Rearrange the letters of one word: Thow Sue's could God
- Rearrange the letters of one word: Thow uses could God
- Rearrange the words: Thow could God uses?
- Remove one letter: How could God uses?
- Remove one letter: How could God use?
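The letter accounting in the first solution can be spot-checked with a short script (a sketch; the phrase list is transcribed from the steps above, and only letters are counted, as the note about punctuation allows):

```python
from collections import Counter

def letters(phrase):
    # Count only letters, ignoring case, spaces and punctuation,
    # per the puzzle's note that punctuation may be ignored.
    return Counter(c for c in phrase.lower() if c.isalpha())

# Phrase after each step of the first solution.
chain = [
    "Thaw Sue's cold dog",
    "Thaw Sue's could dog",   # add one letter
    "Thaw jSue's could dog",  # add one letter
    "Thaw jSue's would dog",  # change one letter
    "What jSue's would dog",  # rearrange the letters of one word
    "What Jesu's would dog",  # rearrange the letters of one word
    "What would Jesus dog?",  # rearrange the words
    "What would Jesus do?",   # remove one letter
]

sizes = [sum(letters(p).values()) for p in chain]

# "Add one letter" grows the letter count by one each time.
assert sizes[1] == sizes[0] + 1 and sizes[2] == sizes[1] + 1
# Changing and rearranging keep the count fixed.
assert sizes[2] == sizes[3] == sizes[4] == sizes[5] == sizes[6]
# Rearranging preserves the exact multiset of letters.
assert letters(chain[3]) == letters(chain[4])
assert letters(chain[5]) == letters(chain[6])
# "Remove one letter" shrinks the count by one.
assert sizes[7] == sizes[6] - 1
```

Running the script confirms that each step changes the letters exactly as its name promises.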
“What one has never properly realized, one cannot properly be said to remember either.” Eustace H. Miles

Most people go around in a daydream-like state, not really paying attention to what’s going on around them. You don’t have to be like that. Here are 4 exercises to improve your powers of observation today.

Exercise 1: Remember Items in a Room
- If you are in a familiar room right now, take a piece of paper and, without looking around, list everything in the room. Write down everything you can think of; describe the entire room in detail. List every piece of furniture, electricals, pictures, decorations.
- Now, look around the room and check your list. Notice all the things you did not put down on your list, or never really observed, although you see them all the time.
- Now take a closer look round. Step out of the room and test yourself once more. Your list should now be longer.
- Try the same thing with other rooms in your home. If you keep practicing, your observation will become keener.

Exercise 2: Remember a Friend’s Face
- Think of someone you know very well. Picture his or her face and see if you can describe the face on paper. List everything you can remember. Go into detail: list hair and eye colour, complexion, outstanding features, whether or not they wear glasses, what type of glasses, type of nose, ears, eyes, mouth, forehead, approximate height and weight, etc.
- The next time you see this person, check. Note the things you didn’t observe and those you observed incorrectly. Then try it again! You will rapidly improve.

Exercise 3: Remember a Stranger’s Face
- A good way to practice this is on the bus or train. Choose someone and look at them for a moment, close your eyes and try to mentally describe every detail of this person’s face. Pretend that you are a witness at a criminal investigation, and your description is of utmost importance.
- Look at the person again and check. You’ll find your observation improving each time you try it.
Exercise 4: The Shop Window
- One last suggestion: Look at any shop window display and try to observe everything in it.
- List all the items without looking at the display and go back to check. Note the items you left out and try it again.
- When you think you’ve become proficient at it, try remembering the prices of the items as well.

Credit: Harry Lorayne, How to Develop A SUPER-POWER MEMORY. To download the ebook go to the books page.
http://www.mentalismskills.com/4-exercises-improve-observation/
W&L WOMEN'S SOCCER EARNS ALL-STATE HONORS
Dec 05, 2000
LEXINGTON, Va. -- Washington and Lee placed three players on the College Division All-State Women's Soccer Team that was announced on Tuesday morning as voted on by the Virginia Sports Information Directors (VaSID). One of only three freshmen named to the squad, Marcoux led Washington and Lee in goals (14), assists (9) and shots (90). Her nine assists constitute the third-best single-season total in school history, while her 37 single-season points are the fourth-highest in W&L history. Marcoux was also named First-Team All-ODAC. Harris, a First-Team All-ODAC and Second-Team All-South selection, started all 18 games in the net for W&L this season. She accumulated 133 saves and held a 0.864 save percentage. She finished her career with a school-record 1.00 goals-against average and places second in the W&L record books with 27 career shutouts after posting six in 2000. Levine has been named to the All-State Team for the second consecutive season. She tallied one goal and two assists this season to bring her career totals to 15 goals and seven assists. Levine was also named to the First-Team All-ODAC, Third-Team All-South and ODAC All-Tournament Teams.
Notes: Please read carefully. At the registration table, all players must submit proof of having received a complete COVID vaccination at least 2 weeks prior to the date of the tournament (7/20/21). This requirement includes spectators and casual chess players. Mask wearing will be optional. Players and spectators are also required to sign a DCC liability waiver at the registration table.
Date: Every Tuesday in August 2021 (8/3 - 8/10 - 8/17 - 8/24 - 8/31)
Time Control: Game in 70 minutes + 5 second delay (G70;d5)
Site: Hope Fellowship Christian Reformed Church, 2400 S. Ash, Denver CO 80222. Please use the rear entrance.
Directions: One block East of S. Colorado Blvd. on Wesley. Please use the rear entrance.
Sections: Open - U1900 - U1500
Entry Fees: DCC members: $20 per month or $6 per night. Non-DCC members: $30 per month or $8 per night.
Prizes: Based on entries. (The DCC deducts a set amount to cover expenses, and the remainder is the prize fund.) 1st, 2nd, 3rd, and an Upset prize are awarded in each section.
Registration: Online, or at the door from 6:00 to 6:50 PM. When registering online, players must still check in at the registration table before the close of registration. USCF membership is required.
Round Time: ASAP after the close of registration.
Bye Policy: Half-point byes available anytime for the first 4 rounds. A last round bye must be requested before the start of the penultimate round. Only one bye will count towards prize money.
https://denverchess.com/tournaments/upcoming/1051
Seagulls are recurring creatures in the Legend of Zelda series. In most of their appearances, they serve no special purpose other than helping to create mood and ambiance.

Appearances

The Legend of Zelda: Link's Awakening
Seagulls are an object of fascination for Marin. She relates to Link how she dreams of becoming a seagull and flying away from Koholint Island. If Link manages to complete the game without dying, Marin will be shown flying across the screen with wings in the "The End" screen. This may be an indication that she fulfilled that dream.

The Legend of Zelda: Majora's Mask
- "Hey, what do you think that is? Out there in the bay, beneath where the gulls are flying..."
- — Tatl
Seagulls native to Termina appear in the Great Bay region. A flock of gulls can be seen flying over the dying Mikau when Link first enters the Great Bay Coast. Tatl will point this out; thus the gulls serve as a clue that leads Link to Mikau and ultimately the Zora Mask.

The Legend of Zelda: The Wind Waker
Seagulls, which can be found almost everywhere on the Great Sea, can be controlled if they take one of Link's Hyoi Pears for a while. Link can use a seagull to collect far-off or out-of-reach Rupees or to hit switches. If the controlled seagull is hit by an enemy, Link loses control of it. When they are found while sailing, they will follow Link's boat. Seagulls are also known to gather in flocks on the Great Sea near Big Octos. Seagulls also tend to flock around Aryll, Link's sister. Seagulls can also be used to move Seahats away from Link, as using one will make the Seahat fly backwards.

The Legend of Zelda: Phantom Hourglass
Seagulls will fly alongside the S.S. Linebeck and can be shot with Link's cannon. The only result of this is that the seagull will fall back a few feet from the ship until it recovers.
The Legend of Zelda: Spirit Tracks
Seagulls will fly alongside the Spirit Train in the Ocean Realm, while doves, who behave very similarly to seagulls, appear in most other areas. If the Song of Birds is played, they will gather around Link and, although most of them will fly away once he moves, one bird will remain perched on his head and will not leave unless Link leaves the area.

The Legend of Zelda: Breath of the Wild
- "These birds live near the ocean. They eat mainly fish, so a flock of seagulls hovering over water is a good indication of where there are fish. Fishermen use this to their advantage when searching for a catch of their own."
- — Hyrule Compendium
Seagulls can be found along every coast throughout Hyrule. The beaches of the Faron, Necluda, Lanayru, and Akkala Seas will usually have a few seagulls flying overhead, and big flocks of them usually mean there are fish nearby. For the first time in the series, they can be killed by Link, like most of the wildlife in Breath of the Wild. They drop a Raw Bird Drumstick when Link hunts them. Wolf Link will attack them as he does most wildlife, though it is hard for him to attack them in the air.

Other appearances
Subseries warning: This article or section contains information on a subseries within the Legend of Zelda series and should be considered part of its own separate canon.

Hyrule Warriors Legends
Seagulls appear on the Great Sea Adventure Mode map. A Hyoi Pear Item Card can be used to lure seagulls to find hidden treasure.

Subseries warning: Subseries information ends here.
https://zelda.fandom.com/wiki/Seagull
Since I missed updating last week, I’m going to dive into some updates to the rules for the Martian colony game that’s being slowly worked on. Spent part of today rethinking my rules for Sands of Mars. There were some parts I liked and some that felt clunky. On the clunky side, there wasn’t the meshing I wanted when it came to play becoming action. There was plenty of action, but I felt that it wasn’t as complete as it could be. Something about what we were doing and what our goals were didn’t quite get there. This, I felt, affected how much fun we had. Therefore, I went and looked at what streamlining could be done as I reviewed my notes from the playtests. I felt that it’s been long enough since I first put things together that going back to the beginning and reviewing my thought process was the place to start. Shelter can consist of additional items, namely stuff that makes the shelter nice to live in. We now have electronics, appliances, and furniture. To translate into game terms: devices needed to generate O2 and H2O, and provide an environment for living and an environment for growing. Humans on a colony are going to want to communicate with each other and with the Earth; therefore additional items are needed: some sort of satellite communication gear and a terrestrial system to do the same. And to run it all, power generation. Change this up, it can be nuke plants or solar panels. 1 Nuke plant == 5 Solar Panels <– That may change. Will have to play with it some more. [this is about as far as i got last time, feels incomplete] Okay, so the premise is that the players control robots which go about building these things: shelters, O2 generators, H2O generators, communication gear. Each of those is comprised of smaller components, namely mechanisms generated by nanofactories. The players search the map for places to put these items (stable locations) and places to get materials to build the mechanisms.
Collapse minerals and iron into a single field. That’s the ticket. Add the third type as Soil that has to be moved. A nanofactory will move it from its current location, if it’s not Stable, to a location that is. That takes a certain number of turns. Once moved (some token will have to be used here) then a Farm can be built. Get rid of everything but the designated Radio spot. Collapse it down, make it simpler and easy to put on a sector.

Stable — supports one structure on it
Soil — martian surface that can be easily converted into arable earth
Minable — there are raw minerals close enough to or in the surface that make this an excellent location for planting a nanofactory to produce material
Bedrock — supports two regular structures on it or one tall structure (e.g. satellite uplink, Nuke Plant, or Radio Tower)
Non — while the surface is stable enough to traverse regularly, there is insufficient support under the surface for building
Unstable — player makes a die roll when traversing: 1-3 nothing happens, 4 & 5 lose 1 action this turn, 6 turn ends

Players can plant a nanofactory on an unstable or non region if it has soil or is minable, but that factory will be destroyed in X turns due to the instability of the ground. There might even be room for a card that will allow for the temporary or permanent stabilization of a sector but at the cost of production.

How long would this new game last? 1 turn == 1 month, 12 turns == 1 year. Each robot gets 4 actions per turn. Actions are Move, Probe, Start Nanofactory, Start Building. Nanofactories and Buildings are done the same way, utilizing a machine colony that the robots have tucked away inside of them. Each turn, the robots can produce enough new colony material to divide it once — starting a new building or a new nanofactory. Nanofactories produce only one thing now — materials — therefore end goals have to be changed up. But more than that — Buildings require a certain amount of material PER TURN to complete.
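The Unstable-sector traversal rule above can be sketched in a few lines of code (the function and outcome names are mine, and the rules are still a draft):

```python
import random

def resolve_unstable(roll):
    """Resolve a d6 roll for a robot entering an Unstable sector,
    per the draft rules: 1-3 nothing happens, 4 & 5 lose 1 action
    this turn, 6 the turn ends."""
    if not 1 <= roll <= 6:
        raise ValueError("a d6 roll must be between 1 and 6")
    if roll <= 3:
        return "nothing"
    if roll <= 5:
        return "lose_action"
    return "turn_ends"

def actions_this_turn(result, base_actions=4):
    """Each robot gets 4 actions per turn; trim them by the roll result."""
    if result == "turn_ends":
        return 0
    if result == "lose_action":
        return base_actions - 1
    return base_actions

if __name__ == "__main__":
    roll = random.randint(1, 6)
    print(roll, actions_this_turn(resolve_unstable(roll)))
```

Half the faces leave the robot untouched, a third cost it one of its four actions, and a six ends the turn outright, which keeps Unstable terrain risky but usually passable.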
(More cards in the deck that alter/enhance/detract from this function of the game) Make the Nuke plant optional and there we go. Materials have been reduced to a single resource, getting rid of something I felt was too complicated. Locations also continue to be a resource, but this time, they’re generalized, giving the players some freedom in planning but also giving me the ability to mess around with the game tiles and the distribution of usable sites across them. Some playtesting is needed now to see how well these ideas work and so I can get a feel for the number of items needed for a “win”.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of and claims priority to PCT/EP2014/056584 filed Apr. 2, 2014 which claims the benefit of and priority to European Patent Application No. 13 162140.1 filed Apr. 3, 2013, the entire disclosures of which are incorporated by reference herein.
TECHNICAL FIELD
The present disclosure relates to a packaging apparatus and to a method for packaging an item, such as a pack of cigarettes, cigarillos, cigars or the like, which for the sake of brevity and clarity will simply be collectively referred to herein as “smoking articles”. The disclosure herein also relates to a packaged item, such as a pack of smoking articles, which is produced by such an apparatus or method.
BACKGROUND
A pack of smoking articles, e.g. cigarette pack, is typically produced in a packing process in which a container is formed around a charge of the smoking articles. After the packing process, the packs are finally wrapped and sealed with a barrier film that is designed both to protect against external influences, such as moisture, and also to retain aroma and to maintain the freshness of the smoking articles. The barrier film is typically a flexible sheet, which is preferably transparent and may be formed of cellophane, polypropylene, or another similar synthetic film material, with the barrier film typically being heat-shrunk, heated and bonded around the pack. The barrier film can be configured to tear or break along a line to enable a consumer to access the smoking articles in the pack after purchase. In this regard, the line may be a line of weakness formed in the barrier film material, e.g. by scoring, or a line that follows a tear tape adhered to the barrier film. A tab or flap is typically located at an end of the line to allow a user to initiate the tearing or breaking of the barrier film. However, it has been found that bonding or sealing of the barrier film can interfere with the line of weakness.
In particular, the tab or flap may stick to the barrier film such that the usual ease of tearing or breaking the film is compromised, making it more difficult for a consumer to remove the barrier film and access the smoking articles. SUMMARY In view of the above, one idea of the present disclosure is to provide an improved apparatus and method for packaging items, such as smoking articles, which addresses the above problems. Another idea of the disclosure herein is also to provide a correspondingly improved pack of smoking articles. According to one aspect, therefore, the disclosure herein provides a packaging apparatus, particularly for packaging smoking articles, which comprises a wrapping device for wrapping a barrier film around an item to be packaged, such as a pack of smoking articles; and at least one sealing head having a contact surface for contacting the barrier film wrapped around the item and for sealing the barrier film along a seam to form a protective enclosure around the item, wherein the contact surface of the sealing head is at least partially discontinuous to provide a non-sealing region in the sealing head. As noted above, in the present disclosure the barrier film is provided with a line or area of weakness (e.g. a frangible or breakable line or strip) which assists removal of the film and access to the item by the consumer. With the present disclosure, therefore, the sealing head can be arranged during bonding or sealing of the barrier film such that the non-sealing region at least partially, and preferably wholly, overlies the line or area of weakness in the film. In this way, the sealing head is able to perform the required bonding of the barrier film without negatively affecting a frangible or breakable line or strip in the film. 
In one embodiment, the wrapping device is configured to wrap the barrier film around the item to be packaged such that an area of overlap of the barrier film is provided where the sealing head contacts and seals the barrier film along the seam. This area of overlap of the barrier film may be formed, for example, by overlapping edge portions of the film and/or by folded edge portions of the film. In another embodiment, the sealing head includes heating or a heater for heating the contact surface to effect sealing of the barrier film along the seam by heat bonding or fusion, i.e. as the sealing head contacts the film wrapped around the item. In this regard, the non-sealing region makes no contact with the barrier film and thus effectively performs little or no heat bonding or fusion, and so does not compromise the frangible or breakable line or area of the film positioned in that region. Alternatively, the sealing may be achieved by adhesive bonding via pressure exerted by the sealing head over the contact surface. In that case, the non-sealing region has no contact with and exerts no pressure on the barrier film, which in this specific non-sealing region is desirably also free from adhesive. Thus, adhesive bonding in this region can be avoided. In a further embodiment, the sealing head has a generally elongate form and the non-sealing region may be provided as an intermediate region transverse to a longitudinal extent of the sealing head. In this regard, the non-sealing region may extend across at least 50 percent, preferably at least 75 percent, and more preferably at least 90 percent, of a full width of the sealing head. In one particular embodiment, the contact surface of the sealing head is fully discontinuous in the longitudinal direction such that the non-sealing region extends across the full width of the sealing head.
However, in an alternative embodiment, the contact surface may be only partially discontinuous (and thus also partially continuous) in the longitudinal direction, so that the non-sealing region does not extend across the full width of the sealing head. In particular, the contact surface may span less than 25 percent (preferably equal to or less than 10 percent) of the width of the sealing head adjacent the non-sealing region. By ensuring that the seam is continuous at least in a small region, the barrier film can better protect against external influences, such as moisture, and also better retain aroma and maintain the freshness of the smoking articles. The continuous region is preferably at a periphery or edge portion of the seam, but may equally be in a middle or intermediate portion thereof. In a further embodiment, the geometry of the non-sealing region varies in two or in three dimensions. For example, the geometry of the non-sealing region may vary in the plane of the contact surface of the sealing head. More particularly, a breadth of the non-sealing region may vary (for example, linearly or in stepped fashion) across a width of the sealing head. Thus, where an activator, such as a tab or flap, is provided on or in the barrier film to be operated (e.g. grasped and pulled) by a user to activate or to break the line or area of weakness in the film, the sealing head may be arranged such that a broader part of the non-sealing region overlies the tab or flap when the barrier film is sealed or heat-bonded along the seam. The non-sealing region is preferably formed by a recess or cavity in the sealing head. Thus, as an alternative, or in addition to the variation in geometry of the non-sealing region in the plane of the contact surface, a height or depth of the recess or cavity (i.e. normal to the contact surface) may vary across a width of the sealing head. 
In this way, the amount of radiant heat imparted to the barrier film by the sealing head in the non-sealing region may also be regulated. In a further embodiment, the wrapping device comprises a holder, preferably in the form of a rotatable carousel, for holding one or more items to be packaged and for transporting the item(s) wrapped in the barrier film to the sealing head. The sealing head is preferably movable relative to the holder to contact the barrier film wrapped around a respective item and to seal the barrier film along the seam. In the case of the holder being a rotatable carousel, the wrapped items to be packaged are transported by rotating the carousel about its axis, and the sealing head may be movable in a radial direction relative to the carousel to contact and seal the barrier film along the seam. In a further embodiment, the apparatus includes at least two sealing heads, each of which is configured to form a portion of the seam in the barrier film wrapped around the item to be packaged. In this regard, the sealing heads are preferably configured to engage the item separately and consecutively. The two sealing heads may, for example, be located spaced apart from one another around a periphery of the holder or carousel to engage each item separately and consecutively. Each of the sealing heads may thus have a non-sealing region, and the respective non-sealing regions may have different shapes or configurations, which together may cooperate and/or complement each other in forming the seam. In a further embodiment, the item to be packaged comprises a pack of smoking articles, which includes a charge of the smoking articles themselves supported on or in a frame of rigid or semi-rigid material, such as cardboard, stiffened paper, paperboard, fibreboard, polymer sheet material, or the like.
While the rigid or semi-rigid material of the frame may be relatively flexible, regardless of its composition it should nevertheless also be dimensionally stable in order to impart structural stability to the pack of smoking articles, and thereby to protect the smoking articles from damage (such as inadvertent compression or crushing) during transport and storage. Thus, the charge of smoking articles and the frame are wrapped or enclosed with the barrier film by sealing overlapping portions of the barrier film together to form one or more seams. According to another aspect, the disclosure herein provides a method of packaging an item, such as a pack of smoking articles, comprising the steps of: providing a barrier film for wrapping around an item to be packaged, the barrier film including a frangible line or part; wrapping the barrier film around the item to be packaged; providing at least one sealing head having a contact surface for sealing the barrier film, wherein the contact surface is at least partially discontinuous to provide a non-sealing region in the sealing head; and contacting the barrier film wrapped around the item with the sealing head to seal the barrier film along a seam, whereby the non-sealing region is positioned to at least partially, and preferably substantially wholly, overlie the frangible line or part of the barrier film. In one embodiment, the method includes the step of heating the contact surface of the sealing head to effect sealing of the barrier film along the seam by heat bonding or fusion. Further, the step of contacting the barrier film with the sealing head preferably includes exerting pressure on the barrier film over the contact surface of the sealing head. In another embodiment, the frangible line or part of the barrier film includes one or more lines of weakness, such as a score line or line of perforations, and may further include an activator, such as a tab or flap, provided on or in the barrier film to be operated (e.g. 
grasped and pulled) by a user to activate or to break the frangible line or part (e.g. the line or area of weakness). In this regard, the non-sealing region is desirably positioned to at least partially, and more preferably fully, overlie the activator (e.g. the tab or flap). According to a further aspect, the present disclosure provides a packaged item, such as a pack of smoking articles, which is produced by the apparatus and/or method of the disclosure herein as described with respect to any one of the embodiments above. Thus, the disclosure herein also provides a packaged item, such as a pack of smoking articles, with an outer wrapping comprising a barrier film that is sealed along a seam to form a protective enclosure around the item. The barrier film has a frangible part or strip and a tab for activating the frangible part or strip, wherein the seam extends across the frangible part or strip and the seam does not overlie the tab. In a further embodiment, the frangible part or strip in the flexible barrier film includes at least one line of weakening such as a score line or a line of perforations, and the seam is at least partially discontinuous where it crosses the frangible part or strip of the film. In another aspect, therefore, the present disclosure provides a packaged item, such as a pack of smoking articles, with an outer wrapping comprising a barrier film that is sealed along a seam to form a protective enclosure around the item. The barrier film includes a frangible line or part and the seam extends across the frangible line or part, the seam being at least partially discontinuous where it crosses the frangible line or part of the film. In this discontinuous region of the seam, the barrier film is typically not sealed. In one embodiment, the seam is discontinuous over at least 50 percent, more preferably over at least 75 percent, and even more preferably over about 90 percent, of a full width of the seam, i.e. 
transverse to a longitudinal extent of the seam. In a further embodiment, the seam is sealed over at least a portion of its width; preferably at least about 10 percent of its width. BRIEF DESCRIPTION OF THE DRAWINGS For a more complete understanding of the disclosure herein and the advantages thereof, exemplary embodiments of the disclosure herein are explained in more detail in the following description with reference to the accompanying drawing figures, in which like reference characters designate like parts and in which: FIG. 1 shows a plan view of a flat sheet of barrier film for use in packaging smoking articles according to an embodiment; FIG. 2 shows a side view of an apparatus for packaging smoking articles according to an embodiment; and FIGS. 3A and 3B show plan and cross-sectional views of sealing members for the packaging apparatus in FIG. 2. DETAILED DESCRIPTION The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate particular embodiments of the disclosure herein and together with the description serve to explain the principles of the disclosure herein. Other embodiments of the disclosure herein and many of the attendant advantages of the disclosure herein will be readily appreciated as they become better understood with reference to the following detailed description. It will be appreciated that common and well understood elements that may be useful or necessary in a commercially feasible embodiment are not necessarily depicted in order to facilitate a more abstracted view of the embodiments. The elements of the drawings are not necessarily illustrated to scale relative to each other. 
It will further be appreciated that certain actions and/or steps in an embodiment of a method may be described or depicted in a particular order of occurrences while those skilled in the art will understand that such specificity with respect to sequence is not actually required. It will also be understood that the terms and expressions used in the present specification have the ordinary meaning as is accorded to such terms and expressions with respect to their corresponding respective areas of inquiry and study, except where specific meanings have otherwise been set forth herein. Referring to FIGS. 1 and 2 of the drawings, a packaging apparatus 1 according to a preferred embodiment of the disclosure herein can be partially seen in side view in FIG. 2. The apparatus 1 includes a wrapping device 2 which is configured to wrap a sheet 3 of barrier film 4 around an item P to be packaged. The sheet 3 of barrier film in this particular embodiment is explicitly illustrated in FIG. 1 as a generally rectangular sheet, which is specifically designed to be wrapped around a generally rectangular pack P of cigarettes, cigarillos or other such smoking articles by the wrapping device 2. In this regard, FIG. 1 identifies those portions of the barrier film sheet 3 which correspond to the “front” and “back” faces of the pack P of smoking articles. Furthermore, a peripheral outer region 5 of the barrier film sheet 3, which is cross-hatched or shaded, denotes the portions of the sheet 3 that overlap after the pack P has been wrapped with the film. As seen in FIG. 1, the sheet 3 of barrier film 4 includes two lines 6 of perforations and/or scoring which extend parallel to one another across a width of the sheet 3 to form a frangible or tearable strip 7 in the barrier film. 
In this regard, a tab or flap 8 is provided at one end region of this frangible strip 7 as an activator for a user to grasp and pull for activating the frangible strip 7 and thereby breaking the barrier film 4 along the lines 6 of perforations and/or scoring. The individual sheet 3 of barrier film 4 illustrated in FIG. 1 of the drawings may be provided in a continuous roll of barrier film material that includes a plurality of such sheets 3 joined to one another in series along respective leading and trailing edges thereof L, T with reference to a travel direction (as indicated by the arrow “D”) for the film being fed from a bulk roll into the wrapping device 2 of the packaging apparatus 1. Some typical dimensions for the sheet 3 of barrier film 4 are specifically denoted in millimetres in FIG. 1 for application to a typical pack P of smoking articles. It will be appreciated by persons skilled in the art, however, that such dimensions are merely indicative of one particular embodiment of the disclosure herein and that these dimensions may differ in other embodiments without influencing the inventive concept. The barrier film 4 in this example is preferably comprised of transparent polypropylene material, with the lines 6 of weakness preferably being pre-formed in the film 4 by laser perforation and/or scoring. Alternatively, the barrier film 4 may include a tear-tape and the lines 6 of weakness may be formed along the longitudinal edges of the tear-tape. With reference again to FIG. 2 of the drawings, the wrapping device 2 comprises a holder 10 provided in the form of a rotatable carousel for holding and transporting a plurality of the packs P, each of which has been wrapped with a sheet 3 of barrier film 4. 
In this regard, the carousel 10 is mounted for rotation about a central axis A and includes a plurality of bays or recesses 11 spaced apart around a periphery of the carousel 10 for respectively receiving and holding a single pack P of smoking articles wrapped with a corresponding sheet 3 of the barrier film 4. From the perspective of FIG. 2, the “bottom” ends of the respective packs P with their folded and overlapped portions 5 of the sheet 3 of barrier film 4 face out of the page. Referring further to FIG. 2, the packaging apparatus 1 can be seen to include two sealing stations 20, 20′, each of which has a sealing head 21, 21′ for sealing the sheet 3 of barrier film 4 (e.g. via heat bonding or fusion) around the respective packs P. In particular, as shown in the detail enlargement of FIG. 2, sealing head 21 includes an elongate sealing member 22 (shown in cross-section) having a contact surface 23 for contacting the overlapped portions 5 of the barrier film 4 along a radially outwards facing side edge of the pack P held in the bay or recess 11 of the carousel 10. The sealing head 21 typically includes a heater (e.g. an electric heating element, not shown) for heating the sealing member 22 such that the contact surface 23 seals or fuses the barrier film 4 along a seam 9 over the length of the side edge of the pack P and the barrier film 4 provides a protective enclosure around the pack. In this connection, the carousel 10 transports the packs P held in the respective bays or recesses 11 to each of the two sealing stations 20, 20′ by rotation in the direction R about the axis A. In each of the sealing stations 20, 20′ the side edges of the packs P are aligned with a respective slot 12, 13 formed in a curved, stationary cover plate 14 around the carousel 10. The carousel 10 then pauses with the packs P in these positions to enable the sealing heads 21, 21′ to move radially inwards through the slots 12, 13 to engage and seal the barrier film 4 via the contact surfaces 23, 23′. 
Referring now to FIGS. 3A and 3B of the drawings, the sealing members 22, 22′ of the sealing heads 21, 21′ are illustrated in more detail. The upper parts of FIGS. 3A and 3B show the contact surface 23, 23′ of each respective sealing member 22, 22′ in plan or bottom view (i.e. in a radial direction with respect to the axis A of carousel 10). Each of the contact surfaces 23, 23′ has a non-sealing region 24, 24′ which extends across a width of the respective sealing member 22, 22′ transverse to its longitudinal extent. In this regard, the non-sealing regions 24, 24′ are denoted by cross-hatching and are formed by a respective recess or cavity 25, 25′ in the face of each of the sealing members 22, 22′. Thus, each of the non-sealing regions 24, 24′ forms an at least partial discontinuity in the contact surface 23, 23′ of the respective sealing head 21, 21′. For the purposes of clarity, the two sealing stations 20, 20′ will be referred to in the following description as the first sealing station 20 and the second sealing station 20′, respectively. The various parts of the first and second sealing stations 20, 20′ will also be denoted by the terms “first” and “second”, where this seems appropriate to enhance the clarity of description. In the first sealing head 21, for example, the non-sealing region 24 is of substantially uniform or constant breadth 26 and extends across approx. 90 percent of a width 27 of the first sealing member 22. In so doing, the contact surface 23 of the first sealing head 21 maintains a narrow band or strip of continuity (i.e. approx. 10 percent of an overall width 27 of the contact surface 23) at the right-hand side of the first sealing member 22 as seen in FIG. 3B. By contrast, the non-sealing region 24′ of the second sealing head 21′ extends transversely across the full width 27′ of the second sealing member 22′ to render the contact surface 23′ fully discontinuous in this region. 
Furthermore, the breadth 26′ of the non-sealing region 24′ also varies in the plane of the contact surface 23′ in a stepped fashion between a narrower breadth at the right-hand side and a greater breadth at the left-hand side as shown in FIG. 3A. Referring further to FIGS. 3A and 3B, the lower parts of the drawings show the sealing members 22, 22′ of the first and second sealing heads 21, 21′ in cross-sections taken in directions X-X and X′-X′ in a plane perpendicular to the axis A of the carousel 10. In this way, the geometry of the non-sealing regions 24, 24′ of the respective first and second sealing heads 21, 21′ can be seen to vary in three dimensions. Thus, when the contact surface 23 of the first sealing head 21 is in contact with the barrier film 4 overlapping along the side edge region of the pack P of smoking articles, the area denoted “a” at the right-hand side of the first sealing member 22 is in direct contact with the barrier film 4, whereas the part denoted “b” is somewhat spaced from (e.g. by 0.1 mm), and does not contact, the barrier film 4. Indeed, the entire non-sealing region 24 does not contact the barrier film 4 and therefore does not contribute to the formation of the seam 9. The greater a height 28 of the recess or cavity 25 forming the non-sealing region 24, the greater the distance between the first sealing member 22 and the barrier film 4 in the non-sealing region 24 and the less the radiant heat which emanates from the sealing member 22 may negatively impact upon the barrier film in that non-sealing region. Regarding the second sealing head 21′, it is apparent that the second sealing member 22′ is even further spaced (i.e. due to height 28′ of the cavity 25′) from the barrier film 4 in the non-sealing region 24′. 
At the right-hand side denoted by “c”, the spacing may be about 0.1 mm, whereas this increases approximately linearly over the width 27′ of the second sealing member 22′ to about 0.2 mm at the side denoted by “d”. It will be noted here that while the heights 28, 28′ are represented in FIGS. 3A and 3B as varying across the widths 27, 27′ of the sealing members 22, 22′, respectively, they could also be of substantially constant value across the widths 27, 27′ of the sealing members 22, 22′ without significantly altering the sealing performance of the sealing members. By arranging the first and second sealing members 22, 22′ so that the non-sealing regions 24, 24′ thereof are aligned with the frangible strip 7 in the sheet 3 of barrier film 4 at each of the respective sealing stations 20, 20′, the barrier film 4 can be sealed or heat bonded along the seams 9, 9′ at the side edge of the pack P without negatively or adversely affecting subsequent operation of the frangible strip 7 by a user. That is, by configuring the sealing members 22, 22′ in this way, the first and second sealing heads 21, 21′ ensure that a desirable seal is created along the seams 9, 9′ while the non-sealing regions 24, 24′ simultaneously ensure that the frangible strip 7 in the sheet 3 of barrier film 4 retains its operational efficacy. In this regard, the second sealing head 21′ is preferably arranged such that the broader part of the non-sealing region 24′ overlies the tab or flap 8 of the frangible strip 7, thereby ensuring that the activator is least affected by the heat sealing or bonding of the barrier film 4 and thereby also ensuring proper commencement to the activation of the frangible strip 7. Although specific embodiments of the disclosure herein have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternative and/or equivalent implementations exist. 
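The spacing profile just described lends itself to a short numeric sketch. The snippet below is purely illustrative: the function names, the 0-to-1 width coordinate and the linear model are assumptions built only on the quoted 0.1 mm and 0.2 mm end values (and the description itself notes the heights could instead be substantially constant).

```python
def spacing_mm(x, s_c=0.1, s_d=0.2):
    """Spacing between sealing member and barrier film at fractional
    width position x (0 = side "c", 1 = side "d"), assuming the
    approximately linear variation described for the second member."""
    return s_c + (s_d - s_c) * x

def sealed_width_fraction(non_sealing_fraction):
    """Share of the contact surface that still seals; e.g. the first
    sealing member keeps roughly 10 percent continuity when the
    non-sealing region covers about 90 percent of its width."""
    return 1.0 - non_sealing_fraction

# Midway across the width the spacing sits halfway between the ends.
print(round(spacing_mm(0.5), 3))             # 0.15
print(round(sealed_width_fraction(0.9), 3))  # 0.1
```

A constant-height variant would simply return `s_c` regardless of `x`, which, per the passage above, would not significantly alter sealing performance.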
In this regard, it will be noted that the shape of the item P to be packaged is not critical to the disclosure herein. Cigarette packs will typically have a rectangular shape, but other shapes are also conceivable. It should be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration in any way. Rather, the foregoing summary and detailed description will provide those skilled in the art with a convenient road map for implementing at least one exemplary embodiment, it being understood that various changes may be made in the function and arrangement of elements described in an exemplary embodiment without departing from the scope as set forth in the appended claims and their legal equivalents. For example, while the specific embodiments described with respect to FIGS. 3A and 3B involve two sealing members 22, 22′ forming two parallel seams 9, 9′ along the side of the pack P, it will be appreciated that the apparatus 1 and method of the disclosure herein may also include a single sealing head 21 with one sealing member 22 to form a single seam 9 along the side of each item or pack P. Generally, this application is intended to cover any adaptations or variations of the specific embodiments discussed herein. Also, it will be appreciated that in this document, the terms “comprise”, “comprising”, “include”, “including”, “contain”, “containing”, “have”, “having”, and any variations thereof, are intended to be understood in an inclusive (i.e. non-exclusive) sense, such that the process, method, device, apparatus or system described herein is not limited to those features or parts or elements or steps recited but may include other elements, features, parts or steps not expressly listed or inherent to such process, method, article, or apparatus. 
Furthermore, the terms “a” and “an” used herein are intended to be understood as meaning one or more unless explicitly stated otherwise. Moreover, the terms “first”, “second”, “third”, etc. are used merely as labels, and are not intended to impose numerical requirements on or to establish a certain ranking of importance of their objects. While at least one exemplary embodiment of the present disclosure(s) is disclosed herein, it should be understood that modifications, substitutions and alternatives may be apparent to one of ordinary skill in the art and can be made without departing from the scope of this disclosure. This disclosure is intended to cover any adaptations or variations of the exemplary embodiment(s). In addition, in this disclosure, the terms “comprise” or “comprising” do not exclude other elements or steps, the terms “a” or “one” do not exclude a plural number, and the term “or” means either or both. Furthermore, characteristics or steps which have been described may also be used in combination with other characteristics or steps and in any order unless the disclosure or context suggests otherwise. This disclosure hereby incorporates by reference the complete disclosure of any patent or application from which it claims benefit or priority.
Qalandiya, Mon 12.9.11, Afternoon Translator: Charles K. We returned to the Qalandiya checkpoint after not having been here for almost the whole summer. The situation hasn’t improved. The place is neglected, dirty and smelly – very repulsive. The Palestinians forced to come here are long-suffering; they joke about the situation rather than complaining, waiting submissively for the soldiers to do their job and be good enough to allow them to cross. The soldiers, for their part, work slowly and take breaks whenever they feel like it, regardless of how many people are waiting in line in the heat and the discomfort of the checkpoint. 16:30: When we arrived at Qalandiya there were two inner lanes open and a line of more than 50 people in the northern shed. 16:50: We joined the line in the shed, waited 20 minutes for the revolving gate to open and managed to get in and reach the inner lanes. At 17:10 they opened a third lane. The soldiers at Lane 4 (where we stood) took a break and didn’t let anyone in for inspection for a long time. We finally gave up. Netanya called the humanitarian office and I called the crossings authority to report that nothing was moving. And then something began moving and we got through ten minutes later. We had waited 50 minutes in lines to get through the checkpoint! Just the thought that people must go through this obstacle course twice a day, a few times a week, makes one sick! When we exited on the south side we walked over to see what was happening at the crossing for people arriving by bus. People told us the line was short and moved quickly. We returned to the northern shed and saw that the revolving gate was open and the lines had disappeared, including at the inner lanes. We left Qalandiya for Jerusalem at 17:30.
https://machsomwatch.org/en/node/18037
I was looking online for wine since I am thinking I might want to serve alcohol at the wedding. My question that I need answered from the hive is this, “How many bottles of wine should I get if I am having 61 guests?” For the people invited they aren’t heavy drinkers, probably three little glasses of wine would be the top, and 8 out of my 61 guests are under age. How many bottles do you think I should have? 20 bottles? A million bottles?! LOL.
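For what it’s worth, the arithmetic can be sketched out: a standard 750 ml bottle pours about five glasses (that figure is the usual rule of thumb, not anything from the thread), so 53 drinkers (61 guests minus the 8 under-age ones) at roughly three glasses each lands near 32 bottles.

```python
import math

def bottles_needed(guests, underage, glasses_each=3, glasses_per_bottle=5):
    """Round up, since partial bottles still have to be bought whole."""
    drinkers = guests - underage
    return math.ceil(drinkers * glasses_each / glasses_per_bottle)

print(bottles_needed(61, 8))  # 32
```

So 20 bottles would likely run short if everyone actually drank three glasses; a million is probably overkill.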
https://boards.weddingbee.com/topic/ervbody-in-the-club-get-tipsay/
As I evaluated my reading habits at the end of last year, I began to recognize a diminishing ability to give sustained attention to the longer and more involved discourses and arguments contained in books. So one of my goals this year was to read more books. Cover to cover. I set a specific goal and ended up surpassing it by 60%. It was a good year for reading. Here are some of the best things I read this year. This is not nearly the complete list and it is a mixture of both old and new, and secular and sacred. It is not an endorsement of everything that is in these books, but all these books encouraged, helped and challenged me in particular ways. Biblical/Theological books: - Heath Lambert, Finally Free: Fighting for Purity with the Power of Grace - Denny Burk, What is the Meaning of Sex? - J. D. Greear, Stop Asking Jesus Into Your Heart: How to Know for Sure You Are Saved - Thomas Boston, The Crook in the Lot - Jonathan Aitken, John Newton: From Disgrace to Amazing Grace - John Dyer, From the Garden to the City: The Redeeming and Corrupting Power of Technology - Robert Jones, Pursuing Peace: A Christian Guide to Handling Our Conflicts Non-biblical books:
https://wordsofgrace.blog/2013/12/29/good-reads-from-2013/
Alg. Vw. General Terms and Conditions Progimpex Article 1: Applicability 1.1 Unless explicitly agreed otherwise in writing, these general terms and conditions apply to all offers made by and agreements concluded with PROGIMPEX. By placing an order and/or signing any contract whatsoever, the customer acknowledges the applicability of these general terms and conditions. The customer expressly agrees to our terms and conditions, to the exclusion of all others, and explicitly waives reliance on his own. Any clause that is in conflict with these general terms and conditions will not be accepted by us. 1.2 A course of action contrary to the current general terms and conditions, even occurring several times, does not entitle the customer to invoke this and is not an acquired right on his part. 1.3 The possible nullity of one of the present provisions does not entail the nullity of the other provisions. Article 2: Establishment of agreements 2.1 Unless explicitly stated otherwise, the offers issued by PROGIMPEX are binding for a period of 30 days after the date of the offer. All agreements are deemed to have been concluded at the registered office of PROGIMPEX. 2.2 All orders are irrevocable. The full or partial cancellation by the customer of an order, without prejudice to PROGIMPEX’s right to claim the entire execution of the order, gives rise to payment of a fixed compensation of 30% on the total price or on the part that has not been purchased, respectively, and more if there is reason to do so. 2.3 Prototypes. Each prototype or any modification to the prototype is invoiced separately and paid in full in advance. The customer may have the prototype modified up to five times. The approval of the prototype is a condition precedent for the entry into force of the agreement. 
In case of approval of the prototype, 50% of the costs for (the modifications to) the prototype will be deducted from the final order, with the exception of the transport costs, which are fully borne by the customer. In the absence of approval, the prototypes will be returned to PROGIMPEX on first request. Article 3: Delivery and term of delivery 3.1 Unless otherwise agreed in writing, the stated delivery times are purely indicative and the exceeding thereof does not give rise to compensation by PROGIMPEX. 3.2 The ordered goods are delivered ex works in the warehouse of PROGIMPEX and must be collected by the customer within eight days of notification of arrival, failing which PROGIMPEX reserves the right to charge storage costs by operation of law and without notice of default. 3.3 Partial deliveries are permitted and may already give rise to invoicing. Article 4: Price 4.1 All prices and rates are net, ex warehouse, including normal packaging. Any direct or indirect, current or future tax, VAT, levy, duty, cost, fine or fee (for reprography, authors, publishers or otherwise), as well as all exchange rate risks, are always borne by the customer, who irrevocably and specifically declares to bear them and, if necessary, to indemnify PROGIMPEX. 4.2 Each order to Progimpex for the design or illustration of a particular good, or for making a prototype of a particular good, is subject to payment of the costs by the customer, unless the customer places an order for the good to be designed within the agreed minimum order quantity. The costs for the shipment of the designed prototypes are always at the expense of the customer, regardless of an order within the minimum agreed order quantity. Article 5: Payment 5.1 Unless otherwise agreed in writing, the payment terms are as follows: 50% upon final order, 40% upon shipment from the Far East and 10% upon delivery to the customer. Invoices are payable at the registered office of PROGIMPEX. 
5.2 The invoices are deemed to have been definitively accepted in the absence of a registered protest letter within 8 days of the invoice date. 5.3 In the event of late payment, the amount will be increased by operation of law and without any prior notice of default:
- with a lump-sum indemnity of 12% on the principal amount;
- as well as default interest of 1% per month, whereby part of a month counts as a whole month, until the day of full payment.
PROGIMPEX is also entitled to reimbursement of court costs and to compensation of all relevant recovery costs. Any payment received by PROGIMPEX will first be charged against the accrued interest and damages, then against the principal sum of the first due invoice. 5.4 In the event of late payment of an invoice, all other outstanding claims against the customer will become due and payable by operation of law and without prior notice of default. 5.5 In the event that the customer’s solvency is called into question, for example by non-payment or late payment of invoices, the seller has the right to require advance payment or to request security for the deliveries still to be made, failing which PROGIMPEX is entitled to immediately and unilaterally dissolve the agreement at the expense of the customer, with payment of compensation of 30% on the total price, or more if there is reason to do so. PROGIMPEX also reserves the right to suspend the execution of all current orders, without prior notice of default and without compensation. Article 6: Retention of title The customer irrevocably and specifically accepts and acknowledges that the delivered goods remain the property of PROGIMPEX until the purchase price, in principal and accessories, has been paid in full. Until then, the customer is not entitled, under penalty of liability, to pledge the goods or transfer them to third parties, and must oppose any seizure and notify PROGIMPEX thereof immediately. 
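Clause 5.3 is mechanical enough to express as a small calculation. The sketch below is only one possible reading of the clause (simple rather than compound interest, the 12% indemnity applied once on the principal); the function and its parameters are illustrative and not part of the conditions themselves.

```python
import math

def amount_due(principal, months_late):
    """Amount due under a clause-5.3-style rule: principal, plus a 12%
    lump-sum indemnity, plus 1% default interest per month, where part
    of a month counts as a whole month."""
    months = math.ceil(months_late)
    indemnity = 0.12 * principal
    interest = 0.01 * principal * months
    return principal + indemnity + interest

# An invoice of 1,000 paid 2.5 months late counts as 3 months:
print(amount_due(1000, 2.5))  # 1150.0
```

Under this reading, the indemnity dominates for short delays, while the monthly interest only becomes significant after many months.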
Article 7: Liability 7.1 Visible defects must be protested to PROGIMPEX by registered letter within 8 days after receipt of the goods by the customer, under penalty of forfeiture of the claim. 7.2 The customer accepts minor differences between the order (based on the proofs, samples, models and/or demonstrated goods) and the actual delivery, the above-mentioned elements being given as an indication only. Pantone colors are reproduced according to best practice, but the result is always approximate and depends on the carrier material. The customer accepts differences from the ordered colors, and also in terms of materials and overprint. This also applies to the quantities delivered, for which, unless stated otherwise in the order confirmation, a margin of 5% more or less is accepted, even if this results in a higher or lower price. Our customer expressly waives any legal claim with regard to these elements. 7.3 In all cases, the liability of PROGIMPEX towards the customer is limited to [20%] of the total purchase price, excluding all costs (transport,…) and all taxes (VAT, customs,…) of whatever nature. Article 8: Intellectual rights - property right 8.1 The intellectual and/or industrial rights to works, models, drawings or other material made by order of the customer and by PROGIMPEX remain with PROGIMPEX, unless otherwise agreed in writing. 8.2 Insofar as the work is supplied by the customer to PROGIMPEX for further exploitation, the customer declares that he holds the necessary intellectual and/or industrial rights to make use of it or have it used. The customer will fully indemnify PROGIMPEX against any claim in this regard from a third party. 8.3 Unless explicitly prohibited by the customer, PROGIMPEX is allowed to mention the delivered products as a reference and to depict them in its publicity. 
Article 9: Force majeure Cases of force majeure, for whatever reason, including all malfunctions and obstacles in the company and in the deliveries, all unforeseen events at PROGIMPEX or at the companies from which we obtain our goods, all transport obstacles or delays, the non-delivery of the goods by suppliers, strikes, lockouts, export or import bans or restrictions, fire or accidents, mobilisation, war or disturbances, or legal provisions, give us the right to cancel or suspend our delivery and performance obligations, in whole or in part, definitively or temporarily, without PROGIMPEX being able to be held liable for the damage caused thereby.
https://plucheknuffelsmaken.be/en/alg-vw-progimpex-webshop/
Why does my iPhone show the wrong battery percentage? Apple is investigating a bug that causes the iPhone 6s and 6s Plus to display the incorrect battery percentage. The bug is caused by changing the time either manually or by traveling to different time zones. To fix it, go into Settings > General > Date & Time and turn ‘Set Automatically’ on. Why is my phone not showing the correct battery percentage? It’s likely that the battery indicator won’t say 100 percent, so plug the charger back in (leave your phone on) and continue charging until it says 100 percent on-screen as well. Unplug your phone and restart it. If it doesn’t say 100 percent, plug the charger back in until it says 100 percent on screen. How do I calibrate my iPhone 5s battery? Step By Step Battery Calibration
- Use your iPhone until it shuts off automatically.
- Let your iPhone sit overnight to drain the battery further.
- Plug your iPhone in and wait for it to power up.
- Hold down the sleep/wake button and swipe “slide to power off”.
- Let your iPhone charge for at least 3 hours.
Why does my iPhone battery say 1%? If you power off when it says 1% and the battery still has charge, you’re not resetting anything. If it’s still saying 1%, then it’s not shutting itself down. Keep restarting when it powers off until you see a black screen with a battery on it and nothing else. Then do a full charge, and leave the phone to charge overnight. Why is my battery percentage not increasing? If your computer is showing a charger being plugged in and yet the battery percentage is not increasing, it could just be a case of software malfunction, or the battery may be too old and might be charging too slowly. It is also possible that the charger itself is faulty and you may need to replace it. What do you do when your battery percentage won’t change? Leave the phone off while it is charging. Turn your phone on and wait for it to boot. 
On the home screen, if the battery meter shows 100%, unplug it from the charger; if it does not, switch it off and continue to charge until the battery charge displays 100%. What to do when your iPhone battery percentage is 0%? In other words, let the iPhone run through a charge cycle: fully charge the iPhone and use the phone throughout the day until the battery reaches 0%. If while using the iPhone you don’t see that same issue, great; but I would still let the battery go to empty. How can I tell how much battery is left on my iPhone 5? Knowing how much battery you have left on your iPhone 5 is very important, and the default setting on the device will show the iPhone 5 battery percentage as an image. But this default icon view of the remaining battery can be a little vague. What’s the percentage of battery life on a cell phone? Normally I get 2-3 days from a full battery; now suddenly not even 1 day. Sometimes I have 30% and in 10 seconds it drops 20%; I restart and suddenly it’s up to 50%; I leave it and it drops to 10% within a few minutes. Isn’t that wrong? Also, when it has less than 20%, the phone dies; I try to start it up and it starts up with 30%. Is the battery life on the iPhone 5 good? While the iPhone 5 does have a very good battery life, the usability and convenience of the device means that you will probably be using it heavily throughout the day. Heavy usage will drain the battery faster and, as a result, you might not be able to go an entire day away from home without a charge.
https://www.pursuantmedia.com/2020/05/20/why-does-my-iphone-show-the-wrong-battery-percentage/
Artificial Intelligence (AI) is steadily growing as a policy issue. This requires building an understanding of the benefits (medical diagnostics, environmental efficiency) and the challenges that it might bring. Amidst a lot of hype, there should be caution about the state of technological development and implementation. The OECD held a high-level conference on AI in Paris on 26-27 October. With the main business leaders and experts in attendance, it is clear that it is not only the trade unions which are concerned about its employment and societal effects. We are still far away from general artificial intelligence (i.e. applications that can perform tasks at a comparable or higher level of cognitive capacities and judgement as humans). No AI system has flexible cognition or the capacity to make inferences. Instead, we are dealing with “narrow AI”: online translation services or predictive data analytics (e.g. financial services). Yet, there is no doubt that big data (and cross-border data flows) and computational power reinforce each other, and machine learning thrives on sophisticated algorithms. It will be important to keep track of new milestones in AI research and applications, as well as to look at the immediate impact of “narrow” AI on all economic sectors along business value chains. Keeping a human-centred approach over its introduction, design and use is pivotal. Trade unions need to assume a central role in industrial relations to prevent high societal costs including security risks, job displacements and discriminative algorithms. The mainstreaming of AI should not deepen inequalities in income and opportunities as any productivity gains from AI, and digitalisation at large, should be shared fairly. Trade unions made clear that there will be no public acceptance for radical widespread disruption led by a few. 
Instead, policy makers, the social partners, and the technical and academic communities should strive for a digital diffusion with a strong social dimension. To be able to anticipate and devise strategies, it is important to look at the drivers, key players, elements for scenario building and policy needs:

Drivers
- Reinforcing dynamics between machine and deep learning, big data and computing power;
- The convergence between mutually-reinforcing technologies such as the Internet, digitisation, big data analytics, cloud computing, and AI;
- An exponential rise in investments: the OECD estimates that the "AI market" will be at around USD 70bn by 2020, with the number of AI acquisitions doubling from 2015 to 2016.

Key players
- A handful of firms, namely the six biggest digital businesses from the United States and a parallel thriving market in China, are driving AI corporate research, acquiring start-ups and implementing systems. They own the majority of data, the very building blocks of AI;
- Partnerships discussing the application of AI are mostly disconnected from one another and/or co-sponsored by major industry players – there is a lack of publicly led dialogue, with exceptions in a few OECD member states.
Scenarios
To anticipate the impact of AI and understand its network effects, the following aspects need to be considered across policy silos:
- the broader effects and level of adaptation of digital technologies and the combination thereof;
- predictions on automation and its net effect on job growth and income inequality;
- changing occupational tasks and skills needs (complexity, routine content, level of collaboration);
- changing business models affecting organisational environments – including the increase of cross-border operations, servicification and the value of data, all influencing working conditions and overall employment;
- the underlying market structure and investment streams;
- the scope of application with regard to functionality and spread, including by differentiating sectors, global regions and time scales (short-term, medium-term – up until 2030 – and long-term);
- the costs of implementation and maintenance.

Policy needs
Public policy needs to look into the economic, social (including labour market), ethical and legal aspects of AI, as several risks arise and regulatory frameworks are not keeping up. Policy should:
- develop operational, legal and ethical standards and avoid a fragmentation of rules and regulations;
- set human-in-command requirements, including the right of explanation and the principle that robots and AI must never be "humanised";
- devise and finance transition strategies for workers to retain or change their job if the occupational task content is significantly altered by AI;
- engage social partners in industrial and innovation dialogue processes towards ensuring the appropriate parameters for standardisation, fairer outcomes through collective bargaining, and the autonomy of workers in machine-to-human interactions;
- anticipate what competencies are needed to complement tasks performed by cognitive technologies, and develop training policies and the underlying financing under a lifelong-learning prism with the participation of trade unions in governance, design, implementation and oversight;
- ensure the quality of the data sets that AI is built on, as bad algorithms may lead to detrimental outcomes: among others, challenges to data ownership arise in view of the opacity of data processing and re-purposing, and ways towards the anonymisation of personal data (including privacy impact assessments) should be explored;
- audit machine learning techniques against bias and security risks, and discuss liability, consumer protection and Occupational Health and Safety (OHS);
- support public R&D, which currently lacks the resources needed to pursue longer-term goals compared to corporate laboratories, and to this effect encourage innovation eco-systems and clusters in regions;
- create incentives for simulation and validation systems when testing AI, and make them obligatory.

Role for trade unions and social dialogue
Taking all of the above into account – the challenges to occupational content, the apparent market concentration, and the lack of public dialogue and regulatory standards – calls for a stronger involvement of trade unions: through collective bargaining to set and maintain wage levels (also for emerging occupations), to agree on the design and implementation of training programmes as well as of data usage and protection, and by bringing the workers' voice from the shop floor to alert to security, safety and working-time challenges. From recent global framework agreements to technological agreements at firm level, social dialogue is becoming essential to ensure the empowerment of workers, to secure responsible business conduct and to implement an overall long-term vision for sustainable business operations.

Towards a social dimension for AI diffusion
As set out in the TUAC recommendations on Digitalisation and the Digital Economy (February 2017), all technological transformations, including the diffusion of AI, should be accompanied by "just transition" principles for workers.
Such a policy framework should address, among others, the uncertainties regarding job impacts, the risks of job losses, of undemocratic decision-making processes and of lowering rights at work, as well as regional or local economic downturn. While the framework was initially developed by trade unions in the context of climate change and endorsed in the COP21 agreement, its principles are valid for addressing the digitalisation of economies, including:
- Research and early assessment of social and employment impacts;
- Social dialogue and democratic consultation with social partners and stakeholders;
- Active labour market policies and regulation, including training and skills development;
- Social protection, including securing of pensions;
- Economic diversification plans;
- Sound investments leading to high quality, decent jobs.
https://members.tuac.org/en/public/e-docs/00/00/13/EC/document_news.phtml
34:8-45.1: Consideration as Health Care Service Firm; terms defined. 34:8-45.1a: Memorandum of understanding. 34:8-45.1b: Report to Governor, Legislature. 34:8-45.1c: Rules, regulations. 34:8-45.2: Rules, regulations. 34:8-46: Cases where act not applicable 34:8-47: Application for employment agency license 34:8-48: Application for agent's license; cancellation of license; issuance of new license; conditional license 34:8-49: Posting of bond as surety; suit on bond; revocation of license 34:8-50: Annual fees 34:8-51: Requirements 34:8-52: Violations 34:8-53: Refusal or revocation; suspension; renewal 34:8-54: Powers of director 34:8-55: Investigation 34:8-56: Service of notice or subpoena 34:8-57: Order from Superior Court 34:8-58: Injunction; other court actions 34:8-59: Action authorized after finding of violation 34:8-60: Penalties for violation of cease and desist order 34:8-61: Additional penalties 34:8-62: Director to recover attorneys' fees and costs 34:8-63: Certificate of indebtedness to clerk 34:8-64: Registration of consulting firm; revocation; suspension 34:8-65: Registration of career consulting or outplacement organization; fee; bond; explanation of product or services; cancellation of contract; complaint 34:8-66: Registration of prepaid computer job matching service or job listing service; annual fee; bond; contract; refund conditions; violations 34:8-67: Definitions relative to employee leasing companies. 34:8-68: Provisions of leasing agreements. 34:8-68.1: Responsibilities of client company. 34:8-69: Relationship between leasing company, client company. 34:8-70: Registration of leasing company. 34:8-71: Registration, annual reporting. 34:8-72: Co-employment of covered employees. 34:8-73: Actions upon entry, dissolution of leasing agreement. 34:8-74: Calculation of unemployment benefit experience. 34:8-75: Inapplicability to temporary help service firms, unit operating as cooperative. 34:8-76: Noncompliance, rescinding of registration. 
34:8-77: Compliance with C.17:22A-1 et seq. 34:8-78: Rules, regulations.
https://njlaw.rutgers.edu/collections/njstats/showsections.php?title=34&chapt=8
Q: Using cgexec vs cgroup.procs for memory accounting using cgroups

I ran into an interesting situation yesterday with the cgroups memory controller. I had always thought that the memory reported by cgroups was the process's total memory consumption, but it seems that is not the case. I wrote the following Java program for testing:

    import java.util.Scanner;

    class TestApp {
        public static void main(String args[]) {
            int[] arr;
            Scanner in = new Scanner(System.in);
            System.out.println("Press enter to allocate memory");
            in.nextLine();
            arr = new int[1024*1024];
            System.out.println("Allocated memory");
            while(true);
        }
    }

When running the above with cgexec, the memory usage is vastly different from when echoing the PID of the JVM into the cgroup.procs file of the cgroup. It seems that cgroups only report memory usage incurred after the process has been placed inside the cgroup. How does the cgroup account for memory? It seems that when using cgexec, the JVM's consumption is accounted for. On the other hand, when starting the JVM outside of the cgroup and moving it in later by writing the PID into the cgroup.procs file, the memory consumption reported in memory.usage_in_bytes remains zero until I hit enter, at which point consumption goes up to 1024 * 1024 * 4 as expected. Furthermore, the memory consumption reported by cgroups is not exactly the same as the memory consumption reported by top, for example.

Edit: I created the following C program and used it for testing, and I am seeing the same results. When using cgclassify, memory utilization remains 0 until hitting enter. On the other hand, when using cgexec, memory utilization is > 0 before hitting enter.
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>   /* for memset */

    int main() {
        printf("Press ENTER to consume memory\n");
        getchar();
        char *ptr = malloc(1024*1024);
        if (ptr == NULL) {
            printf("Out of memory\n");
            exit(1);
        }
        memset(ptr, 0, 1024*1024);
        printf("Press ENTER to quit\n");
        getchar();
        return 0;
    }

A: When a page is allocated and paged in by a process, the allocated memory is tagged with an identifier telling the kernel which specific memory-controller cgroup the memory belongs to (the memory will of course also be counted against any parent of that cgroup). When you migrate a process to a new cgroup, memory that is already allocated doesn't change its tag. It would be very expensive to "retag" everything, and it wouldn't even make sense: suppose a page is shared by two processes and you migrate only one of them to a different cgroup. What would the "new" tag need to be? The page is now being used by two processes in different cgroups. So if you're sitting in the /sys/fs/cgroup/memory cgroup (i.e. your task group ID is listed in /sys/fs/cgroup/memory/tasks and not in the tasks file of any child of that cgroup), anything you allocate is accounted against that cgroup and that cgroup only. When you migrate to a different cgroup (or a child cgroup), only new memory allocations are tagged as belonging to the new cgroup. cgexec will start the JVM already inside a cgroup, so anything allocated at initialisation time will belong to the cgroup created specifically for what you execute. If you start a JVM in the root cgroup of the memory controller, then anything allocated and touched while initialising the JVM will belong to the root cgroup. Once you migrate the JVM to its own private cgroup (with either mechanism) and then allocate and touch some pages, those will of course belong to the new cgroup.
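The accounting rule described in this answer — a page is charged to whichever cgroup the process is in when the page is first touched, and migration does not re-charge existing pages — can be illustrated with a small Python model. This is a toy simulation of the behavior only, not an interface to the real kernel; all class and attribute names are made up for illustration:

```python
class MemCgroup:
    """Toy memory cgroup; usage_in_bytes mimics memory.usage_in_bytes."""
    def __init__(self, name):
        self.name = name
        self.usage_in_bytes = 0

class Process:
    PAGE = 4096

    def __init__(self, cgroup):
        self.cgroup = cgroup   # current cgroup, as listed in cgroup.procs
        self.pages = []        # owner cgroup of each touched page

    def touch(self, nbytes):
        # First touch charges the *current* cgroup, as with cgexec.
        for _ in range(nbytes // self.PAGE):
            self.pages.append(self.cgroup)
            self.cgroup.usage_in_bytes += self.PAGE

    def migrate(self, new_cgroup):
        # Like writing the PID into another cgroup.procs: already-touched
        # pages keep their original owner; only future touches are re-charged.
        self.cgroup = new_cgroup

root = MemCgroup("root")
jail = MemCgroup("jail")

# Start in root (like launching the JVM normally), then migrate.
p = Process(root)
p.touch(1024 * 1024)         # startup allocations are charged to root
p.migrate(jail)              # echo $PID > jail/cgroup.procs
print(jail.usage_in_bytes)   # 0 -- nothing charged to jail yet
p.touch(4 * 1024 * 1024)     # the int[1024*1024] allocation (4 MiB)
print(jail.usage_in_bytes)   # 4194304 -- only post-migration memory
```

This reproduces the observation in the question: the private cgroup's usage stays at zero until the process touches new memory after migration.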
Introduction
============

Obtaining a syntactic parse is an important step in analyzing a sentence. Syntactic parsers are typically built using supervised learning methods. Several hundred or several thousand sentences are first manually annotated with syntactic parses. Then a learning method is employed that learns from this annotated data how to parse novel sentences. A major drawback of this approach is that it requires a lot of manual effort from trained linguists to annotate sentences. Also, a parser trained in one domain does not do well on another domain without being adapted with extra annotations from the new domain. This drawback becomes even more severe when the domain is that of clinical reports or medical text, because the annotators need to be not only trained linguists but also need sufficient clinical knowledge to understand the clinical terms and sentence forms. This is a rare combination of expertise, which makes the annotation process for clinical reports even more expensive. Also, different genres of clinical reports, like discharge summaries, radiology notes, cardiology reports, etc., are different from each other and hence require separate annotations. On top of that, different hospitals or medical centers may use their own conventions of clinical terms and sentence styles in writing clinical reports, which may require separate annotation effort to adapt a syntactic parser to work for clinical reports across institutions. Besides the annotation effort required, another drawback of supervised syntactic parsing is that it forces a particular \"gold standard\" of syntactic parses which may not be best suited for the end-application in which the syntactic parses will be used.
For example, in the application of semantic parsing, the task of converting a sentence into an executable meaning representation, it was found that the conventional gold-standard syntactic parse trees were not always isomorphic with their semantic trees \[[@B1]\], which lowered the performance of semantic parsing. In the domain of clinical reports, where sentences are often succinct and may not follow typical English grammar, it is not easy to decide the gold standard parses in advance. For example, a sentence like \"Vitamin B12 250 mcg daily\" could be parsed with brackets such as \"((Vitamin B12 250 mcg) daily)\" or \"((Vitamin B12) (250 mcg daily))\" depending upon whether the end-application associates the \"250 mcg\" quantity with \"Vitamin B12\" or with \"daily\". However, during the annotation process, a particular form gets forced as part of the annotation convention without regard to what may be better suited for the end-application down the road. Syntactic parses are not an end in themselves but an intermediate form which is supposed to help an end-application; hence it is best if such an intermediate form is not set in advance but is decided based on the end-application. An alternative to supervised learning for building parsers is unsupervised learning. In this framework, a large set of unannotated sentences, which are often easily obtainable, is given to an unsupervised learning method. Using some criterion or bias, for example, simplicity of the grammar and the corresponding sentence derivations, the method tries to induce a grammar that best fits all the sentences. Novel sentences are then parsed using this learned grammar. While unsupervised parsing methods are not as accurate as supervised methods, the fact that they demand no manual supervision makes them an attractive alternative, especially for the domain of clinical reports, for the reasons pointed out earlier.
An additional advantage of unsupervised parsing is that the grammar induction process itself may be adapted so as to do best on the end-application. For example, instead of using a simplicity bias to guide the grammar induction process, a criterion of maximizing accuracy on the end-application may be used. This way the induced grammar may choose one parse over another for \"Vitamin B12 250 mcg daily\" depending upon which is more helpful for the end-application. In \[[@B2]\], an analogous approach was used to transform a semantic grammar to best suit the semantic parsing application. In this paper, we present an approach for unsupervised grammar induction for clinical reports, which to our knowledge is the first such attempt. We adapt and extend the simplicity bias (or cost reduction) method \[[@B3]\] of unsupervised grammar induction. We chose this method because its iterative grammar-modifying process using grammar transformation operators is amenable to adaptation to any criterion besides simplicity bias. This could be useful for adapting the grammar induction process to maximally benefit some end-application. Another advantage of this method is that it directly gives the grammar in terms of non-terminals it creates on its own; some other existing methods only give bracketings \[[@B4],[@B5]\] or force the user to specify the number of non-terminals \[[@B6]\]. The induced grammar is also not restricted to binary form, unlike in some previous methods \[[@B6],[@B7]\]. After inducing the grammar, in order to do statistical parsing, the probabilities for its productions are obtained using an instance of the expectation-maximization (EM) algorithm \[[@B8]\] run over the unannotated training sentences. We used sentences from discharge summaries of the Pittsburgh corpus \[[@B9]\] as our unannotated data.
Most unsupervised grammar induction methods work with part-of-speech tags because large vocabularies make it difficult to directly induce grammars using the words themselves. Since the language used in clinical reports is a domain-specific sublanguage \[[@B10]-[@B12]\], it uses several terms, like disease names, medications etc., not generally found in normal language. This makes the vocabulary even larger. We also note that for parsing clinical reports, besides using part-of-speech tags, it will be a good idea to also use semantic classes of words because they often affect syntactic structure of a sentence. Hence we decided to also utilize UMLS semantic types \[[@B13]\] of the clinical terms (for example, disease, sign or symptom, finding etc.), which, in a way, are treated like additional part-of-speech tags in the grammar induction process (Figure [1(d)](#F1){ref-type="fig"} shows an example). These semantic types and the part-of-speech tags are obtained using MetaMap \[[@B14]\]. The grammar is then learned in terms of part-of-speech tags and the semantic types of clinical terms. ![**(a) An original sentence, (b) some of its words replaced by UMLS semantic types (bold), (c) its words replaced by part-of-speech tags, (d) its words replaced by part-of-speech tags and UMLS semantic types (bold) wherever applicable**.](2041-1480-3-S3-S4-1){#F1} In the experiments, we first show that the learned grammar is able to parse novel sentences. Measuring accuracy of parses obtained through an unsupervised parsing method is always challenging, because the parses obtained by the unsupervised method may be good in some way even though they may not match the correct parses. The ideal way to measure the performance of unsupervised parsing is to measure how well it helps in an end-application. 
However, at present, in order to measure the parsing accuracy, we annotated one hundred sentences with parsing brackets and measured how well they match the brackets obtained when parsing with the induced grammar.

Cost reduction method for grammar induction
===========================================

For inducing a context-free grammar from training sentences, we adapted the cost reduction method \[[@B3]\], which was based on Wolff's idea of language and data compression \[[@B15]\] and is also known as the simplicity bias method or minimum description length method. The method starts with a large trivial grammar which has a separate production corresponding to each training sentence. It then heuristically searches for a smaller grammar as well as simpler sentence derivations by repeatedly applying grammar transformation operators that combine and merge non-terminals. The size of the grammar and derivations is measured in terms of their encoding cost. We have extended this method in a few ways. We describe the method and our extensions in this section. We first describe how the cost is computed, then describe the search procedure that finds the grammar leading to the minimum cost, and finally describe how the probabilities associated with the productions of the induced grammar are computed.

Computing the cost
------------------

The method uses ideas from information theory and views the grammar as a means to compress the description of the given set of unannotated training sentences. It measures the compression in terms of two types of costs. The first is the cost (in bits) of encoding the grammar itself. The second is the cost of encoding the sentence derivations using that grammar. In the following description we make use of some of the notations from \[[@B16]\].
### Cost of grammar

A production in a context-free grammar (CFG) is written in the form *A* → *β*, where *A* is a non-terminal and *β* is a non-empty sequence of terminals and non-terminals. The cost, *C~P~*, of encoding this production is:

$$C_{P} = \left( 1 + |\beta| \right)\log|\Sigma|$$

where \|*β*\| is the length of the right-hand side (RHS) of the production, and \|Σ\| is the number of terminals and non-terminals in the symbol set Σ. Since it takes *log*\|Σ\| bits to encode each symbol and there are (1 + \|*β*\|) symbols in the production (including the left-hand side (LHS)), the cost *C~P~* of encoding the production is as given in the above equation. Thus the cost of encoding the entire grammar, *C~G~*, is:

$$C_{G} = \sum\limits_{i = 1}^{p}\left( 1 + |\beta_{i}| \right)\log|\Sigma|$$

where *p* is the number of productions and *β~i~* is the RHS of the *i*th production.

### Cost of derivations

Given the grammar, a derivation of a sentence proceeds by first expanding the start symbol of the grammar with an appropriate production and then recursively expanding each of the RHS non-terminals until all the symbols of the sentence are found as a sequence of terminals. At every step in the derivation process, an appropriate production needs to be selected to expand a non-terminal. This is the only information that needs to be encoded in order to encode the sentence. Hence the information to be encoded at every step of the derivation is: which of the \|*P*(*s~k~*)\| productions was used to expand the *k*th non-terminal, *s~k~*, in the derivation process, *P*(*s~k~*) being the set of productions in which *s~k~* is the LHS. This information can be encoded in *log*(\|*P*(*s~k~*)\|) bits. For example, if there is only one way to expand a non-terminal then this information is obvious and requires zero bits to encode.
Hence the cost, $C_{D_{j}}$, of the entire derivation of the *j*th sentence is:

$$C_{D_{j}} = \sum\limits_{k = 1}^{m_{j}}\log\left( \left| P\left( s_{k} \right) \right| \right)$$

where *m~j~* is the length of the derivation of the *j*th sentence. Thus the cost, *C~D~*, of encoding all *q* sentences in the training set is:

$$C_{D} = \sum\limits_{j = 1}^{q}\sum\limits_{k = 1}^{m_{j}}\log\left( \left| P\left( s_{k} \right) \right| \right)$$

### Total cost

In previous work, like \[[@B3]\] and \[[@B16]\], the total cost of the grammar and derivations was taken as simply the sum of the individual costs. However, as we show in the experiments, this does not always lead to good results. The reason, we believe, is that the total cost of derivations depends on the number of sentences, and simply adding this cost to the grammar's cost may lead to an unequal weighting. To remedy this, we introduce a parameter, *f*, taking values between 0 and 1, to separately weigh the two components of the total cost *C* as follows:

$$C = f \cdot C_{G} + \left( 1 - f \right) \cdot C_{D}$$

where *C~G~* is the cost of the grammar and *C~D~* is the cost of all derivations as described before. Note that *f* = 0.5 is equivalent to adding the two components as in the previous work. In the experiments, we vary this parameter and empirically measure the performance.

Grammar search for minimum cost
-------------------------------

It is important to point out that there is a trade-off between the cost of the grammar and the cost of the derivations. At one extreme is the simplest grammar, which has productions like *NT* → *t~i~*, i.e. a non-terminal *NT* that expands to every terminal *t~i~*, and two more productions *S* → *NT* and *S* → *SS* (*S* being the start symbol), and which has very little cost. However, this grammar leads to very long and expensive derivations. It is also worth pointing out that this grammar is overly general and will parse any sequence of terminals.
On the other extreme is a grammar in which each production encodes an entire sentence from the training set, for example, *S* → *w*~1~*w*~2~..*w~n~*, where *w*~1~, *w*~2~, etc. are the words of a sentence. The derivations of this grammar have very little cost; however, the grammar is very expensive, as it has long productions and as many of them as there are sentences. It is also worth pointing out that this grammar is overly specific and will not parse any sentence besides the ones in the training set. Hence the best grammar lies between the two extremes: general enough to parse novel sentences, but not so general that it parses almost any sequence of terminals. This grammar will also have a smaller cost than either extreme. According to the minimum description length principle as well as Occam's razor, a grammar with minimum cost is likely to have the best generalization. We use the following search procedure to find the grammar which gives the minimum total cost, where the total cost is as defined in equation 5. We note that by varying the value of the parameter *f* in that definition, the minimum-cost search procedure can find the different extremes of grammars. For example, with *f* = 1, it will find the first type of extreme grammar with the least grammar cost, and with *f* = 0, it will find the second type of extreme grammar with the least derivation cost. The search procedure begins with a trivial grammar which is similar to the second extreme type of grammar mentioned before. A separate production is included for each unique sentence in the training data. If the sentence is *w*~1~*w*~2~..*w~n~*, a production *S* → *W*~1~*W*~2~..*W~n~* is included along with productions *W*~1~ → *w*~1~, *W*~2~ → *w*~2~, etc., where *W*~1~, *W*~2~, etc. are new non-terminals corresponding to the respective terminals *w*~1~, *w*~2~, etc.
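The trivial initial grammar and the cost computations of equations 1-5 can be sketched in Python. This is a minimal illustration only: the production representation (a set of `(lhs, rhs_tuple)` pairs), the derivation representation (a list of expanded non-terminals), and the non-terminal naming scheme are our own assumptions, not from the paper:

```python
from collections import defaultdict
from math import log2

def initial_grammar(sentences):
    # Trivial starting grammar: for each unique sentence w1..wn, add
    # S -> W1..Wn plus pre-terminal productions Wi -> wi, as described above.
    prods = set()
    for sent in set(sentences):
        words = sent.split()
        prods.add(("S", tuple(w.upper() + "_NT" for w in words)))
        for w in words:
            prods.add((w.upper() + "_NT", (w,)))
    return prods

def grammar_cost(prods):
    # Equations 1-2: C_G = sum_i (1 + |beta_i|) * log|Sigma|
    symbols = set()
    for lhs, rhs in prods:
        symbols.add(lhs)
        symbols.update(rhs)
    return sum((1 + len(rhs)) * log2(len(symbols)) for _, rhs in prods)

def derivation_cost(prods, derivations):
    # Equations 3-4: log|P(s_k)| bits for each non-terminal s_k expanded in
    # a derivation, where P(s) is the set of productions with s on the LHS.
    P = defaultdict(int)
    for lhs, _ in prods:
        P[lhs] += 1
    return sum(log2(P[s]) for deriv in derivations for s in deriv)

def total_cost(prods, derivations, f=0.5):
    # Equation 5: C = f*C_G + (1-f)*C_D; f = 0.5 recovers the plain sum
    # of previous work up to a constant factor.
    return f * grammar_cost(prods) + (1 - f) * derivation_cost(prods, derivations)
```

For instance, `initial_grammar(["a b"])` yields the three productions S → A_NT B_NT, A_NT → a and B_NT → b, whose derivation cost is zero because every non-terminal has exactly one expansion.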
The new non-terminals are introduced because the grammar transformation operators described below do not directly work with terminals. Instances of the two grammar transformation operators are then applied in sequence in a greedy manner, each application reducing the total cost. We first describe the two operators, *combine* and *merge*, and then describe the greedy procedure that applies them. While the *merge* operator is the same as in \[[@B3]\], we have generalized the *combine* operator (which they called the *create* operator). The search procedure is analogous to theirs, but we first efficiently estimate the reductions in cost obtained by different instances of the operators and then apply the one which gives the greatest reduction in cost. They, on the other hand, do not estimate the reductions in cost but actually generate new grammars for all instances of the operators and then calculate the reductions in cost. They also follow separate loops for applying a series of merge and combine operators, whereas we follow a single loop for both operators.

### Combine operator

This operator combines two or more non-terminals to form a new non-terminal. For example, if the non-terminals \"DT ADJ NN\" appear very often in the current grammar, then the cost (equivalently, the size) of the grammar can be reduced by introducing a new production *C*1 → *DT ADJ NN*, where *C*1 is a system-generated non-terminal. Next, all the occurrences of *DT ADJ NN* on the RHS of the productions are replaced by *C*1. As can be seen, this reduces the size of all those productions but at the same time adds a new production and a new non-terminal. In \[[@B3]\], the corresponding operator only combined two non-terminals at a time and could combine more than two non-terminals only through multiple applications of the operator (for example, first combine DT and ADJ into C1 and then combine C1 and NN into C2).
But we found this to be less cost-effective in the search procedure than directly combining multiple non-terminals, hence we generalized the operator. It may be noted that this operator only changes the cost of the grammar and not the cost of the derivations. This is so because in the derivations, the only change is the application of the extra production (like *C*1 → *DT ADJ NN*), and since there is only one way to expand the new non-terminal *C*1, there is no need to encode it (i.e. \|*P*(*C*1)\| is 1, hence its log is zero in equation 4). It is also interesting to note that this operator does not increase the coverage of the grammar, i.e., the new grammar obtained after applying the *combine* operator will not be able to parse any sentence that it could not parse before. The coverage does not decrease either. The reduction in cost due to applying any instance of this operator can be estimated easily in terms of the number of non-terminals being combined and how many times they occur adjacently on the RHS of the current productions in the grammar. Note that if the non-terminals do not appear adjacent a sufficient number of times, this operator can in fact increase the cost.

### Merge operator

This operator merges two non-terminals into one. For example, it may replace all instances of the *NNP* and *NNS* non-terminals in the grammar by a new non-terminal *M*1. This operator is the same as in \[[@B3]\]; we did not generalize it to merging more than two non-terminals because, unlike the *combine* operator, it is combinatorially expensive to find the right combination of non-terminals to merge (for the *combine* operator, we describe this procedure in the next subsection). The *merge* operator can eliminate some productions. For example, if there were two productions *NP* → *DT NNP* and *NP* → *DT NNS*, then upon merging *NNP* and *NNS* into *M*1, both productions reduce to the same production *NP* → *DT M*1.
This not only reduces the cost of the grammar by reducing its size, but also reduces the \|*P*(*NP*)\| value (how many productions have *NP* on the LHS), which results in a further decrease in the derivation cost (equation 4). However, if there were productions with *NNP* and *NNS* on the LHS, then merging them makes \|*P*(*M*1)\| equal to the sum of \|*P*(*NNP*)\| and \|*P*(*NNS*)\|, and replacing *NNP* and *NNS* by *M*1 everywhere in the derivations increases the cost of the derivations. To estimate the reduction in cost due to applying any instance of this operator, one needs to estimate which productions will get merged (hence eliminated) and in how many other productions the non-terminals on the LHS of these productions appear on the LHS. In our implementation, we do this efficiently by maintaining data structures relating non-terminals to the productions they appear in, and relating the productions to the derivations they appear in. We are not describing those details here due to lack of space. As mentioned before, while the cost may decrease for some reasons, it could also increase for other reasons. Hence an application of an instance of this operator can also increase the overall cost. It is important to mention that application of this operator can only increase the coverage of the grammar. For example, given productions *NNS* → *apple*, *VB* → *eat* and *VP* → *VB NNP*, but not a production *VP* → *VB NNS*, \"*eat apple*\" cannot be parsed into *VP*. However, merging *NNP* and *NNS* into *M*1 results in new productions *M*1 → *apple* and *VP* → *VB M*1, which parse \"*eat apple*\" into *VP*. Hence this operator generalizes the grammar.

### Search procedure

Our method follows a greedy search procedure to find the grammar which results in the minimum overall cost of the grammar and the derivations (equation 5).
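The two transformation operators described above can be sketched as follows; this is a minimal illustration under the assumption that a grammar is a set of `(lhs, rhs_tuple)` productions (a representation of our own choosing, not from the paper):

```python
def combine(prods, ngram, new_nt):
    # Combine operator: replace every occurrence of the non-terminal
    # sequence `ngram` on a RHS by `new_nt`, and add new_nt -> ngram.
    out = set()
    n = len(ngram)
    for lhs, rhs in prods:
        new_rhs, i = [], 0
        while i < len(rhs):
            if tuple(rhs[i:i + n]) == ngram:
                new_rhs.append(new_nt)
                i += n
            else:
                new_rhs.append(rhs[i])
                i += 1
        out.add((lhs, tuple(new_rhs)))
    out.add((new_nt, ngram))
    return out

def merge(prods, a, b, new_nt):
    # Merge operator: rename non-terminals a and b to new_nt everywhere;
    # productions that become identical collapse because we use a set.
    def rename(s):
        return new_nt if s in (a, b) else s
    return {(rename(lhs), tuple(rename(s) for s in rhs)) for lhs, rhs in prods}

# The paper's examples: combining "DT ADJ NN" into C1 shrinks each RHS where
# the trigram occurs, and merging NNP/NNS into M1 collapses duplicates.
g = {("NP", ("DT", "ADJ", "NN")), ("S", ("NP", "VB"))}
g = combine(g, ("DT", "ADJ", "NN"), "C1")   # now NP -> C1, C1 -> DT ADJ NN
```

Note that, as the text observes, `merge` can shrink the grammar (duplicate productions collapse in the set) while `combine` never changes which sentences are parseable.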
Given a set of unannotated training sentences, it starts with the trivial, overly specific, extreme type of grammar in which a production is included for each unique sentence in the training set, as mentioned before. Next, all applicable instances of both the *combine* and *merge* operators are considered and the reduction in cost upon applying them is estimated. The instance of the operator which results in the greatest cost reduction is then applied. This process continues iteratively until no instance of either operator results in any decrease in cost. The resultant grammar is then returned as the induced grammar. In order to find all the applicable instances of the *combine* operator, all \"n-grams\" of the non-terminals on the RHS are considered (the maximum value of n was 4 in the experiments). There is no reason to consider the exponentially many combinations of non-terminals which do not even appear once in the grammar. However, in order to find all the applicable instances of the *merge* operator, there is no such simple way but to consider merging every pair of non-terminals in the grammar (it is not obvious that any other way will be significantly more efficient with regard to estimating the reductions in cost). The start symbol of the grammar is preserved and is not merged with any other symbol. Note that this search procedure is greedy and may only give an approximate solution, which could be a local minimum.

Obtaining production probabilities
----------------------------------

The method described in the previous subsections induces a grammar but does not give the probabilities associated with its productions. If there are multiple ways in which a sentence can be parsed using a grammar, then having probabilities associated with its productions provides a principled way to choose one parse over another in a probabilistic context-free grammar parsing setting \[[@B17]\].
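Concretely, such production probabilities can be set by relative frequency from parse counts. The sketch below uses our own simplified representation — a parse as a list of (LHS, RHS) production uses — whereas the real implementation obtains parses with a probabilistic Earley parser:

```python
from collections import defaultdict

def init_uniform(grammar):
    """Initialization: uniform probability over the productions of each LHS."""
    return {
        (lhs, rhs): 1.0 / len(rhss)
        for lhs, rhss in grammar.items()
        for rhs in rhss
    }

def reestimate(parses):
    """One 'hard EM' step: treat the current best parses as correct,
    count production uses, and normalise per LHS (relative frequency),
    as in supervised estimation from a treebank."""
    use = defaultdict(int)
    lhs_total = defaultdict(int)
    for parse in parses:            # a parse = list of (lhs, rhs) uses
        for lhs, rhs in parse:
            use[(lhs, rhs)] += 1
            lhs_total[lhs] += 1
    return {prod: use[prod] / lhs_total[prod[0]] for prod in use}

g = {"NP": {("DT", "NN"), ("DT", "ADJ", "NN"), ("NNP",), ("PRP",)}}
p0 = init_uniform(g)
print(p0[("NP", ("DT", "NN"))])   # 0.25 -- four NP productions, uniform
parses = [[("NP", ("DT", "NN"))], [("NP", ("DT", "NN"))], [("NP", ("NNP",))]]
p1 = reestimate(parses)
print(p1[("NP", ("DT", "NN"))])   # 2/3 -- used in 2 of 3 NP expansions
```

Iterating `reestimate` on the best parses found under the current probabilities gives the EM-style loop described in the next subsection.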
In this subsection, we describe an augmentation to our method to obtain these probabilities using an instance of the expectation-maximization (EM) algorithm \[[@B8]\]. As an initialization step of this algorithm, the probabilities are first uniformly assigned to all the productions that expand a non-terminal so that they sum to one. For example, if there are four productions that expand a non-terminal, say *NP*, then all four productions will be assigned an equal probability of 0.25. Next, using these probabilities, the training sentences are parsed and the most probable parse is obtained for each of them. In the implementation, we used a probabilistic version \[[@B18]\] of the well-known Earley\'s parsing algorithm for context-free grammars \[[@B19]\]. In the following iteration, assuming that these parses are the correct parses for the sentences, the method counts how many times a production is used in the parses and how many times its LHS non-terminal is expanded in them. The corresponding fraction is then assigned as the probability of that production, similar to the way probabilities are computed in a supervised parsing setting from sentences annotated with correct parses. Using these as the new probabilities, the entire process is repeated in a new iteration. Experimentally, we found that this process converges within five iterations. Instead of choosing only the most probable parse for every sentence in each iteration, we also experimented with choosing all parses for a sentence and accumulating fractional counts proportional to the probabilities of the parses. However, this did not make any significant difference.

Experiments
===========

Methodology
-----------

To create a dataset, we took the first 5000 sentences from the discharge summaries section of the Pittsburgh corpus \[[@B9]\] using Stanford CoreNLP\'s sentence segmentation utility.
We ran MetaMap \[[@B14]\] on these sentences to get the part-of-speech tags and UMLS semantic types of words and phrases. MetaMap appeared to run endlessly on some long sentences, hence we restricted the data to sentences with a maximum length of 20 (i.e. all 5000 sentences were of maximum length 20). Since many UMLS semantic types seemed very fine-grained, we chose only 27 of them which seemed relevant for clinical reports (these included \"disease or syndrome\", \"finding\", \"body part, organ, or organ component\", \"pathologic function\", \"medical device\" etc.). All the occurrences of these semantic types were substituted for the actual words and phrases in the sentences. Figure [1(a)](#F1){ref-type="fig"} shows an original sentence from the corpus and 1(b) shows the same sentence in which some words and phrases have been substituted by their UMLS semantic types. Figure [1(c)](#F1){ref-type="fig"} shows the part-of-speech tags of the words of the original sentence as obtained by MetaMap. Note that the part-of-speech tags that MetaMap outputs are not as fine-grained as those in the Penn treebank. Also note that the entire last phrase \"ventricular assist device\" is tagged as a single noun. Finally, the words which were not replaced by the chosen UMLS semantic types were replaced by their part-of-speech tags. Figure [1(d)](#F1){ref-type="fig"} shows the original sentence from 1(a) with words and phrases replaced by part-of-speech tags and UMLS semantic types. We ran all our experiments on sentences transformed into this final form. We note that our experiments are obviously limited by the accuracy of MetaMap in determining the correct part-of-speech tags and UMLS semantic types. We separated 1000 of the 5000 sentences and used them as test sentences to determine how well the induced grammar works on novel sentences. The rest were used as the training data to induce the grammar.
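The token-abstraction step described above can be sketched as follows. The span-based interface is entirely our own assumption — real MetaMap output is considerably richer — and the tag and type names are illustrative only:

```python
def to_abstract_tokens(tokens, semtype_spans, pos_tags):
    """Replace phrases covered by a chosen UMLS semantic type with the
    type label; remaining words fall back to their POS tag.
    `semtype_spans` maps (start, end) word offsets to a type label --
    an assumed, simplified stand-in for real MetaMap output."""
    out, i = [], 0
    spans = sorted(semtype_spans.items())
    while i < len(tokens):
        for (s, e), label in spans:
            if s == i:            # a chosen semantic type starts here:
                out.append(label)  # emit the type label for the phrase
                i = e              # and skip past the covered words
                break
        else:
            out.append(pos_tags[i])  # no type: fall back to the POS tag
            i += 1
    return out

tokens = ["the", "ventricular", "assist", "device", "was", "removed"]
pos = ["det", "noun", "noun", "noun", "aux", "verb"]
spans = {(1, 4): "medical_device"}
print(to_abstract_tokens(tokens, spans, pos))
# ['det', 'medical_device', 'aux', 'verb']
```

Note how the multi-word phrase collapses to a single token, mirroring the transformation from Figure 1(a) to Figure 1(d).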
Out of these 1000 test sentences, we manually put correct parsing brackets on 100 sentences to test the quality of the parses obtained by the induced grammar. We are not aware of any annotated corpus in the clinical report domain which we could have used to measure this performance.

Results and discussion
======================

We first show that by varying the parameter *f* of the total cost (equation 5), which weighs the relative contribution of the cost of the grammar and the cost of the derivations, the grammar induction method is capable of inducing a range of grammars, from very general ones to very restrictive ones. In this experiment, we only considered sentences which have at most 10 words, for both training and test sentences (80% of the sentences in the corpus were of maximum length 10). We later show that performance decreases as we increase the maximum length of the sentences. We apply the method of inducing grammar on 4000 training sentences and measure how many novel sentences in the test data (i.e. those which are not the same as any of the training sentences) were parsable using the induced grammar. Since the grammar induction process starts with the grammar that can parse the training sentences and the grammar transformation operators never reduce its coverage, the induced grammar will always parse the sentences which are present in the training set. Out of the 791 sentences in the test data with a maximum length of ten, 554 sentences overlapped with the training sentences (clinical reports have many repeated sentences). The remaining 237 sentences were novel. In Figure [2](#F2){ref-type="fig"}, we plot what percentage of these novel sentences the induced grammar was able to parse as we varied the *f* parameter from 0 to 0.5. The parses were obtained using Earley\'s context-free grammar parsing algorithm \[[@B19]\]. If *f* is more than 0.5, the induced grammar becomes so restrictive that it almost never parses any novel sentence.
It can be seen that with smaller *f* values, the induction method tries to minimize the cost of the grammar more than the cost of the derivations and hence comes up with a small grammar that is very general and is able to parse almost any sentence. But with larger *f* values, the induction method tries to minimize the cost of the derivations more and comes up with a large grammar which is not very different from a production for every training sentence, and hence cannot parse many novel sentences. In this experiment we disallowed learning recursive productions by making sure a grammar transformation operator does not lead to a recursive production. We did so because we found that by allowing recursive productions, the percentage of novel parses goes from near 100% down to near 0% with nothing in between. Recursions in productions drastically reduce the size of the grammar, hence the process otherwise prefers recursive productions, which often leads to small, overly general grammars (similar to an example given at the beginning of Subsection ). ![**Percentage of novel sentences parsed by different grammars induced by minimizing the total cost of the grammar and the derivations as the value of parameter f, which relatively weighs the two costs, is varied**.](2041-1480-3-S3-S4-2){#F2} Please note that being able to parse a sentence does not necessarily mean that the parse is correct; we show this accuracy in Figure [3](#F3){ref-type="fig"}. Out of the 100 sentences that we manually annotated with correct parse brackets, 70 sentences were of maximum 10 words. We measured *precision*, how many of the brackets (start and end word) in the parses obtained by the induced grammar were present in the correct parses; *recall*, how many of the correct brackets were present in the obtained parses; and *F-measure*, the harmonic mean of precision and recall. We did not consider brackets containing one word or the entire sentence since they will always be correct.
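The bracket scoring just described can be sketched as follows, with brackets represented as (start, end) word offsets — a representation we assume for illustration. The exclusion of single-word and whole-sentence brackets matches the evaluation described above:

```python
def bracket_prf(gold, predicted, sent_len):
    """Unlabeled bracket precision/recall/F-measure, excluding
    single-word brackets and the whole-sentence bracket, which are
    trivially correct."""
    def keep(b):
        s, e = b
        return (e - s) > 1 and not (s == 0 and e == sent_len)
    g = {b for b in gold if keep(b)}
    p = {b for b in predicted if keep(b)}
    tp = len(g & p)                       # brackets found in both
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f

# Toy 5-word sentence: (0, 5) and (3, 4) are excluded from scoring.
gold = {(0, 2), (2, 5), (0, 5), (3, 4)}
pred = {(0, 2), (1, 5), (0, 5)}
print(bracket_prf(gold, pred, 5))  # (0.5, 0.5, 0.5)
```

Since labels like *M*1 and *C*1 are system-generated, only this unlabeled bracketing can be compared against the manual annotation.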
We measured these on novel sentences as well as on the sentences which are also present in the training data, because the system is not given the correct parses of the training sentences, so it makes sense to also measure the parsing performance on them. We measure the performance of bracketing because labeling accuracy cannot be measured, since the system can only come up with system-generated labels (like *M*1, *C*1 etc.). As can be seen from Figure [3](#F3){ref-type="fig"}, the precision increases with higher values of *f* (more restrictive grammars) but the recall overall decreases. The F-measure is found to be maximum in between (60.5% at 0.15 as the value of the parameter *f*). ![**Precision, recall and F-measure of the parsing brackets while varying the value of parameter f which relatively weighs the cost of the grammar and cost of the derivations during the grammar induction process**.](2041-1480-3-S3-S4-3){#F3} In an earlier version of these results, the F-measure had a maximum around 45%. Upon error analysis, we noticed that most of the errors arose because the induced grammar would incorrectly pair subject and verb instead of the traditional way of pairing verb and object. There were also errors of pairing nouns followed by a preposition. Similar errors have been reported previously \[[@B6]\]. These errors could either be because the search for an optimum grammar is only approximate, or it could be that these are in fact reasonable alternate parses. Nevertheless, in order to see the effect, we introduced hard rules in the system to never let the part-of-speech tags of verb, det, prep, aux and conj be the second or later RHS term in any production. This increased the F-measure. But as can be seen from the wide range of F-measures in Figure [3](#F3){ref-type="fig"}, these rules alone are not sufficient to guarantee good performance.
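A sketch of such a hard-rule filter is below. The tag names follow the coarse MetaMap-style tags mentioned earlier, and the function name is our own:

```python
BANNED = ("verb", "det", "prep", "aux", "conj")

def violates_hard_rule(rhs, banned=BANNED):
    """True if any banned POS tag appears as the second or later RHS
    term, i.e. the production would be rejected under the hard rules."""
    return any(sym in banned for sym in rhs[1:])

print(violates_hard_rule(("verb", "noun")))  # False: banned tag only leads
print(violates_hard_rule(("noun", "prep")))  # True: 'prep' is a later term
```

In the induction loop, such a predicate would simply veto any *combine* instance whose new production has a banned tag in a non-initial RHS position.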
In the future, perhaps these biases could be learned from a small amount of supervised data in a semi-supervised grammar induction setting. In the results shown in Figure [3](#F3){ref-type="fig"}, five iterations of the EM algorithm were performed to obtain the probabilities of the productions. Figure [4](#F4){ref-type="fig"} shows how this performance changes with an increasing number of iterations. Only the F-measure is shown for simplicity. The curve with iteration zero shows the performance when no probability is assigned to the productions and simply the first parse is returned when multiple parses are possible. The curve with one iteration shows the performance when uniform probabilities are assigned to the productions in the initialization step. The remaining curves show the performance after subsequent iterations. It can be seen that a huge improvement is obtained once probabilities are used, even when they are simply uniform. The performance shows a small improvement over a few more iterations but converges well within five iterations. ![**F-measure of the parsing brackets with increasing number of iterations of the EM algorithm for estimating probabilities of the productions of the induced grammar**.](2041-1480-3-S3-S4-4){#F4} Finally, we present results on increasing the size of the training data and increasing the maximum length of the sentences (for both training and test sets) in Figure [5](#F5){ref-type="fig"}. The performance was measured on the same bracket-annotated corpus of 100 sentences, all of which were of maximum length 20; 70 of these sentences had a maximum length of 10 and 92 had a maximum length of 15. For each point in the graph, we show the maximum F-measure that was found upon varying the parameter *f*. As can be seen, the accuracy decreases with longer sentences. It is interesting to note that the performance seems to have plateaued with the number of training examples.
![**F-measure of the parsing brackets with different amounts of training data and different maximum length of sentences**.](2041-1480-3-S3-S4-5){#F5}

Future work
===========

An obvious future task is to apply this approach to the other genres of clinical reports present in the Pittsburgh corpus. We have, in fact, already done this, except for manually creating a corresponding bracket-annotated corpus for measuring parsing performance. A bigger annotated corpus for evaluating the current results on the discharge summaries genre is also desirable. Another avenue of future work is to improve the search procedure for finding the optimum grammar; one way would be to use a beam search. Besides using the UMLS semantic types, in the future one may define additional semantic types which could help in some application, for example a negation class of words, or a class of words representing patients. Currently the method first induces the grammar and then estimates the probabilities of its productions from the same data. An interesting possibility for future work would be to integrate the two steps so that the probabilities are computed and employed even during the grammar induction process. This would be a more elegant method and would likely lead to an improvement in the parsing performance.

Conclusions
===========

Unsupervised parsing is particularly suitable for clinical domains because it does not require expensive annotation effort to adapt to different genres and styles of clinical reporting. We presented an unsupervised approach for inducing a grammar for the clinical report sublanguage in terms of part-of-speech tags and UMLS semantic types. We showed that using the cost-reduction principle, the approach is capable of learning a range of grammars from very specific to very general, and achieves the best parsing performance in between.

Competing interests
===================

The author declares that they have no competing interests.
The change from the Mesolithic to the Neolithic can be regarded as a change in the way of life and economy, with the status of the individual growing in emphasis through the period. The transition to the Bronze Age is somewhat artificial and harks back to a now outdated theory that an influx of newcomers from the continent, known as the Beaker People, displaced the indigenous Neolithic people. They take their name from pottery beakers specially made for a new burial rite. High status individuals were buried in a stone box called a short cist. The body was placed in the foetal position with a beaker close to the face; other grave goods were also placed in the grave. Residues found in the beakers suggest that a special drink formed part of the burial ceremony, perhaps shared by the mourners or used as a libation. Current theories see the change in material culture (burial rite, monuments and artifacts) in terms of an influx of new ideas in ritual practice and monumentality. It is likely that the reality is a mix of both immigration and the spread of new ideas and beliefs. The period also marks the use of copper and copper alloys, which, in the beginning of the Bronze Age, would have been rare, precious and possessed only by individuals of the highest status. The earliest metalworking was probably limited to native copper (metallic copper that occurs naturally in small quantities). Native copper resources would soon have run out, and copper would then have had to be obtained by smelting copper ore. This might have been regarded as a magical process as, for example, the green rock malachite (copper carbonate) was transformed by fire into metallic copper. To make bronze from copper it is necessary to alloy it with tin; lead was also added to increase the ease of casting. Metal workers might have been afforded a special position in society. Tin does not occur in Scotland in significant quantities, the main deposits being in Cornwall; bronze must therefore have been an item of trade.
Stone circles, stone rows, henges and standing stones are thought to have originated towards the end of the Neolithic, between 3000 and 2500 BC. Nobody really knows why these monuments were built, but some believe they are associated with observing the movements of the sun, moon and stars. It has been speculated that monuments of this type may indicate a transition from an essentially egalitarian society to an increasingly ranked one, and from ancestor-based worship to more metaphysically based ritual, perhaps associated with a priesthood. One such type of monument, the stone row, is particular to Caithness and Sutherland. These monuments consist of a number of rows of relatively small stones set on edge. In some examples the rows are parallel, while in others they fan out slightly. Bronze Age cist burials are sometimes found close to the axis of such settings. There are twenty-three known examples of this site type in Caithness and Sutherland. The largest of these, the Hill o' Many Stanes in Caithness, has 22 rows set in a fan shape, but there is no known associated cist burial. Despite much analytical work, no one has yet come up with a convincing interpretation of these enigmatic monuments; to this end an excavation of the Battle Moss stone row at Yarrows was conducted in 2003 by a joint team from Cardiff and Glasgow universities. More information about Battle Moss can be accessed in the Cultural Heritage section of the website. In Scotland the Bronze Age is marked by the appearance of a new burial rite, the so-called beaker burial, the earliest examples of which date to about 2500 BC. It is a high status burial in which the deceased was placed on his or her side, in a crouched or foetal position, within a slab-built short cist (box), which was covered by a capstone (lid). A stone or earthen cairn was usually raised over the cist, which in some areas may have been enclosed within a stone kerb or a ditch and bank.
Grave goods would have accompanied the body and were more formally ritualised than those of the Neolithic. Both male and female high status burials included a specially made funerary beaker, often placed opposite the face as if for the deceased to drink from. Organic deposits recovered from these beakers suggest they contained a ritual drink that may have been shared by the mourners or used as a libation. A male burial may include an archer's accoutrements, for example barbed and tanged arrowheads and a wrist guard. Jewellery might accompany high status female burials; one such burial was found to include what is called a spacer-plate necklace. The finest of these were made from jet, the lesser ones from cannel coal or oil shale. Some cist burials contain a ritual pot called a food vessel rather than a beaker. These are morphologically distinct from beakers and may indicate a lower status burial. Such burials tell us something of the status of women in the Bronze Age. Bronze Age cist burials sometimes occur as insertions into existing monuments such as stone settings or chambered cairns. The cist found in the left-hand side of the antechamber of South Yarrows North is an example of this practice. The short cist inhumation was not the only Bronze Age funerary rite; cremation was also practiced, and many Bronze Age cairns contain evidence of it. The Bronze Age farmers appear to have lived as extended families in large roundhouses dating from about 2000 BC, the remains of which are the hut-circles found in great numbers all over Scotland. By about 1500 BC the intensification of agriculture is evident, marked by the widespread clearance of woodland and the establishment of large enclosed fields. By the later Bronze Age, c. 900 BC, non-ferrous metalworking had become highly developed both in terms of technology and decorative style. Possession of such items was associated with high status within an increasingly warrior-based pan-European Celtic culture.
http://www.yarrowsheritagetrust.co.uk/bronze_age_2.html
BACKGROUND OF THE INVENTION

Field of the Invention

The present invention relates to a coil component and a circuit board having the same and, more particularly, to a coil component capable of being used as a pulse transformer and a circuit board having such a coil component.

Description of Related Art

As a coil component capable of being used as a pulse transformer, one described in JP 2014-199906A is known, for example. The coil component described in JP 2014-199906A is an eight-terminal-type coil component in which the flange parts on both sides thereof each have four terminal electrodes, and four wires are wound around a winding core part thereof. One end of one of two paired wires and one end of the other one thereof are short-circuited on a circuit board to constitute a center tap of a primary side coil, and one end of one of the remaining two paired wires and one end of the other one thereof are short-circuited on the circuit board to constitute a center tap of a secondary side coil. However, in the coil component described in JP 2014-199906A, the two terminal electrodes constituting a center tap are provided on the same flange part, so that it is necessary to wind the two paired wires in mutually opposite directions, complicating the winding operation. In addition, when the number of turns of the primary side coil and that of the secondary side coil are different, one of the two wires constituting the primary side coil and one of the two wires constituting the secondary side coil cannot be wound simultaneously. It follows that the four wires need to be wound one by one, thus taking time in the winding operation.

SUMMARY

It is therefore an object of the present invention to provide a coil component in which the wire winding operation is facilitated even when the primary side coil and the secondary side coil differ in the number of turns, and a circuit board having such a coil component.
A coil component according to the present invention includes: a core including a first flange part, a second flange part, and a winding core part positioned between the first and second flange parts; first, second, third, and fourth terminal electrodes provided on the first flange part; fifth, sixth, seventh, and eighth terminal electrodes provided on the second flange part; first and second wires bifilar wound around the winding core part; and third and fourth wires bifilar wound around the winding core part. One and the other ends of the first wire are connected respectively to the first and sixth terminal electrodes, one and the other ends of the second wire are connected respectively to the second and fifth terminal electrodes, one and the other ends of the third wire are connected respectively to the third and eighth terminal electrodes, and one and the other ends of the fourth wire are connected respectively to the fourth and seventh terminal electrodes. The first and second wires cross each other in a first crossing area, and the third and fourth wires cross each other in a second crossing area different from the first crossing area. According to the present invention, the first and second wires are bifilar wound, and third and fourth wires are bifilar wound, thus facilitating wire winding operation. Further, the first and second wires cross each other, and the third and fourth wires cross each other, so that, when a center tap is constituted by short-circuiting two terminal electrodes on a circuit board, it is possible to simplify the pattern shape of a connection pattern for short-circuiting the two terminal electrodes. In addition, the first crossing area where the first and second wires cross each other and the second crossing area where the third and fourth wires cross each other differ in position, so that it is possible to prevent breakage, winding collapse, and the like of the wires due to interference between the two crossing areas. 
In the present invention, the turn number of the first and second wires in the first crossing area when the first flange part is defined as the starting point of winding and the turn number of the third and fourth wires in the second crossing area when the first flange part is defined as the starting point of winding may differ from each other. With this configuration, it is possible to easily prevent interference between the two crossing areas. In this case, the first turns of the first and second wires may cross each other in the first crossing area when the first flange part is defined as the starting point of winding, and the first turns of the third and fourth wires may cross each other in the second crossing area when the second flange part is defined as the starting point of winding. This further facilitates wire winding operation. In the present invention, the winding core part may have a plurality of winding surfaces, and the first and second crossing areas may be positioned on the mutually different winding surfaces. This can reliably prevent interference between the two crossing areas. In this case, the plurality of winding surfaces may include first, second, and third winding surfaces, wherein the extending length of the first, second, third, and fourth wires on the first winding surface may be longer than the extending length of the first, second, third, and fourth wires on the second and third winding surfaces, and the first and second crossing areas may be positioned on the second and third winding surfaces, respectively. With this configuration, two wires cross each other on the winding surface on which the wire extending length is shorter, so that the positions of the wires near the crossing point can be fixed at the corners of the winding core part. In the present invention, the number of turns of the first and second wires and the number of turns of the third and fourth wires may differ from each other. 
Even in this case, the first and second wires can be bifilar wound, and the third and fourth wires can be bifilar wound, thus facilitating wire winding operation. In this case, the number of turns of the first and second wires may be larger than the number of turns of the third and fourth wires, and the third and fourth wires may be wound around the winding core part with the first and second wires interposed therebetween. This allows the first to fourth wires to be wound in an aligned manner. In the present invention, the first, second, third, and fourth terminal electrodes may be arranged in this order in a direction perpendicular to the axial direction of the winding core part, and the fifth, sixth, seventh, and eighth terminal electrodes may be arranged in this order in a direction perpendicular to the axial direction of the winding core part. This allows the paired two wires to be brought close to each other. A circuit board according to the present invention includes a substrate and the above-described coil component mounted on the substrate. The substrate includes: first to eighth land patterns connected respectively to the first to eighth terminal electrodes; a connection pattern for short-circuiting the first and fifth land patterns; and a connection pattern for short-circuiting the fourth and eighth land patterns. According to the present invention, it is possible to simplify the pattern shape of the connection pattern for short-circuiting two terminal electrodes. As described above, according to the present invention, there can be provided a coil component in which wire winding operation is facilitated even when the primary side coil and the secondary side coil differ in the number of turns and a circuit board having such a coil component. 
BRIEF DESCRIPTION OF THE DRAWINGS The above features and advantages of the present invention will be more apparent from the following description of certain preferred embodiments taken in conjunction with the accompanying drawings, in which: FIG. 1 1 is a schematic perspective view illustrating the outer appearance of a coil component according to an embodiment of the present invention; FIG. 2 1 4 1 8 is a schematic view for explaining the connection relationship between the wires W to W and the terminal electrodes E to E; FIG. 3 1 4 13 is a schematic view for explaining the winding pattern of the wires W to W on the winding core part ; FIGS. 4A and 4B FIG. 4A FIG. 4B 13 1 1 2 2 3 4 are each a developed view of the winding core part , where shows a position of the crossing area A of the wires W and W, and shows a position of the crossing area A of the wires W and W; FIG. 5 2 1 a is a schematic plan view illustrating the pattern shape of a substrate on which the coil component according to the present embodiment is mounted; FIG. 6 2 b is a schematic plan view illustrating the pattern shape of a substrate according to a modification. FIGS. 7A and 7B FIG. 7A FIG. 7B 1 4 1 2 3 4 are each a schematic view for explaining the winding pattern of the wires W to W according to a first modification, where illustrates the winding pattern of the wires W and W, and illustrates the winding pattern of the wires W and W. FIG. 8 3 4 is a schematic view for explaining the winding pattern of the wires W and W according to a second modification; FIG. 9 3 4 is a schematic view for explaining the winding pattern of the wires W and W according to a third modification; and FIG. 10 1 4 1 8 is a schematic view for explaining the connection relationship between the wires W to W and the terminal electrodes E to E according to a modification. 
DETAILED DESCRIPTION OF THE EMBODIMENTS Preferred embodiments of the present invention will be explained below in detail with reference to the accompanying drawings. FIG. 1 1 is a schematic perspective view illustrating the outer appearance of a coil component according to an embodiment of the present invention. 1 10 20 1 8 1 4 10 20 10 20 FIG. 1 The coil component according to the present embodiment is a pulse transformer and includes a drum-shaped core , a plate core , terminal electrodes E to E, and wires W to W, as illustrated in . As materials for the drum-shaped core and plate core , magnetic materials having a high permeability such as ferrite are used. The magnetic materials used for the drum-shaped core and plate core may be mutually the same or different, and these preferably have a permeability μ of 10 H/m to 4000 H/m. FIG. 1 10 13 11 13 12 13 1 4 11 5 8 12 1 4 13 1 4 1 8 13 As illustrated in , the drum-shaped core includes a winding core part whose axial direction is the x-direction, a flange part provided at one end of the winding core part in the x-direction, and a flange part provided at the other end of the winding core part in the x-direction. The terminal electrodes E to E are provided on the flange part and arranged in this order in the y-direction. The terminal electrodes E to E are provided on the flange part and arranged in this order in the y-direction. The wires W to W are wound around the winding core part , and one and the other ends of each of the wires W to W are connected respectively to corresponding ones of the terminal electrodes E to E. The yz cross section of the winding core part is a chamfered rectangle. FIG. 2 1 4 1 8 is a schematic view for explaining the connection relationship between the wires W to W and the terminal electrodes E to E. FIG. 
As illustrated in FIG. 2, one and the other ends of the wire W1 are connected respectively to the terminal electrodes E1 and E6, one and the other ends of the wire W2 are connected respectively to the terminal electrodes E2 and E5, one and the other ends of the wire W3 are connected respectively to the terminal electrodes E3 and E8, and one and the other ends of the wire W4 are connected respectively to the terminal electrodes E4 and E7. Of the wires W1 to W4, the wires W1 and W2 are paired and bifilar wound around the winding core part 13. Similarly, the wires W3 and W4 are paired and bifilar wound around the winding core part 13. It follows that the wires W1 and W2 are the same in the number of turns and the winding direction, and the wires W3 and W4 are the same in the number of turns and the winding direction. Although not particularly limited, in the present embodiment, the number of turns differs between the paired wires W1 and W2 and the paired wires W3 and W4. The winding direction is the same between the paired wires W1 and W2 and the paired wires W3 and W4. Further, as illustrated in FIG. 2, the wires W1 and W2 cross each other in a crossing area A1, and the wires W3 and W4 cross each other in a crossing area A2. The crossing area A1 and the crossing area A2 are different in position on the winding core part 13. In the example illustrated in FIG. 2, the first turns of the wires W1 and W2 cross each other when the flange part 11 (terminal electrodes E1 and E2) is defined as the starting point of winding, and the first turns of the wires W3 and W4 cross each other when the flange part 12 (terminal electrodes E7 and E8) is defined as the starting point of winding. FIG. 3 is a schematic view for explaining the winding pattern of the wires W1 to W4 on the winding core part 13.
As illustrated in FIG. 3, in the present embodiment, the wires W1 and W2 are larger in the number of turns than the wires W3 and W4. Further, the wires W1 and W2 are wound in the lower layer, and the wires W3 and W4 are wound in the upper layer. That is, the wires W3 and W4 are wound around the winding core part 13 with the wires W1 and W2 interposed therebetween. Such a winding pattern is obtained by bifilar winding the wires W1 and W2 with a larger number of turns first and then bifilar winding the wires W3 and W4 with a smaller number of turns. With this winding pattern, the wires W1 and W2 with a larger number of turns can be wound on the surface of the winding core part 13 in an aligned manner, and the wires W3 and W4 with a smaller number of turns can be wound along the valley line between the wires W1 and the wires W2 in an aligned manner, which makes winding collapse unlikely to occur. FIGS. 4A and 4B are each a developed view of the winding core part 13. The position of the crossing area A1 of the wires W1 and W2 is indicated in FIG. 4A, and the position of the crossing area A2 of the wires W3 and W4 is indicated in FIG. 4B. As illustrated in FIGS. 4A and 4B, the winding core part 13 has four winding surfaces 13a to 13d. The winding surfaces 13a and 13c each constitute the xy plane, and the winding surfaces 13b and 13d each constitute the xz plane. In the present embodiment, the winding surfaces 13a and 13c are larger in area than the winding surfaces 13b and 13d, so that the extending length of the wires W1 to W4 is longer on the winding surfaces 13a and 13c than on the winding surfaces 13b and 13d. The crossing area A1 of the wires W1 and W2 is positioned on the winding surface 13b, and the crossing area A2 of the wires W3 and W4 is positioned on the winding surface 13d.
Since the extending length of the wires W1 to W4 is shorter on the winding surfaces 13b and 13d as described above, the distance between the crossing area A1 and the corners 14a and 14b of the winding core part 13 and the distance between the crossing area A2 and the corners 14c and 14d of the winding core part 13 are small, so that the x-direction positions of the wires W1 to W4 near the crossing points can be easily fixed at the corners 14a to 14d. Further, the terminal electrodes E1 and E2 are positioned closer to the winding surface 13b, and the wires W1 and W2 connected respectively to the terminal electrodes E1 and E2 cross each other on the winding surface 13b, so that a crossing angle θ1 is comparatively large. Similarly, the terminal electrodes E7 and E8 are positioned closer to the winding surface 13d, and the wires W3 and W4 connected respectively to the terminal electrodes E7 and E8 cross each other on the winding surface 13d, so that a crossing angle θ2 is comparatively large. As a result, the length of an area required for the wire crossing is reduced. FIG. 5 is a schematic plan view illustrating the pattern shape of a substrate 2a on which the coil component 1 according to the present embodiment is mounted. The substrate 2a illustrated in FIG. 5 is used as a circuit board including a pulse transformer when the coil component 1 is mounted thereon and has a mounting area 3 for the coil component 1. The mounting area 3 has eight land patterns P1 to P8. When the coil component 1 is mounted on the mounting area 3, the land patterns P1 to P8 are connected respectively to the terminal electrodes E1 to E8 of the coil component 1. The land patterns P2 and P6 constitute a pair of primary side terminals and are connected respectively to primary side wiring patterns L1a and L1b. On the other hand, the land patterns P3 and P7 constitute a pair of secondary side terminals and are connected respectively to secondary side wiring patterns L2a and L2b.
However, the distinction between the primary side and the secondary side is relative, and they are interchangeable. The land pattern P1 and the land pattern P5 are short-circuited to each other through a connection pattern C1 provided in the mounting area 3 and function as a primary side center tap. Similarly, the land pattern P4 and the land pattern P8 are short-circuited to each other through a connection pattern C2 provided in the mounting area 3 and function as a secondary side center tap. Thus, when the coil component 1 according to the present embodiment is mounted on the substrate 2a, the terminal electrodes E1 and E5 are short-circuited through the connection pattern C1, and the terminal electrodes E4 and E8 are short-circuited through the connection pattern C2, whereby the coil component 1 functions as a pulse transformer having the primary side center tap and secondary side center tap. FIG. 6 is a schematic plan view illustrating the pattern shape of a substrate 2b according to a modification. In the substrate 2b illustrated in FIG. 6, the land pattern P2 and the land pattern P6 are short-circuited to each other through the connection pattern C1, and the land pattern P3 and the land pattern P7 are short-circuited to each other through the connection pattern C2. On the other hand, the land patterns P1 and P5 constitute a pair of primary side terminals, and the land patterns P4 and P8 constitute a pair of secondary side terminals. Even when the coil component 1 is mounted on the mounting area 3 of the thus configured substrate 2b, it functions as a pulse transformer having the primary side center tap and secondary side center tap. As described above, the coil component 1 according to the present embodiment can be used as a pulse transformer when being mounted on the substrate 2a having the connection patterns C1 and C2.
Further, the wires W1 and W2 can be bifilar wound, and the paired wires W3 and W4 can be bifilar wound, even when the number of turns differs between the paired wires W1 and W2 and the paired wires W3 and W4, facilitating the wire winding operation. Further, the wires W1 and W2 are made to cross each other, and the wires W3 and W4 are made to cross each other, whereby the connection patterns C1 and C2 can each be formed to have a shape linearly extending in the x-direction, contributing to a reduction in the wiring length of the connection patterns C1 and C2. In addition, the crossing area A1 of the wires W1 and W2 and the crossing area A2 of the wires W3 and W4 are different in position, so that it is possible to prevent breakage, winding collapse, and the like of the wires due to interference between the crossing areas A1 and A2. FIGS. 7A and 7B are each a schematic view for explaining the winding pattern of the wires W1 to W4 according to a first modification. FIG. 7A illustrates the winding pattern of the wires W1 and W2, and FIG. 7B illustrates the winding pattern of the wires W3 and W4. In the winding pattern illustrated in FIGS. 7A and 7B, the first turns of the wires W1 and W2 cross each other on the winding surface 13d when the flange part 12 (terminal electrodes E5 and E6) is defined as the starting point of winding, and the first turns of the wires W3 and W4 cross each other on the winding surface 13b when the flange part 11 (terminal electrodes E3 and E4) is defined as the starting point of winding. Even with such a winding pattern, substantially the same effects as those obtained by the winding pattern illustrated in FIGS. 4A and 4B can be achieved. However, a crossing angle θ3 between the wires W1 and W2 on the winding surface 13d and a crossing angle θ4 between the wires W3 and W4 on the winding surface 13b are likely to be smaller than the crossing angles θ1 and θ2 and, in this case, the length of an area required for the wire crossing is increased.
Considering this, it is preferable to adopt the winding pattern illustrated in FIGS. 4A and 4B. FIG. 8 is a schematic view for explaining the winding pattern of the wires W3 and W4 according to a second modification. The corresponding winding pattern of the wires W1 and W2 is as illustrated in FIG. 4A. In the winding pattern illustrated in FIG. 8, the first turns of the wires W3 and W4 cross each other on the winding surface 13d when the flange part 11 (terminal electrodes E3 and E4) is defined as the starting point of winding. Thus, even when both the crossing areas A1 and A2 are located at positions corresponding to the first turns of the wires W1 to W4 when the flange part 11 is defined as the starting point of winding, the crossing areas A1 and A2 are located on different winding surfaces, so that it is possible to prevent interference therebetween. FIG. 9 is a schematic view for explaining the winding pattern of the wires W3 and W4 according to a third modification. The corresponding winding pattern of the wires W1 and W2 is as illustrated in FIG. 4A. In the winding pattern illustrated in FIG. 9, the second turns of the wires W3 and W4 cross each other on the winding surface 13b when the flange part 11 (terminal electrodes E3 and E4) is defined as the starting point of winding. Thus, even when both the crossing areas A1 and A2 are positioned on the same winding surface 13b, the turn position of the wires W1 and W2 corresponding to the crossing area A1 and the turn position of the wires W3 and W4 corresponding to the crossing area A2 differ from each other, so that it is possible to prevent interference between the crossing areas A1 and A2. FIG. 10 is a schematic view for explaining the connection relationship between the wires W1 to W4 and the terminal electrodes E1 to E8 according to a modification.
In the example illustrated in FIG. 10, one and the other ends of the wire W1 are connected respectively to the terminal electrodes E1 and E7, one and the other ends of the wire W2 are connected respectively to the terminal electrodes E3 and E5, one and the other ends of the wire W3 are connected respectively to the terminal electrodes E2 and E8, and one and the other ends of the wire W4 are connected respectively to the terminal electrodes E4 and E6. The coil component 1 having such a connection relationship also functions as a pulse transformer having the primary side center tap and secondary side center tap, like the coil component 1 according to the above-described embodiment, when it is mounted on the substrate 2a illustrated in FIG. 5 or the substrate 2b illustrated in FIG. 6. While the preferred embodiments of the present invention have been described, the present invention is not limited to the above embodiments, and various modifications may be made within the scope of the present invention. Accordingly, all such modifications are included in the present invention. It is apparent that the present invention is not limited to the above embodiments, but may be modified and changed without departing from the scope and spirit of the invention. Further, although the crossing areas A1 and A2 are positioned on the winding surface 13b or 13d in the above embodiment, they may be positioned on the winding surface 13a or 13c.
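The center-tapped connectivity described in the embodiment can be sanity-checked by treating the wire ends and the connection-pattern shorts as graph edges and walking each winding from terminal to terminal. This is an illustrative sketch only; the dictionaries and the helper function below are my own, not part of the patent text.

```python
# Model the embodiment's wire-end connections and the substrate shorts
# (C1 shorts E1/E5, C2 shorts E4/E8) as an undirected graph, then walk
# each winding path; the shorted node pair is the center tap.
from collections import defaultdict

wires = {"W1": ("E1", "E6"), "W2": ("E2", "E5"),
         "W3": ("E3", "E8"), "W4": ("E4", "E7")}
shorts = [("E1", "E5"),  # connection pattern C1 (primary center tap)
          ("E4", "E8")]  # connection pattern C2 (secondary center tap)

graph = defaultdict(set)
for a, b in list(wires.values()) + shorts:
    graph[a].add(b)
    graph[b].add(a)

def series_path(start, end):
    """Walk the series chain from one winding terminal to the other.

    Every interior electrode has exactly two neighbors, so after
    excluding the node we came from, one next hop always remains.
    """
    path, prev = [start], None
    while path[-1] != end:
        nxt = [n for n in graph[path[-1]] if n != prev]
        prev = path[-1]
        path.append(nxt[0])
    return path

print(series_path("E6", "E2"))  # primary:   W1 then C1 then W2
print(series_path("E7", "E3"))  # secondary: W4 then C2 then W3
```

The walk confirms that each side is two bifilar-wound wires joined in series through one connection pattern, which is exactly what a center-tapped winding requires.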
Calculate the areas of all the possible quadrilaterals that can be constructed by joining together dots on this grid: Teacher: Click on the dots above to show joining lines. Assume that the vertical and horizontal distances between adjacent dots is one unit. Topics: Starter | Area | Geometry | Investigations | Mensuration | Shape This starter is for 22 March. This starter has scored a mean of 3.1 out of 5 based on 418 votes. The possible areas are: 1 square unit, 1.5 square units, 2 square units, 2.5 square units, 3 square units, and 4 square units.
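The area of any such quadrilateral can be computed directly from its grid coordinates with the shoelace formula. A minimal sketch (the function name and the sample vertex lists are my own, not from the starter page):

```python
# Shoelace formula: half the absolute value of the sum of the cross
# products of consecutive vertex pairs, taken around the polygon.
def polygon_area(vertices):
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap back to the first vertex
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A unit square and a tilted parallelogram drawn on the dotted grid:
print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0
print(polygon_area([(0, 0), (2, 1), (3, 3), (1, 2)]))  # 3.0
```

Because every vertex lies on a grid point, the result is always a multiple of 0.5, which is consistent with the half-unit answers listed above.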
https://www.transum.org/Software/SW/Starter_of_the_day/starter_March22.ASP
Melting occurs when a solid is heated and turns to liquid. The particles in a solid gain enough energy to overcome the bonding forces holding them firmly in place. The reverse change, from liquid back to solid, is called freezing and occurs at the same temperature as melting. Hence, the melting point and freezing point of a substance are the same temperature. What is it called when a solid turns to liquid? The solid begins to go from a solid state to a liquid state, a process called melting. The temperature at which melting occurs is the melting point (mp) of the substance. The melting point for ice is 32° Fahrenheit, or 0° Celsius. How does a solid turn into a liquid and then into a gas? We can change a solid into a liquid or gas by changing its temperature. Water is a liquid at room temperature, but becomes a solid (called ice) if it is cooled down. The same water turns into a gas (called water vapor) if it is heated up. The changes only happen when the substance reaches a particular temperature. What type of change occurs when water changes from a solid to a liquid? Melting: when a chemical substance changes its state from solid to liquid, it indicates a melting process, which is both a phase change and a physical change. How do you turn water into a solid? For example, at a 100:1 ratio of water to gel powder, you get a solid. Just put 1 teaspoon (about 2.4 g) of this powder in a container, add 1 cup of water, and the water turns into a solid almost instantly.
https://amaanswers.com/what-happens-when-a-solid-turns-into-a-liquid
You are encouraged to use materials in this section to provide information related to BCERP studies and promote awareness of the environmental risks associated with breast cancer. Most of the materials are in a format that can be adapted locally. Individuals and organizations may wish to tailor the wording and/or images in these materials to better meet the needs of local populations. To assist with this adaptation, we have developed a set of guidelines for tailoring the BCERP materials for use in your local community. These materials are based on scientific findings and may need review by someone familiar with breast cancer research if altered significantly from the original. This brochure summarizes some of the potential environmental risk factors being researched (puberty, endocrine disruptors, and obesity) and information on suggested lifestyle changes being provided to parents and health providers. This double-sided brochure is meant to be folded in thirds. There is space on the back of the brochure where outreach organizations may add their name, logo, and contact information. This information provides talking points for members of community organizations that can be used when writing to or speaking with community leaders and decision makers. Topics include breast cancer incidence, chemical exposures, and childhood risks. The press kit includes a customizable template with information about the BCERP studies that can be used as a press release. The kit also includes background information about the BCERP, story ideas, and a list of contacts and web links for more information.
https://bcerp.org/educational-materials/materials-for-outreach-organizations/
The invention discloses a single-pass multi-direction cooperative traffic method. The definition of single-pass multi-directional traffic is fundamental to the invention. On this basis, an algorithm sets the passage times at each crossroads together with a coordinated travel speed, so that city traffic follows a reasonable cycle and rhythm and the crossroads can be coordinated: as far as possible, a vehicle that meets a red light at one intersection meets a green light at the next. In the invention, if the traffic lights at each intersection are networked and programmable, the traffic command center can adjust the passage time of each intersection at any time according to the traffic flow situation, so as to improve passage efficiency. Compared with the prior art, the invention makes the traffic of the whole city cooperate and conveniently adjusts the driving time in each direction of each intersection according to the traffic situation, thereby greatly improving urban traffic efficiency, alleviating the congestion of current urban traffic, and achieving considerable economic and social benefits.
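The coordination idea in the abstract, offsetting each intersection's green phase by the travel time at the coordinated speed, can be sketched numerically. All names and numbers below are illustrative assumptions and are not taken from the patent:

```python
# "Green wave" sketch: a vehicle leaving intersection 0 at the coordinated
# speed should arrive at each downstream intersection just as it turns green,
# so each green phase is offset by the travel time, wrapped into the cycle.
def green_wave_offsets(distances_m, speed_mps, cycle_s):
    """Seconds into the common cycle at which each intersection turns green."""
    return [(d / speed_mps) % cycle_s for d in distances_m]

# Three intersections 300 m apart, 10 m/s coordinated speed, 90 s cycle:
print(green_wave_offsets([0, 300, 600], speed_mps=10.0, cycle_s=90))
# [0.0, 30.0, 60.0]
```

A networked command center, as the abstract envisions, would recompute these offsets whenever the coordinated speed or cycle length is retuned to the measured traffic flow.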
Flavor Notes & Uses: Cultivated first in East Asia, the baby shiitake mushroom has become one of the most popular cultivated varieties and accounts for 25% of the cultivated mushroom supply in the world. They are widely used in nearly all Asian cuisines but have been adapted to many other styles as well. They can be paired with a wide variety of flavors when sautéed, roasted, skewered, grilled, or made into paste. Species: Lentinula edodes Origin: Pennsylvania Spec: Cultivated Season: Available year-round.
https://www.regalischefs.com/new-products-5/baby-shiitake
Q: Sed substitution possible with arithmetic involved? File I need to modify contains the following: block: 16, size: 16, start: 8, length: 4 I'd like the file so that values for block, size will be divided by 2 while values for start, length will be multiplied by 2. Since I have to make such modifications for a whole bunch of files, I am considering using Sed to do the substitution work for me. But I'm not sure calculations are allowed in the matching and substituting process. A: I always try to solve every problem tagged with sed using sed. But here it would be so easy to accomplish what you are trying to do with awk. (And the use of sed in this case is too difficult.) So, here is my solution using awk: $ echo "block: 16, size: 16, start: 8, length: 4" | awk '{ printf "%s %d, %s %d, %s %d, %s %d\n", $1, $2/2, $3, $4/2, $5, $6*2, $7, $8*2 }' block: 8, size: 8, start: 16, length: 8 A: (The right tool to do this is awk, but for the fun of a sed exercise...) It is possible in sed. After all, a multiplication by 2 is a set of substitutions of the last digit according to some simple rules: 0 --> 0, 1 --> 2, 2 --> 4, 3 --> 6, ..., 8 --> 16, 9 --> 18. To take care of the carry digit, each rule should be written twice.
This sed script, which can be run with sed -f script, does the multiplication by 2 of all the numbers on the input lines:
s/$/\n\n/
:loop
s/0\n1\n/\n\n1/;t loop
s/0\n\n/\n\n0/;t loop
s/1\n1\n/\n\n3/;t loop
s/1\n\n/\n\n2/;t loop
s/2\n1\n/\n\n5/;t loop
s/2\n\n/\n\n4/;t loop
s/3\n1\n/\n\n7/;t loop
s/3\n\n/\n\n6/;t loop
s/4\n1\n/\n\n9/;t loop
s/4\n\n/\n\n8/;t loop
s/5\n1\n/\n1\n1/;t loop
s/5\n\n/\n1\n0/;t loop
s/6\n1\n/\n1\n3/;t loop
s/6\n\n/\n1\n2/;t loop
s/7\n1\n/\n1\n5/;t loop
s/7\n\n/\n1\n4/;t loop
s/8\n1\n/\n1\n7/;t loop
s/8\n\n/\n1\n6/;t loop
s/9\n1\n/\n1\n9/;t loop
s/9\n\n/\n1\n8/;t loop
s/\n1\n/\n\n1/;t loop
s/\(.\)\n\n/\n\n\1/;t loop
s/^\n\n//
Dividing an even number by 2 is the same logic, but from left to right instead of right to left:
s/^/\n\n/
:loop
s/\n1\n0/5\n\n/;t loop
s/\n\n0/0\n\n/;t loop
s/\n1\n1/5\n1\n/;t loop
s/\n\n1/\n1\n/;t loop
s/\n1\n2/6\n\n/;t loop
s/\n\n2/1\n\n/;t loop
s/\n1\n3/6\n1\n/;t loop
s/\n\n3/1\n1\n/;t loop
s/\n1\n4/7\n\n/;t loop
s/\n\n4/2\n\n/;t loop
s/\n1\n5/7\n1\n/;t loop
s/\n\n5/2\n1\n/;t loop
s/\n1\n6/8\n\n/;t loop
s/\n\n6/3\n\n/;t loop
s/\n1\n7/8\n1\n/;t loop
s/\n\n7/3\n1\n/;t loop
s/\n1\n8/9\n\n/;t loop
s/\n\n8/4\n\n/;t loop
s/\n1\n9/9\n1\n/;t loop
s/\n\n9/4\n1\n/;t loop
s/\n1\n/5\n\n/;t loop
s/\n\n\(.\)/\1\n\n/;t loop
s/\n\n$//
Combining those, this script does the job:
h
s/, start.*//
s/^/\n\n/
t loopa
:loopa
s/\n1\n0/5\n\n/;t loopa
s/\n\n0/0\n\n/;t loopa
s/\n1\n1/5\n1\n/;t loopa
s/\n\n1/\n1\n/;t loopa
s/\n1\n2/6\n\n/;t loopa
s/\n\n2/1\n\n/;t loopa
s/\n1\n3/6\n1\n/;t loopa
s/\n\n3/1\n1\n/;t loopa
s/\n1\n4/7\n\n/;t loopa
s/\n\n4/2\n\n/;t loopa
s/\n1\n5/7\n1\n/;t loopa
s/\n\n5/2\n1\n/;t loopa
s/\n1\n6/8\n\n/;t loopa
s/\n\n6/3\n\n/;t loopa
s/\n1\n7/8\n1\n/;t loopa
s/\n\n7/3\n1\n/;t loopa
s/\n1\n8/9\n\n/;t loopa
s/\n\n8/4\n\n/;t loopa
s/\n1\n9/9\n1\n/;t loopa
s/\n\n9/4\n1\n/;t loopa
s/\n1\n/5\n\n/;t loopa
s/\n\n\(.\)/\1\n\n/;t loopa
s/\n\n$//
H
g
s/.*, start/, start/
s/\n.*//
s/$/\n\n/
t loopb
:loopb
s/0\n1\n/\n\n1/;t loopb
s/0\n\n/\n\n0/;t loopb
s/1\n1\n/\n\n3/;t loopb
s/1\n\n/\n\n2/;t loopb
s/2\n1\n/\n\n5/;t loopb
s/2\n\n/\n\n4/;t loopb
s/3\n1\n/\n\n7/;t loopb
s/3\n\n/\n\n6/;t loopb
s/4\n1\n/\n\n9/;t loopb
s/4\n\n/\n\n8/;t loopb
s/5\n1\n/\n1\n1/;t loopb
s/5\n\n/\n1\n0/;t loopb
s/6\n1\n/\n1\n3/;t loopb
s/6\n\n/\n1\n2/;t loopb
s/7\n1\n/\n1\n5/;t loopb
s/7\n\n/\n1\n4/;t loopb
s/8\n1\n/\n1\n7/;t loopb
s/8\n\n/\n1\n6/;t loopb
s/9\n1\n/\n1\n9/;t loopb
s/9\n\n/\n1\n8/;t loopb
s/\n1\n/\n\n1/;t loopb
s/\(.\)\n\n/\n\n\1/;t loopb
s/^\n\n//
H
g
s/[^\n]*\n//
s/\n//
(Much easier in awk though.) Note: I once saw a Turing Machine implementation in sed, so I try to remember that anything that can be done with a programming language can be done in sed. That of course does not mean that sed is the right tool in all situations. A: Perl is useful here: perl -pe ' s{(\D+)(\d+)(\D+)(\d+)(\D+)(\d+)(\D+)(\d+)} {$1 . $2/2 . $3 . $4/2 . $5 . $6*2 . $7 . $8*2}e ' file If you want to edit your files in-place, perl has a -i option like sed.
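Since the question mentions applying the change to a whole bunch of files, the accepted awk one-liner can be batch-applied with a small shell loop. This is an illustrative sketch: the temp directory, the sample file, and the *.txt glob are assumptions, not part of the original question.

```shell
set -e
# Create a scratch directory with one sample file (illustrative setup only).
tmpdir=$(mktemp -d)
printf 'block: 16, size: 16, start: 8, length: 4\n' > "$tmpdir/a.txt"

# Rewrite every matching file in place via a temporary output file,
# so a failed awk run never truncates the original.
for f in "$tmpdir"/*.txt; do
  awk '{ printf "%s %d, %s %d, %s %d, %s %d\n",
         $1, $2/2, $3, $4/2, $5, $6*2, $7, $8*2 }' "$f" > "$f.tmp" \
    && mv "$f.tmp" "$f"
done

cat "$tmpdir/a.txt"   # block: 8, size: 8, start: 16, length: 8
```

Writing to a temporary file and renaming is the usual substitute for sed's -i when portability matters; GNU awk users could instead use gawk's -i inplace extension.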
FIELD OF THE INVENTION [0001] This invention generally relates to the art of optical fibers and, particularly, to a holding assembly for use in a system of cross-connecting or reorganizing the individual optical fibers of a plurality of fiber optic ribbons. BACKGROUND OF THE INVENTION [0002] Fiber optic circuitry is increasingly being used in electronics systems where circuit density is ever-increasing and is difficult to provide with known electrically wired circuitry. An optical fiber circuit is formed by a plurality of optical fibers carried by a dielectric, and the ends of the fibers are interconnected to various forms of connectors or other optical transmission devices. A fiber optic circuit may range from a simple cable which includes a plurality of optical fibers surrounded by an outer cladding or tubular dielectric to a more sophisticated optical backplane or flat fiber optic circuit formed by a plurality of optical fibers mounted on a substrate in a given pattern or circuit geometry. [0003] One type of optical fiber circuit is produced in a ribbonized configuration wherein a row of optical fibers is disposed in a side-by-side parallel array and coated with a matrix to hold the fibers in the ribbonized configuration. In the United States, a twelve-fiber ribbon has become the standard. In other countries, the standard may range from as low as four to as high as twenty-four fibers per ribbon. Multi-fiber ribbons and connectors have a wide range of applications in fiber optic communication systems. For instance, optical splitters, optical switches, routers, combiners and other systems have input fiber optic ribbons and output fiber optic ribbons.
[0004] With various applications such as those described above, the individual optical fibers of input fiber optic ribbons and output fiber optic ribbons are cross-connected or reorganized whereby the individual optical fibers of a single input ribbon may be separated and reorganized into multiple or different output ribbons. The individual optical fibers are cross-connected or reorganized in what has been called a mixing zone between the input and output ribbons. The present invention is directed to various improvements in this concept of cross-connecting or reorganizing the individual optical fibers of a plurality of input and output ribbons. SUMMARY OF THE INVENTION [0005] An object, therefore, of the invention is to provide a new and improved holding assembly for cross-connected or reorganized optical fibers of a plurality of fiber optic ribbons, wherein a plurality of input ribbons lead to an input end of a reorganizing section of loose fibers, and with a plurality of output ribbons leading from an output end of the reorganizing section. [0006] In the exemplary embodiment of the invention, the holding assembly includes a sleeve element surrounding the loose fibers at the reorganizing section. A ribbon holder is disposed at least at one end of the sleeve element. The ribbon holder has an interior rectangular through passage in which a plurality of ribbons can be placed in a side-by-side parallel arrangement. An exterior datum means is provided at one side of the ribbon holder to identify one side of the interior rectangular through passage, whereby the ribbons can be placed in the ribbon holder in specific orientations relative to the datum means. [0007] As disclosed herein, the ribbon holder includes a cover to allow access to the through passage whereby the ribbons can be placed into the passage transversely thereof. The cover is hinged to the ribbon holder. Preferably, the ribbon holder is molded of plastic material, with the cover being molded integrally therewith by a living hinge.
[0008] According to an aspect of the invention, the datum means is formed by a flat surface on the exterior of the ribbon holder. The exterior flat surface is generally parallel to the one side of the interior rectangular through passage. The exterior of the ribbon holder is generally cylindrical except for the flat surface. [0009] Other features of the invention include retaining means to hold the ribbon holder at the one end of the sleeve element. As disclosed herein, the ribbon holder is disposed within the one end of the sleeve element, and gripping means are provided on the outside of the sleeve element to clamp the sleeve element onto the ribbon holder. In the preferred embodiment, one of the ribbon holders is provided at each opposite end of the sleeve element for both the input and output ribbons. [0010] Other objects, features and advantages of the invention will be apparent from the following detailed description taken in connection with the accompanying drawings. BRIEF DESCRIPTION OF THE DRAWINGS [0011] The features of this invention which are believed to be novel are set forth with particularity in the appended claims. The invention, together with its objects and the advantages thereof, may be best understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements in the figures and in which: [0012] FIG. 1 is a plan view of a cross-connected optical fiber harness according to the invention; [0013] FIG. 2 is an enlarged axial section through the ribbon holding assembly taken generally along line 2-2 of FIG. 1; [0014] FIG. 3 is an enlarged section through the left-hand ribbon holder of the assembly, taken generally along line 3-3 of FIG. 1; [0015] FIG. 4 is a view similar to that of FIG. 3, but of the right-hand ribbon holder, taken generally along line 4-4 of FIG. 1; [0016] FIG. 5 is a side elevational view of one of the ribbon holders; [0017] FIG.
6 is an end elevational view of the ribbon holder in closed condition and holding twelve ribbons therewithin; [0018] FIG. 7 is a section taken transversely through the ribbon holder in its open position; [0019] FIG. 8 is a view of the cross-connected optical fiber harness of FIG. 1, with the fiber optic ribbons terminated to a plurality of connectors; [0020] FIG. 9 is a plan view of a substrate on which a plurality of fiber optic ribbons have been cross-connected or reorganized by a mechanical routing apparatus; and [0021] FIG. 10 is an elevational view of the routing head of the routing apparatus. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT [0022] Referring to the drawings in greater detail, and first to FIG. 1, a cross-connected optical fiber harness, generally designated 12, is shown fabricated according to the invention. Basically, the harness is involved in a system for cross-connecting or reorganizing the individual optical fibers of a plurality of fiber optic ribbons. In FIG. 1, a plurality (six) of input ribbons 14 lead to an input end, generally designated 16, of a reorganizing section 18. Although not visible in FIG. 1, the fibers in the reorganizing section are maintained loose. A plurality (eight) of output ribbons 20 lead away from an output end, generally designated 24, of the reorganizing section. In the reorganizing section, the individual optical fibers from any given input ribbon 14 may be cross-connected into more than one output ribbon 20. Once all of the individual fibers of the input ribbons are reorganized and cross-connected into the output ribbons, a ribbon holding assembly, generally designated 26, is positioned about the loose fibers in the reorganizing section, clamping the input and output ribbons at opposite ends of the reorganizing section. [0023] FIG. 2 shows a longitudinal section through ribbon holding assembly 26 to show the various components thereof.
Specifically, a pair of ribbon holders, generally designated 28A and 28B, are disposed at opposite ends of the assembly and clamp onto the ribbons as will be described in greater detail hereinafter. A sleeve 30, such as of fiberglass material, extends between ribbon holders 28A and 28B spanning reorganizing section 18, and within it the loose individual optical fibers 32 cross-connected between the input and output ribbons are protected. The fiberglass sleeve may be split lengthwise to facilitate positioning the sleeve around the loose fibers and around ribbon holders 28A and 28B. A pair of thermally shrinkable tubes 34 are positioned about opposite ends of sleeve 30 to surround ribbon holders 28A and 28B. The shrinkable tubes are shrunk in response to heat to clamp sleeve 30 onto the ribbon holders. Finally, for identification purposes, a cylindrical label 36 may be placed about sleeve 30.

[0024] FIGS. 3 and 4 show left-hand ribbon holder 28A and right-hand ribbon holder 28B as viewed in FIG. 1, surrounded by fiberglass sleeve 30 and shrink tubes 34. Each ribbon holder defines a rectangular or square through passage 38 for receiving the fiber optic ribbons. As stated above in relation to FIG. 1, six input ribbons 14 enter reorganizing section 18 and eight output ribbons 20 leave the reorganizing section. Therefore, ribbon holder 28A (FIG. 3) holds the six input ribbons 14, and ribbon holder 28B (FIG. 4) holds the eight output ribbons 20. In order to accommodate the different numbers of ribbons within passages 38 and to maintain the ribbons in side-by-side parallel arrays, filler elements 40 are placed at opposite sides of the bundle of ribbons to completely fill the passages. These filler elements may be of a variety of materials, but sections of foam tape have proven effective.

[0025] Before proceeding with the details of ribbon holders 28A and 28B in FIGS. 5-7, reference is made back to FIG. 1.
It can be seen that input ribbons 14 have been identified with labels 42 having the indicia P1-P6 to identify the six input ribbons. Similarly, output ribbons 20 have been identified with labels 42 having the indicia A1-A8 corresponding to the eight output ribbons. Optical fiber harness 12 is used in a particular overall circuit scheme wherein it is desirable for input ribbons 14 to be maintained in a given sequence, and it is particularly important for output ribbons 20 to leave reorganizing section 18 in a particular sequence. For instance, output ribbons 20 may be connected at various physical locations in a backplane system and it is not desirable to have the ribbons twisted back and forth over each other in order to connect the ribbons. It can be seen that input ribbons 14 are maintained by ribbon holding assembly 26 in a given sequence (top-to-bottom) P2-P1-P5-P6-P3-P4 in order to conveniently arrange the input ribbons according to the circuit scheme. Similarly, output ribbons are arranged top-to-bottom A1-A5-A2-A6-A3-A7-A4-A8. Ribbon holding assembly 26 allows easy maintenance of this or any other particular sequential arrangement of the ribbons.

[0026] In addition, and still referring to FIG. 1, as pointed out in the Background, above, each fiber optic ribbon has twelve individual optical fibers, represented by 1-12 in the drawings. It is important that an operator be able to know which tiny individual fiber of each ribbon is the No. 1 or the No. 12 fiber within the ribbon, and ribbon holding assembly 26, particularly ribbon holders 28A and 28B, allows for this important organization.

[0027] With that understanding, reference is made to FIGS. 5-7 in conjunction with FIGS. 3 and 4. It should be noted that ribbon holder 28 in FIG. 6 contains twelve fiber optic ribbons R. This is for illustration purposes only, to show that the holder is capable of holding that many ribbons, versus ribbon holder 28A (FIG. 3) and ribbon holder 28B (FIG. 4), which hold six and eight ribbons, respectively.
In other words, ribbon holder 28 in FIG. 6 does not need any filler elements 40 (FIGS. 3 and 4), because the twelve ribbons completely fill through passage 38.

[0028] As best seen in FIGS. 5-7, ribbon holder 28 includes a body 44 and a cover 46 which combine in their closed position of FIG. 6 to form interior rectangular through passage 38. The entire ribbon holder may be fabricated in one piece of molded plastic material, for instance. Cover 46 is attached to body 44 by an integral living hinge 48 formed during the molding process. The cover includes a latch boss 50, and the body includes a latch recess 52 for receiving the latch boss to hold the cover in a closed position about ribbons R as seen in FIG. 6. The cover can be opened as seen in FIG. 7 to allow access to through passage 38, whereby the ribbons can be placed into the passage transversely thereof. The exterior of body 44 and cover 46 are molded with serrations or circumferential ribs 54 which help sleeve 30 (FIGS. 3 and 4) and shrink tubes 34 to grip the ribbon holders.

[0029] Generally, an exterior datum means 56 is provided at one side of the ribbon holder 28 to identify one side of the interior rectangular through passage 38, whereby ribbons R can be placed in the holder in specific orientations relative to the datum means. Specifically, the datum means of ribbon holder 28 is provided by a flat surface 56 molded on the exterior of body 44 generally parallel to one side 38a of rectangular through passage 38. In essence, flat surface 56 defines a datum plane generally parallel to side 38a of the through passage.

[0030] With the provision of flat surface 56 or the datum plane it defines, reference is made to FIG. 6, wherein the top individual optical fibers of all of the plurality of fiber optic ribbons R are identified as 1.
It can be seen that all of the No. 1 fibers are juxtaposed against interior side 38a of through passage 38, with the No. 12 fibers of all of the ribbons located against the opposite interior side or wall of the through passage. With flat surface 56 being parallel to and at the same side as interior wall 38a of the through passage, an operator knows the location of all of the No. 1 individual optical fibers of all of the ribbons inside the ribbon holder simply by looking at the outside of the holder. In fact, flat surface 56 not only gives a visual indication of the location of the individual fibers but a tactile indication as well.

[0031] FIG. 8 simply shows the cross-connected optical fiber harness 12 of FIG. 1 fully terminated in a harness/connector assembly. Specifically, input ribbons 14 are terminated to a plurality of fiber optic connectors 60. Output ribbons 20 are terminated to a plurality of fiber optic connectors 62.

[0032] FIGS. 9 and 10 show a unique method of cross-connecting or reorganizing the individual optical fibers of a plurality of fiber optic ribbons, which may be used to form the cross-connected optical fiber harness of FIG. 1. Specifically, FIG. 9 shows a substrate 64 having an adhesive thereon. A mixing zone 66 is defined within the boundaries of the substrate. For explanation purposes, the mixing zone 66 has an input side 66a and an output side 66b. Actually, a smaller substrate 68 is adhered to larger substrate 64 and encompasses the mixing zone. The smaller substrate also has an adhesive thereon. The invention contemplates using a mechanical routing apparatus (described hereinafter) for routing a plurality of individual optical fibers 32 onto substrates 64 and 68 to form a plurality of fiber optic input ribbons 14 leading to input side 66a of mixing zone 66, reorganizing the individual fibers in the mixing zone, and forming a plurality of fiber optic output ribbons 20 leading away from output side 66b of the mixing zone.
In other words, input ribbons 14 and output ribbons 20 correspond to the input and output ribbons described above in relation to the cross-connected optical fiber harness 12 of FIG. 1. For illustrative purposes, only three input ribbons and four output ribbons are shown. Of course, two of such arrangements, as shown in FIG. 9, could be combined to make the arrangement as shown in FIG. 1.

[0033] In order to understand the reorganizing or mixing of individual fibers 32 in mixing zone 66 between input ribbons 14 and output ribbons 20, the input ribbons have been labeled 14a-14c and the output ribbons have been labeled 20a-20d. It can be seen that there are three input ribbons and four output ribbons. It also can be seen in FIG. 9 that four fibers from one input ribbon 14 and six fibers from another input ribbon 14 are mixed or combined to form one output ribbon 20. Six individual optical fibers from one input ribbon 14 and three fibers from another input ribbon 14 are mixed or combined to form a second output ribbon 20. Eight individual optical fibers from one input ribbon 14 and eight fibers from another input ribbon 14 form the two remaining output ribbons 20, respectively. All of these fibers are mechanically routed onto substrates 64 and 68 by a mechanical routing apparatus, generally designated 70 in FIG. 10, which includes a routing head 72. The apparatus including the routing head can pivot about an axis 74 as it moves in the direction of arrow 76. An individual optical fiber 32A is fed into a funnel 78 of the apparatus and is fed to a needle 80 which applies the fiber to substrates 64 and 68, whereby the fibers are held onto the substrates by the adhesive material on the substrates. The apparatus includes a cut-off mechanism as is known in the art. Further details of such a routing apparatus can be derived from copending application Ser. No. 09/645,624, filed Aug.
24, 2000, assigned to the assignee of the present invention, and which is incorporated herein by reference. Lastly, for purposes described hereinafter, some of the individual fibers of output ribbons 20 are cut off, as at 82 (FIG. 9), before entering mixing zone 66.

[0034] After the fibers are mechanically routed onto substrates 64 and 68 as seen in FIG. 9, input and output ribbons 14 and 20, respectively, are coated with a curable plastic material on the substrates, at least outside mixing zone 66, to hold the routed fibers in ribbon form. The coating may cover the fibers over opposite ends of smaller substrate 68 up to input and output sides 66a and 66b, respectively, of the mixing zone.

[0035] After fiber optic ribbons 14 and 20 are coated and the coating is cured to hold the fibers in ribbonized form, the coated fibers are stripped from substrates 64 and 68 so that ribbon holding assembly 26 (FIGS. 1 and 2) can be assembled over the loose fibers between the input and output ribbons thereof. In other words, individual optical fibers 32 that were within mixing zone 66 were uncoated and, therefore, remain loose as seen in FIG. 2. Otherwise, ribbon holding assembly 26 is installed over the ribbons and loose fibers as described above in relation to FIGS. 1-7. Labels 42 (FIG. 1) and/or connectors 60/62 (FIG. 8) may be applied or terminated to the fiber optic ribbons.

[0036] The reason that smaller substrate 68 is installed on top of larger substrate 64 is to provide a subassembly which can be stored prior to installing ribbon holding assembly 26. In other words, the coated and cured input and output ribbons 14 and 20, respectively, may be stripped from larger substrate 64 and still be adhered to smaller substrate 68 outside the bounds of mixing zone 66. This subassembly of substrate 68 and the cross-connected and ribbonized ribbons may then be shipped to another processing station or stored in inventory before installing ribbon holding assembly 26.
During the transport or storing of the subassembly, the loose individual optical fibers 32 still remain adhesively secured to smaller substrate 68 and the ribbons, themselves, are maintained manageable for subsequent installation of ribbon holding assembly 26. Substrate 68 is removed for installation of ribbon holding assembly 26.

[0037] Finally, as stated above, some of the individual optical fibers of output ribbons 20 are cut off, as at 82 in FIG. 9, before extending into mixing zone 66. This is easily accomplished with the mechanical routing apparatus, but it would be extremely difficult if the tiny individual fibers were routed or otherwise handled by manual manipulation. By routing twelve fibers in each input ribbon and cutting the individual fibers off even though they are not cross-connected into output ribbons 20, input ribbons 14 are maintained with twelve fibers in each ribbon. The cut-off, of course, could also be done on the input side. If reference is made back to FIG. 6, it can be understood that by keeping twelve fibers in each ribbon, the ribbons will fill the space within passage 38 of ribbon holder 28 between inside wall 38a and the opposite wall of the passage.

[0038] Additionally, the cut-off fibers, also known as dummy fibers, are designed into the fiber routing scheme because of the ease of installation of twelve-fiber ribbons into twelve-channel connector ferrules.

[0039] It will be understood that the invention may be embodied in other specific forms without departing from the spirit or central characteristics thereof. The present examples and embodiments, therefore, are to be considered in all respects as illustrative and not restrictive, and the invention is not to be limited to the details given herein.
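The cross-connect routing and dummy-fiber padding described in paragraphs [0033] and [0037]-[0038] can be modeled as a small routing table. The following is an illustrative sketch, not part of the patent: the ribbon names, the specific fiber picks, and the `build_output_ribbon` helper are all hypothetical; only the twelve-fibers-per-ribbon constraint and the padding with cut-off dummy fibers come from the text.

```python
# Illustrative model of the cross-connect: each output ribbon is built by
# picking individual fibers from the input ribbons, and short "dummy"
# fibers pad every ribbon out to twelve so it fills a ribbon holder.

FIBERS_PER_RIBBON = 12

# Hypothetical routing table: output ribbon -> list of (input ribbon, fiber index).
# Mirrors the counts in paragraph [0033]: 4 fibers from one input ribbon
# plus 6 from another form one output ribbon, and so on.
routing = {
    "out_a": [("in_a", i) for i in range(4)] + [("in_b", i) for i in range(6)],
    "out_b": [("in_b", i) for i in range(6, 12)] + [("in_c", i) for i in range(3)],
}

def build_output_ribbon(picks):
    """Pad the picked fibers with dummy fibers up to twelve per ribbon."""
    fibers = list(picks)
    while len(fibers) < FIBERS_PER_RIBBON:
        fibers.append(("dummy", len(fibers)))
    return fibers

for name, picks in routing.items():
    assert len(build_output_ribbon(picks)) == FIBERS_PER_RIBBON
```

Keeping every ribbon at exactly twelve fibers, as the patent explains, is what lets each ribbon fill the holder passage and mate with standard twelve-channel connector ferrules.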
TECHNICAL FIELD

The present invention relates to a projection device for projecting an image onto a predetermined position and a head-up display device for throwing the image projected by the projection device onto a reflective transmissive surface to cause the image to be visually recognized together with a scenery.

BACKGROUND ART

A conventional head-up display device is disclosed in, for example, PTL 1. Such a head-up display device includes first and second displays and a half mirror and overlaps and projects transmitted light and reflected light with the use of the half mirror, thereby causing a user to visually recognize display images (virtual images) at different display distances.

CITATION LIST

Patent Literature

PTL 1: JP-A-2003-237412

SUMMARY OF INVENTION

Technical Problem(s)

However, the head-up display device disclosed in PTL 1 includes a plurality of displays, and therefore the volume of the head-up display device may be increased and its cost may be increased. Further, because the half mirror is used, the use efficiency of the display light emitted from the displays may be reduced.

The invention has been made in view of the above problems and provides a compact and inexpensive projection device and head-up display device having a high light efficiency and capable of displaying display images having a plurality of display distances.
Solution to Problem(s)

In order to solve the above problems, a projection device according to a first viewpoint of the invention is a projection device including: a display configured to emit projection light for displaying a display image at a predetermined position; and an image formation position adjusting mirror configured to receive the projection light emitted from the display, convert the projection light into a plurality of beams of projection light having different image formation distances by changing an image formation distance of at least a part of the incident projection light, and reflect the plurality of beams of projection light.

Further, a head-up display device according to a second viewpoint of the invention is a head-up display device for projecting a display image onto a projection surface to cause the display image to be visually recognized as a virtual image, the head-up display device including: a projection unit configured to emit projection light; an image formation position adjusting mirror configured to receive the projection light emitted from the projection unit, convert the projection light into a plurality of beams of projection light having different image formation distances by changing an image formation distance of at least a part of the incident projection light, and reflect the plurality of beams of projection light; a first screen configured to form a part of the projection light having a longer image formation distance; and a second screen configured to form a part of the projection light having a shorter image formation distance, the second screen being placed at a position farther away from the projection surface than the first screen.

Advantageous Effects of Invention

It is possible to provide a compact and inexpensive projection device and head-up display device having a high light efficiency and capable of displaying display images having a plurality of display distances.
DESCRIPTION OF EMBODIMENTS

Hereinafter, an embodiment of a head-up display device 100 (hereinafter, referred to as "HUD device") and a projection device 20 according to the invention will be described with reference to the attached drawings.

The HUD device 100 is provided in, for example, an automobile and, as shown in FIG. 1, includes a housing 10, the projection device 20, a first screen (first image formation unit) 30, a second screen (second image formation unit) 40, a plane mirror (relay optical system) 50, a concave mirror (relay optical system) 60, and a control circuit board (not shown). The HUD device 100 reflects a first display image M1 projected by the projection device 20 onto the first screen 30 and a second display image M2 projected by the projection device 20 onto the second screen 40 toward a windshield 200 of a vehicle with the use of the plane mirror 50 and the concave mirror 60, thereby displaying a first virtual image V1 of the first display image M1 and a second virtual image V2 of the second display image M2 to a user E.

The housing 10 is made of, for example, a black light-shielding synthetic resin and stores the projection device 20, the first screen 30, the second screen 40, the plane mirror 50, and the concave mirror 60 therein, and the control circuit board (not shown) is attached to the exterior thereof. The housing 10 has an opening portion 10a allowing display light N described below to pass therethrough toward the windshield 200, and the opening portion 10a is covered with a light transmitting cover 10b.

The projection device 20 emits first projection light L1 showing the first display image M1 described below and second projection light L2 showing the second display image M2 toward the first screen 30 and the second screen 40 described below, thereby forming the first display image M1 and the second display image M2 on the first screen 30 and the second screen 40.
A detailed configuration of the projection device 20 will be described below.

The first screen (first image formation unit) 30 is a transmitting screen for receiving the first projection light L1 emitted from the projection device 20 on a rear surface and displaying the first display image M1 on a surface side and is made up of, for example, a holographic diffuser, a microlens array, or a diffusion plate. When the first screen 30 displays the first display image M1, first display light N1 showing the first display image M1 is projected onto the windshield 200 by the plane mirror 50 and the concave mirror 60 described below and is reflected by the windshield 200 toward a direction of the user E (eye box). With this, the user E can visually recognize the first virtual image V1 on the other side of the windshield 200. Note that, as shown in FIG. 2, the first screen 30 in this embodiment has a recessed display area having a cut-out portion 30a obtained by cutting out a part of an edge portion of a substantially rectangular shape so that the part thereof has a rectangular shape. Therefore, the first virtual image V1 also has a recessed display area. Note that, as shown in FIG. 2, the second projection light L2 described below passes through the cut-out portion 30a of the first screen 30 to reach the second screen 40 described below.

The second screen (second image formation unit) 40 is formed to have a rectangular shape substantially similar to that of the cut-out portion 30a of the first screen 30 and is a transmitting screen for receiving the second projection light L2 emitted from the projection device 20 on a rear surface and displaying the second display image M2 on a surface side. The second screen 40, as well as the first screen 30, is made up of, for example, a holographic diffuser, a microlens array, or a diffusion plate.
When the second screen 40 displays the second display image M2, second display light N2 showing the second display image M2 is projected onto the windshield 200 by the plane mirror 50 and the concave mirror 60 described below, and the second virtual image V2 is displayed on the other side of the windshield 200, seen from the user E.

As shown in FIG. 1, the first screen 30 is placed closer to the projection device 20 than the second screen 40. That is, the optical path length of the first display light N1 travelling toward the user E from the first screen 30 is longer than the optical path length of the second display light N2 travelling toward the user E from the second screen 40. Therefore, a distance (display distance) between the user E and the position at which the first virtual image V1 is displayed is longer than the distance (display distance) between the user E and the position at which the second virtual image V2 is displayed, and thus the HUD device 100 in this embodiment can perform display so that the first virtual image V1 is positioned farther than the second virtual image V2. Note that, in this embodiment, the display distance of the first virtual image V1 is 5 meters, and the display distance of the second virtual image V2 is 2 meters.

The first screen 30 is placed to have a predetermined angle (including 0 degrees) with respect to the optical axis of the first display light N1 travelling to the user E from the first screen 30, and, similarly, the second screen 40 is placed to have a predetermined angle (including 0 degrees) with respect to the optical axis of the second display light N2 travelling to the user E from the second screen 40.
Note that, even in the case where the first screen 30 (second screen 40) has the predetermined angle with respect to the optical axis of the first display light N1, the first virtual image V1 and the second virtual image V2 are formed by a free-form surface of the concave mirror 60 described below so that the first virtual image V1 and the second virtual image V2 face the user E while being substantially perpendicular to a line of forward sight of the user E. In the case where the user E visually recognizes the first virtual image V1 (second virtual image V2), the display distance is constant from any area in the first virtual image V1 (second virtual image V2), and therefore the user can visually recognize the whole first virtual image V1 (second virtual image V2) with ease without moving the user's focal point.

The plane mirror (relay optical system) 50 is obtained by forming a reflective film on a surface of a base made of, for example, a synthetic resin or a glass material by using deposition or other means and reflects the first display light N1 and the second display light N2 emitted from the first screen 30 and the second screen 40 toward the concave mirror 60.

The concave mirror (relay optical system) 60 is obtained by forming a reflective film on a surface of a base made of, for example, a synthetic resin material by using deposition or other means and is a mirror having a recessed free-form surface that further reflects the first display light N1 and the second display light N2 reflected by the plane mirror 50 to emit the first display light N1 and the second display light N2 toward the windshield 200. The first display light N1 and the second display light N2 reflected by the concave mirror 60 are transmitted through the light transmitting cover 10b provided in the opening portion 10a of the housing 10 and reach the windshield 200.
The first display light N1 and the second display light N2 reflected by the windshield 200 form the first virtual image V1 and the second virtual image V2 at positions in front of the windshield 200. With this, the HUD device 100 can cause the user E to visually recognize both the virtual images V (first virtual image V1 and second virtual image V2) and outside scenery or the like actually existing in front of the windshield 200. Note that the concave mirror 60 has the function of a magnifying glass and magnifies the display images M displayed by the projection device 20 to reflect the display images M toward the windshield 200. That is, the first virtual image V1 and the second virtual image V2 visually recognized by the user E are enlarged images of the first display image M1 and the second display image M2 displayed by the projection device 20. The concave mirror 60 also has the function of reducing distortion of the first virtual image V1 and the second virtual image V2 caused by the windshield 200, which is a curved surface.

Hereinafter, a specific configuration of the projection device 20 will be described. As shown in FIG. 1, the projection device 20 includes a display 21 for generating and emitting the first projection light L1 and the second projection light L2, a fold mirror 22 for reflecting the first projection light L1 and the second projection light L2 incident thereon from the display 21 to turn back the first projection light L1 and the second projection light L2, and an image formation position adjusting mirror 23 for adjusting image formation distances of the light incident thereon from the fold mirror 22. The projection device 20 thereby forms the first projection light L1 and the second projection light L2 on the first screen 30 and the second screen 40, respectively, which are located at different distances from the projection device 20.
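The magnifying-glass function of the concave mirror described above can be illustrated with the ordinary Gaussian mirror equation. This is a hedged sketch under ideal thin-mirror assumptions, not the patent's free-form design: the focal length and screen distances below are made-up values, chosen only to show that an object placed inside the focal length of a concave mirror yields a magnified virtual image, and that a longer screen-to-mirror path yields a virtual image farther away, consistent with the 5 m / 2 m display distances described above.

```python
def virtual_image(f, d_o):
    """Gaussian mirror equation 1/d_o + 1/d_i = 1/f.

    Returns (d_i, magnification); a negative d_i means a virtual image
    formed behind the mirror, which is what the viewer sees enlarged.
    """
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)
    return d_i, -d_i / d_o

F = 200.0  # assumed concave-mirror focal length (mm), illustrative only

d_near, m_near = virtual_image(F, 100.0)  # screen nearer the mirror
d_far, m_far = virtual_image(F, 150.0)    # screen farther from the mirror

# Both images are virtual (d_i < 0) and magnified (m > 1), and the
# longer optical path puts the virtual image farther behind the mirror.
assert d_near < 0 and d_far < 0
assert m_far > m_near > 1
assert abs(d_far) > abs(d_near)
```

With these assumed numbers the nearer screen images at 200 mm behind the mirror at 2x, the farther one at 600 mm at 4x, which is the qualitative behavior the embodiment relies on when placing the two screens at different distances.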
The display 21 has a reflective display element such as a DMD (Digital Micromirror Device) or LCOS (registered trademark: Liquid Crystal On Silicon) or a transmissive display element such as a TFT (Thin Film Transistor) liquid crystal panel and emits the first projection light L1 and the second projection light L2 for displaying the first display image M1 and the second display image M2 toward the fold mirror 22 on the basis of control signals from the control circuit board (not shown). Note that the display 21 is controlled to display the display images M (first display image M1 and second display image M2) distorted in advance in consideration of the optical characteristics, placement, and the like of each optical member, so as to prevent the virtual images V (first virtual image V1 and second virtual image V2) from being distorted when the virtual images V are visually recognized by the user E via the first screen 30, the second screen 40, the plane mirror 50, the concave mirror 60, the windshield 200, and the like.

The fold mirror 22 is obtained by forming a reflective film on a surface of a base made of, for example, a synthetic resin or a glass material by using deposition or other means and is a plane mirror for reflecting the first projection light L1 and the second projection light L2 emitted from the display 21 toward the image formation position adjusting mirror 23 described below. Because the fold mirror 22 is provided, the package of the projection device 20 can be made more compact. Note that a plurality of fold mirrors 22 may be provided between the display 21 and the image formation position adjusting mirror 23, or no fold mirror may be provided.
The image formation position adjusting mirror 23 is obtained by forming a reflective film on a surface of a base made of, for example, a synthetic resin material or a glass material by using deposition or other means and has a first reflection surface 231 for receiving the first projection light L1 and a second reflection surface 232 for receiving the second projection light L2 on the same base. In this embodiment, the first reflection surface 231 is a flat surface and reflects the received first projection light L1 toward the first screen 30 without changing the image formation distance, thereby forming the first display image M1 on the surface side of the first screen 30. Meanwhile, the second reflection surface 232 is a projected free-form surface and reflects the received second projection light L2 toward the second screen 40 while changing the image formation distance so that the image formation distance is increased, thereby forming the second display image M2 on the surface side of the second screen 40.

That is, in the image formation position adjusting mirror 23 in this embodiment, the first reflection surface 231 for reflecting the first projection light L1 and the second reflection surface 232 for reflecting the second projection light L2 have different curved surface shapes, and therefore the image formation distances of the first projection light L1 and the second projection light L2 can be made different only by receiving the projection light L from the single display 21. Therefore, the first virtual image V1 and the second virtual image V2 visually recognized by the user E can be displayed at different display distances, and it is therefore possible to differentiate between information displayed as the first virtual image V1 and information displayed as the second virtual image V2, which improves the distinguishability of the information.
Further, the image formation distances of at least the first projection light L1 and the second projection light L2 emitted from the same display 21 can be made different, and therefore it is possible to reduce cost as compared with the case where a plurality of displays are provided.

Because the first reflection surface 231 and the second reflection surface 232 in the image formation position adjusting mirror 23 are formed on the same base, the image formation distances of at least the first projection light L1 and the second projection light L2 can be made different only by irradiating the image formation position adjusting mirror 23 with the projection light L from the display 21. Therefore, it is possible to save space without complicating the optical path of the projection light L.

The image formation position adjusting mirror 23 in this embodiment has, on the same base, the first reflection surface 231 and the second reflection surface 232 for making the image formation distances of the first projection light L1 and the second projection light L2 different, and therefore the relative positions of the first reflection surface 231 and the second reflection surface 232 are hardly shifted due to an assembly error or the like, and it is possible to accurately form the first projection light L1 and the second projection light L2 on the first screen 30 and the second screen 40.

The display 21 in this embodiment does not project the projection light L for generating an image onto the vicinity of the boundary between the first reflection surface 231 and the second reflection surface 232 of the image formation position adjusting mirror 23.
With this configuration, even in the case where the projection position of the projection light L onto the image formation position adjusting mirror 23 is shifted due to an assembly error, vibration, or the like of the HUD device 100, it is possible to prevent the first display image M1, which is to be thrown onto the first screen 30, from being thrown onto the second screen 40.

Because a part (the first reflection surface 231) of the image formation position adjusting mirror 23 in this embodiment is a flat surface, it is possible to reflect the projection light L projected from the display 21 without distorting the projection light L. Further, it is possible to easily design and manufacture the image formation position adjusting mirror 23 and to reduce design and manufacturing costs.

Hereinabove, the HUD device 100 in this embodiment has been described, but the invention is not limited by the above embodiment and drawings. Needless to say, the above embodiment and drawings can be changed (including deletion of constituent elements). Hereinafter, modification examples will be described.

In the above embodiment, the first reflection surface 231 has been described as a flat surface, and the second reflection surface 232 has been described as a projected free-form surface, but the first reflection surface 231 and the second reflection surface 232 are not limited thereto, because they only need to have shapes that can make the image formation distances of the first projection light L1 and the second projection light L2 different. When a reflection surface is formed to have a projected shape, the image formation distance can be increased, and, when a reflection surface is formed to have a recessed shape, the image formation distance can be reduced.
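The rule stated above, that a projected (convex) reflection surface increases the image formation distance while a recessed (concave) one reduces it, can be checked with the ideal mirror equation by treating the converging projection light as a virtual object at distance s beyond the mirror. This is a sketch with made-up numbers under the thin-mirror approximation, not a model of the patent's actual free-form surfaces:

```python
import math

def refocused_distance(s, f=math.inf):
    """Distance from the mirror to the new focus after reflection.

    Converging light aimed at a focus a distance s beyond the mirror acts
    as a virtual object (d_o = -s), so 1/d_i = 1/f + 1/s. The default
    f = inf models a flat reflection surface.
    """
    return 1.0 / (1.0 / f + 1.0 / s)

s = 100.0  # assumed incoming focus distance (mm), illustrative only

flat = refocused_distance(s)              # flat surface: distance unchanged
convex = refocused_distance(s, f=-300.0)  # projected shape: distance increased
concave = refocused_distance(s, f=300.0)  # recessed shape: distance reduced

assert flat == s
assert convex > s > concave
```

With these assumed values the flat surface leaves the focus at 100 mm, the projected surface pushes it out to 150 mm, and the recessed surface pulls it in to 75 mm, matching the qualitative behavior the modification example describes.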
Note that the first reflection surface 231 and the second reflection surface 232 do not need to have the same curved surface shape over their whole reflection areas and may have different shapes in parts of the respective reflection areas.

In the above embodiment, the first screen 30 is placed to have the predetermined angle with respect to the optical axis of the first display light N1 travelling to the user E from the first screen 30, and, similarly, the second screen 40 is placed to have a predetermined angle with respect to the optical axis of the second display light N2 travelling to the user E from the second screen 40, but the first screen 30 and the second screen 40 are not limited thereto. The first screen 30 and/or the second screen 40 may be placed to be inclined at an angle equal to or larger than the predetermined angle with respect to the optical axis of the first display light N1 (second display light N2) travelling to the user E. Specifically, as shown in FIG. 3, it is possible to gradually change the image formation distance of the first projection light L1 by placing the first screen 30 so that the first screen 30 is inclined at an angle equal to or larger than the predetermined angle with respect to the optical axis of the first display light N1 and by gradually changing the curved surface shape of the first reflection surface 231 in consideration of the optical path length of the first projection light L1 between the inclined first screen 30 and the display 21. Therefore, even in the case where the first screen 30 is inclined at an angle equal to or larger than the predetermined angle with respect to the optical axis, it is possible to form the first display image M1 in a wide range (including the whole area) of the first screen 30, and it is possible to cause the user E to visually recognize the first virtual image V1, which is not blurred and which causes the user E to feel a sense of depth.
Inclination of the first screen 30 with respect to the optical axis of the first projection light L1 may be different from inclination of the second screen 40 with respect to the optical axis of the second projection light L2. With this configuration, it is possible to three-dimensionally differentiate the two virtual images (the first virtual image V1 and the second virtual image V2), and therefore it is possible to cause the user E to distinctively recognize separate pieces of information with ease.

In the above embodiment, the image formation position adjusting mirror 23 for adjusting the image formation distance(s) of the first projection light L1 and/or the second projection light L2 emitted by the display 21 may be made up of a plurality of image formation position adjusting mirrors 23a and 23b, as shown in FIG. 4.

In the above embodiment, the first reflection surface 231 and the second reflection surface 232 are placed on the same base, but the first reflection surface 231 and the second reflection surface 232 may be placed on different bases.

The first reflection surface 231 and the second reflection surface 232 may be made of a continuous reflective film, and the reflective film may not be formed in the vicinity of the boundary between the first reflection surface 231 and the second reflection surface 232.

In the above embodiment, the first screen 30 and the second screen 40 have a substantially rectangular shape, but the first screen 30 and the second screen 40 may have a polygonal shape such as a hexagonal shape or an octagonal shape.

In the above description, in order to make the invention easy to understand, description of publicly known but unimportant technical matters has been omitted as appropriate.

The invention is applicable to, for example, a head-up display device for vehicles.
100 HUD device (head-up display device); 10 housing; 20 projection device; 21 display; 22 fold mirror; 23 image formation position adjusting mirror; 30 first screen; 40 second screen; 50 plane mirror; 60 concave mirror; L projection light; L1 first projection light; L2 second projection light; M1 first display image; M2 second display image; N1 first display light; N2 second display light; V virtual image; V1 first virtual image; V2 second virtual image

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 is a schematic view showing an embodiment of the invention.
FIG. 2 shows a configuration of the first screen and the second screen in the above embodiment.
FIG. 3 is a schematic view showing a modification example of the invention.
FIG. 4 shows a modification example of an image formation position adjusting mirror of the invention.
Spaceflight is a relatively new phenomenon, dating back to the 1950s at the very earliest, yet it has captured the human imagination in a way that is unlike almost anything else. Dreams of going to space and visiting other planets lived in our imaginations long before they became a reality, but now that we have visited many of the worlds in our solar system with robotic spacecraft and landed human beings on one (the Moon), the truth has begun to settle in a little bit. Spaceflight is hard, dangerous, and (ironically) very slow for human purposes. Science Fiction deals with this in a number of ways. The Mohs Scale of Science Fiction Hardness discusses the degrees to which writers fudge the details of space and space travel, from complete fantasy to as close to reality as possible. Along the way, a lot of authors make mistakes (knowingly or not) about the actual technology of rockets and their various propulsion methods. This article discusses the following topics:
- Terminology associated with rockets
- The realistic portrayal of distances in space
- Propulsion methods: real, proposed or in development, and completely hypothetical
- Historical, modern, and future rockets: their capabilities and missions
Completely fictional rockets and propulsion methods are matters best left for the many works that utilize them, but understanding what can (or could possibly) be done should help you come up with plausible alternatives. For rocket history, see The Space Race.
Note: Units in this article are in metric. If you need help converting them, let me Google that for you. To get you started, one mile (mi) is about 1.6 kilometers (km).

Rocket Terminology

Here we will discuss basic terms associated with rockets and rocket propulsion.
- Fuel: - For chemical rockets, the substance that burns along with an oxidizer - For other types of rockets, the substance that the power source consumes to produce energy that pushes the propellant - Oxidizer: Fuel needs oxygen (or an equivalent) to burn. This must be carried along with the fuel since there's no oxygen in space. - Propellant: The material that a rocket throws out of its engines to achieve thrust via Newton's third law, usually the byproduct of chemical combustion - Dry mass: How much a rocket weighs without any propellant - Wet mass: How much a rocket weighs with a full load of propellant. - Liftoff mass: How much a rocket weighs at liftoff, including propellant and payload. - Payload mass: How much payload a rocket can deliver into orbit (or farther), not counting the rocket itself. Sometimes referred to as "upmass" - Delta-V: This is a measurement of how much total change in velocity a rocket can achieve. A rocket with 8 km/s of delta-V can speed up (or slow down) by a total of eight kilometers per second with its available fuel. If that's not enough to get where you want to go, you're hosed. - Specific impulse (isp): Measured in seconds, this is a numerical value stating how efficiently a rocket engine converts its propellant into thrust. It's like the gas mileage of a rocket: a higher number means you get more total delta-V from a given amount of propellant. - Thrust-to-weight ratio (TWR): The ratio of a rocket's thrust to its weight, typically measured at liftoff. If this value is less than one, the rocket cannot get off the ground. Rockets with a liftoff TWR of less than one (and sometimes even more than one) will use detachable solid rocket boosters (SRBs) for an initial kick to the point where their main engines can take over. TWR increases as a rocket burns off its propellant supply. - Orbital velocity: How much total velocity you need to achieve and remain in orbit. 
For Earth, this is between 6.9 and 7.8 km/s depending on the shape of the orbit. - Note that orbital velocity is not necessarily how much delta-V you need to reach orbit from the ground. Gravity and atmospheric drag have to be accounted for as well, adding between 1 and 2 km/s to the requirement depending on the flight profile. - Escape velocity: How much total velocity you need to leave orbit and no longer be gravitationally bound to an object. For Earth, this is about 11.2 km/s. - Kerolox: Refers to a rocket engine burning kerosene (aka RP-1) and liquid oxygen. - Methalox: Refers to a rocket engine burning liquid methane and liquid oxygen. - Hydrolox: Refers to a rocket engine burning liquid hydrogen and liquid oxygen. - Hypergolic: Refers to a rocket engine burning a mixture of fuel and oxidizer that combusts spontaneously when they come into contact, or to said chemicals. - Stages: The various parts of a rocket that burn in sequence so that the parts that aren't needed can be thrown away - Booster: The stage of a rocket that burns first, used to get the vehicle out of the Earth's atmosphere - Second stage: The stage of a rocket that burns after the booster separates and accelerates it to orbital velocity. Some vehicles use third or even fourth stages. - Kick stage: A stage deployed once a rocket reaches orbit that sends the payload to another orbit. - Ignition: The moment when a rocket's engines ignite, usually a few seconds prior to liftoff - Liftoff: The moment when a rocket leaves its launch pad or platform - Max Q: Maximum aerodynamic pressure ('q' is the symbol for dynamic pressure) is the moment during ascent when the combination of air resistance and acceleration produces the greatest stress on a rocket's structure. Surviving max q is seen as an indicator that the mission will likely be successful. 
- BECO: Booster engine cutoff, used variously to describe the burnout of solid rocket boosters or the main stage, depending on the convention of the launch provider - MECO: Main engine cutoff, usually describing the shutdown of the rocket's first stage engine(s) in preparation for staging. Some launch providers describe the second stage as the main engine instead. - SECO: Second engine cutoff, typically describing the shutdown of the second or orbital stage's engine(s). Its use depends on launch provider. - Staging: Stage separation, the moment when a rocket's stages come apart. This is a dynamic phase as the risk of failures and collisions is relatively high. - Fairing: An oblate or ovoid shell used to encapsulate a rocket's payload during ascent. This protects it against air resistance, heating due to friction, and noise/vibrations while the rocket is moving through the atmosphere. After reaching space, it is no longer needed and is typically jettisoned. - Reentry: When an object in orbit (typically a spacecraft) descends to an altitude where atmospheric drag begins to slow it down significantly. Most heating during reentry is caused not by friction but by compression of air molecules while the object is moving at hypersonic velocity. To survive reentry, a spacecraft needs a heat shield. 
- Perigee: The lowest point of an orbit with respect to the Earth (the general term "periapsis" refers to orbits around any body) - Apogee: The highest point of an orbit with respect to the Earth (the general term "apoapsis" refers to orbits around any body) - LEO: Low Earth Orbit: up to 2,000 km - MEO: Medium Earth Orbit: between 2,000 km and 35,786 km - GEO: Geosynchronous Equatorial Orbit, aka Geostationary Orbit: a circular orbit exactly 35,786 km in altitude, in which a satellite remains over a fixed spot on Earth's equator - HEO: High Earth Orbit: above 35,786 km but still in Earth orbit - GTO: Geosynchronous Transfer Orbit: an elliptical orbit with LEO at its perigee and above GEO at its apogee, allows a GEO satellite to circularize its own orbit at the cost of some fuel - Polar orbit: An orbit that takes a satellite over the Earth's poles - Retrograde orbit: An orbit that goes against the Earth's rotation instead of with it - SSO: Sun-Synchronous Orbit: a special type of polar orbit that is always at the same time of day relative to the ground at each point in its orbit - LTO: Lunar Transfer Orbit: an orbit that intersects the Moon's orbit at apogee - NRHO: Near-Rectilinear Halo Orbit: an eccentric orbit planned for the Lunar Gateway to allow easy transfers between LTO and LLO - LLO: Low Lunar Orbit: A lunar orbit below 100 km in altitude, allowing for easy descent to the surface - Injection burn: Firing a rocket's engines to put it into a transfer orbit (GTO, LTO, etc.) 
- Circularization burn: Firing a rocket's engines to turn an elliptical orbit into a circular orbit
- Deorbit burn: Firing a rocket's engines to bring its periapsis low enough to enter the atmosphere (if applicable) or hit the surface
- Entry burn: Firing a rocket's engines to slow it down enough that it can enter atmosphere without being destroyed
- Boostback burn: Firing a rocket booster's engines after ascent and staging to put it on a desired descent trajectory
- Landing burn: For retropropulsive landings, firing a rocket's engines to bring it to a stop at the ground
- Launch Abort: May be called during countdown at any time prior to liftoff if there is a problem preventing the rocket from safely flying
- In-Flight Abort: An emergency situation in which a crewed vehicle must escape a rocket that is not operating properly
- Launch Scrub: A total abort or cancellation of a launch, which can happen anywhere from hours to mere seconds before liftoff due to a variety of factors (like weather or mechanical failures)

Distances (How far away are things in space?)

Before we start, some notes on distances. Scifi Writers Have No Sense Of Scale, and this can sometimes even be considered an Enforced Trope when you need to write an exciting story and don't want to have to wait months or years for your characters to get anywhere interesting. The point of all this isn't to intimidate you, by the way. It's to point out that if you're going to write a believable science fiction story, you need to accept these facts and work with them, intentionally ignore them (in which case you're probably writing Space Opera), or come up with some way to get places a lot faster than we currently can. A LOT lot faster.
- Earth's radius is 6,371.1 km.
- The tallest artificial structure ever built by humans is the Burj Khalifa in Dubai, which stands at a total height of 829 meters (just 171 meters shy of 1 km in the sky).
- The highest point on Earth's surface is Mount Everest, about 8.8 km above mean sea level. - Most commercial aircraft fly at a maximum altitude of about 11 km. - The edge of the Earth's atmosphere is not a definite barrier, since it decreases in density gradually as altitude increases and there are detectable atmospheric particles thousands of km above the surface. However, "space" is commonly defined as beginning at around 100 km in altitude, known as the Kármán line. A human being reaching this altitude is considered an astronaut. - Technically, you could orbit the Earth at any altitude, at least as long as you don't run into something like a building or a mountain. But below the Kármán line, atmospheric drag will slow you down rapidly. Even above it, satellites have to expend propellant occasionally to maintain altitude against the constant braking due to drag. - To reach orbit, it's not enough to go up; you have to go sideways very fast. This is why rockets arc over as they ascend until they are almost entirely horizontal by the time they are out of the atmosphere. Objects in orbit are falling, still pulled by the Earth's gravity, but they are moving so rapidly that they miss the ground. (The Hitchhiker's Guide to the Galaxy is right!) - Most satellites hang out in what is called Low Earth Orbit (LEO). This is arbitrarily defined as anywhere from the boundary of space (100 km) to 2,000 km. LEO satellites typically remain above 250 km to prevent atmospheric drag from pulling them down. The International Space Station orbits at 400 km. - In LEO, each orbit of Earth takes about 90 minutes. It takes most rockets between 4 and 10 minutes to reach this orbit from the ground, depending on their design. Getting back to Earth from LEO takes about the same amount of time depending on trajectory and how much acceleration your spacecraft and its passengers can tolerate. 
- Geostationary orbit, also called Geosynchronous Equatorial Orbit (GEO), is exactly 35,786 km above the surface. A satellite in GEO will remain more or less above the same point on Earth's surface because its orbital period is exactly the same as the Earth's rotation.
- Our Moon's orbit is slightly eccentric but averages 384,399 km from the Earth. That is ten times farther away than a geostationary satellite and sixty times the Earth's radius. It took the Apollo 11 astronauts 76 hours to go from Earth orbit to lunar orbit.
- The next nearest major body to Earth is the planet Venus (not Mars). It can be as close as 40 million km and as far away as 261 million km, depending on the relative positions of the planets in their orbits.
- Mars can get as close as 57 million km and as far away as 401 million km. At its closest approach, it is 150 times farther away than the Moon. At the same relative speed, the Apollo astronauts would have taken 470 days to get there. In fact, you'd go much faster on a Mars transfer orbit, but the minimum transit time with current propulsion methods varies from six to nine months.
- One astronomical unit (AU) is the average distance from the Earth to the Sun, defined as approximately 150 million km, or 1.495978707×10^11 m.
- The farthest body visited by any human spacecraft is 486958 Arrokoth, a Kuiper belt body beyond the orbit of Pluto. It was imaged by the New Horizons probe on January 1, 2019. Its average distance from the Sun is 44.6 AU, or 6.6 billion km. New Horizons launched on January 19, 2006, so it took just under 13 years to get there.
- One light-year (ly) is the distance light travels in one year. This is approximately nine trillion (9.46×10^12) km.
- You may hear the term "parsec" (pc) mentioned. This is defined as 3.26 ly and is a unit used by astronomers in parallax calculations. It is not a unit of time!
- The distance to the nearest known star, Proxima Centauri, is 4.244 ly (40 trillion km).
At the speed New Horizons is traveling relative to the Sun, it would take almost 100,000 years to get there if it were aimed in the right direction (it is not).
- The distance to the center of the Milky Way galaxy is approximately 25,000 ly. If it were capable of getting there at all (it isn't), New Horizons would take about 577 million years to do so. The distance to the edge of the galaxy is about the same.
- The observable universe is estimated to be approximately 93 billion ly in diameter. This is farther than light could travel in the universe's 13.8-billion-year age because space itself is expanding: something that emitted light 13 billion years ago will be ~45 billion ly away at the moment that light reaches Earth.

Propulsion Methods (How fast can we go, really?)

So you want to get somewhere interesting in space. How quickly you get there, and whether you can get there at all, depends a lot on your propulsion method. First, the basics. Then we'll cover real propulsion methods, starting with the simplest and going on to more complicated and theoretical methods.
On Earth, you can push off of things to generate acceleration. These things include the ground, the water, and the air. Technically, you are taking advantage of static and fluid friction. In space, there is nothing to push off of or grab onto (with some extremely speculative exceptions at the edge of known physics). All objects must obey conservation laws, the most important being the law of conservation of momentum. The only way to make yourself move in one direction is to take some part of yourself and throw it in the other direction. This is, at a fundamental level, how all rockets work.
Momentum equals mass times velocity. In general, the faster you can throw something away, the more momentum is transferred and the more acceleration you get from the deal. It is generally easier to make lighter materials go faster. Thus, a rocket engine is literally throwing the lightest stuff possible (gas particles) away from the rocket as fast as possible.
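The throw-something-away principle can be checked with a few lines of arithmetic. This is an illustrative sketch (the masses and speeds are made-up numbers, not from any real mission):

```python
# Conservation of momentum: throwing mass one way pushes you the other way.
# All figures below are illustrative, chosen only for the demonstration.

astronaut_mass = 100.0  # kg, astronaut plus suit (after letting go of the wrench)
wrench_mass = 2.0       # kg, the mass being thrown away
throw_speed = 10.0      # m/s, wrench speed in the initial rest frame

# Total momentum starts at zero and must remain zero:
#   astronaut_mass * v_astronaut - wrench_mass * throw_speed = 0
v_astronaut = wrench_mass * throw_speed / astronaut_mass
print(f"Astronaut drifts away at {v_astronaut:.2f} m/s")  # 0.20 m/s
```

A rocket engine is just this process running continuously, with gas molecules playing the part of the wrench.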
What you choose to throw and how you make it go fast defines the type of engine you are using. We will start with the simplest engines: compressed gas, then work our way through chemical-thermal, nuclear-thermal, electric-thermal, ion, and finally exotic drives. Note that all but the "exotic" drives are either in use or are proposed for use, with substantial engineering work already done. First, however...

The Rocket Equation

All rockets are limited by the amount of propellant they can carry. Propellant is heavy: a typical orbital rocket is over 90 percent propellant by mass. The more propellant you have, the more powerful your engines have to be to lift it and the larger the rocket will be. Make a bigger rocket and you need more propellant to get the same delta-V. This fundamental principle, enshrined in the Tsiolkovsky rocket equation, tells you the maximum delta-V you can get out of any particular rocket. The choice of engine (and thus propellant) is extremely important because it gives you the parameters for the equation.
To get off of a planet, you need enormously powerful engines, since you need to overcome gravity (and atmosphere, in many cases) simply to get high enough to reach orbit. However, once you are in orbit, you can use less powerful but more efficient engines to get to your ultimate destination.

Compressed Gas Engines (aka Inert Gas Thrusters)

If you've ever filled a balloon with air and released it without tying the end, you've seen a compressed gas rocket engine. Of course, a balloon can only hold so much air before it bursts. The compressed gas engines (more commonly termed "thrusters") found on rockets and spacecraft use tanks ("pressure vessels") that can withstand pressures hundreds of times that of Earth's atmosphere. Since pressure wants to move from high to low, the compressed gas (typically nitrogen) will rapidly escape through the nozzle once the valve is opened.
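The Tsiolkovsky rocket equation mentioned above is short enough to compute directly. The figures here (a 300 s engine on a rocket that is 90 percent propellant by mass) are illustrative, not taken from any particular vehicle:

```python
import math

G0 = 9.80665  # m/s^2, standard gravity; converts isp in seconds to exhaust velocity

def delta_v(isp_s: float, wet_mass: float, dry_mass: float) -> float:
    """Tsiolkovsky rocket equation: dv = isp * g0 * ln(m_wet / m_dry)."""
    return isp_s * G0 * math.log(wet_mass / dry_mass)

# 90 percent propellant by mass, 300 s specific impulse (illustrative figures):
dv = delta_v(isp_s=300, wet_mass=100.0, dry_mass=10.0)
print(f"Total delta-V: {dv:.0f} m/s")  # ~6774 m/s
```

Note how even a rocket that is nine-tenths propellant falls short of the roughly 9 km/s needed to reach low Earth orbit from the ground, which is exactly why staging exists: throwing away empty tanks improves the mass ratio mid-flight.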
Because of the extreme simplicity and fast response time of such rockets, they are often used for maneuvering systems, which need to be as light as possible. However, it is impossible to reach orbit on compressed gas because the efficiency is too low.
Another type of inert gas engine is the steam rocket. These are not talked about often for a reason: steam is a very poor propellant for launching from Earth. That said, steam has been proposed as a propellant for interplanetary rockets, using water ice found on asteroids and moons to refuel.
- The first and second stages of the Falcon 9 rocket use nitrogen cold gas thrusters for maneuvering once they are out of the atmosphere and their engines are shut down.
- Perhaps the most well-known steam rocket of the late 2010s is the one flown by "Mad" Mike Hughes in his efforts to prove the Earth to be flat (possibly for publicity). His attempt to launch such a rocket in February 2020 resulted in his death.

Chemical Thermal Rocket Engines (aka "Conventional" Engines)

Compressed gas is cheap and simple, but not very efficient, and difficult to store at extremely high pressure. What if, instead, we took a liquid or solid with high energy potential and ignited it? The resulting gas would expand extremely rapidly, and we can use that to get thrust. This is basically how chemical rocket engines work. Of course, there's no oxygen in space to sustain combustion, so it is also necessary to bring along an oxidizer.
There's still the problem of how you get the propellant, under pressure but not extremely so, to go through a nozzle at very high velocity. Pressure goes from high to low, so if we just burn it, part of it will go back into the fuel tanks, right? So let's discuss the three main ways to accomplish this.

Solid Fuel Rocket Motors

Solid-fuel rockets are typically referred to as "motors" rather than "engines", since there are virtually no mechanical parts and no separate combustion chamber or propellant feed system.
A solid-fuel rocket packs the fuel and oxidizer into a mixture that is stable until ignited under particular conditions. Think of it as a relatively slow-burning explosive... in fact, some missiles literally use properly prepared mixtures of explosives as propellants. The most popular solid fuels in modern space rockets involve a mixture of synthetic rubber with ammonium perchlorate and powdered aluminum. The rubber binder provides structure and fuel, the perchlorate serves as the oxidizer, and the aluminum is a dense, high-energy fuel.
The solid motors we see on full-scale rockets typically involve a casing with the propellant mixture packed into a specially-shaped mold with a channel down the center. They ignite from the top down, and the channel forces the hot gas out a rocket nozzle. This nozzle may be steerable, as on the Space Shuttle.
There are examples of solid fuel rockets everywhere, from the AJ-60A motors on the Atlas V rocket to the solid rocket boosters (SRBs) on the Space Shuttle to the Minotaur IV, which is a rocket with four solid fuel stages. Solid fuel rockets have extremely high thrust-to-weight ratios and are thus ideal for accelerating extremely rapidly. However, they don't generate chemical energy as efficiently as liquid fuels, meaning they can get to orbit but not much more. Solid fuels have the advantage of being easy to store; there are usable solid rocket boosters dating back decades in the U.S. inventory.

Pressure-Fed Engines

A pressure-fed rocket engine uses inert, pressurized gas to force the fuel and the oxidizer down toward the combustion chamber where they can be mixed and ignited. Pressure-fed engines are not as common as the other kinds of conventional rockets, but one example was the Quad Rocket developed by Armadillo Aerospace. Pressure-fed engines are also used in the emergency abort systems of some crewed rockets, such as the SpaceX Dragon 2. This is due in part to their near-instantaneous response, with no need for turbopumps to spin up.
They also use hypergolic propellant to avoid the need for separate ignition systems.
For low-thrust engines, the pressure of the propellant tanks alone may be enough to supply their needs. This type of engine, again burning hypergolics, is used in the maneuvering systems of almost all spacecraft.
Pressure-fed engines are limited by the pressure that can be stored in either the main tanks or the pressurizer tanks, and are typically not efficient enough to be used as the main engines on an orbital rocket.

Pump-Fed Engines

Rocket engines are all about managing pressures. Pressure flows from high to low, and so the maximum power you can get out of an engine is based on the difference between your tank pressure and the exterior pressure. Unless, that is, you use a pump, which can force propellants into the engine at extremely high pressures, in some cases hundreds or even thousands of times that of Earth's atmosphere. However, you need something to turn the pumps, and that something has to be powerful.
There are two primary kinds of pumps: turbopumps, which work by burning some of the fuel and oxidizer to make what is in effect a miniature rocket, the force from which turns the turbines that power the pumps; and electric pumps, which are driven by motors that are in turn powered by batteries. Batteries are heavy, and unlike propellants they don't lose mass as they discharge, so they are only suitable for smaller rockets.
There are many, many types of turbopumps. When you hear references to "expander cycle," "staged combustion cycle," "tap-off cycle," "gas generator cycle," and similar things, they are all talking about different ways to run a turbopump while obtaining the maximum efficiency from the combustion of the fuel and oxidizer, all while minimizing mass and cost. A complete discussion of these is beyond the scope of this article.
- Turbopump-fed engines go back a long time: the infamous V-2 Rocket of World War II used them in its design.
- Electric pump-fed engines are less common: the best-known modern examples are the Rutherford engines powering the Electron rocket.

Liquid Rocket Propellants

This is a brief digression to discuss the various liquid propellants that are or have been used in rockets. Most of these need a separate ignition source, a subject that could be an entire article on its own. Hypergolics are a subcategory consisting of two chemicals that spontaneously ignite when they come into contact with each other. This is a valuable advantage, especially for rockets that must be restarted often, but they are often highly toxic and thus very difficult to work with.
Many, many chemicals have been tested for liquid rockets; the book Ignition describes them in detail. The propellant chosen is a tradeoff between exhaust velocity, density (a denser propellant means smaller tanks, pipes, etc. are needed for the same thrust and delta-V), handling characteristics (storage, toxicity, ease of loading, among others), and various other engineering characteristics.
- Ethanol. The first suborbital liquid-fueled rockets used this. It is relatively inefficient and was eventually discarded in favor of other alternatives, but it did burn cooler and provide superior regenerative cooling, easing the engineering of early rockets. It also had the unique qualification of being consumable by humans.
- The German V-2 rocket used ethanol as its fuel.
- Hydrazine family (hypergolic). There are several variations including monomethyl hydrazine, symmetric dimethyl hydrazine, and unsymmetric dimethyl hydrazine (UDMH). Highly toxic, but easy to store, so it can be loaded into a spacecraft and stays stable for long time periods.
- Hydrazine is extremely common as a fuel, but one example of its use includes the Apollo Lunar Module.
- Hydrazine monopropellant. Hydrazine by itself can decompose, over a catalyst bed, to produce lots of hot gas for a rocket.
The exhaust velocity is lower than that of a bipropellant, but only half the number of tanks, pipes, and such are needed, and a catalyst bed takes the place of an ignition system. Used in some reaction control systems, but not powerful enough for main propulsion.
- The Dawn spacecraft uses such a system.
- RP-1 or rocket-grade kerosene. This is essentially a highly refined jet fuel and is extremely common and cheap. It is liquid at room temperature, requiring no special storage, and so a fueled rocket can remain on the pad for days if necessary. It is more efficient than hypergolics but less so than methane or hydrogen. The main problem with kerosene is its tendency to "coke", or generate long polymer chains that stick to the insides of engines and gum them up. This makes reuse of kerosene engines difficult. Rocket-grade gasoline or diesel fuel could in theory be used, but would perform about the same as kerosene, so there is no point in developing them.
- The first stage of the Saturn V Rocket used RP-1.
- Hydrogen. This is the most efficient chemical fuel that we can reasonably use (higher theoretical efficiency can be gained from other chemicals, but they are so volatile and/or toxic that they're not worthwhile). It is the lightest fuel and must be chilled to 20 kelvin to become liquid, so storing it is extremely difficult. It also has a tendency to escape from any tanks it is held in, so it is not suitable for long-duration missions. It is also very low density, so reasonably sized engines have a hard time generating enough thrust to take off: hydrogen engines are either restricted to second or higher stages, or are combined with solid or liquid boosters for the first part of a launch.
- The second and third stages of the Saturn V Rocket used hydrogen propellant, as did the Space Shuttle Main Engines.
- Methane (refined natural gas).
Occupying a happy medium between kerosene and hydrogen, methane is lighter than kerosene, does not coke at typical rocket temperatures, burns cooler than hydrogen, is easier to store than hydrogen, and has an efficiency somewhere between the two. In return, it has a somewhat lower density than kerosene, and is cryogenic. For various reasons, including supply and the widespread adoption of kerosene, methane was not used as a fuel until very recently. The lack of coking is a big draw for reusable rockets.
- The best-known use of methane is the Raptor engine, which is intended for use on SpaceX's Starship rocket. A major factor in this design decision is that liquid methane and oxygen can be produced on Mars relatively easily using the Sabatier reaction.
All bipropellant fuels must be burned with an oxidizer. Two are in common use:
- Liquid Oxygen: The most powerful oxidizer available that isn't toxic, explosive, or too difficult to handle in some other way. Easily available from the air around us. However, being cryogenic, it is difficult to store for long periods. It is used in launch vehicles with all the fuels listed above except hydrazine.
- Dinitrogen Tetroxide: The most powerful hypergolic oxidizer, apart from some fluorine-based ones that would produce toxic combustion products. Like its partner hydrazine, it is highly toxic but easy to store; it can be loaded into a tank using proper procedures and remain stable. It decomposes to red nitrogen dioxide, producing red clouds when rockets that use it start up.
As mentioned above, other chemicals have been used or tested in the past. Some worked, but are less powerful than the above combinations, such as nitric acid and hydrogen peroxide as oxidizers. Many were more powerful than the listed combinations, but other difficulties mean they weren't used in production rockets: the fluorine-based oxidizers mentioned above that produce toxic exhaust are one example.
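The tradeoffs above can be made concrete with rough, illustrative numbers. The figures below are approximate ballpark values for vacuum specific impulse and fuel density, not the specs of any particular engine:

```python
# Approximate vacuum specific impulse (seconds) and fuel density (kg/m^3)
# for common propellant combinations -- ballpark values for illustration only.
propellants = {
    "hydrazine (hypergolic)": {"isp": 320, "fuel_density": 1010},
    "kerosene/LOX": {"isp": 350, "fuel_density": 810},
    "methane/LOX": {"isp": 370, "fuel_density": 420},
    "hydrogen/LOX": {"isp": 450, "fuel_density": 71},
}

# Rank by efficiency (specific impulse)...
by_isp = sorted(propellants, key=lambda p: propellants[p]["isp"], reverse=True)
# ...and by fuel density, which drives tank size and dry mass.
by_density = sorted(propellants, key=lambda p: propellants[p]["fuel_density"], reverse=True)

print("best Isp:", by_isp[0])               # hydrogen/LOX
print("least dense fuel:", by_density[-1])  # hydrogen/LOX again
```

Hydrogen tops the efficiency ranking while sitting at the bottom of the density ranking, which is exactly the tank-size tradeoff described above.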
The book Ignition! mentioned above describes the difficulties in detail. Metallic hydrogen has been proposed as a rocket fuel with many times the energy density of liquid hydrogen, but actually creating it and storing it stably is the stuff of fiction at the moment.

References
- Most of the engines in Kerbal Space Program use chemical propellants and are modeled after real-life equivalents from across spaceflight history. For gameplay abstraction purposes, the game only has one type of generic liquid fuel and oxidizer each (though there are mods that remedy this); what they are is not explicitly stated anywhere in-game. Although liquid fuel is likely to be kerosene due to the fact that it's also what the game's jet engines run on, it's also something that can be synthesized off-world and doubles as fuel for a nuclear thermal rocket (see below).
- The Realism Overhaul mod does represent actual real-life propellants and their respective densities, and even simulates the boil-off effect for cryogenic fuels like liquid hydrogen.
- One interesting engine concept arising from rocket and jet engines using the same fuel in the game is the CR-7 RAPIER (Reactive Alternate-Propellent Intelligent Engine for Rockets) engine, inspired by the real-life SABRE concept, which is capable of using both atmospheric air and oxidizer for operation. It is less efficient than the dedicated jet engines and is a Jack-of-All-Stats among bipropellant engines as well, but being able to operate as both makes it an all-in-one engine for spaceplanes.

Electric propulsion (Ion engines)

Ion engines, or ion thrusters, are in a completely different class from chemical rocket engines. The basic principle of an ion engine is that an inert gas such as xenon or krypton is ionized by an electric field, stripping away some of its electrons. The ionized gas can then be accelerated using electric or magnetic fields to extremely high velocities.
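The tradeoff this creates (enormous exhaust velocity, but a tiny trickle of propellant) can be sketched with rough numbers, loosely modeled on small gridded xenon thrusters; both figures are assumed for illustration:

```python
# Thrust = mass flow rate * exhaust velocity.
# Illustrative values only, roughly in the range of small xenon ion thrusters.
exhaust_velocity = 30_000.0  # m/s, equivalent to roughly 3000 s of specific impulse
mass_flow = 3e-6             # kg/s: a few milligrams of xenon per second

thrust = mass_flow * exhaust_velocity
print(f"thrust: {thrust:.3f} N")  # 0.090 N, about the weight of a few sheets of paper
```

Several times the exhaust velocity of any chemical engine, but a mass flow measured in milligrams per second: hence superb fuel economy and nearly negligible thrust.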
In contrast to chemical rockets, here a heavier molecule is more efficient since it carries more momentum at any given velocity. However, xenon (the heaviest noble gas) is extremely rare and thus very expensive. While ion engines can have fuel efficiency many times that of chemical rocket engines, they have very low thrust, making them suitable only for maneuvering once a spacecraft has reached orbit. For this reason, they are also unsuitable for crewed spacecraft, as their thrust is so low that it would take a long time to get anywhere. It is possible that future technology will improve ion engines to the point where they are practical for human interplanetary travel, but they will probably never be used to get off a planet into space. Ion engines are used by many satellites today, as their exceptional fuel efficiency allows satellites to remain in orbit for many years on a relatively small fuel budget. One example is SpaceX's Starlink satellite constellation, which uses krypton ion thrusters.

References
- In certain parts of the Star Wars franchise, it is claimed that spacecraft use ion engines as their main propulsion source. This implies an astonishing breakthrough in ion drive technology, assuming the writers aren't just picking cool words out of a dictionary.
- Kerbal Space Program allows the player to unlock and use the IX-6315 Dawn ion engine near the end of the tech tree. Although it still has very low thrust compared to bipropellant engines, it is orders of magnitude more powerful than real-life ion engines for practical reasons: the game does not allow time acceleration while any engine is burning, so a realistic ion engine would require the player to sit through hours-long maneuver burns in real time. This way, ion engine burn times are measured "merely" in tens of minutes.

Light sails, solar sails, and laser sails

Photons have no mass, but carry momentum.
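How much momentum? A rough estimate of the sunlight force on a perfectly reflective sail near Earth's orbit (the sail area here is an assumption for illustration):

```python
SOLAR_CONSTANT = 1361.0  # W/m^2, sunlight intensity at Earth's distance from the Sun
C = 3.0e8                # m/s, speed of light
area = 32.0              # m^2, assumed sail area, roughly small-demonstrator scale

# A perfect reflector bounces photons straight back, doubling the momentum
# transfer: force = 2 * intercepted power / c.
force = 2 * SOLAR_CONSTANT * area / C
print(f"force: {force * 1000:.2f} mN")  # about 0.29 millinewtons
```

A fraction of a millinewton on a sail the size of a room: free thrust, but only patience turns it into useful velocity.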
The quadrillions of particles blasted off the Sun every second that form the solar wind also carry momentum (and mass, but we digress). These can be harnessed by building a giant sail, literally. Demonstrations of these technologies have been made several times as of 2020, but as yet no operational spacecraft has used them. A solar sail is designed to capture the flow of high-energy particles from the solar wind. A light sail is designed to capture the photons emitted by the Sun. A laser sail takes a light sail a step further by using a directed laser beam from a planet or satellite to generate thrust. While solar and light sails have extremely low thrust and are impractical for human spaceflight, laser sails (powered by gigawatt laser arrays) have the potential to send our first relativistic spacecraft to other stars, being able to achieve velocities up to 10 percent of the speed of light, all without carrying or expending a drop of fuel, since their propulsion comes from an external source. The Breakthrough Starshot program, still in its earliest stages, is a proposal to build thousands of laser sail probes, each no larger than a postage stamp, and boost them off to our neighboring stars to send back information about what's there. Even the simplest of these probes would require the construction (and powering) of lasers thousands of times stronger than anything we can currently build. Another issue with sail-powered spacecraft is that it's not as easy to slow down once you get where you are going, requiring either an alternative propulsion system or creative use of "tacking". Robert Forward proposed a means for decelerating an interstellar light sail in the destination star system without requiring a laser array to be present in that system. In this scheme, a smaller secondary sail is deployed to the rear of the spacecraft, while the large primary sail is detached from the craft to keep moving forward on its own.
Light from the laser array is then reflected off the large primary sail back onto the secondary sail, decelerating the secondary sail and the spacecraft payload. The best-known solar sail project to date is the Planetary Society's LightSail spacecraft. LightSail-2 was launched on a Falcon Heavy rocket in June 2019 and successfully demonstrated photonic propulsion in low Earth orbit.

Nuclear pulse propulsion (Orion drives)

This propulsion system was proposed in the earliest days of the space program and even tested (at smaller scales and without using nuclear bombs). Put very simply, an explosion generates force, so you can propel a spacecraft by detonating a series of bombs beneath a large shield. Some of the kinetic energy of the detonations will push against the shield, creating thrust. Nuclear pulse propulsion has the theoretical capability to achieve significant fractions of the speed of light and is one of the few systems discussed here that is achievable with current technology and could provide "constant" thrust for long durations. Modern designs would not use large nuclear bombs, but rather a number of small pellets, each of which creates a small thermonuclear pulse. To date, no nuclear pulse spacecraft has flown, and international treaties against the deployment of nuclear arms in space may restrict the development of such a spacecraft for a long time to come. There's also the understandable problem that you wouldn't want to use it to get to space, or even in low Earth orbit, given the "minor" problems associated with setting off nuclear explosions in or near the atmosphere. Such a drive system would also require the construction of hundreds or thousands of times more nuclear bombs than currently exist today, with understandable political ramifications. The Orion drive is the only method of getting to another solar system within a human lifetime that we could potentially achieve with current technology.
Nuclear thermal propulsion (fission and fusion drives)

This is the last category of propulsion system in this list that we have the current technology to attempt (in principle, and with fission). In essence, the spacecraft carries a nuclear reactor that runs on some fuel source, like uranium/plutonium (for fission) or deuterium/tritium (for fusion). This reactor produces an extremely high energy flux, which can be used to accelerate an inert propellant to far greater velocities than chemical rockets. Some designs use water for its ease of storage, relative chemical inertness, and secondary uses as coolant and drinking water. The most efficient designs would use hydrogen. Nuclear thermal propulsion achieves extremely high theoretical efficiency, with fission being up to three times better than chemical propellant and fusion much more. Such drive systems would allow casual interplanetary travel, but would still need far too much fuel to be able to manage interstellar travel in a reasonable time frame. Fusion drives could operate under constant thrust for a significant portion of a trip, giving human passengers a semblance of gravity. The major obstacles to such drives are:
- As with nuclear pulse drives, the idea of putting a full-scale nuclear reactor in space is a little intimidating to many nations.
- They would probably not have enough thrust to get to space, never mind the problem of spewing highly radioactive exhaust into the atmosphere.
- The amount of fuel needed could be seen as an escalation of nuclear proliferation and would be very expensive regardless.
- While we can build fission reactors, making them small enough and foolproof enough to put on a rocket is another matter. We don't yet have working fusion reactors and probably won't be able to build anything remotely small enough to put on a rocket for at least fifty years.

References
- 2001: A Space Odyssey and its sequels feature spacecraft with nuclear drives.
The Discovery in 2001 uses nuclear thermal propulsion of an unspecified type, fed by vast tanks of hydrogen. The Leonov in 2010 uses muon-catalyzed cold fusion, a real idea dating back decades, but one that cannot be run at a net energy gain with our current understanding of physics. The Universe in 2061 uses an even more advanced version that can sustain fusion with almost any fuel, including water, and indeed a plot point involves siphoning water from Halley's Comet to enable a direct flight to Jupiter.
- In The Expanse, the invention of the "Epstein drive", a variation of nuclear thermal propulsion, enables rapid interplanetary travel. The liberty taken here is that Epstein drives have a mass-energy efficiency far beyond that of any known fusion process, letting ships travel without the gigantic fuel tanks that would otherwise be necessary.
- Kerbal Space Program has the LV-N Nerv Atomic Rocket Motor, which is a fission version of this. Large, heavy and too weak to lift even half its own weight under Earth's gravity, but its fuel efficiency in vacuum is over double that of even the best bipropellant liquid fuel engines, making it a very popular choice for interplanetary flight among players. For gameplay abstraction purposes, it runs on the same liquid fuel as bipropellant engines; it just doesn't require oxidizer, thus saving mass for carrying more fuel.

Magnetic scoop fusion drive (Bussard ramjet)

This hypothetical propulsion system is based on the fact that the vacuum of space is not actually empty. It contains extremely diffuse particulate matter, mostly individual hydrogen atoms, with an average density of about one atom per cubic centimeter, called the "interstellar medium". This can potentially be used as fuel. A "Bussard ramjet" spacecraft would be equipped with a magnetic "net" hundreds or thousands of kilometers in diameter, scooping the interstellar hydrogen into fuel tanks which would then power a fusion reactor.
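The scale such a net requires can be shown with a back-of-the-envelope estimate; every figure below is an assumption for illustration:

```python
import math

ATOM_DENSITY = 1e6     # hydrogen atoms per cubic meter (one per cubic centimeter)
M_HYDROGEN = 1.67e-27  # kg, mass of a single hydrogen atom
scoop_radius = 1.0e6   # m: a magnetic net 1000 km in radius
speed = 3.0e6          # m/s: already cruising at one percent of lightspeed

scoop_area = math.pi * scoop_radius ** 2
# Mass swept up per second = number density * atom mass * area * speed.
mass_per_second = ATOM_DENSITY * M_HYDROGEN * scoop_area * speed
print(f"{mass_per_second * 1000:.1f} g of hydrogen per second")  # about 16 g/s
```

Even a thousand-kilometer net moving at one percent of lightspeed gathers only grams of fuel per second, which is why the concept demands both an enormous scoop and a fast running start.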
By definition, this would be a constant thrust craft and thus suitable for interstellar exploration. Such a ship would need to travel relatively fast to be able to gather enough hydrogen to power itself, needing a boost or kick to get started, but it would also encounter significant drag from all of those particles hitting the net. From the scientific literature on the topic it is not clear if the drag would be greater than the actual thrust that could be obtained from the collected fuel.

References
- Most spacecraft in the Wing Commander universe use Bussard ramjets to gather enough fuel to sustain their high acceleration, and one of the EU novels has a subplot in which a ship "stalls" by being forced to go too slowly, taking several weeks to crawl back up to a useful velocity. The canon video games make no mention of this, however.

RF resonant cavity thruster (aka the EM drive)

This hypothetical propulsion system supposedly uses aspects of the quantum vacuum to provide a small thrust without an external source of propulsion and without expending any propellant. While prototype drives have been tested on several occasions in ground laboratories, and even supposedly in space by the Chinese, they violate the physical law of conservation of momentum and have thus far failed to demonstrate any real thrust that is not explained by other phenomena. In short, the EM drive doesn't work, although people continue to hope that it might.

Antimatter drives

This proposed propulsion system uses the most efficient possible fuel: antimatter. While nuclear fission converts up to 0.1 percent of the rest mass of its fuel into energy, and nuclear fusion converts up to 0.7 percent into energy, antimatter-matter collisions convert 100 percent of their combined mass into energy.
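The percentages above translate into per-kilogram energy yields via E = mc²:

```python
C = 3.0e8  # m/s, speed of light

def energy_per_kg(mass_fraction):
    """Joules released per kilogram of fuel, given the fraction of
    rest mass actually converted to energy."""
    return mass_fraction * C ** 2

fission = energy_per_kg(0.001)   # up to ~0.1 percent for fission
fusion = energy_per_kg(0.007)    # up to ~0.7 percent for fusion
antimatter = energy_per_kg(1.0)  # 100 percent for matter-antimatter annihilation

print(f"fission:    {fission:.1e} J/kg")
print(f"fusion:     {fusion:.1e} J/kg")
print(f"antimatter: {antimatter:.1e} J/kg")
print(f"antimatter beats fission by a factor of {antimatter / fission:.0f}")
```

Kilogram for kilogram, annihilation yields a thousand times the energy of fission, which is where the "kilograms instead of tons" comparison comes from.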
In principle, a rocket powered by antimatter (probably using some sort of thermal acceleration system similar to that of a fission or fusion engine) would need only kilograms of fuel to achieve the same impulse as tons of fusion fuel or thousands of tons of chemical propellants, and could easily get to other stars while providing constant thrust for a comfortable 1 G living environment. In practice, however, getting the antimatter is a bit tricky. We currently produce and store positrons and antiprotons in large particle colliders like those at CERN, and a recent breakthrough (as of 2020) involved the creation of a complete antimatter atom. Yes, one atom. To produce the quantities of fuel required to even get to orbit from the ground would require trillions of dollars and entire national energy budgets' worth of power. To achieve antimatter propulsion, we need a currently unimaginable breakthrough in particle physics that would allow us to manufacture and store it at scale. It is the stuff of science fiction dreams, no less than a century away even in the most optimistic future.

References
- Starships in Star Trek use engines fueled by antimatter to achieve warp speeds. These require "dilithium crystals" to catalyze the matter-antimatter reaction and draw usable power from it.

Kugelblitz drives (black hole power!)

The most exotic propulsion system ever conceived by physics (so far), a black hole engine (aka Kugelblitz drive) would literally be powered by a tiny black hole. Moreover, we believe that we know how to build one. Light does not have mass, but it does have energy. Using mass-energy equivalence from Einstein's special relativity, if you concentrate enough light energy (via lasers) into a small enough space, it would warp spacetime in exactly the same manner as a high concentration of matter, potentially collapsing into a black hole.
This tiny black hole would, in turn, emit Hawking radiation, a form of quantum leakage from its event horizon that occurs because the black hole cuts off certain frequency modes of the quantum vacuum. The smaller the black hole, the faster it emits such radiation, so this suggests an ideal size that would potentially power a spacecraft for years before evaporating. To make a Kugelblitz, you would need hundreds of gigawatts of laser energy all focused into a space smaller than the width of a proton, firing at exactly the same moment. If you miss, you get, well, a lot of laser energy flying around. If you succeed, you get a tiny black hole that could be captured (somehow) and used to provide virtually unlimited power over its lifespan. Its radiation would be used to accelerate a propellant (probably hydrogen) in the same way as a nuclear thermal engine, or its radiant energy could be captured with a reflector and turned into a beam for photonic propulsion.

Alcubierre drives (warp speed!)

Perhaps the ultimate expression of theoretical physics, the Alcubierre drive does not describe a power source but rather a propulsion method that is based on mathematics arising from the theory of general relativity. GR establishes that nothing can move faster than light through spacetime, but it does not establish that spacetime itself cannot move faster than light. Indeed, right now there are regions of space that are receding from us faster than light, such that no information emitted from them will ever reach us. The Alcubierre warp metric, as it is technically known, involves creating a field of warped spacetime (hence the name) in which space ahead of a ship is compressed while the space behind it is expanded. An object within this "warp bubble" would be carried along much like a surfer on a wave, experiencing no acceleration.
As there is no limit to how fast any patch of space can move relative to any other patch, this velocity could exceed the speed of light, allowing FTL Travel. As you may imagine, however, the idea has a few problems. First, we don't have anything remotely like the technology to compress spacetime on the scale needed. Imagine a Kugelblitz-style drive but with the lasers mounted on the craft itself, or a gravity drive (see below). Second, we don't even know if it's possible to expand spacetime behind the craft. In most formulations, this would require some kind of "exotic matter" with negative mass, which is not believed to exist. (If it did, you could build perpetual motion machines and other lunacy, not just warp drives.) Still, the Alcubierre drive remains the proposal for high-speed or FTL travel that has the best correlation to known physics, and there is some hope that it may one day become technologically possible. NASA takes it seriously enough that its Eagleworks laboratory is experimenting with the idea on very small scales.

References
- Star Trek explicitly uses the Alcubierre concept in its warp drive technology. Of course, it adds a helping of technobabble to turn it into a coherent plot device. Warp engines in Star Trek are powered by antimatter (see above).

Wormholes (jump drives)

These are less a propulsion system than a proposed means of achieving FTL Travel. Based on another mathematical artifact in the theory of general relativity, a wormhole is a hypothetical bridge (technically, an "Einstein-Rosen bridge") between two distant points in spacetime, like folding a piece of paper over on itself and punching a hole through both sheets. Indeed, this is the analogy used in literally every work that features it. If a spacecraft could somehow create wormholes on demand (or access natural or artificial wormholes created or maintained by some technology), it could "jump" across spacetime without traversing the distance between the two points.
Of course, the physics of general relativity also say that wormholes are unstable, lasting only fractions of a second and creating event horizons that "spaghettify" anything crossing them. It doesn't do you much good if your spaceship is swallowed, stretched out into a thin stream of atoms, and disgorged as a cloud of undifferentiated subatomic particles. There are ideas to deal with these problems, but all concepts of traversable wormholes require "exotic matter", much like Alcubierre drives: something with negative mass-energy that could be used to stabilize them or prop them open, and there are no reasonable proposals for creating it to begin with. However, the idea of using wormholes for travel remains a staple of Space Opera and even some harder science fiction.

References
- The sci-fi horror film Event Horizon is set on a ship that attempted to use a prototype jump drive to travel out of the solar system and back. It turns out that the alternate dimension that it crosses through is analogous to the Biblical concept of Hell and opens a doorway into our universe for pure evil.

Gravity drives

Now we are moving from "maybe physically possible" into "there is no current theory that would allow this, but it would be really cool". If we can manipulate the force of gravity itself, we would achieve godlike power, one of the simplest manifestations of which would be to propel spacecraft to any speed we could desire (up to the speed of light, of course). In its most basic form, a gravity drive involves creating a strong artificial gravity well near a spacecraft. This gravity well would attract the ship to it, whereupon the ship moves the field farther ahead, and so on, like a carrot in front of a mule. This would violate conservation of momentum, but the idea is already so far beyond anything in current physics that you might as well not worry about it.
References
- The Humanx Commonwealth novel series by Alan Dean Foster envisions the "posigravity drive", which uses exactly this principle to achieve FTL Travel. Co-discovered by the humans and the Thranx, the two species working together come up with an improved version called the KK drive. The hypergenius Ulru-Ujurrians later modify the KK drive to be able to land on and take off from a planet without tearing its surface apart.

Real-world rockets and rocket engines

It's time for an inventory of the rockets and rocket engines that are in use or have been used in the past. We won't cover everything here, just the more famous ones, with some notes about their design and significance.

Specific types of rocket engines

Bell nozzle engines

The most common type of rocket engine involves a combustion chamber (with a bunch of stuff behind it) forcing high-pressure gas through a nozzle shaped roughly like a bell. There is a common belief that the nozzle itself is the engine; this is untrue. The nozzle, however, is a critical part of the overall design, because it acts to increase the velocity of the exhaust by reducing its pressure. It also balances the pressure of the exhaust with the pressure outside the rocket. If the escaping gas is at much higher pressure than the surrounding air (or vacuum), it expands rapidly to the sides once it exits the nozzle, costing a lot of power — you want as much exhaust as possible going straight backwards. If it is at a much lower pressure, the surrounding air pushes inwards and back up the nozzle, causing "flow separation" instability that can destroy the engine. This is why most vacuum-optimized engines can't be fired at sea level. Since rocket boosters in particular have to operate across these two very different regimes (a gradient from sea-level pressure all the way up to vacuum), you will often see two different kinds of engines in use: sea-level-optimized and vacuum-optimized.
The latter have much larger nozzles meant to reduce the exit pressure as near to zero as possible, while the former attempt to achieve a happy medium between pressure at the ground and pressure in space. Rocket nozzles also have to be cooled because of the extreme heat of the exhaust gases they have to contain, or they could melt or crack under the stress. There are several types of cooling in use:
- Regenerative, where some cold fuel is circulated through the walls of the nozzle and combustion chamber before being pumped into the engine proper. This has the side benefit of heating the fuel up to improve combustion.
- The Merlin 1D engine on the Falcon 9 uses regenerative cooling for the sea-level portion of its nozzle.
- Ablative, where the nozzle contains or is made of a substance that breaks away as it heats up. Such nozzles are inherently unable to be reused.
- The RS-68 engine on the Delta IV uses ablative cooling, making its exhaust slightly orange in color when it would otherwise be mostly blue, as it is composed almost entirely of water vapor.
- Film, where some amount of unburned or partially burned fuel is allowed to run along the walls of the rocket nozzle to absorb excess heat.
- The F-1 engines on the Saturn V used film cooling, easily visible in close-up images as a darker part of the exhaust.
- Radiative, where the nozzle is made from a high-temperature alloy that radiates heat away as rapidly as possible.
- The Merlin 1D engine on the Falcon 9 second stage uses a combination of film and radiative cooling on its niobium nozzle extension. It can be seen glowing bright orange in video.

Aerospike engines

The aerospike is a type of rocket engine designed to be used at both sea level and in vacuum without losing efficiency. Such an engine, if it were to be built and operated, would make single-stage-to-orbit (SSTO) rocket designs feasible.
Unfortunately, no aerospike engine has ever successfully propelled a production rocket, as all such attempts have failed, been abandoned, or are still in development. Simply put, an aerospike works by removing most of the rocket nozzle and aiming the exhaust flow towards a spike or linear surface that forces it to go in the proper direction. The other side of the exhaust flow is contained by air pressure, initially squeezed into the surface of the "spike" and later expanding outward, but always ending up in a linear flow. The challenges involved in building an aerospike include the increased complexity and mass of the engine, the difficulty of steering the engine, and, most importantly, the extreme difficulty of cooling the nozzle. Multiple rockets using aerospike engines have been proposed, developed, and/or tested to varying degrees. Among these are the Firefly Alpha, ARCA Space's Haas 2CA, and the VentureStar SSTO. An aerospike design was considered for the Space Shuttle Main Engine before the traditional bell design was adopted.

Others (scramjet, VASIMR, etc.)

Rocket stages and SSTOs

The rocket equation

Properly named the Tsiolkovsky rocket equation, and colloquially referred to as "the tyranny of the rocket equation", this is perhaps the most well-known equation in rocketry, although it doesn't exactly flow off the tongue when written out. It tells you how much delta-V you can achieve with a given mass of fuel and a given mass of rocket. Put simply, to get a rocket to go farther (add delta-V), you need more fuel (propellant). Adding fuel adds mass, both in the fuel itself and the tanks to hold it. If you add mass, you need more (or more powerful) engines to push it all. Adding engines also adds mass, meaning you need more fuel to get the same delta-V. For any given fuel and engine, the returns diminish relentlessly: each addition of fuel and engines buys less extra delta-V than the last, so piling on ever more propellant eventually yields almost no additional performance.
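The equation itself is compact: delta-V = Isp × g0 × ln(m_initial / m_final). A quick numerical sketch (the stage masses here are hypothetical) shows the diminishing returns just described:

```python
import math

G0 = 9.81  # m/s^2, standard gravity, converts Isp in seconds to exhaust velocity

def delta_v(isp_seconds, dry_mass, propellant_mass):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    m0 = dry_mass + propellant_mass  # wet (initial) mass
    return isp_seconds * G0 * math.log(m0 / dry_mass)

# Hypothetical stage: 10 t dry mass, kerolox-class Isp of 350 s.
# Each additional 100 t of propellant buys less delta-V than the last.
for prop in (100, 200, 300, 400):  # tonnes of propellant
    print(f"{prop:3d} t of propellant -> {delta_v(350, 10, prop):5.0f} m/s")
```

The first 100 tonnes buy over 8 km/s; the fourth 100 tonnes buy less than 1 km/s more. That logarithm is the "tyranny".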
A major factor in the equation is the efficiency of the engine. This is the maximum energy you can get out of the propellant, taking into account its chemical properties and also the performance of the engines used to accelerate it. The technical term for this is "specific impulse", written Isp or just "isp", and it is typically notated in seconds (the time for which a given weight of propellant could sustain an equal weight of thrust; multiplying by standard gravity gives the effective exhaust velocity).
- The Space Shuttle Main Engines (SSMEs), burning hydrolox, have a specific impulse in vacuum of 453 seconds.
- The Merlin 1D engine used by the Falcon 9 and Falcon Heavy rockets, burning kerolox, has a specific impulse in vacuum of 348 seconds when using a vacuum-optimized nozzle extension.

Multi-stage rockets

Almost all orbital rockets use multiple stages. The idea behind this is that, once a portion of the rocket's fuel is expended, the heavy tanks and engines can be discarded and a smaller, lighter rocket will do the rest of the work. There is a trade-off between efficiency and complexity - in theory, you could have as many stages on a rocket as you could physically build, but since each stage needs engines and other hardware that take up space that could be used by propellant, you reach a point of diminishing returns. Another advantage of staging is that rocket engines perform differently in the vacuum of space than they do in Earth's atmosphere. Near the ground, you need powerful, high-thrust rockets optimized for use in atmosphere to push through it and into space. Once in space, however, you can use more efficient vacuum-optimized engines to push the spacecraft into orbit without worrying about atmospheric drag (not as much, anyway). Reusing the parts of a multi-stage rocket is challenging because each stage must contain its own systems for reentering and landing.

Single-stage rockets

A single-stage-to-orbit rocket, or SSTO, is considered by some to be a Holy Grail of rocket design.
You only need one vehicle to get to orbit, deploy a payload, and return to land. As yet, no single-stage rocket has been able to reach orbit, due to a variety of factors including the lack of suitable engines (see above for the difference between bell nozzle and aerospike engines) and the substantial differences between aerodynamic and vacuum regimes. A vehicle that is designed to move through atmosphere has very different technical requirements than a vehicle designed to move through space, and accommodating both in the same vehicle adds mass and complexity that reduce performance. Part of the drive for SSTOs is the cost savings from not having to throw away parts of the rocket on every flight. However, continual reductions in the cost of delivering payload on multi-stage rockets have all but eliminated the economic argument for them. Proposed and attempted SSTO vehicles include the Skylon, the DC-X, the Lockheed Martin X-33, and the Roton SSTO.

Reusable rockets and spacecraft

One of the things that makes getting to space hard is the cost. Part of this is just the sheer effort needed to put stuff in orbit, but another part is the fact that most rockets are expendable. It has been noted that rockets are the only transportation method that we throw away after each use. If you drove a car from one city to another and it was immediately thrown in a crusher, forcing you to buy a new one, nobody would drive anywhere. If a new passenger aircraft had to be built for every flight, it would be too expensive to fly. Obviously, one way to cut down the cost of spaceflight is to reuse rockets. This is harder than it may seem at first glance, though. An orbital rocket needs as much fuel as possible to do its job, leaving little for a landing. Also, it's designed to go to space, not to fall back down to the ground. You can over-design your rocket to leave enough margin to recover it, but then you lose some potential payload.
There are also political considerations around reusable rockets. State-sponsored space programs insulate themselves from politics to some extent by spreading their supply chains over many districts to engage as many constituents as possible and guarantee high-quality jobs. Every rocket reused is a rocket not built, and potential money not being earned by voters. Nevertheless, if we want to get to space cheaply, we must find ways to reuse rockets. Several methods have been proposed and attempted.

Suborbital rockets
One way to keep your margins high enough for landing is not to go to orbit at all. Such is the case with suborbital spacecraft, which don't need the high performance of orbital rockets because they use much less energy. Suborbital rockets (also known as sounding rockets) have their uses: monitoring weather, delivering experiments that need brief periods of free-fall, point-to-point transportation, and space tourism. Suborbital rockets can land in a number of ways, including aerodynamically (wings and runways), via parachutes, or propulsively by firing their engines to come to a stop as they touch down. Some examples of reusable suborbital rockets include the Blue Origin New Shepard (propulsive) and Virgin Galactic's SpaceShipTwo (aerodynamic). The SpaceX Starship is designed for orbital operations, but a suborbital version is also planned for point-to-point transport. It will land propulsively.

Spaceplanes
A spaceplane is a vehicle shaped like an aircraft that is designed to reach orbit, then descend, reenter the atmosphere, and use aerodynamic surfaces to control its descent for touchdown on a runway or landing strip. Lift surfaces are pointless in space, of course, but they can increase surface area to reduce the stress on a vehicle from reentry heating. Spaceplanes are launched on traditional rockets, which may or may not be expendable.
Generally, the spaceplane will act as its own propulsive stage, either taking the place of a second stage or deploying as payload and then firing its own engines to modify its orbit. Notable spaceplanes include NASA's Space Shuttle, the Soviet Buran (which never flew crew), the Boeing X-37B (an uncrewed orbital test bed for the U.S. Department of Defense), and the Sierra Nevada Dream Chaser (which has yet to fly).

Parachutes
If you look at the history of spaceflight, most spacecraft intended to return to Earth have used parachutes to accomplish that task. This includes all space capsules used for human flight, with the exception of the spaceplanes mentioned above. After the vehicle reenters Earth's atmosphere, it uses air resistance to brake, then deploys one or more leader parachutes (known as "drogues") to provide initial deceleration, followed by one or more primary parachutes (known as "mains") to slow it down for a safe landing. Parachute landings may occur on land or on water. Parachute designs are thus common, but it is worth noting those that accompany vehicles intended for refurbishment and reuse. These include Boeing's CST-100 Starliner, the SpaceX Dragon, NASA's Orion, and in the future the first stage of the Rocket Lab Electron rocket and the engine block of the ULA Vulcan rocket. The Space Shuttle's solid rocket boosters also descended via parachute and were recovered from the ocean, but it was acknowledged some time into the program that it would have been cheaper to build new ones than to refurbish them.

Retropropulsion
All of the above are interesting and effective ways to reuse rockets, but the golden age of science fiction told us that rockets of the future would fall from the sky and land on their tails by firing their engines. Yet it took nearly sixty years from the first orbital rocket to the first propulsive landing of an orbital rocket booster. This feat was achieved by the SpaceX Falcon 9 on December 22, 2015.
To land an orbital rocket using its engines requires a number of design considerations that reduce its effective payload to orbit. In an optimal scenario, such a rocket must reserve about 10 percent of its first-stage propellant, which is fuel it can't spend on a payload. Falcon 9 and Falcon Heavy have several landing modes depending on the payload's mass and destination. A return to launch site (RTLS) landing requires about 30 percent of the booster's fuel, while an ocean landing on an automated platform can cost as little as the 10 percent mentioned above. China's Long March 8 and Blue Origin's New Glenn are rockets currently in development with planned retropropulsive booster stages. SpaceX's Starship, currently in development, will attempt to take this a step further by landing both the first and second stages of the rocket. If successful, it will be the first orbital rocket to be 100 percent reusable.

Historically significant rockets

Saturn V (USA)
The Saturn V rocket to this day retains the title of the largest and most powerful orbital rocket ever built and operated successfully by humans. Standing 110.6 meters tall and capable of delivering close to 7.9 million pounds of thrust, it was flown 13 times between 1967 and 1973 with a perfect operational record. (An engine shut down prematurely on one flight, but did not affect the mission.) The Saturn V is of course best known for its role in the Apollo program, delivering U.S. astronauts to the Moon: the only humans to leave Earth orbit as of 2020. The Saturn V's first stage was powered by five F-1 engines, themselves the most powerful single-combustion-chamber liquid-fuel rocket engines ever built. These ran on kerolox for its high energy density. Its second and third stages were equipped with J-2 engines, burning hydrolox for maximum efficiency.

N1 (USSR)
The N1 rocket was the Soviet Union's attempt to build a Moon rocket to match the Saturn V.
It was a monster: three stages, 17 meters in diameter at the base, with thirty NK-15 kerolox engines on the first stage. The second stage used 8 vacuum-optimized NK-15V engines and the third stage used 4 NK-21 kerolox engines. While a bit shorter than the Saturn V at 105.3 meters tall, it was capable of 10.2 million pounds of thrust. The N1 would have been impressive had it worked. It attempted launch four times, all of which ended in failure. The second attempt crashed back onto the launch pad at Baikonur Cosmodrome in Kazakhstan, utterly destroying the complex in one of the largest non-nuclear explosions ever produced by mankind. After two more failures and without any funding to continue the program, it was finally cancelled in 1976. A combination of rushed deadlines, engineering oversights, poor testing procedures and infighting between design bureaus spelled the downfall of this rocket, and it wouldn't be until the collapse of the Soviet Union in 1991 that information on the N1 was finally revealed to the world.

Sea Dragon (USA)
The most famous of the conceptual rockets, the Sea Dragon was a monster of a rocket first conceived by Robert Truax in 1962. Had it been built, it would've stood 150 meters tall and been capable of delivering 80 million pounds of thrust, dwarfing even the Saturn V. It would've been a two-stage rocket powered by a massive single engine on each stage. It was planned to be built as cheaply as possible to reduce the cost of getting payloads into orbit. Instead of using complex turbopumps like most rockets do, the Sea Dragon opted to use pressurized nitrogen tanks to feed RP-1 fuel and LOX into its engines, making it much easier for the rocket to be refurbished and reused. Indeed, the Sea Dragon was ahead of its time when it came to the concept of reusable rockets, predating SpaceX's Starship by decades.
The two stages would've been equipped with inflatable air bags to slow their descent through the atmosphere and into the ocean, where they would've been recovered. At the rocket's size, its engine would've produced so much thrust on liftoff that it would've destroyed itself along with the launch pad had it been launched on land. To counter this, the Sea Dragon would've been built in a shipyard and towed out to sea (hence the name), where the ocean water provides a good buffer to dampen the destructive shockwaves. Ballast tanks attached to the engine nozzle would've sunk the rocket into a vertical position to make it ready for launch. As ambitious as the Sea Dragon was, its sheer size made it Awesome, but Impractical: NASA couldn't justify the costs of building it. Combined with NASA's budget being slashed as a result of The Vietnam War (which saw the cancellation of many projects), the Sea Dragon was ultimately canned and shelved.

STS/Space Shuttle (USA)
The Space Shuttle (officially known as the Space Transportation System) is the most successful crewed spaceflight program in human history. Five Shuttles were built, and these flew a total of 135 missions between 1981 and 2011, launching such important payloads as the Hubble Space Telescope and most of the International Space Station. Significant parts of the Space Shuttle were reusable, including the SRBs and the orbiter itself. The Shuttle lifted off with the help of two solid rocket boosters, the largest solid-fuel rockets ever operated (prior to the SLS, below). Its three RS-25 main engines burned hydrolox and carried the vehicle from ground to orbit, requiring significant advances in engine design to be operable in both regimes. It also had an orbital maneuvering system burning hypergolic hydrazine. The Shuttle used its heat shield tiles to survive orbital reentry, then its wings allowed it to glide aerodynamically to a runway landing. The Shuttle failed in operation twice, killing the crew both times.
In 1986, Challenger was destroyed shortly after liftoff when the failure of a solid rocket booster led to the explosion of its external fuel tank. In 2003, Columbia disintegrated during reentry as a result of damage to its heat shield sustained on liftoff. Both incidents were traced to inadequate safety procedures; the Challenger disaster was particularly egregious, as the decision to launch was political and made despite warnings from engineers that conditions were unsafe. Its safety record notwithstanding, the biggest problem with the Shuttle was its cost. Nominally around 500 million USD per launch, the total price tag over its lifetime averaged out to nearly 1.6 billion per launch, and it never met its goal of rapid reuse, flying no more than nine missions per year even with four operational orbiters. Simply put, it cost far more to refurbish each Shuttle than was originally promised, and safety issues plagued the program throughout its lifetime. Every incident led to more time and money spent on refurbishment and inspection, and as the existing orbiters reached the end of their lifespans, no new ones were built to replace them. In 2004, the Bush administration announced the termination of the STS program once the International Space Station was complete. U.S. crewed launch capability was supposed to be taken over by the Constellation program, which was itself shelved in 2010 due to cost overruns, a year before the final Shuttle mission. For nearly nine years afterwards, the United States relied on Russian Soyuz rockets to transport crew to and from the International Space Station, until the SpaceX Crew Dragon flew in 2020.

Important active rockets by nation

China
China's rockets are built and operated by the China Aerospace Science and Technology Corporation (CASC). The majority of them are part of the Long March (Chang Zheng in Chinese) family, which are identified primarily by number and subtype.
Beyond that, these rockets are very different in design, with some intended for small-lift duty, some for medium lift, and some for heavy lift. Active Long March variants and their payloads include the 2C (2,400 kg to LEO), 2D (3,100 kg to LEO), 2F (8,400 kg to LEO), 3A (8,500 kg to LEO), 3B (several variants, estimated payload 13,000 kg to LEO), 4B and 4C (4,200 kg to LEO), 5 and 5B (25,000 kg to LEO), 6 (500 kg to SSO), 7 and 7A (13,500 kg to LEO), and 11 (700 kg to LEO). The 2F, 3A/B/C, 5, and 7/7A variants are also capable of reaching GEO. Chinese media does not widely advertise the capabilities and design features of the country's rockets, so confirmed details are scarce. Most of its launches occur from inland bases, with the result that spent booster stages often fall on civilian populations.

Europe
The European Space Agency (ESA) operates two major rocket systems: Ariane 5 and Vega. Ariane 5 is a heavy-lift vehicle designed primarily for geostationary launches. Using two solid rocket boosters and a hydrolox main stage equipped with a Vulcain 2 engine, it can lift over 20,000 kg to LEO and 10,865 kg to GTO. The second stage comes in either a hydrazine (hypergolic) or a hydrolox version. The first-ever Ariane 5 launch failed mid-flight due to a software bug, becoming one of the most costly programming errors in history. It is expected to be retired in favor of the upcoming Ariane 6. Vega is a small-lift vehicle designed for small satellites and rideshares, with a maximum payload of just under 2,000 kg. Its first three stages use solid motors, topped by the liquid-fueled AVUM upper stage. The Z23 second stage failed on July 11, 2019, causing a long delay in flight operations, which resumed September 3, 2020. The AVUM upper stage failed on November 17, 2020 due to an assembly error.

India
The Indian Space Research Organisation (ISRO) operates two families of orbital rocket: the PSLV and GSLV.
The Polar Satellite Launch Vehicle (PSLV) is a medium-lift rocket originally meant to launch into sun-synchronous orbits but also capable of small geostationary and even interplanetary launches. The first stage is a solid rocket booster, the second stage uses a Vikas engine burning hydrazine, the third stage is also solid, and the fourth stage uses hydrazine. Its maximum payload capability is 3,800 kg to LEO. The Geosynchronous Satellite Launch Vehicle (GSLV) is a medium-lift rocket capable of carrying 5,000 kg to LEO. Uniquely, it uses liquid-fueled boosters to assist a solid-fuel first stage. The second stage burns hydrazine and the third hydrolox. The GSLV Mark III is unrelated to the GSLV, despite sharing the name. It is designed for geosynchronous missions but also for human spaceflight, and is capable of lifting 10,000 kg to LEO. Its first stage consists of two solid rocket boosters, its second burns hydrazine, and its third burns hydrolox. The most notable launch of the PSLV was the Chandrayaan-1 lunar vehicle, which reached the Moon on November 8, 2008. The GSLV Mark III in turn launched Chandrayaan-2, which successfully entered lunar orbit on August 20, 2019. The lander, however, went off course and was destroyed on impact with the Moon's surface.

Japan
The Japan Aerospace Exploration Agency (JAXA) operates the H-II and H3 rocket families, built by Mitsubishi Heavy Industries. The H-IIA is currently in service, the H-IIB made its last flight in 2020, and the H3 is expected to enter service soon after. Both variants of the H-II are two-stage, medium-lift rockets, using LE-7A hydrolox engines on the first stage and LE-5B hydrolox engines on the second stage, assisted by solid rocket boosters in various configurations depending on mission requirements. Payload to LEO is up to 15,000 kg for the H-IIA and 19,000 kg for the H-IIB.
The A variant famously carried the Emirates "Hope" Mars mission in 2020, and the B variant was the standard platform for the HTV resupply vehicle. Both roles will be taken over by the H3 rocket and the upgraded HTV-X resupply vehicle.

Russia
The Soyuz rocket family is, all together, the most frequently used launch vehicle in the world, with over 1,700 flights since 1966. The modern version is the Soyuz-2, which in its most powerful variant can lift up to 8,200 kg to LEO. The Soyuz-2 first stage is powered by four liquid-fueled (kerolox) boosters using RD-107A engines, which separate in a dramatic maneuver known as the Korolev Cross, and a core with an RD-108A kerolox engine. The second stage uses either an RD-0110 or RD-0124 kerolox engine, and the optional third stage uses either S5.92 or 17D64 hydrazine engines. Soyuz is the only rocket besides the SpaceX Falcon 9 that is currently certified to carry humans to orbit, and it was the only means of reaching the International Space Station for the nearly nine-year period between the retirement of the Space Shuttle in 2011 and the launch of Crew Dragon aboard Falcon 9 in 2020. Soyuz is famous for its longevity and has had high overall reliability, but there have been some notable failures. Most recently, the MS-10 mission to the ISS failed when a booster didn't detach from the core properly. The crew survived thanks to the Soyuz capsule's launch escape system. The Proton-M is Russia's main heavy-lift launch vehicle, with a payload to LEO of 23,000 kg. It is unique in many ways, among them that all three of its stages use hypergolic hydrazine as propellant. The first stage uses six RD-275M engines, the second stage uses 3 RD-0210 and 1 RD-0211 engines, and the third stage uses one RD-0212 engine. There are three optional fourth-stage variants. The Proton-M rocket has had some notable failures, the most well-known of which occurred in July 2013 and was caused by the incorrect installation of angular velocity sensors.
The rocket veered off course almost immediately after launch and crashed very close to the launch site, causing the largest known spill of hypergolic rocket propellant. Video of the crash went viral, and it is among the most viewed rocket failures in history. Russia intends to replace the Proton-M with the Angara-5, but that program has run into numerous delays.

United States
Antares, built by Northrop Grumman, is a medium-lift rocket mainly used for Cygnus resupply launches to the International Space Station. Its maximum payload to LEO is 8,000 kg. The first stage is powered by two RD-181 kerolox engines and the second stage uses a Castor 30B solid-fuel motor. It has optional third stages as well. Antares previously used AJ26 engines adapted from Russian NK-33s, but when one of these failed catastrophically in flight in 2014, they were replaced with the RD-181. Atlas V, built by United Launch Alliance (ULA), is the final iteration of the Atlas family of rockets, which dates all the way back to the 1950s. The current version is powered by a single RD-180 kerolox engine, with a Centaur upper stage powered by an RL-10 hydrolox engine and optional solid rocket boosters for additional thrust at liftoff. It has a wide variety of configurations supporting payloads of up to 20,520 kg to LEO and is capable of interplanetary missions. Atlas V has a perfect operational record over more than 80 missions and is the second most used commercial rocket in active U.S. service. It will be replaced by Vulcan Centaur (see below). Delta IV Heavy, also built by ULA, is the second most powerful operational rocket today, used mainly for high-energy GEO transfers and interplanetary missions. It consists of three Delta IV cores, each powered by a single RS-68 hydrolox engine, and a second stage powered by an RL-10B-2 hydrolox engine. It is capable of lifting 28,790 kg to LEO. It is extremely expensive and thus doesn't fly often.
Delta IV is notable for having bright orange insulation on its hydrogen tanks and for lighting itself on fire just prior to ignition (to burn off excess hydrogen near the engines). Electron, built by Rocket Lab, is the first rocket by a startup company to achieve operational profitability without being part of any government contracts. It is a small-lift vehicle (capable of carrying 300 kg to LEO) that provides dedicated launch services for commercial smallsats and cubesats, and it has recently secured government contracts as well. The first stage is powered by nine Rutherford kerolox engines and the second by a single vacuum-optimized Rutherford. It is the only operational orbital rocket to use electrically pumped engines, powered by large lithium-ion batteries that are ejected during flight. Rocket Lab has begun experimenting with soft-landing the first-stage boosters using parachutes, with the ultimate goal of reusing them.

SpaceX (United States)
Falcon 9 is a partially reusable, medium-lift rocket built by SpaceX. Its payload capacity is 15,600 kg to LEO (reusable) or 22,800 kg (expendable). Its first stage is powered by nine Merlin 1D kerolox engines and the second stage by a single vacuum-optimized Merlin 1D. In 2020, it overtook Atlas V as the most flown commercial rocket in active service. It is also the only reusable orbital rocket in active service as of the end of 2020. In December 2015, SpaceX achieved the first ever propulsive landing of an orbital rocket booster, and it now routinely lands and reuses them, with the goal of 10 flights per booster before extensive refurbishment. It also achieved the first recovery and reuse of a rocket fairing. The second stage is not recoverable. Falcon 9 became the first commercial rocket to lift humans to orbit on May 30, 2020, on the Demo-2 mission. Falcon 9 has failed twice, but only once in flight. In June 2015, a structural failure destroyed the rocket during ascent on a cargo mission to the International Space Station.
In September 2016, a fueling anomaly caused the total loss of the rocket and payload on the launch pad. Several Merlin engines have failed in flight, but none of those failures has caused a loss of mission. Falcon Heavy is a variant of Falcon 9 intended for high-orbit and interplanetary missions, consisting of three Falcon 9 first-stage cores strapped together. Falcon Heavy is the most powerful operational rocket today (second only to the Saturn V on the all-time scoreboard), able to carry about 30,000 kg to LEO in fully recoverable mode, and up to 63,800 kg in fully expendable mode. It has flown three times as of 2020, all successfully. The side boosters detach first and perform a return-to-launch-site maneuver, with a perfect landing record (so far). The center core flies farther and has landed in only one of three attempts. After the one successful drone ship landing of the center core on Falcon Heavy's second mission, the booster toppled in high seas and was destroyed. This has led to a popular meme among SpaceX fans: "the curse of the center core". Falcon Heavy is expected to be extensively involved in supporting NASA's Artemis program, including launching supply craft to the Lunar Gateway as well as components of the Gateway itself.

Others

Future rockets

Long March Family (CASC, China)
Long March 8 is intended to be a partially reusable launch vehicle whose first-stage booster will land propulsively like a Falcon 9. The first test flight is planned for 2020, but there is scarce official data about its design and capabilities. It is expected to have a payload capacity of 7,600 kg to LEO. Long March 9 is a super-heavy-lift launch vehicle currently in conceptual study, with a first flight potentially occurring in 2030. In terms of capability, it is expected to offer a payload capacity of 100,000 kg to LEO, but its primary purpose will be to take Chinese astronauts to the Moon.
New Glenn and New Armstrong (Blue Origin, USA)
Blue Origin, owned and operated by Jeff Bezos, is developing two orbital rockets. The first of these is named New Glenn, a heavy-lift vehicle planned to enter service in 2022. New Glenn is expected to have a payload of 45,000 kg to LEO and will be partially reusable. Its first-stage booster will land propulsively on an ocean platform in much the same way as Falcon 9. The second stage is expendable. New Glenn's first stage will be powered by seven BE-4 engines burning methalox, while the second stage will be powered by two BE-3U engines burning hydrolox. New Armstrong, also from Blue Origin, is a proposed super-heavy-lift launch vehicle with very few public details. Based on the company's naming conventions, it is believed that New Armstrong is meant to send astronauts to the Moon.

Space Launch System (SLS) (Boeing/NASA, USA)
The Space Launch System, or SLS, is a super-heavy-lift launch vehicle currently under development by Boeing at the request of NASA. As of early 2021, the first core stage is in final trials, with the goal of an uncrewed test flight in 2022. SLS has several variants, the first of which has a payload to LEO of 95,000 kg, with a planned upgraded version that can carry 130,000 kg. When operational, SLS will briefly claim the title of most powerful rocket in service, although it will still be less powerful than the Saturn V was. SLS will use a core stage powered by four RS-25 engines burning hydrolox, assisted by two solid rocket boosters. The initial version of the second stage will use a single RL10B-2 engine burning hydrolox. The proposed Exploration Upper Stage (EUS) will use four RL10 engines, also burning hydrolox. The primary use of SLS will be for NASA's Artemis program, intended to return humans to the Moon. It may also be used for interplanetary missions depending on availability. The SLS program has faced criticism over its handling and practicality.
The rocket alone costs 1.5 billion USD per launch, and more than 20 billion will have been spent on its development by the time it launches. In combination with all the other components of the Artemis program (the Orion capsule and the Human Landing System), each Moon mission will cost close to 8 billion USD when the total price is averaged out. This is less than Apollo, whose missions cost close to 10 billion USD each when adjusted to 2020 dollars, but even so it is seen by many as unsustainable. Additional criticism targets its cadence, as it will be capable of no more than one flight per year. SLS is also not very ambitious, reusing significant amounts of technology from the Space Shuttle program. It is a "safe choice", but faced with upcoming competition from commercial rockets such as SpaceX's Starship, it may be obsolete before it ever gets to the Moon. Time will tell which approach will be successful.

Starship (SpaceX)
Starship, under development by SpaceX, is a super-heavy-lift launch vehicle intended for full reuse. Its first stage is the Super Heavy booster, powered by up to 28 Raptor engines burning methalox, and its second stage is the Starship vehicle proper, powered by six Raptor engines burning methalox. As of 2021, prototype Starships are undergoing flight tests at the "shipyard" in Boca Chica, TX, while large-scale manufacturing facilities are being built. Starship's designed payload capacity is at least 100,000 kg and potentially up to 158,000 kg to LEO, and if its orbital refueling program succeeds, it will be able to transport that same payload to the Moon or Mars. When operational, the full Starship/Super Heavy stack will be the largest and most powerful rocket ever built, with nearly twice the liftoff thrust of the Saturn V. It is so powerful that it will primarily operate from platforms at sea; otherwise it would cause unacceptable noise pollution.
In contrast to SLS, which is expendable, meant for dedicated NASA missions, and will fly no more than once per year, Starship is designed for mass production and full, rapid reuse. Each booster will carry its Starship to space, return to land at its launch site, be refueled, and launch again, up to eight times per day. Each Starship will ascend to orbit and await one or more tanker missions that will dock, refuel it, then return to land. The Starship will then go on to the Moon, Mars, or other destinations. There will also be a version of Starship designed for suborbital, point-to-point flight on Earth, completing intercontinental trips in under an hour that would take 12 to 24 hours by commercial jet. This rapid reuse is expected to reduce the cost of access to space by at least an order of magnitude, if it works, which is not guaranteed at this point. It would, if successful, be a revolutionary vehicle. Early orbital flights of Starship could take place as early as 2021, but it is not expected to enter full service until 2022.

Vulcan Centaur (ULA, USA)
Vulcan Centaur is a heavy-lift rocket under development by United Launch Alliance (ULA). It is expected to enter service in 2021 and will carry payloads of up to 27,200 kg to LEO. The Vulcan first stage will be powered by two BE-4 engines (developed by Blue Origin) burning methalox, assisted by up to six GEM-63XL solid rocket boosters. The Centaur second stage will use two RL-10 hydrolox engines. Vulcan Centaur will replace Atlas V in ULA's lineup, and will have a Heavy variant expected to replace Delta IV Heavy for high-orbit and interplanetary missions. Vulcan Centaur is provisionally designed for partial reuse. Its first flights will be fully expendable, but there are plans to detach the engine section of the Vulcan first stage (the heaviest and most expensive part of the vehicle) and allow it to reenter safely using a heat shield.
After this, it will deploy parachutes for an aerial helicopter catch attempt.
https://tvtropes.org/pmwiki/pmwiki.php/UsefulNotes/RocketsAndPropulsionMethods
Our aim is to enable efficient centralized decision making among swarms of agents that are tasked to intercept or track a swarm of target vehicles. Specifically, we seek an optimal centralized assignment policy that is capability-aware: it can leverage known dynamics of the agents and targets to make optimal assignments that respect the capabilities of the agents and targets. We approach this problem by posing an objective function that accounts for both the high-level cost of all assignments and the low-level costs of the optimal control policies used by each agent. We add differential constraints arising from vehicle dynamics to complete the optimization formulation. This approach stands in contrast to the majority of techniques that use distance-based (or bottleneck-assignment) cost functions [26, 15, 30]. The approach we take in this work is based on the realization of the close relationship between the given problem and the theory of optimal couplings, or optimal transport [34, 35]. In the context of probability theory, to which it is often applied, optimal transport studies the problem of determining joint distributions between sets of random variables whose marginal distributions are constrained. In other words, it tries to find a coupling that maps a reference measure to a target measure. Optimal transport has been applied to a wide variety of other areas as well; for instance, it has been used to great effect in machine learning [9, 11, 19], image manipulation, and Bayesian inference.

I-A Innovation and Contributions

The fundamental insight we use to relate OT to the present context is that the set of agents may be viewed as a discrete measure that we seek to map to the discrete measure defined by the set of targets. In this way, we consider discrete optimal transport (DOT).
Our context also differs from the standard DOT problem in that the target measure is changing and in that the transport of the reference measure to the target measure must respect the differential constraints given by the dynamics. Our innovation is that we can address these issues by introducing a new metric that respects the dynamics, as explored by Ghoussoub et al., rather than the traditional unweighted Euclidean metric that underpins the Wasserstein or "Earth Mover's" distance. Our proposed metric uses the optimal control cost of a single-agent vs. single-target system as the cost of the proposed assignment. For instance, if the agents will perform LQR reference tracking to intercept their targets, then the LQR cost is used as the transportation cost. Alternatively, if the agents will solve a pursuit-evasion game, then the transportation cost will be obtained from the solution to the differential game. In this way, the assignment becomes aware of the capabilities of the system, including the differential constraints and the decision-making approach of individual agents. Our problem is specified by two inputs:
- The dynamics of the agents and their targets
- A mechanism to evaluate a feedback policy and its cost for any single agent
Using these two specifications, we form a cost function that is the sum of all individual agent cost functions, and we seek an assignment that minimizes this total cost. Critically, the cost used for each agent is that of its feedback policy, not the distance. Typically, such feedback policies are obtained to optimally regulate or operate the underlying agent. Thus, the cost incurred by an agent that is following its feedback policy is a more appropriate measure of optimality than one based on the distance the agent must travel.
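As a minimal illustration of why a control-based transportation cost can change the assignment, consider the following sketch. The agent positions, the scalar "agility" parameter (a made-up stand-in for the control-effort weighting in an LQR cost), and the quadratic cost model are all hypothetical; a brute-force search stands in for the linear-programming solution used in the paper:

```python
from itertools import permutations

def best_assignment(cost):
    """Brute-force Monge assignment: the permutation of targets that
    minimizes total cost (fine for the tiny examples here)."""
    n = len(cost)
    return min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))

# Hypothetical 1-D scenario: two agents with (position, agility), two targets.
agents  = [(0.0, 1.0), (1.0, 0.1)]   # agent 1 is far less maneuverable
targets = [0.9, 2.0]

# Distance-based cost: |x_agent - x_target|
dist_cost = [[abs(x - t) for t in targets] for (x, _) in agents]

# Capability-aware cost: control effort ~ distance^2 / agility, a toy
# proxy for using an optimal control cost as the transport cost.
ctrl_cost = [[(x - t) ** 2 / a for t in targets] for (x, a) in agents]

print(best_assignment(dist_cost))   # pairing chosen by distance alone
print(best_assignment(ctrl_cost))   # pairing chosen by control cost
```

In this example the two cost matrices produce different optimal pairings: the distance metric sends the distant target to the sluggish agent, while the capability-aware cost reserves the sluggish agent for the nearby target, which is exactly the behavior the proposed metric is meant to capture.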
Our approach provides a solution for this problem and consists of the following contributions:

- a new capability-aware assignment approach that simultaneously optimizes the assignment and the underlying feedback controls of the agents;
- a reformulation of the vehicle-target dynamic assignment problem as a linear program by leveraging concepts from discrete optimal transport.

These contributions are supported by both theoretical and simulation results. In particular, we prove that our cost function can be reformulated into the Monge problem from optimal transport. This problem can then be solved via a linear programming approach. The capability-aware assignment is demonstrated to have a lower final cost than a distance-based assignment that neglects the feedback control policy. We empirically show that the optimality gap between our approach and distance-based metrics grows with the number of agents. Finally, we prove that, after formulating the assignment problem in the DOT framework, it need only be solved once rather than repeatedly over the life of the system. As a result, we see significant computational benefits compared to repeatedly recalculating a distance-based assignment.

I-B Related work

Assignment and resource allocation problems present themselves across many disciplines. In the area of multirobot task assignment, self-organizing-map neural networks were designed to learn how to assign teams of robots to target locations under a dual decision-making and path-planning framework. However, the algorithm proposed in that work is largely heuristic and does not consider the underlying capabilities of the assigned robots. Other papers have considered more general kinematic movements of the formations as a whole, rather than individual agent capabilities, and were able to provide suboptimality guarantees for the overall assignment.
Another approach that is very similar to ours solves a related linear programming problem. However, that approach did not consider the effect of general system dynamics or of a changing set of targets. A similar assignment problem arises in vehicle-passenger transit scheduling, which has become extremely important in ride-sharing applications. Alonso-Mora et al. [2] investigated dynamic trip-vehicle assignment that is both scalable and optimal using a greedy guess that refines itself over time. In general, these problems lack consideration of the underlying dynamics of the resource being assigned or of the task it is assigned to. Assignment problems also arise in wide areas of econometrics dealing with matching population models to maximize total utility surplus, contract theory, or risk management [20, 22]. In general, these problems also do not consider the underlying dynamic nature of the assignment problem. One closely related application area that, at times, also considers the dynamics in completing an assignment is the so-called weapon-target assignment (WTA) problem [23, 8]. The WTA problem comes in two forms: the static WTA and the dynamic WTA. In the static WTA problem, all of the assignments are made at once with no feedback possible, whereas the dynamic WTA allows for feedback between decision-making phases. Our approach is related to this problem as a certain mixture of the two: first, it considers the explicit dynamic capabilities of the agents and targets during the assignment; and second, it potentially allows for reassignment of the agents during operations. Our setup can also be viewed as a limiting case of the traditional WTA in that we assume that once a weapon intercepts its target it successfully destroys it with probability one. This contrasts with the traditional WTA setting, where a weapon might only have a certain probability of destroying its target.
The traditional WTA problem (with probabilistic survival of targets after interception) has typically been formulated as a nonlinear integer programming problem for which exact methods have not yet been found. As a result, a large number of heuristic and/or approximate approaches have been developed: for instance, approaches based on approximate dynamic programming [12], fuzzy reasoning [33], various search heuristics, genetic and/or global optimization approaches [28, 29], and network-based heuristics, amongst others, have all been studied. In comparison to these previous works on WTA, we provide several contributions. Our proposed (and, as far as we are aware, previously unrecognized) link to optimal transport theory can yield additional theoretical and computational guarantees. Finally, we review some connections between our proposed approach and existing solutions in robotics and control. Frederick et al. [18] investigated multi-robot path planning and shape formation underpinned by optimal transport, proving that the desired formations can be obtained while maintaining collision-free motions with assurance of convergence to global minima. Similarly, Bandyopadhyay et al. [5, 4, 6] describe an approach where swarm members are placed into bins with constraints that must be satisfied in order to permit the transition of agents to neighboring bins. These motion constraints are representative of the dynamics or physical limitations present in the system. In terms of the approach described in this paper, their optimal transport cost metric is thus a modified distance between the centroids of the bins subject to a motion-constraint matrix: if motion is possible, the cost is the distance; otherwise the cost is the maximum value. Here we consider a specific setting, one with deterministic and known dynamics, for which we can prove optimality.
While we do not consider limitations on communication between agents, this problem has also been considered in the decentralized decision-making context, where each vehicle makes its own decisions. In this case all of the agents must come to a consensus through different communication/negotiation strategies; see, e.g., greedy assignment strategies and the game-theoretical formulation of [3].

II Problem Definition

In this section we define the dynamic agent-target assignment problem. We begin by describing a dynamical system that governs the evolution of active agents, targets, and destinations. We then state the optimization problem that we seek to solve.

II-A Dynamical System

We limit our presentation to the case of linear, control-affine systems for clarity of exposition. Our approach and theory are also valid for nonlinear systems, given an ability to compute the policy costs associated with nonlinear controllers. Consider a positive number of autonomous agents (resources) and targets. For agent i and target j, their states at time t are denoted by x_i(t) and y_j(t), respectively, and agent i takes actions u_i(t). In our problem, the number of agents and targets can only decrease with time; we leave consideration of newly appearing targets to future work. Each object that has not been removed is termed active, so that at each time we have a set of active agents and a set of active targets. An agent/target pair becomes inactive when the agent successfully intercepts its target or completes its resource allocation. Positions of the agents and targets are extracted from their states by suitable functions. Successful resource allocation occurs when the position of agent i lies within an ε-ball of target j; both then become inactive. The activity of the agents and targets at each time is represented by the active sets. For instance, if all agents are active then the active set contains every agent, whereas if agent i has successfully reached target j, then i is removed from the active agent set and j is removed from the active target set.
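The ε-ball deactivation rule described above can be sketched in a few lines. This is a minimal illustration, not code from the paper: the capture radius `EPS`, the array layout, and the `assignment` map are our own illustrative choices.

```python
import numpy as np

EPS = 0.1  # hypothetical capture radius for the epsilon-ball test


def update_active_sets(agent_pos, target_pos, active_agents, active_targets, assignment):
    """Deactivate an agent/target pair once the agent is within EPS of its target."""
    for i in sorted(active_agents):
        j = assignment[i]
        if j in active_targets and np.linalg.norm(agent_pos[i] - target_pos[j]) < EPS:
            active_agents.discard(i)   # agent i becomes inactive
            active_targets.discard(j)  # its target j becomes inactive too
    return active_agents, active_targets


# Agent 0 is within EPS of target 0, so both are deactivated; pair (1, 1) is not.
agents = np.array([[0.0, 0.0], [5.0, 5.0]])
targets = np.array([[0.05, 0.0], [10.0, 10.0]])
A, T = update_active_sets(agents, targets, {0, 1}, {0, 1}, {0: 0, 1: 1})
```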
This process defines an evolution of the active sets. At a given time t, the active agents and targets evolve according to a differential equation whose drift terms correspond to the linear dynamics of the agents and to the closed-loop linear dynamics of the targets. Note that here we have assumed linear dynamics; however, this assumption is entirely unnecessary for our theory in Section V. It is, however, more computationally tractable because it leads to solutions of sets of linear optimal control problems, for instance LQR or LQI. A more significant assumption implied by these dynamics is that there is no interaction between agents, i.e., we do not consider collisions or other interference effects. We leave this matter for future work, but note that in the simulation examples in Section VI collisions between agents did not generally occur. Finally, the entire state of the system is defined by a tuple collecting the agent states, target states, and active sets. The exit time is defined as the first time that the state exits the set of states for which there is at least one active target or destination.

II-B Policies, cost functions, and optimization

We seek a feedback policy that maps the system state to a set of controls for all active agents. The policy is represented by a tuple

(1)

where the first component is an index function that assigns active agents to active targets and the second is a feedback control policy for the individual agents. The goal, then, is to determine an optimal feedback policy of this form. An optimal feedback policy is one that minimizes

(2)

where the stage cost maps the state of the system to the reals and the terminal time is the first exit time. The optimal value function is denoted by

(3)

The stage cost is intended to guide each agent to its assigned target and is therefore represented by the sum

(4)

where the cost assigned to agent i is a function of the corresponding agent state, the agent control, and the target to which the agent is assigned.
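A plausible reconstruction of the stage cost in (4), together with the quadratic tracking instance discussed next, reads as follows. The symbols here (σ for the index function, ξ for the joint agent-target state, bars for steady-state values) are our own notation, not verbatim from the original:

```latex
c(s, u) \;=\; \sum_{i \in \mathcal{A}(t)} c_i\bigl(x_i, u_i, y_{\sigma(i)}\bigr),
\qquad
c_i \;=\; (\xi_i - \bar{\xi}_i)^\top Q \,(\xi_i - \bar{\xi}_i)
      \;+\; (u_i - \bar{u}_i)^\top R \,(u_i - \bar{u}_i),
```

where the active set 𝒜(t) ranges over the agents still in play, Q penalizes deviation of the joint agent-target state ξ_i from its steady-state value, and R penalizes deviation of the control from its steady-state value.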
For instance, this cost could be a quadratic corresponding to an infinite-horizon tracking problem

(5)

where the weight matrices penalize the deviation of the agent-target system and of the control from the steady-state values for the agent and assigned-to target, respectively. These transient and steady-state terms represent the dual goals of this particular optimal controller, which are to drive the error of the agent-target system to the optimal steady state and then keep the system at this optimal state.

III Discrete optimal transport

In this section we provide background on discrete optimal transport and indicate how it relates to our dynamic assignment problem. We follow the description given by [31]. Let the probability simplex denote the set of nonnegative weight vectors summing to one, so that any admissible weight vector belongs to the simplex

(6)

A discrete measure is defined by fixed locations and weights, and denoted by

(7)

The transport map between two discrete measures is a surjective function that satisfies

(8)

Compactly, the above is written as a push-forward relation: the second measure is the push-forward of the first under the map T.

III-A Monge problem

We seek optimal assignments for the agent-target system, which means we seek a map that minimizes the transportation cost. Let the transportation cost be defined pointwise as

(9)

The Monge problem then seeks a map T that minimizes

(10)

To parameterize T, we can define an index function, just as in Equation (1). The problem with optimizing Equation (10) is that it is non-convex. In general, convexity can be achieved by relaxing the deterministic nature of the map so that portions of the mass at a source location may be directed towards several targets. The resulting stochastic map is defined by a coupling matrix (transition probability matrix, or stochastic matrix) whose (i, j) entry indicates the portion of the mass at source location i assigned to target location j. The set of allowable coupling matrices consists of those nonnegative matrices whose row sums and column sums match the source and target weight vectors, respectively.
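The push-forward constraint defining the allowable coupling matrices can be checked concretely. The sketch below (our own illustrative example; the uniform weights and the particular index function `sigma` are assumptions) builds the coupling induced by a deterministic Monge map and verifies that it lies in the feasible set:

```python
import numpy as np

n, m = 3, 3
a = np.full(n, 1.0 / n)  # weights of the source (agent) measure
b = np.full(m, 1.0 / m)  # weights of the target measure

# A deterministic Monge map T encoded by an index function sigma: i -> sigma[i].
sigma = np.array([2, 0, 1])

# The induced coupling matrix P: P[i, j] = a[i] if sigma[i] == j, else 0.
P = np.zeros((n, m))
P[np.arange(n), sigma] = a

# P lies in the feasible set U(a, b): rows sum to a and columns sum to b,
# i.e., the map pushes the source measure forward onto the target measure.
row_ok = np.allclose(P.sum(axis=1), a)
col_ok = np.allclose(P.sum(axis=0), b)
```

For a non-permutation `sigma` the column sums would no longer match `b`, so the induced coupling would leave the feasible set; this is exactly the surjectivity requirement on the transport map.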
The Monge-Kantorovich optimization formulation then becomes

(11)

and it can be solved with linear programming. Under the conditions given next, the solution of this optimization problem is equal to the solution of the Monge problem.

III-B Matching problem

The matching problem is a particular realization of OT with the property that the minimizer of (11) is equal to that of (10). The formal statement of this equivalence is given below.

Proposition 1 (Kantorovich for matching (Prop. 2.1, [31]))

In this setting, we seek a one-to-one mapping. The constraint set becomes the set of doubly stochastic matrices, and the coupling matrix has elements

(12)

In the context of our assignment problem, this case occurs when there is an equal number of agents and targets. A discrete optimal transport formulation can also be applied to a relaxation of the Kantorovich problem so that several agents can be assigned to the same target. This relaxation can also guarantee binary coupling matrices (essential for our application). For further details, we refer to [31].

III-C Metrics

The choice of cost is problem dependent; however, the most commonly used cost for optimal transport between distributions is the Euclidean distance. Parameterized by p, it is given by

(13)

where the norm is the Euclidean norm. Using this metric for points, the resulting optimal transport cost can be viewed as a metric between measures, called the p-Wasserstein distance. In the statistical community, this metric is also called the Earth Mover's Distance (EMD). This metric implies that the cost of moving a resource is dominated by the distance between source and target in Euclidean space; the total cost of the assignment then becomes a sum of distances. In our application to assignment in dynamical systems, the Euclidean metric may not be the most appropriate because it does not account for the underlying dynamics of the system.
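The equivalence stated in Proposition 1 can be illustrated numerically: for the matching case, the Kantorovich linear program reduces to the linear assignment problem, whose solution coincides with a brute-force search over all Monge maps. This sketch uses SciPy's Hungarian-method solver in place of a general LP solver (the random positions and squared-Euclidean cost are our own illustrative choices):

```python
import numpy as np
from itertools import permutations
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
n = 4
agents = rng.normal(size=(n, 2))   # hypothetical agent positions
targets = rng.normal(size=(n, 2))  # hypothetical target positions

# Cost matrix for a squared-Euclidean (EMD-style) metric as in Eq. (13).
C = cdist(agents, targets, metric="sqeuclidean")

# Solving the Kantorovich LP in the matching case reduces to the linear
# assignment problem; the optimal coupling is a permutation matrix.
rows, cols = linear_sum_assignment(C)
lp_cost = C[rows, cols].sum()

# Brute-force minimum over all n! Monge maps (tiny n only).
brute = min(sum(C[i, p[i]] for i in range(n)) for p in permutations(range(n)))
```

Here `lp_cost == brute`, confirming that the LP relaxation loses nothing in the matching setting.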
One of our insights is that using a metric determined by the underlying dynamics of the problem leads to more optimal assignments.

IV Assignment in dynamic systems with DOT

In this section we describe how DOT can be applied to minimize Equation (2). As previously stated, our goal is to determine an assignment policy that is “capability-aware.” In other words, the assignment policy must account for the dynamics of the system, i.e., the capabilities of the agents and the targets. A direct application of the EMD metric within DOT would potentially require constant reassignment at each timestep because the metric does not account for the future system state. In other words, it would be greedy and simply assign each agent to minimize the total distance between agent/target pairs. In the next two subsections we first describe an algorithm that leverages knowledge of the interception strategies of each agent to make an assignment, and then we provide and discuss pseudocode to illustrate the flexibility of our approach.

IV-A Algorithm

The metric we propose for the transportation cost of assigning an agent to a target is the cost corresponding to the optimal actions of a one-agent-to-one-target optimization problem. For instance, assume that agent i is paired with target j; then in the 1v1 scenario for a given policy we have a total incurred cost of

(14)

The optimal policy is obtained by minimizing this value function. Our proposed transportation cost is this optimal value function:

(15)

For example, for linear dynamics with quadratic cost the transportation cost becomes

(16)

where a suitable function combines the agent and target state into the required form.
For instance, a reference-tracking formulation can be used, in which the quadratic weight is the solution of the continuous algebraic Riccati equation for the LQR-based tracker, combined with the feed-forward control of the state being tracked and a function that provides the steady-state value for the quadratic agent-target state.

IV-B Pseudocode

In this section we provide and describe the pseudocode for the proposed algorithm. A sample implementation that makes specific choices about the dynamics and policies is shown in Algorithm 1. This algorithm takes as inputs all of the agent states, target states, and dynamics. In Line 1, the assignment and individual agent policies are obtained by querying Algorithm 2. Algorithm 2 performs the optimal transport allocation. Its inputs are all of the states and their dynamics, an algorithm for computing the policy for each agent when it is assigned to some target, and a cost metric. Algorithm 1 makes two specific choices for these components. First, it uses the linear quadratic tracker (LQT) developed in [36], which assumes linear dynamics. However, if the dynamics are nonlinear, any other computable policy can be used. Second, the cost metric is the dynamics-based distance given by Algorithm 3, which uses the cost of the LQT policy (16) as the transportation cost. Algorithm 2 has two steps. First, it calls the discrete optimal transport routine with a pre-specified distance metric to obtain an assignment. It then iterates through all agents and obtains the individual policy for each agent that follows the assignment. The high-level Algorithms 1 and 5 demonstrate the differences between our approach and the standard approach that uses the distance metric (Algorithm 4). Algorithm 4 evaluates the distance directly by extracting the positions (pos, though an entire state can also be used) and discards the cost of the actual policy. As a result, this assignment needs to be continuously recomputed.
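The dynamics-based transportation cost of (15)-(16) can be sketched for a simple regulation case: the optimal LQR cost-to-go, obtained from the Riccati solution, is used to fill the cost matrix that the assignment routine consumes. This is a minimal sketch under our own assumptions (1-D double-integrator error dynamics, pure regulation rather than the paper's full LQT with feed-forward, and illustrative weights):

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.optimize import linear_sum_assignment

# 1-D double-integrator error dynamics: e = (position error, velocity error).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.0])   # penalize position error only, as in Section VI
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)  # stationary Riccati solution


def lqr_value(e0):
    """Optimal infinite-horizon cost-to-go V(e0) = e0' P e0, used as c(x_i, y_j)."""
    return float(e0 @ P @ e0)


rng = np.random.default_rng(1)
agents = rng.normal(size=(3, 2))   # hypothetical (pos, vel) per agent
targets = rng.normal(size=(3, 2))  # hypothetical (pos, vel) per target

# Dynamics-aware transportation cost of Eq. (15): the 1v1 optimal value.
C = np.array([[lqr_value(a - t) for t in targets] for a in agents])
rows, cols = linear_sum_assignment(C)  # capability-aware assignment
```

Unlike a distance matrix, this cost matrix accounts for velocities: an agent already moving toward a far target may be cheaper to assign there than to a nearby target it is moving away from.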
In Section V we prove that our approach requires the assignment to be generated only once.

V Analysis

In this section we analyze the proposed algorithms. Our aim is to show that the optimization problem (2) can be reformulated into the Monge-Kantorovich optimal transport problem. We follow a two-step procedure: first we show that the optimal assignment policy of Equation (1) does not change with time, and then we show that the problem is identical to the Monge problem.

Proposition 2 (Constant assignment policy)

The optimal assignment policy for minimizing (2) is fixed over time. In other words, if an agent is assigned to a target at some time, then it remains assigned to that target at all later times.

We consider the case of two agents first, and then extend our approach to multiple agents through induction. We start with the case of two agents and two targets. We compare two policies: a time-varying policy that includes at least one switch, and a second policy that does not switch and whose individual controllers minimize Equation (14). We first consider the case where the switching policy's final assignment is equivalent to its initial assignment. In this case, let one time denote when the first switch occurs and a later time denote the final switch back to the original assignment. Without loss of generality, assume the initial assignment pairs the first agent with the first target, and consider the active agents and exit times under the switching policy. Using an indicator function, the total cost associated with the switching policy can be rewritten as a sum over each agent

(17)

We can then break up each agent's integral into three sections corresponding to the cost before the first switch, between the two switches, and after the final switch. Finally, suppose the second policy maintains the original assignment throughout.
In this case, because the non-switching controllers are optimized for the original assignment, they clearly result in lower costs than those incurred by the switching policy during the period between the two switches. In other words, since each agent ends up targeting the same target that it initially targeted, it is at least as effective to directly follow the policy to the target as to make intermediate deviations toward the other target. An identical argument follows for the case where the final assignment differs from the initial assignment; in that case, the final switch time is set to the exit time. The case of more than two agents and targets follows by noticing that any system of agents can be analyzed by considering a system of two modified agents: the first modified agent is the augmentation of all but the last agent, and the second is the last agent. The scenario is then identical to the 2v2 assignment, and the same argument follows. Now that we have shown that the optimal assignment is time independent, we can show that the minimizer of our stated optimization problem (2) is the same as that of the Monge problem (10).

Theorem 1 (Optimization problem equivalence)

We use the fact that the optimal policy maintains a fixed index assignment vector for all time. Given the initial state of the system, the cost for any initial state can be represented by a chain of relations in which the first equality comes from Equations (3) and (4); the second follows the same argument as Equation (17); the third follows from the definition of the 1v1 value function; and the final inequality follows from the definition of the transportation cost in Equation (15). Because of the definitions of the 1v1 exit times, we implicitly restrict the cost function to those policies under which the agents reach their targets, where the source measure is the initial distribution of the agents and the target measure is the distribution of the targets at interception. Strict equality is obtained when the policies correspond to the optimal 1v1 policies that minimize (14). Thus, we have proved the stated result.
VI Simulation Results

We now numerically demonstrate the effectiveness of our approach through several simulated examples. In each example, we have used the Python Optimal Transport library [17] to solve the underlying DOT problem, and the dynamics are integrated via the RK45 integration scheme.

VI-A Double integrators in three dimensions

In this section we demonstrate that using the dynamics-based cost function instead of the standard distance-based Wasserstein metric yields significant savings that increase with the size of the system. We consider agent/target systems of sizes 5 vs. 5, 10 vs. 10, 20 vs. 20, and 100 vs. 100. This set of examples uses a simple system of double integrators in three dimensions, where the velocity term is directly forced and each agent has three control inputs (one for each dimension). The target dynamics are identical to the agent dynamics. Each agent uses an infinite-horizon linear-quadratic tracking policy whose stage cost is given by

(18)

where the error terms are the transient error and steady-state error between the agent state and an assigned-to target state, respectively, and the control terms are the control input that drives the agent to the assigned-to target and the control input for the agent operating at steady-state conditions. For the weight matrices we choose nonzero weights for the errors in position in each dimension and zero weights for the errors in velocity, together with a scalar control penalty. The targets use an identical tracking policy; however, they track certain fixed positions in space. The initial conditions of the system consist of the positions and velocities of each agent and target, a set of stationary locations that are tracked by the targets, and a set of assignments from each target to the stationary locations.
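The 3-D double-integrator closed loop described above can be sketched and integrated with the same RK45 scheme used in the experiments. This is an illustrative sketch, not the paper's setup: the gain `K` here is a simple hand-picked stabilizing gain (critically damped per axis), not the LQT gain, and the reference is stationary.

```python
import numpy as np
from scipy.integrate import solve_ivp

# 3-D double integrator: x = (p, v) in R^6, dx/dt = A x + B u.
A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [np.zeros((3, 3)), np.zeros((3, 3))]])
B = np.vstack([np.zeros((3, 3)), np.eye(3)])


def closed_loop(t, x, K, x_ref):
    """Agent tracking a (here: stationary) reference with feedback u = -K (x - x_ref)."""
    u = -K @ (x - x_ref)
    return A @ x + B @ u


# Hand-picked gain giving s^2 + 2s + 1 = 0 per axis (critically damped).
K = np.hstack([np.eye(3), 2.0 * np.eye(3)])
x0 = np.zeros(6)                                  # start at origin, at rest
x_ref = np.concatenate([np.ones(3), np.zeros(3)])  # target: unit position, zero velocity

sol = solve_ivp(closed_loop, (0.0, 20.0), x0, args=(K, x_ref),
                method="RK45", rtol=1e-8)
# The agent converges to the reference position with zero velocity.
```

Swapping the hand-picked `K` for the Riccati-based tracker gain recovers the kind of policy whose accumulated cost serves as the transportation metric.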
These conditions are randomly generated for the following results: the initial positions and velocities of the agents are uniformly distributed on fixed intervals; the initial conditions of the targets are equivalent, but with velocity components drawn from a separate uniform distribution; and the terminal target locations are selected uniformly at random. In Figure 1, we show the cumulative control costs incurred by a system of 100 agents while they attempt to track 100 targets. Recall that the EMD-based objective assigns agents to targets with the aim of minimizing the total Euclidean distance. This assignment does not account for the dynamics of the agents and, as a result, it performs worse than the dynamics-based assignment, which accounts for the effort actually required to get each agent to its assigned target. Mechanically, this performance difference arises because agents are either incorrectly assigned at the beginning or because agents switch assignments over the course of their operations. For this simulation, the EMD-based policy checks whether reassignment is necessary every 0.1 seconds. Because visualizing the movements of 100 agents and targets is difficult, we demonstrate prototypical movements for a 5 vs. 5 system in Figure 2 and its X-Y projection in Figure 3. These figures show both the optimal trajectories of the agents and targets under the dynamics-based optimal assignment and the sub-optimal trajectories of the EMD-based assignment. Agents A0 and A1, for example, take significantly different paths to different targets. The movements corresponding to the EMD-based policy require more maneuvering, whereas the dynamics-based policy leverages the dynamic capabilities of the agents to select the targets that each individual can optimally track over time.
Finally, note that the individual agent controllers that we use are fundamentally tracking controllers; thus, the agents act to match the velocity and position of their targets. This is why several maneuvers show an agent passing and then returning to its target, for instance A1 to T3 under the EMD policy. Since the dynamics-based assignment policy selects an optimal assignment at the initial time, it offers significant control-cost benefits over assignments that continually reassign the agents based on the EMD. The benefit of dynamics-based assignment grows with the size of the system. To demonstrate this fact, we perform Monte Carlo simulations of one hundred realizations each of 5 vs. 5, 10 vs. 10, and 20 vs. 20 systems by sampling over initial conditions. As the complexity of the engagement increases, the amount of additional control effort required by the EMD-based assignment grows, as shown in Figure 4. Furthermore, Figure 5 illustrates that as the system size grows the EMD-based policy performs more switches; this fact contributes to the observed loss in efficiency of the EMD-based policy.

VI-B Linearized Quadcopter

We now compare the algorithms on swarms with linearized quadcopter dynamics, which are slightly modified versions of double integrators. The dynamics of both the agents and the targets are given by a twelve-dimensional state-space model consisting of the position, attitude, translational velocity, and rotational velocity components of the vehicle. Linearization was performed under small-oscillation and small-angle approximations. Furthermore, we assume no wind disturbance forces or torques. The control inputs are four dimensional and consist of the force acting on the vertical thrust and the torques about the three principal axes. The initial positions and velocities of the agents are sampled uniformly from fixed intervals.
The initial velocities of the targets, as well as the attitude and rotational-velocity terms for both agents and targets, were likewise sampled from uniform distributions, and the terminal target locations were randomly selected from a uniform distribution. The control parameters for the agents and targets are updated accordingly. As with the double-integrator systems, the dynamics-based assignment policy is able to optimally assign the more complex quadcopter agents to complete their tracking task with minimal cost. Figure 6 illustrates the cumulative cost expended by the agent swarm and once again showcases the optimality of the dynamics-based assignment method. Unlike the EMD policy, the complete dynamic information of the swarm members is used in the decision-making process, as opposed to only the Euclidean distance components. In the end, the EMD-based assignment policy incurs a cost that is 1.7 times greater than that of the dynamics-based assignment policy. Figures 7 and 8 show the paths taken by the agents managed by the EMD and dynamics-based policies. Agents 1 and 4, in particular, are able to take advantage of their initial dynamic states to cheaply track their targets, instead of being reassigned mid-flight (by the EMD-based policy) to closer targets that appear. In this case, the reassignment causes extreme turning maneuvers that require significant control expense. Since the linearized quadcopter operates over a larger state space, performing assignments is more computationally expensive; and since the EMD-based policy requires checking and updating assignments at every time increment, it incurs significantly greater computational expense. For this problem, the total cost of all reassignments required 0.6 seconds under the EMD policy, a significant portion of the total simulation time of five seconds.
VII Conclusion

In this paper we have demonstrated how to reformulate a dynamic multi-vehicle assignment problem into a linear program by linking this problem with the theory of optimal transport. This theory allows us to prove optimality and to increase system efficiency using our approach. In the end, we have developed an assignment approach that is capability-aware: the assignment accounts for the capabilities of all the agents and targets in the system. One direction of future research is the incorporation of constraints amongst the various agents to avoid collisions or other interactions. An extension of DOT theory in this direction could greatly increase the tractability of numerous multi-agent swarm operations, for example large-scale formation flight. Another direction for future research is the incorporation of stochastic dynamics and partial state information. In either case, the approach described in this paper can be used as the basis of a greedy or approximate dynamic programming approach of the kind traditionally used for these problems. Finally, we can incorporate learning into the framework, where the agents periodically update their knowledge about the intent of the targets.

VIII Acknowledgments

We would like to thank Tom Bucklaew and Dustin Martin of Draper Laboratory for their helpful guidance and vision in support of this project. This research has been supported by Draper Laboratory, 555 Technology Square, Cambridge, MA 02139.

References

- Ravindra K Ahuja, Arvind Kumar, Krishna C Jha, and James B Orlin. Exact and heuristic algorithms for the weapon-target assignment problem. Operations Research, 55(6):1136–1146, 2007.
- Javier Alonso-Mora, Samitha Samaranayake, Alex Wallar, Emilio Frazzoli, and Daniela Rus. On-demand high-capacity ride-sharing via dynamic trip-vehicle assignment. Proceedings of the National Academy of Sciences, 114(3):462–467, 2017.
- Gürdal Arslan, Jason R Marden, and Jeff S Shamma.
Autonomous vehicle-target assignment: A game-theoretical formulation. Journal of Dynamic Systems, Measurement, and Control, 129(5):584–596, 2007. - Saptarshi Bandyopadhyay. Novel probabilistic and distributed algorithms for guidance, control, and nonlinear estimation of large-scale multi-agent systems. PhD thesis, University of Illinois at Urbana-Champaign, 2016. - Saptarshi Bandyopadhyay, Soon-Jo Chung, and Fred Y Hadaegh. Probabilistic swarm guidance using optimal transport. In 2014 IEEE Conference on Control Applications (CCA), pages 498–505. IEEE, 2014. - Saptarshi Bandyopadhyay, Soon-Jo Chung, and Fred Y Hadaegh. Probabilistic and distributed control of a large-scale swarm of autonomous agents. IEEE Transactions on Robotics, 33(5):1103–1123, 2017. - Rainer E Burkard. Selected topics on assignment problems. Discrete Applied Mathematics, 123(1-3):257–302, 2002. - Huaiping Cai, Jingxu Liu, Yingwu Chen, and Hao Wang. Survey of the research on dynamic weapon-target assignment problem. Journal of Systems Engineering and Electronics, 17(3):559–565, 2006. - Guillermo Canas and Lorenzo Rosasco. Learning probability measures with respect to optimal transport metrics. In Advances in Neural Information Processing Systems, pages 2492–2500, 2012. - Avishai Avi Ceder. Optimal multi-vehicle type transit timetabling and vehicle scheduling. Procedia-Social and Behavioral Sciences, 20:19–30, 2011. - Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in neural information processing systems, pages 2292–2300, 2013. - Michael T Davis, Matthew J Robbins, and Brian J Lunday. Approximate dynamic programming for missile defense interceptor fire control. European Journal of Operational Research, 259(3):873–886, 2017. - Gonçalo Homem de Almeida Correia and Bart van Arem. Solving the user optimum privately owned automated vehicles assignment problem (uo-poavap): A model to explore the impacts of self-driving vehicles on urban mobility. 
Transportation Research Part B: Methodological, 87:64–88, 2016. - Tarek A El Moselhy and Youssef M Marzouk. Bayesian inference with optimal maps. Journal of Computational Physics, 231(23):7815–7850, 2012. - Jan Faigl, Miroslav Kulich, and Libor Přeučil. Goal assignment using distance cost in multi-robot exploration. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 3741–3746. IEEE, 2012. - Sira Ferradans, Nicolas Papadakis, Gabriel Peyré, and Jean-François Aujol. Regularized discrete optimal transport. SIAM Journal on Imaging Sciences, 7(3):1853–1882, 2014. - Rémi Flamary and Nicolas Courty. Pot: Python optimal transport library, 2017. - Christina Frederick, Magnus Egerstedt, and Haomin Zhou. Multi-robot motion planning via optimal transport theory. arXiv preprint arXiv:1904.02804, 2019. - Charlie Frogner, Chiyuan Zhang, Hossein Mobahi, Mauricio Araya, and Tomaso A Poggio. Learning with a wasserstein loss. In Advances in Neural Information Processing Systems, pages 2053–2061, 2015. - Alfred Galichon. Optimal transport methods in economics. Princeton University Press, 2018. - Nassif Ghoussoub, Young-Heon Kim, and Aaron Zeff Palmer. Optimal transport with controlled dynamics and free end times. SIAM Journal on Control and Optimization, 56(5):3239–3259, 2018. - Bryan S Graham. Econometric methods for the analysis of assignment problems in the presence of complementarity and social spillovers. In Handbook of social economics, volume 1, pages 965–1052. Elsevier, 2011. - Patrick A Hosein and Michael Athans. Some analytical results for the dynamic weapon-target allocation problem. Technical report, MASSACHUSETTS INST OF TECH CAMBRIDGE LAB FOR INFORMATION AND DECISION SYSTEMS, 1990. - Meng Ji, Shun-ichi Azuma, and Magnus B Egerstedt. Role-assignment in multi-agent coordination. 2006. - Roy Jonker and Anton Volgenant. A shortest augmenting path algorithm for dense and sparse linear assignment problems. Computing, 38(4):325–340, 1987. 
- Stephen Kloder and Seth Hutchinson. Path planning for permutation-invariant multirobot formations. IEEE Transactions on Robotics, 22(4):650–665, 2006. - Kwan S Kwok, Brian J Driessen, Cynthia A Phillips, and Craig A Tovey. Analyzing the multiple-target-multiple-agent scenario using optimal assignment algorithms. Journal of Intelligent and Robotic Systems, 35(1):111–122, 2002. - Zne-Jung Lee, Chou-Yuan Lee, and Shun-Feng Su. An immunity-based ant colony optimization algorithm for solving weapon–target assignment problem. Applied Soft Computing, 2(1):39–47, 2002. - Zne-Jung Lee, Shun-Feng Su, and Chou-Yuan Lee. Efficiently solving general weapon-target assignment problem by genetic algorithms with greedy eugenics.IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 33(1):113–121, 2003. - Dimitra Panagou, Matthew Turpin, and Vijay Kumar. Decentralized goal assignment and trajectory generation in multi-robot networks. arXiv preprint arXiv:1402.3735, 2014. - Gabriel Peyré and Marco Cuturi. Computational optimal transport. Foundations and Trends® in Machine Learning, 11(5-6):355–607, 2019. - Francesco Sabatino. Quadrotor control: modeling, nonlinearcontrol design, and simulation, 2015. - Mehmet Alper Şahin and Kemal Leblebicioğlu. Approximating the optimal mapping for weapon target assignment by fuzzy reasoning. Information Sciences, 255:30–44, 2014. - Cédric Villani. Topics in optimal transportation. Number 58. American Mathematical Soc., 2003. - Cédric Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008. - Jacques L Willems and Iven MY Mareels. A rigorous solution of the infinite time interval lq problem with constant state tracking. Systems & control letters, 52(3-4):289–296, 2004. - Bin Xin, Jie Chen, Juan Zhang, Lihua Dou, and Zhihong Peng. Efficient decision makings for dynamic weapon-target assignment by virtual permutation and tabu search heuristics. 
IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 40(6):649–662, 2010. - Brian Yamauchi. Decentralized coordination for multirobot exploration. Robotics and Autonomous Systems, 29(2-3):111–118, 1999. - Anmin Zhu and Simon X Yang. A neural network approach to dynamic task assignment of multirobots. IEEE transactions on neural networks, 17(5):1278–1287, 2006.
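As a concrete illustration of the linear-program view of assignment described in the conclusion, here is a minimal discrete optimal transport LP solved with SciPy. This is a sketch under assumed uniform agent/target masses and a random cost matrix, not the authors' implementation:

```python
# Discrete optimal transport as a linear program (illustrative sketch).
# Agents carry mass a_i, targets demand mass b_j, and moving mass from
# agent i to target j costs C_ij; we minimize <C, P> over transport plans P.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_agents, n_targets = 4, 3
C = rng.random((n_agents, n_targets))      # assignment cost matrix (assumed)
a = np.full(n_agents, 1.0 / n_agents)      # uniform agent mass
b = np.full(n_targets, 1.0 / n_targets)    # uniform target mass

# Decision variables: plan P flattened row-major (index i*n_targets + j).
# Constraints: row sums of P equal a, column sums equal b, P >= 0.
A_eq = []
for i in range(n_agents):                  # row-sum constraints
    row = np.zeros(n_agents * n_targets)
    row[i * n_targets:(i + 1) * n_targets] = 1.0
    A_eq.append(row)
for j in range(n_targets):                 # column-sum constraints
    col = np.zeros(n_agents * n_targets)
    col[j::n_targets] = 1.0
    A_eq.append(col)
b_eq = np.concatenate([a, b])

res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
P = res.x.reshape(n_agents, n_targets)
print(res.status, round(P.sum(), 6))       # expect status 0, total mass 1.0
```

Dedicated solvers (e.g. the network-simplex routine in the POT library cited above, or Sinkhorn iterations for an entropic approximation) scale far better than a generic LP solver, but the generic formulation makes the link to linear programming explicit.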
https://deepai.org/publication/dynamic-multi-agent-assignment-via-discrete-optimal-transport
I just place some cold eggs in the air fryer basket, set the timer to 16 minutes and the temperature to 250, and let the air fryer cook them. So easy and simple to make. Top the eggs with some Everything Bagel Seasoning, or eat them however you like. Make some deviled eggs or an egg salad sandwich with these.

Can you fry an egg in a hot air fryer?
The first thing you need to do is find a pan that you can use in your air fryer, if you have a basket air fryer; if you have an air fryer oven, you can just fry the egg on the tray. … Crack the eggs into the pan. Set your temperature to 370 degrees F. Set the timer for 3 minutes.

How do you fry an egg in an air fryer?
Instructions:
- Make a pouch with a sheet of foil that will fit in the air fryer, one sheet per egg.
- Coat it with olive oil spray and crack an egg into the pouch.
- Place it in the air fryer.
- Cook at 390°F for 6 minutes or until done to your preference.
- Carefully remove and serve with other foods.

What happens when you put an egg in an air fryer?
The air fried boiled eggs come out perfectly cooked and easy to peel! Once you try air fryer hard boiled eggs, you’ll realize it may just be the easiest method of “boiling” eggs. And it doesn’t require any water for cooking.

What cannot be cooked in an air fryer?
Any food with a wet batter, like corndogs or tempura shrimp, should not be placed in an air fryer.

Can I put aluminum foil in an air fryer?
Yes, you can put aluminum foil in an air fryer.

Can you cook popcorn in an air fryer?
Key ingredients for air fryer popcorn: Oil – you can use 1/4 tablespoon of oil to coat your kernels, or you can spray a little air-fryer-safe spray oil on them instead. … Pour in the kernels in a single layer. Turn on the air fryer and preheat to 400°F for 5 minutes before cooking. Cook the kernels for 7 minutes or until the popping stops.

Can I make toast in my air fryer?
Add your bread to the air fryer basket. Set the temperature to 400°F and air fry for 4 minutes. If you have very thin sliced bread, your toast will probably be done in 3 minutes. If you like your toast dark, add 30 seconds more.

Can I put a plate in the air fryer?
Yes, you can put a plate in an air fryer, as long as it doesn’t restrict the airflow and is rated oven-safe. Plates can be used, but the food will take much more time to cook evenly because the hot air will not reach the bottom of the plate properly.

Can you put raw meat in an air fryer?
Raw meat can be done in the air fryer. You can roast chicken or pork in the fryer. A whole chicken will take about 30 minutes at 360 degrees F.

Do air fryers use a lot of electricity?
The wattage for air fryers is still low, so it’s unlikely you’ll be creating a huge electricity bill in the process. The air fryer example we used above had a wattage of 1,500. … Air fryers are also smaller in size compared to traditional ovens, so there isn’t as big an area to keep warm and heat up in the first place.

What kind of pan can I use in an air fryer?
You can use any ovenproof dish or mold in the air fryer, whether it is made of glass, ceramic, metal or silicone. You can also use silicone or paper cupcake cups or molds to bake cupcakes, muffins or small gratins.

How long do you preheat an air fryer?
“Take the time (about 3 minutes) to set the air fryer to the proper temperature before you get cooking,” says Dana Angelo White MS, RD, ATC, author of the Healthy Air Fryer Cookbook. “Preheating the air fryer is best for optimum cooking; the temperature and air flow will be at the right levels and food can cook to crispy …”

What is bad about air fryers?
While air fryers reduce the likelihood of acrylamide formation, other potentially harmful compounds could still form. Not only does air frying still run the risk of creating acrylamides, but polycyclic aromatic hydrocarbons and heterocyclic amines can result from all high-heat cooking with meat.

What are the disadvantages of an air fryer?
- Air fryers are difficult to clean. …
- Air fryers are more expensive than deep fryers. …
- Air fryers have longer cooking times compared to conventional deep fryers. …
- Air fryers are too small for larger families. …
- Burnt, dried and failed air fryer dishes. …
- Air fryers can be loud and noisy. …
- Air fryers require space and are bulky.

Can I cook sausages in an air fryer?
You can do any kind of sausage in the air fryer. … You can cook up to 6 of these at a time, if they fit in a single layer in your fryer basket. Cook them for 9-12 minutes at 400°F until well-browned outside and no longer pink inside. Other thick fresh sausages like bratwurst and fresh (not cured) kielbasa are the same.
https://gutomna.com/fry/can-you-cook-eggs-in-a-hot-air-fryer.html
The chilly work of Wolfgang Ketterle, Eric Cornell, and Carl Wieman has gotten a warm reception in Stockholm; this morning the three physicists learned that they've won the 2001 Nobel Prize in Physics for creating the first Bose-Einstein condensates (BECs) in gases of rubidium, sodium, and other alkali metals. Albert Einstein, building on the work of physicist S. N. Bose, predicted 75 years ago that a gas made of a certain type of particle would behave very strangely when cooled to a few billionths of a degree above absolute zero. When almost all the energy gets sucked out of so-called boson particles, they should stop jittering about and settle down together into the lowest energy state. Then their quantum identities merge and they start behaving, in a sense, like one large particle. This "superparticle" is a BEC, and BECs are playgrounds for bizarre physics. For example, a BEC can slow light down to a crawl. Several groups had been attempting to coax matter into becoming a BEC, but none succeeded until 1995, when Cornell and Wieman, physicists at the University of Colorado, Boulder, and the National Institute of Standards and Technology, Boulder, used optical and magnetic traps to bully bosonic rubidium atoms into forming a BEC. Shortly thereafter, Wolfgang Ketterle of the Massachusetts Institute of Technology managed to do the same with sodium atoms, creating a considerably bigger condensate. These achievements set off a flurry of experiments: Scientists used BECs to create "atomic lasers" and watched as vortices formed and dissipated within the BECs. The work continues to give physicists new insights into the nature of quantum interactions. "We've been surprised to see the explosive growth of the field," says Ketterle. "We thought it would be neat, but it has had an enormous impact on atomic physics." The prize, split evenly among the three winners, comes as no surprise to the physics community.
"It's very well deserved," says Claude Cohen-Tannoudji, a physicist at the École Normale Supérieure in Paris, who won the 1997 Nobel Prize in Physics with Steven Chu and William Phillips for developing the cooling technique that allows physicists to make BECs.
http://www.sciencemag.org/news/2001/10/physics-nobel-quantum-superparticle
I sat down to work on a Keys to the Game article for the upcoming wildcard round playoff game against the Tampa Bay Buccaneers, but I quickly realized that the article was going to be encyclopedic. It was also going to take me two weeks to write, but the game is just four days away; it is scheduled for 8:15 pm this Saturday night. Clearly, I had to re-think my plan. I decided to write a series of articles instead. I’m starting with this one today, and my hope is to publish three or four additional articles between now and game time, each one focusing on a different aspect of the playoff matchup between the Football Team and the Buccaneers.

24 points has been the key number for Tampa Bay during the regular season

One of the first things I do when I look at an upcoming opponent is to scan the wins & losses looking for trends. The Buccaneers have a very simple one to spot: every time they scored 25 points or more in the regular season, they won; every time they scored 24 points or less, they lost. This is GREAT news for Washington and its 4th ranked scoring defense that gave up 20.6 points per game in the regular season. Washington has surrendered 24 points or more 5 times, but only once since Week 5, and not at all since Week 11. Here’s a list of Washington’s opponents, with points scored and offensive ranking (points per game). I’ve tried to use a little color-coding to highlight what looks to me to be the most obvious trend here. If you throw out the Week 7 game, in which the Cowboys rolled over and died, appearing unmotivated and uninterested, as an outlier, two pretty strong patterns emerge.

Pattern One - Weeks 2-10

As you can see from the tan-colored indicators in the “Pts allowed vs Avg” column, the Washington defense was giving up more points than opponents’ season averages from Weeks 2-10, ranging between 6% and 33% more than each team’s average.
There was no apparent correlation with the quarterback play, as the pattern held no matter which of the three quarterbacks appeared in the game.

Pattern Two - Weeks 11-17

The green indicator in the same column highlights that the Washington defense held every team it played in this part of the season (Weeks 11-17) below the opponent’s season average points per game. In the five games started by Alex Smith, 80% of the time the opposing offense scored between 64% and 67% of its 2020 average points per game. This included stronger scoring offenses like the Steelers (12th) and weaker ones like the Eagles (26th). In the two games started by Haskins, the opposing team’s score was higher (70%, 91%) but still below the season average. What conclusion do I draw from this? Without pointing to any specific cause, I can still see that there was a profound change with regard to opponents’ scoring that took hold starting in Week 11. This seems to have implications for the playoff game against the Buccaneers; I think we can look at the second pattern as being the one relevant to a “Week 18” game. What this means to me is that, with Alex Smith as the starting quarterback, the Washington defense should be expected to allow the Buccaneers approximately 66% of their season points per game average. It turns out that Tampa Bay is a more prolific scoring offense than any that Washington faced in 2020; the Buccaneers are ranked #3 in the NFL, scoring 30.8 points per game for the full 16-game season. 30.8 x 66% = 20.3 points. This analysis suggests that Washington can be expected to hold the Bucs to around 20 points on Saturday. Will that be enough? Well, Washington has scored an average of 20.9 points per game for the season. In the games that Alex Smith has started, the team has averaged 25.7 points per game, but that includes 3 defensive touchdowns.
If you limit the analysis to only offensive points scored by excluding those three defensive TDs, then in those 5 games, the WFT offense has scored 22.17 points per game. All of these raw numbers, on the face of things, appear to be enough.

Adjusting for the specific opponent

But let’s try to take into account the specific defense that Washington will be facing. It turns out that Tampa Bay has a pretty good defense, ranked 8th in scoring, giving up 22.2 points per game. As can be seen from the chart above, Washington is averaging 98% of points allowed per game in Smith’s 5 starts, but the offense is scoring only 84% of that number. The “cluster” seems to be at 75% - 83% for both groups. Let’s look at what that could mean for the Alex Smith led offense going up against the Tampa Bay defense. 22.2 x 84% = 18.65. It seems that the offense, by itself, should be expected to score in the 17-19 point range, which isn’t likely to be enough to win the game. A defensive touchdown (and PAT) would add 7 points, making a win likely; however, no game plan should rely on scoring a defensive touchdown.

Recent results

The math here already looks difficult for the burgundy & gold; it gets more so when considering just the most recent three games.
- Over the past three games, the TB offense has scored an average of 40.7 points per game, as compared to the season-long average of 30.8.
- The Washington offense has scored 23, 9, and 20 points in Alex Smith’s last three starts.
- Over the past three games, Tampa Bay’s defense has given up 20.3 points per game, compared to the season-long average of 22.2.
- The good news here is that, in Alex’s last three starts, Washington’s defense has given up just 17, 15, and 14 points.
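For anyone who wants to double-check the arithmetic behind these projections, the whole calculation fits in a few lines of Python. This is strictly back-of-the-envelope, using the two factors derived above, not a statistical model:

```python
# Projecting Saturday's score from season averages (illustrative arithmetic only).
tb_season_ppg = 30.8     # Tampa Bay offense, 2020 points per game (3rd in NFL)
wft_def_factor = 0.66    # share of season avg WFT has allowed since Week 11
tb_projected = tb_season_ppg * wft_def_factor    # expected TB points

tb_def_ppg = 22.2        # Tampa Bay defense, points allowed per game (8th)
wft_off_factor = 0.84    # WFT offense's share of opponents' avg in Smith's starts
wft_projected = tb_def_ppg * wft_off_factor      # expected WFT offensive points

print(f"TB {tb_projected:.1f}, WFT {wft_projected:.2f}")  # TB 20.3, WFT 18.65
```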
Bottom line

The wildcard playoff game between the NFC East Champion Washington Football Team and the wildcard team from the NFC South looks to be shaping up as a low-scoring affair, the type that is often decided by which team possesses the ball last, or which team generates the most turnovers. The simplistic analysis offered in this article indicates that the Alex Smith-led Washington offense is unlikely to generate more than 20 points against a top-ten Tampa Bay defense, and is most likely to score in the 17-19 point range, so the burden of winning the game is likely to fall to the Washington defensive unit, as it has since Week 11.

Defense stepping up

Averages indicate that the WFT defense is likely to hold Tampa Bay’s offense to around 20 points, which means that the Washington defense would have to do something extraordinary to win the game. What do I mean by extraordinary? Well, a defensive touchdown would be one example. A turnover differential of +2 would be another. Both of these can swing the scoreboard. In the absence of any sudden-change plays of this variety, just hard-nosed defensive football that holds the TB offense to 17 points or less will probably be necessary. In that case, we’re talking about limiting TB drives with 3rd down stops and staying in favorable field position all game long. This, of course, may be asking too much from a defense that is already carrying the team. Outside of something extraordinary from the special teams unit, either the defense or the offense (or both) is going to have to play significantly above the level it has played at since Week 11. The defensive unit has already been among the very best in the league during that time frame, so expecting more from that unit may be unreasonable.

Offense stepping up

The Alex Smith-led offensive unit, on the other hand, has largely under-performed from a points-scored perspective, putting up more than 23 points only twice in 6 games.
Both of those games came against bottom-5 NFL defenses. It is probably more realistic to hope, with Alex Smith, Antonio Gibson, JD McKissic, Logan Thomas and Terry McLaurin all healthy enough to play, along with a healthy and well-performing OL unit, that the offense will be able to do more than it has been doing. Based on the research and analysis I’ve offered here, I’m suggesting that the Tampa Bay Buccaneers are likely to score about 20 points against the WFT defense. Further, it is likely that they will score 13-17 points by halftime, and be held to a field goal or touchdown in the second half to reach 20 points for the game.

“JDR's 'D' doesn't give up PTS in the 2nd half. Another shutout after intermission. Here are the point totals WSH allowed over the last 11 games: PHI 0, CAR 0, SEA 7, SF 8, PIT 3, DAL 3, CIN 0, DET 13, NYG 3, DAL 0, NYG 7. Yielded 4.4 2nd-half points per game over 11 games.” - Grant Paulsen (@granthpaulsen), January 4, 2021

If the offense achieves the “expected” total of 17-19 points on Saturday, then the Washington playoff bid will end quickly. Instead, the healthy Football Team offense needs to score one time more than this analysis says it will; it needs to put up 24 points or more to secure the win against a very good Tampa Bay team led by the most successful quarterback in NFL history.

Poll: How many points will be scored by TB & WFT combined in Saturday’s game?
https://www.hogshaven.com/2021/1/5/22214735/previewing-saturdays-tampa-bay-washington-playoff-game-scoring-analysis
Tuesday, May 26, 2015

Organization Is Not For The Faint Of Heart

Okay so we all know that I have a fascination for organization, whether it is my paper, markers, stamps, cutting files, vinyl...well it really doesn't matter what it is, I always feel I need to organize it and always look for a better way to organize. Organization is not for the faint of heart, but to me it is always worth it in the long run. I had recently reorganized my vinyl, I'll make a post about it later, and the shoe cabinet where I used to keep quite a bit of my vinyl was empty, and I wondered what I could put in there. After looking around in my craftroom closet, which was still in disarray, I thought about the giftbags I have to alter and thought I would try those in the cabinet. I had the giftbags in two different boxes and a few bags, I said my closet was in disarray, but was able to get all of them in the cabinet and it still shuts nice and neat. Most of the giftbags were bought in bundles and I never paid more than a dime for any of the bags. I haven't altered any bags in awhile and maybe that can be put on my list of what I want to craft next, right after getting my mini albums done that I've been working on for months and replenishing my gift card boxes. So once I got the gift bags in the cabinet it led to another thing, then another, then another...ya know how it goes, until I was moving everything around, going through boxes that have been packed up since I moved and making a more thorough mess than what I had in the first place. All these things on the shelves were moved about 10 different times! I kept changing things until I had items on shelves that looked okay to me and fit well; that also called for me moving shelves up and down a few times. Finally after a few hours, okay more like 10 hours or so, I got things straightened up enough to please me for now, though I could still stand to get some better storage items for a few things.
I'm always looking in Goodwill and clearance aisles for storage and sometimes I get lucky enough to find some good ones. This is the right hand corner of the closet and most items in that corner are ones I don't use or need to get to often. This is the left hand corner of the closet and the boxes on those shelves are Really Useful Boxes I got at Staples years ago; right now they hold things like ribbon, envelopes, card inserts, cording, etc. The two shelves at the bottom hold scrapbooks and a few catalogs or books for crafting. At the top of the closet are boxes that hold items for altering, such as tumblers, and there are also a few boxes that hold completed projects such as mini albums and frames. I will either find something that looks more pleasing or eventually decorate these boxes so they look nicer. The red bins now have labels on them so I can grab the one I need without having to pull each one and go through it. These white boxes are what I store cards in, and I used to keep giftbags in two of them. These came from Walmart about 5 years ago and I think I picked them up for $1 each on clearance. I like having the label frame for identifying what's in each box. The cabinet in the lower left hand corner of the picture is the one where the giftbags are stored. I can't believe I can see the floor under the shelving now! I worked on my pegboard storage a few weeks ago and it looks a bit better than it originally did. The shelves in the picture are where I store my stamps using little plastic crates I found at Walmart for $1 each; they are originally meant to store DVDs I do believe. The black boxes contain envelopes, binding wires, silk flowers, adhesives and other things. This is the view into the closet with the door open and yes that is empty space there! AACCKK! I will eventually get rid of the wicker baskets and replace those with something else but I was getting tired and left them for now.
Now ya would think I would be happy with this closet after all this but alas, NOPE! I started working on updating my stamp organization and then changed how my stamps are stored a bit, which I will post about tomorrow!
I decided to begin with the backdrop. I needed to get a smooth transition from dark to lighter, from left to right. This can be tricky to do. It requires a lot of careful blending. Blending can get messy because the paint often gets smeared onto adjacent areas. Better to paint it first, so I don’t mar anything that’s already been worked on. Above you can see where I’ve tested several different values of gray to make sure that I’ll get a smooth transition. It’s clear here that the underpainting is a few shades lighter than the finished value. I painted swatches of these colors directly onto my value study to make sure I got them right. Above you can see the dots of paint that I dabbed onto the value study to compare values. The orange spots are from when I was painting the underpainting. I was checking that the values I chose were a few shades lighter than the finished value. The dark spots are from this session, where I was trying to get the value the same as in the study. Above, the backdrop is complete (for now!). I don’t want to work on any adjacent areas until it dries, so I’ll have a first go at the dish and stones. At first, I try to get the basic local colors in. I can’t do any finished work at this early stage, because all parts of the painting must grow together. If I put a lot of detail in now, I’d probably find later that the colors and/or values were incorrect. Everything must be compared with everything else, as the painting slowly evolves. At my next session, I roughed in the blue crystal and the two bricks on the left. The color of the bricks is very similar to the underpainting, so it’s hard to see what I did. I put in a thicker layer of paint and indicated some texture. The brick on the far left has a stippled, rough texture. The one next to it has concentric semi-circular markings which I indicated with strokes of the palette knife.
I painted the bricks slightly darker than was indicated on my value study because I intend to scumble a lighter value over them to mimic paler deposits on the bricks. I put down the local color of the tabletop on the left. Above, I’ve made a first stab at the basket. Next, I worked on the paperweight. The patterns are very hard to see, but I did my best! It’s frustrating to not get it right on the first try, but I can always make corrections later. Each time I repaint an object, it’s easier to see. The more landmarks that are in place, the easier it is to see if what is there is correct. The vase was next. I didn’t paint the darks as dark as they would be because I want to glaze the shadowed areas later. Finally, I finished putting a layer of paint on all of the bricks, the shell, and the rest of the tabletop. I couldn’t resist glazing a few shadows on the drier parts of the tabletop on the left. The paint wasn’t quite dry enough, but I got away with it. They will be much darker after I apply a few more glazes.
https://lindamann.blog/2020/05/20/painting-begins/
This was taken on a trip with students to San Francisco and the coast. Beautiful day with brunch at the famous Cliff House.
http://photoblog.targuman.org/pacific-surf/
8 Worst (But Likely) Outcomes Of The #GamerGate Scandal

Three months and over a million tweets later, there are only a few ways this can end. To anyone other than those who know their IGNs from their Polygons and Giantbombs, Gamergate is a very hard thing to explain, even to the point where most of us who have been watching from the sidelines and occasionally getting stuck in have been consistently blown away by a lack of focus. It's starting to produce whiffs of the Occupy Wall Street movement: although that started with vaguely relatable and somewhat attainable goals, as soon as any figures of worth and influence picked it apart, there wasn't much left to talk about. Just as Occupy withered, faded away and died, it's highly likely that GamerGate will suffer a similar fate. Now we've discussed the movement a few times in the past, with this particular piece providing a more in-depth look at its origins. However, if you want a speedier recap: what started as an accusatory finger pointed towards the way games journalists do business with publishers has provided the means for many long-silent people with very disgusting opinions to latch onto the furore, their endgame literally being only to demean others and stir the pot as much as possible. At the heart of the whole thing is supposed to be a discussion around why, in theory, bigger game websites tend to favour the titles they're given exclusive access to, along with the fact that many reviewers are friends with, or financially support, the creators of games they may have to review in the future. Whether or not you believe that automatically makes said person unable to divorce their personal preferences from professional obligations is exactly what this debate was supposed to boil down to. However, amongst all the hate speech that has emerged towards certain prominent female figures in the industry because of it, the entire movement's water has become so muddied it's impossible to address anything clearly.
https://whatculture.com/gaming/8-worst-likely-outcomes-gamergate-scandal
How much does a green card cost 2021? The primary form for adjusting status is USCIS Form I-485, the fee for which is $1,140 in 2021 (minus $85 for people who don’t need biometrics, that is, fingerprinting, and with downward adjustments for children filing with their parents). How much does it cost to renew green card 2021? How Much Is the Green Card Renewal Fee? The current cost to renew a green card is $540, which includes a $455 filing fee and an $85 biometrics fee (for your fingerprint, photo, and signature). You do not have to pay either fee if you’re also applying for a fee waiver. How much is it to become a U.S. citizen 2021? The current naturalization fee for a U.S. citizenship application is $725. That total includes $640 for application processing and $85 for biometrics services, both of which are nonrefundable, regardless of whether the U.S. government approves or rejects an application. What is the fastest way to get green card? If you’re a close relative to a U.S. citizen or a green card holder, they can petition for you to obtain legal permanent residency. This option is the fastest and most popular path to getting a green card. U.S. citizens are permitted to petition for immediate relatives, including: Spouses. How much is a lawyer for green card? Here are some typical legal fees for common immigration services: Green Card Petition for Relative: $1,000 to $3,500. Adjustment of Status Application: $2,000 to $5,000. Citizenship/Naturalization Application: $500 to $2,500. Can I renew my green card for free? Having a Green Card (officially known as a Permanent Resident Card) allows you to live and work permanently in the United States. The steps you must take to apply for a Green Card will vary depending on your individual situation. Do you need a lawyer to renew green card? Should I Hire an Immigration Attorney? Not every immigration issue will require the assistance of an immigration attorney. 
However, if you do not understand the requirements for renewing or replacing a green card, then it may be in your best interest to consult a local immigration attorney for further guidance.
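The fee arithmetic quoted above can be collected into a small sketch (a toy illustration only, using the 2021 figures stated in this answer; `FEES_2021` and `total_fee` are invented names, not any USCIS API, and actual fees change over time):

```python
# 2021 USCIS fee components as quoted in the answer above (illustrative only;
# check the current USCIS fee schedule before relying on these figures)
FEES_2021 = {
    "green_card_renewal":   {"filing": 455,  "biometrics": 85},
    "naturalization":       {"filing": 640,  "biometrics": 85},
    # Form I-485: $1,140 total, minus $85 for applicants who don't need biometrics
    "adjustment_of_status": {"filing": 1055, "biometrics": 85},
}

def total_fee(service: str) -> int:
    """Sum the filing and biometrics components for one service."""
    parts = FEES_2021[service]
    return parts["filing"] + parts["biometrics"]

print(total_fee("green_card_renewal"))  # 455 + 85 = 540
print(total_fee("naturalization"))      # 640 + 85 = 725
```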
https://booking-accommodation.com/excursionist/frequent-question-how-much-does-uscis-charge-for-green-card.html
Chef studied science at school, but now he is tired of it. He likes to solve problems on Codechef. Here and now, he has decided to go to his favourite site and solve several interesting problems. But as it turns out, Chef is not yet ready to solve the harder ones, so you have to help him. Stripped of all the foreword, the task is this: you have N non-negative numbers A1, A2 ... AN and M requests. Each request consists of three numbers (l, v, k). MagicInteger(l, v) is equal to the concatenation of the numbers Ai such that i = l + v * q, 1 <= i <= N, and q >= 0. For each request you have to write the k-th digit of MagicInteger(l, v). If you have 2 numbers 11, 22 — after concatenation you will have 1122. If you have 3 numbers 13, 44, 12 — after concatenation you will have 134412. One more example: 0, 3, 11, 0 — after concatenation you will have 03110. The first digit of this string is 0, not 3. The first line contains an integer N denoting the number of integers in the given array. The second line contains N space-separated integers A1, A2, ..., AN denoting the given array. The third line has a single integer M denoting the number of requests. Each of the next M lines contains three space-separated numbers (l, v, k) denoting one request. For each request output a single line containing the k-th digit of MagicInteger(l, v). If the length of MagicInteger(l, v) is smaller than k, you need to output "So sad" without quotes.
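A direct per-query sketch of the task (our own naive implementation: it simply walks the arithmetic progression of indices for each request, ignoring whatever performance limits the real judge may impose; `magic_digit` is an invented helper name):

```python
def magic_digit(a, l, v, k):
    """Return the k-th digit (1-indexed) of MagicInteger(l, v) over the
    1-indexed array a, or "So sad" if the concatenation is too short."""
    n = len(a)
    i = l
    while 1 <= i <= n:
        s = str(a[i - 1])   # numbers are concatenated as written, so 0 stays "0"
        if k <= len(s):
            return s[k - 1]
        k -= len(s)
        if v == 0:          # a zero step visits index l only once
            break
        i += v
    return "So sad"

# The examples from the statement:
print(magic_digit([11, 22], 1, 1, 3))       # 3rd digit of "1122" -> 2
print(magic_digit([0, 3, 11, 0], 1, 1, 1))  # 1st digit of "03110" -> 0, not 3
```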
https://www.codechef.com/problems/CHITHH
Priority of debts – petitioning creditor – application for priority payment from bankrupt's estate under s. 38(5B) – legal costs incurred to fund proceedings to recover assets for creditors' benefit – whether assets recovered or preserved by payment of moneys within meaning of s. 38(5B) – exercise of discretion

In 1998, B transferred shares in two companies worth approximately HK$300 million (the "Shares") allegedly at a gross undervalue to HF (the "Transfer"). In 2001, on P's petition, B was adjudicated bankrupt for failing to pay a judgment debt of approximately HK$5.18 million. The Official Receiver (the "OR"), as trustee of B's estate, decided not to bring proceedings to recover the Shares from HF (the "Decision"). P successfully applied to reverse the Decision, with costs payable by B's estate, and undertook to indemnify the OR against all liabilities (the "Funding Agreement") so it could issue proceedings against HF (the "Avoidance Proceedings"). Subsequently, under a settlement agreement in the Avoidance Proceedings, HF gave the OR a cheque for HK$5.18 million and undertook to pay the OR all proved debts and interest, secured by a HK$15 million bank guarantee and the Shares. P now applied: (a) under s. 38(5B) of the Bankruptcy Ordinance (Cap. 6) for approximately HK$452,000 in legal costs incurred by P in its application to reverse the OR's Decision and HK$3 million in legal costs paid by P in the Avoidance Proceedings; and (b) for the return of a deposit of HK$1.2 million paid to the OR under the Funding Agreement, since all proceedings had been stayed by a Tomlin order with costs payable by HF to the OR.

Held, allowing the applications, that:

- The Court had jurisdiction to order the OR to pay the HK$3 million to P under the first limb of s. 38(5B) of the Ordinance (i.e., assets recovered under an indemnity for costs of litigation given by a creditor).
Assets had been recovered from HF in the Avoidance Proceedings under an indemnity for the costs of the litigation provided by P in the Funding Agreement.

- There was also jurisdiction to order the OR to pay the HK$452,000 to P under the second limb of s. 38(5B) (i.e., assets protected or preserved by the payment of moneys or the giving of an indemnity by a creditor). The OR had a potential claim to recover the Shares. The asset was a chose in action and vested in the OR under s. 58 of the Ordinance. If the OR did not institute the Avoidance Proceedings, no one else could. Thus, the legal costs paid by P in the proceedings to reverse the Decision were moneys paid to "preserve" B's asset.

- P had assumed considerable risk under the Funding Agreement. There was apparently no monetary limit to the indemnity given to the OR. Nine proofs of debt of about HK$149 million were admitted. P's debt of HK$5.18 million was relatively small, but its potential liability for costs was substantial. Given that the primary objective of s. 38(5B) was to encourage creditors to assist trustees/liquidators in the recovery of assets, with a view to giving P an advantage over others for the risk run by it, the Court would exercise its discretion in granting the application.

- Finally, the HK$1.2 million deposit should be returned to P. The OR faced no adverse costs orders and there was no justification for the OR to hold on to it.
http://hk-lawyer.org/content/re-leung-yat-tung-no-2
Geodesics and planar approximations using PROJ and geod

The PROJ system supports geodesic calculations, through API calls or through the geod command line interface. This exercise proceeds mostly by having you copy sample commands from the text onto the command line and noting the results. The tutorial is written with the Windows command line in mind, but it should be straightforward to follow on Unix-based systems as well.

Hint: Users of Unix-based systems can replace set with export and %VAR% with $VAR to follow the tutorial.

1. Objective

The objective of this exercise is to investigate how well a straight line in a projected plane approximates the geodesic between the same end points.

2. The geodesic

The geodesic we will consider runs from Helsinki to Tallinn, using the following coordinates:

Helsinki 60.171 N 24.938 E
Tallinn 59.437 N 24.745 E

So to save some typing, let's define some environment variables.

On Windows:

set HEL="60.171 24.938"
set TAL="59.437 24.745"

On Unix:

export HEL="60.171 24.938"
export TAL="59.437 24.745"

(i.e. mark, copy, and paste these lines into the command prompt.)

First, we want to solve the "inverse geodetic problem" for the geodesic, i.e. find the azimuth and distance between the two points:

echo %HEL% %TAL% | geod -I +ellps=GRS80

where the -I option indicates "the inverse geodetic problem". geod replies with these three numbers:

-172d22'12.772" 7d27'46.693" 82488.500

The first is the forward azimuth from Helsinki to Tallinn, the second is the return azimuth, and the third is the distance in meters. We would actually rather have the azimuths in fractional degrees, so we provide an explicit output format, using the -f option to geod:

echo %HEL% %TAL% | geod -I -f "%.12f" +ellps=GRS80

getting us:

-172.370214337896 7.462970382137 82488.500

We assert that the planar approximation is worst at the midpoint of the geodesic, i.e. (82488.5 m) / 2 = 41244.25 m from Helsinki.
So now we want to find the coordinates of this point, by solving the "direct geodetic problem":

echo %HEL% -172.370214337896 41244.25 | geod -f "%.12f" +ellps=GRS80

resulting in:

59.804045696236 24.840439590098 7.545306054713

i.e. the latitude, the longitude, and the return azimuth of the midpoint of the geodesic. Now, for convenience, define this environment variable:

set MID="59.804045696236 24.840439590098"

Note: You can save a bit of manual copying by capturing the geod output into the clipboard, using the "clip" command on Windows, i.e. by appending | clip to the geod command above.

3. The planar approximation

We will be working with ETRS89/UTM35 coordinates, so first define an environment variable, to save some typing:

set utm35=proj -rs +proj=utm +zone=35 +ellps=GRS80

Note: The -rs option switches the proj coordinate i/o order to latitude/longitude and northing/easting, respectively. We do this to comply with the expected order for the geod program.

Now, find the UTM coordinates of the two end points:

echo %HEL% | %utm35%
echo %TAL% | %utm35%

Note your results, and compute the mean of the northings and the mean of the eastings, to obtain the planar approximation of the midpoint.

Hint: you can use python as a makeshift command line calculator by saying:

python -c "print((4+8)/2)"

Now, compute the geographical coordinates corresponding to the UTM midpoint:

echo <your northing your easting> | %utm35% -f "%.12f" -I

(note the -I option for doing the inverse projection). For convenience define:

set MID_APPROX=<your result here>

Finally, compute the distance between the geodesic midpoint and its planar approximation, by stating it as another case of the inverse geodetic problem:

echo %MID% %MID_APPROX% | geod -I +ellps=GRS80

resulting in:

107d0'16.673" -72d59'43.193" 2.527

i.e. a deviation of 2.5 m over a stretch of about 85 km.

4.
Suggested meditations

Consider these aspects:

1. Given that geod is available, fast, and reliable — is it really worth the effort doing approximate calculations in the projected plane?
2. geod includes functionality for computing intermediate points along a geodesic. Check the manual, especially the description of the n_S option, and try to compute the geodesic midpoint directly, by setting n_S=2.
3. One should actually expect the result of meditation 2 above to be slightly superior to the result obtained in the exercise. Why?

Answers

Helsinki, UTM: 6672241.54 385592.95
Tallinn, UTM: 6590881.40 372106.37
Mid point, UTM: 6631561.47 378849.66
Mid point, GEO: 59.804039062602 24.840482644156

Meditations:

1. Probably not.
2. geod +lat_1=60.171 +lon_1=24.938 +lat_2=59.437 +lon_2=24.745 +n_S=2 +ellps=GRS80 -f "%.12f"
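The "mean of the northings and eastings" step can be sanity-checked with plain arithmetic, using the Helsinki/Tallinn UTM values from the Answers section (a sketch only, not a replacement for proj/geod):

```python
# ETRS89/UTM35 coordinates (northing, easting) from the Answers section
hel = (6672241.54, 385592.95)
tal = (6590881.40, 372106.37)

# Planar approximation of the midpoint: component-wise mean
mid = tuple(round((h + t) / 2, 2) for h, t in zip(hel, tal))
print(mid)  # matches the "Mid point, UTM" line in the Answers section
```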
https://proj.org/tutorials/geodesics.html
* Word Gap only works on version 11.4.0 and later.

Description of Word Gap in Digiexam

This question type presents a text followed by a text box in which the student fills in his/her answer. The answer is then compared to the correct word for a match. "Word Gap" can be used for glossaries, quizzes, questioning, spelling and much more. There is also a new grading option, "Points per question", that will be discussed later in this guide.

Grading Information

Matching

"Exact Match": This option requires that the blank is filled in exactly as the teacher wrote it.

"Contains": With this setting, the answer need not be exactly the given word, but it must contain it. For example: if the gap is [Fish] and you write "Fishing", it will be marked correct, since "Fishing" still contains "Fish". This also works in a sentence: the gap is "I walked my [dog]" and the student writes "I walked my [dogs]".

"Case sensitive" makes the words sensitive to upper and lower case letters. So if the word is [Horses] and the user writes "horses", then there will be no points.

Grading options

As a correction option, it is also possible to choose "All or nothing", "Right minus wrong" and "Points per answer"; they are explained in more detail in this section.

All or nothing

The student must complete all gaps with the correct spelling to get points. If the student makes a misspelling or writes an incorrect word, the student is assigned zero points. Figure 1 below shows example word gaps added to the editing view for a test/exam with the correction option "All or nothing".

Figure 1

The examples below are based on the word gaps in Figure 1.

Example 1: Student 1 writes: fourth, driver's license, 18. Since all answers were correctly written, it is 3 points.

Example 2: Student 2 writes: 4, driver's license, 18. Since the student wrote the number 4 instead of "fourth", that answer is wrong, so 0 points.

Example 3: Student 3 writes: fourth, driver's license, eighteen.
The same applies here: the student wrote "eighteen" instead of the number 18, so it is 0 points.

Example 4: Student 4 writes: fourth, driver's license-B, 18. The student could have scored here if the setting "Contains" had been chosen; with "Exact match", which requires the answer exactly as specified in the gap, it is 0 points.

Right minus wrong

Students receive plus points for each correct spelling and minus points for each incorrect spelling. Each word gap is worth a fraction of the maximum points. So if the maximum score is 4, then every word gap is worth 1 point, and if you write 2 right and 2 wrong the final score is 2 - 2 = 0. The lowest score awarded is zero, so it is not possible to assign minus points (for example -1) on a "Word Gap" question.

Figure 2 below shows an example question with the "Contains" and "Case sensitive" settings.

Figure 2

The examples below are based on the question in Figure 2.

Example 1: Student 1 writes: furniture company, Ingvar Kamprad, Netherlands, Helsingborg. All words are spelt correctly with the right case, so it is 4 points.

Example 2: Student 2 writes: Furniture company, Ingvar kamprad, The Netherlands, Helsingborg. This becomes 2 points due to 2 case errors.

Example 3: Student 3 writes: furniture company, ingvar kamprad, netherlands, helsingborg. This gives 0 points, since the three errors remove the points that were correct.

Points per answer

Each correct spelling gives plus points, and a wrongly spelt gap simply gives no points. Each correct answer is worth the maximum points divided by the number of gaps. In Figure 3 below we have a question worth 5 points with 5 gaps, so 1 point per correctly spelt, case-sensitive answer.

Figure 3

Preview of the exam

A preview of the exam gives you a picture of what the exam looks like. Numbers will appear indicating the number of each word gap in that view.
These are not visible to pupils/students.

Word Gap from the student's perspective

The student will not see whether the grading is "All or nothing", "Right minus wrong" or "Points per answer". See examples of word gaps as the student answers them in Figures 3, 4 and 5 below.
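The three correction options can be summarized in a small sketch (our own illustrative code, not Digiexam's implementation; the function and parameter names are invented):

```python
def grade_word_gap(answers, correct, option="all_or_nothing",
                   max_points=None, contains=False, case_sensitive=True):
    """Score a Word Gap question under the three correction options above."""
    if max_points is None:
        max_points = len(correct)

    def matches(given, expected):
        if not case_sensitive:
            given, expected = given.lower(), expected.lower()
        # "Contains": the expected word only needs to appear inside the answer
        return expected in given if contains else given == expected

    right = sum(matches(g, e) for g, e in zip(answers, correct))
    wrong = len(correct) - right
    per_gap = max_points / len(correct)   # each gap is a fraction of the maximum

    if option == "all_or_nothing":
        return max_points if wrong == 0 else 0
    if option == "right_minus_wrong":
        return max(0, (right - wrong) * per_gap)  # lowest score awarded is zero
    if option == "points_per_answer":
        return right * per_gap
    raise ValueError(option)
```

For instance, the Figure 1 key ["fourth", "driver's license", "18"] gives 3 points for a fully correct answer under "All or nothing" and 0 points as soon as one gap is wrong.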
https://support.digiexam.se/hc/en-us/articles/360014151733-Question-type-Word-Gap
Binder of 400 Vintage Foreign Coins. Binder contains coins from Germany, Cyprus, Netherlands, Egypt, Italy, Canada, Yugoslavia, France, Denmark, Israel, Panama, India, Hong Kong, Argentina and more. Conditions vary and the binder measures 11" × 11 1/2" × 2 1/2".
- Everything But The House does not grade coins or currency. Existing grades offered by third-party grading services, if accompanying any particular coins or currency, are presented for informational purposes only, and subsequent feedback from future grading authorities may or may not coincide with the information provided.
- Please note, if shipping is selected for this item, a signature will be required upon delivery from a person over the age of 21. Unsuccessful deliveries will be returned to EBTH.
Condition: conditions vary.
Dimensions: 11.0" W x 11.5" H x 2.5" D (measurements of binder).
https://ebth-14fa.kxcdn.com/items/12497686-binder-of-400-vintage-foreign-coins
Q: Extracting residual series for every element in a list

I run 1000 linear regressions simultaneously and get the results in the form of a list of lists. Now I want to extract the residual series from every regression and build a data set combining the residuals of all the regressions. I have run the following code and obtained the residuals. Please help me combine the residuals from the different models.

library(xts)
library(zoo)
set.seed(1234)
ys <- data.frame(matrix(rnorm(1000), nrow = 100))
x <- data.frame(matrix(rnorm(200), nrow = 100))
date <- as.Date("2010-01-01") + 0:99
colnames(x) <- c("ts", "cs")
ys <- xts(ys, order.by = date)
x <- xts(x, order.by = date)
models <- apply(ys, 2, function(y) {lm(y ~ x)})
# element 2 of an lm object is its residuals
models_residuals <- lapply(models, function(x) {x[2]})

A: Method 1: The idea is that data.frame can take a list as input and will treat each element of the list as a column. There may be some restrictions on the input, e.g. the elements should be of equal length. Also, this method will generate messy column names. For example:

data.frame(list(list(1:2), list(2:3)))

Output:

  X1.2 X2.3
1    1    2
2    2    3

Code:

# each sub-list will become a column
res_df <- data.frame(models_residuals)
# rename the data frame
names(res_df) <- paste(names(ys), 'res', sep = '_')
head(res_df)

Method 2: Since each element of your list is also a list, this method unlists each sub-element of your list and then converts it to a vector.

res_df <- lapply(models_residuals, function(x) {as.vector(unlist(x))})
res_df <- data.frame(res_df)
head(res_df)
Vulnerability during short-term memory induced response in canine ventricle. Cardiac short-term memory is an intrinsic property by which the action potential duration produces a transient response after a sudden change in heart rate. The change in vulnerability during the transient period created by abruptly shortening the cycle length from 800 ms to 300 ms was investigated using a computer simulation method. The study was performed on a heterogeneous fiber consisting of endo-, mid-, and epi-cardial canine myocytes. An OpenMP parallel algorithm was used to accelerate the calculation. The study shows that the vulnerable window (VW) depended on both pacing times and locations. At a cycle length of 300 ms, there was a large transmural dispersion of repolarization (TDR) at the 30th beat compared with the situation at the 500th beat. For most of the sites along the fiber, the VW was consistently wide at the beginning of the transient period. Generally, with sustained pacing, the VW tended to become small. The results suggest that during a memory-induced transient response, the probability of occurrence of a reentrant wave increases immediately after an abrupt change in pacing rate because of the relatively large TDR and VW within this period. Therefore, avoiding sudden heart rate variations is indicated to be helpful for the suppression of reentrant arrhythmias.
Wesley Ward trained her as a two year old, she won her maiden race, finished 10th at Ascot, then ran an Allowance race at Saratoga end of July 2017 and hasn't raced since then. Good for Stonestreet to take the time and bring her back as a four year old. 6/27/19 Windracer wins! Go baby girl! https://twitter.com/StonestreetFarm/status/1144290603196399616 6/27/19 Nice! Love the name. The Pretty Polly Stakes is tomorrow, I hate to miss it. 8/27/19 8/27/19 What a sweet face and expression!!! 8/27/19 Love that Taco!!!!!:) 9/7/19 Cambria is headed to the BC! https://www.bloodhorse.com/horse-racing/articles/235645/cambria-prevails-in-kentucky-downs-juvenile-turf-sprint 9/8/19 Wow what a little fighter she is! Stonestreet has a good one! That turf course at KY Downs is interesting. I didn't realize it was downhill. 9/8/19 Kidney shaped too.
http://forums.delphiforums.com/alexbrown/messages/55257/2406
BACKGROUND OF THE INVENTION 1. FIELD OF THE INVENTION 2. RELATED ART SUMMARY OF THE INVENTION DETAILED DESCRIPTION OF THE INVENTION First Component Second Component Third Component Liquid Crystal Composition (1) Liquid Crystal Composition (2) Embodiments of Liquid Crystal Composition Characteristics of the Liquid Crystal Composition EXAMPLE (1) Maximum temperature of a nematic phase (NI; °C) (2) Minimum temperature of a nematic phase (Tc; °C) (3) Optical anisotropy (Δn; measured at 25°C) (4) Viscosity (η; mPa·s, measured at 20°C) (5) Dielectric anisotropy (Δε; measured at 25°C) (6) Voltage holding ratio (VHR; measured at 25°C and 100°C; %) (7) Gas chromatographic analysis COMPARATIVE EXAMPLE 1 EXAMPLE 1 EXAMPLE 2 EXAMPLE 3 EXAMPLE 4 EXAMPLE 5 EXAMPLE 6 EXAMPLE 7 EXAMPLE 8 EXAMPLE 9 EXAMPLE 10 EXAMPLE 11 EXAMPLE 12 EXAMPLE 13

The invention relates to a liquid crystal composition and a liquid crystal display device containing the composition.

A liquid crystal display device (which is a generic term for a liquid crystal display device, a liquid crystal display panel and a liquid crystal display module) utilizes the optical anisotropy, dielectric anisotropy and so forth of a liquid crystal composition, and as operating modes of the liquid crystal display device, various modes have been known, such as a phase change (PC) mode, a twisted nematic (TN) mode, a super twisted nematic (STN) mode, a bistable twisted nematic (BTN) mode, an electrically controlled birefringence (ECB) mode, an optically compensated bend (OCB) mode, an in-plane switching (IPS) mode, a vertical alignment (VA) mode, and so forth. Among these display modes, it has been known that the ECB, IPS and VA modes are capable of being improved in viewing angle, whereas the conventional modes, such as the TN and STN modes, have a problem therein.

A liquid crystal composition having a negative dielectric anisotropy can be used in a liquid crystal display device of these modes. A liquid crystal compound having 2,3-difluorophenylene contained in a liquid crystal composition having a negative dielectric anisotropy is being studied (as described, for example, in Japanese Patent No. 2811342 and Japanese Patent No. 1761492). A liquid crystal composition having a negative dielectric anisotropy capable of being used in the liquid crystal display device has also been studied (as described, for example, in DE 19 607 043, JP 2004-532344 (International Publication No. 02/99010), and JP H10-176167 A / 1998).

A fluorine-replaced liquid crystal compound and a liquid crystal composition containing the compound are disclosed in JP H07-053432 A / 1995. However, the technique disclosed in JP H07-053432 A / 1995 considers only a liquid crystal compound having a positive dielectric anisotropy and fails to study a liquid crystal compound having a negative dielectric anisotropy. A liquid crystal composition having a combination of a liquid crystal compound having 2,3-difluorophenylene and a non-fluorine-replaced liquid crystal compound is disclosed in DE 19 607 043, JP 2004-532344 (International Publication No. 02/99010), and JP H10-176167 A / 1998. However, the composition contains a non-fluorine-replaced liquid crystal compound not having a negative dielectric anisotropy, and there are some cases where the composition does not have a negatively large dielectric anisotropy. A liquid crystal composition having a combination including a mono-fluorine-replaced compound, which is analogous to the first component of the invention, is disclosed in DE 19 607 043 and JP H10-176167 A / 1998. However, the compounds disclosed in the examples thereof have a negatively small dielectric anisotropy, and the minimum temperature of a nematic phase has not been clarified.
The liquid crystal compound is a generic term for a compound having a liquid crystal phase such as a nematic phase, a smectic phase and so forth, and also for a compound having no liquid crystal phase but being useful as a component of a composition. The content ratio of a component is calculated based on the total weight of the liquid crystal composition. The liquid crystal compound herein is a compound represented by formula (A). The compound may be an optically active compound: In formula (A), Rx and Ry are independently hydrogen, alkyl, alkoxy, alkoxyalkyl, alkoxyalkoxy, acyloxy, acyloxyalkyl, alkoxycarbonyl, alkoxycarbonylalkyl, alkenyl, alkenyloxy, alkenyloxyalkyl, alkoxyalkenyl, alkynyl, alkynyloxy, cyano, -NCS, fluorine or chlorine. These groups have 10 or less carbons. In a group having 1 to 5 carbons, arbitrary hydrogen may be replaced by fluorine or chlorine, and the total number of the replaced fluorine and chlorine is 1 to 11. In formula (A), ring B is 1,4-cyclohexylene, 1,4-phenylene, pyran-2,5-diyl, 1,3-dioxane-2,5-diyl, pyridine-2,5-diyl, pyrimidine-2,5-diyl, decahydronaphthalene-2,6-diyl, 1,2,3,4-tetrahydronaphthalene-2,6-diyl or naphthalene-2,6-diyl. In ring B, arbitrary hydrogen may be replaced by fluorine or chlorine, and the total number of the replaced fluorine and chlorine is 1 to 4. In the 1,4-phenylene, arbitrary one or two hydrogens may be replaced by cyano, methyl, difluoromethyl or trifluoromethyl. In formula (A), Y represents a single bond, -(CH2)2-, -COO-, -OCO-, -CH2O-, -OCH2-, -CF2O-, -OCF2-, -CH=CH-, -CF=CF-, -(CH2)4-, -(CH2)3-O-, -O-(CH2)3-, -CH=CH-(CH2)2-, -(CH2)2-CH=CH-, -(CH2)2CF2O-, -OCF2(CH2)2-, -(CH2)2COO-, -(CH2)2OCO-, -COO(CH2)2-, -OCO(CH2)2-, -CH=CH-COO-, -CH=CH-OCO-, -COO-CH=CH- or -OCO-CH=CH-. In formula (A), n represents 1, 2, 3 or 4.
A liquid crystal display device having such a display mode as an IPS mode or a VA mode still has problems as a display device compared to a CRT, and is demanded to be improved in its characteristics. The liquid crystal display device driven in an IPS mode or a VA mode is constituted mainly by a liquid crystal composition having a negative dielectric anisotropy, and in order to improve the characteristics further, the liquid crystal composition preferably has the following characteristics (1) to (5):
(1) a wide temperature range of a nematic phase,
(2) a low viscosity,
(3) a suitable optical anisotropy,
(4) a large absolute value of dielectric anisotropy, and
(5) a large specific resistance.
The temperature range of the nematic phase relates to the temperature range in which the liquid crystal display device is used, and a liquid crystal display device containing a liquid crystal composition having a wide temperature range of a nematic phase as in item (1) has a wide temperature range in which it can be used. A liquid crystal display device containing a liquid crystal composition having a small viscosity as in item (2) has a short response time, and a device with a short response time can favorably be used for displaying moving images. Furthermore, upon injecting the liquid crystal composition into a liquid crystal cell of the device, the injection time can be reduced to improve workability. A liquid crystal display device containing a liquid crystal composition having a suitable optical anisotropy as in item (3) has a large contrast. A liquid crystal display device containing a liquid crystal composition having a large absolute value of dielectric anisotropy as in item (4) has a reduced threshold voltage, a decreased driving voltage, and a reduced electric power consumption.
A liquid crystal display device containing a liquid crystal composition having a large specific resistance as in item (5) has an increased voltage holding ratio and an increased contrast ratio. Therefore, a liquid crystal composition is preferred that has a large specific resistance both in the initial stage and after use for a long period of time.

The invention concerns a liquid crystal composition having a negative dielectric anisotropy and comprising a first component comprising at least one compound selected from a group of compounds represented by formulas (1-1) to (1-3) and a second component comprising at least one compound selected from a group of compounds represented by formulas (2-1) to (2-3): wherein, in formulas (1-1) to (1-3) and formulas (2-1) to (2-3), R11 and R12 are independently alkyl, alkenyl or alkoxy; ring A11 is independently 2-fluoro-1,4-phenylene or 3-fluoro-1,4-phenylene; ring A12 is independently 1,4-cyclohexylene or 1,4-phenylene; ring A13 is independently 1,4-cyclohexylene or 1,4-phenylene, arbitrary hydrogen of which may be replaced by fluorine or chlorine; and Z11 and Z12 are independently a single bond, -C2H4-, -CH2O- or -OCH2-, provided that a compound represented by formula (1-2), wherein ring A11 is 2-fluoro-1,4-phenylene, ring A12 is 1,4-phenylene, and Z11 and Z12 are single bonds, and a compound represented by formula (1-3), wherein ring A11 is 3-fluoro-1,4-phenylene, ring A12 is 1,4-phenylene, and Z11 and Z12 are single bonds, are excluded. The invention also concerns a liquid crystal display device comprising the liquid crystal composition, and so forth.
An advantage of the invention is to provide a liquid crystal composition that is properly balanced regarding many of such characteristics as a wide temperature range of a nematic phase, a small viscosity, a suitable optical anisotropy, a negatively large dielectric anisotropy, and a large specific resistance. Another advantage of the invention is to provide a liquid crystal display device containing the composition, which has a large voltage holding ratio and is driven in an active matrix (AM) mode suitable for a VA mode, an IPS mode and so forth. The inventors have found that a liquid crystal composition containing a mono-fluorine-replaced liquid crystal compound having a specific structure and a liquid crystal compound having two or more fluorines and having a specific structure has a wide temperature range of a nematic phase, a small viscosity, a suitable optical anisotropy, a negatively large dielectric anisotropy, and a large specific resistance, and thus the invention has been completed. The liquid crystal composition of the invention has a wide temperature range of a nematic phase, a small viscosity, a suitable optical anisotropy, a negatively large dielectric anisotropy, and a large specific resistance, and is properly balanced regarding these characteristics. The liquid crystal composition of the invention can have an optical anisotropy in a range of from 0.080 to 0.120 and a dielectric anisotropy in a range of from -6.5 to -2.0. The liquid crystal display device of the invention contains the liquid crystal composition and has a large voltage holding ratio.
The liquid crystal display device can be suitably used as a liquid crystal display device driven in an active matrix (AM) mode (hereinafter sometimes referred to as an AM device) having an operation mode such as a VA mode or an IPS mode. The inventors have found that a liquid crystal composition containing a liquid crystal compound having one hydrogen replaced by fluorine and having a specific structure and a liquid crystal compound having two or more fluorines and having a specific structure has a wide temperature range of a nematic phase, a small viscosity, a suitable optical anisotropy, a negatively large dielectric anisotropy, and a large specific resistance, and thus the invention has been completed.

1. A liquid crystal composition having a negative dielectric anisotropy and comprising a first component comprising at least one compound selected from a group of compounds represented by formulas (1-1) to (1-3) and a second component comprising at least one compound selected from a group of compounds represented by formulas (2-1) to (2-3): wherein, in formulas (1-1) to (1-3) and formulas (2-1) to (2-3), R11 and R12 are independently alkyl, alkenyl or alkoxy; ring A11 is independently 2-fluoro-1,4-phenylene or 3-fluoro-1,4-phenylene; ring A12 is independently 1,4-cyclohexylene or 1,4-phenylene; ring A13 is independently 1,4-cyclohexylene or 1,4-phenylene, arbitrary hydrogen of which may be replaced by fluorine or chlorine; and Z11 and Z12 are independently a single bond, -C2H4-, -CH2O- or -OCH2-, provided that a compound represented by formula (1-2), wherein ring A11 is 2-fluoro-1,4-phenylene, ring A12 is 1,4-phenylene, and Z11 and Z12 are single bonds, and a compound represented by formula (1-3), wherein ring A11 is 3-fluoro-1,4-phenylene, ring A12 is 1,4-phenylene, and Z11 and Z12 are single bonds, are excluded.

2.
A liquid crystal composition having a negative dielectric anisotropy and comprising a first component comprising at least one compound selected from a group of compounds represented by formulas (1-1-1), (1-1-2), (1-2-1), (1-2-2), (1-2-3) and (1-3-1) and a second component comprising at least one compound selected from a group of compounds represented by formulas (2-1-1), (2-2-1) and (2-3-1): wherein in formulas (1-1-1), (1-1-2), (1-2-1), (1-2-2), (1-2-3), (1-3-1), (2-1-1), (2-2-1) and (2-3-1), Ra is independently alkyl or alkenyl; Rb is independently alkyl, alkenyl or alkoxy; ring A21 is independently 2-fluoro-1,4-phenylene or 3-fluoro-1,4-phenylene; ring A22 is independently 1,4-cyclohexylene or 1,4-phenylene; ring A23 is 1,4-cyclohexylene or 1,4-phenylene, arbitrary hydrogen of which may be replaced by fluorine; and Z21 is -CH2O- or -OCH2-.

3. The liquid crystal composition according to claim 2, wherein the liquid crystal composition comprises the first component comprising at least one compound selected from a group of compounds represented by formulas (1-1-1), (1-1-2), (1-2-1), (1-2-2) and (1-2-3) and the second component comprising at least one compound selected from a group of compounds represented by formulas (2-1-1), (2-2-1) and (2-3-1).

4. The liquid crystal composition according to one of claims 1 to 3, wherein a content ratio of the first component is from 10% to 80% by weight, and a content ratio of the second component is from 20% to 90% by weight, based on the total weight of the liquid crystal compounds.

5.
A liquid crystal composition having a negative dielectric anisotropy and comprising a first component comprising at least one compound selected from a group of compounds represented by formulas (1-1) to (1-3), a second component comprising at least one compound selected from a group of compounds represented by formulas (2-1) to (2-3), and a third component comprising at least one compound selected from a group of compounds represented by formulas (3-1) to (3-3): wherein, in formulas (1-1) to (1-3) and formulas (2-1) to (2-3), R11 and R12 are independently alkyl, alkenyl or alkoxy; ring A11 is independently 2-fluoro-1,4-phenylene or 3-fluoro-1,4-phenylene; ring A12 is independently 1,4-cyclohexylene or 1,4-phenylene; ring A13 is independently 1,4-cyclohexylene or 1,4-phenylene, arbitrary hydrogen of which may be replaced by fluorine or chlorine; and Z11 and Z12 are independently a single bond, -C2H4-, -CH2O- or -OCH2-; and wherein, in formulas (3-1) to (3-3), Ra is independently alkyl or alkenyl; Rb is independently alkyl, alkenyl or alkoxy; Rc is alkyl, alkenyl, alkoxy or alkoxymethyl; plural rings A22 are independently 1,4-cyclohexylene or 1,4-phenylene; ring A23 is independently 1,4-cyclohexylene or 1,4-phenylene, arbitrary hydrogen of which may be replaced by fluorine; and Z22 represents a single bond, -CH2O-, -OCH2- or -COO-, provided that a compound represented by formula (1-2), wherein ring A11 is 2-fluoro-1,4-phenylene, ring A12 is 1,4-phenylene, and Z11 and Z12 are single bonds, and a compound represented by formula (1-3), wherein ring A11 is 3-fluoro-1,4-phenylene, ring A12 is 1,4-phenylene, and Z11 and Z12 are single bonds, are excluded.

6. The liquid crystal composition according to claim 5, wherein a content ratio of the first component is from 5% to 75% by weight, a content ratio of the second component is from 20% to 80% by weight, and a content ratio of the third component is from 5% to 45% by weight, based on the total weight of the liquid crystal compounds.

7.
A liquid crystal composition having a negative dielectric anisotropy and comprising a first component comprising at least one compound selected from a group of compounds represented by formulas (1-1-1-1), (1-1-1-2), (1-1-2-1), (1-1-2-2), (1-2-1-1), (1-2-1-2), (1-2-2-1), (1-2-3-1) and (1-2-3-2) and a second component comprising at least one compound selected from a group of compounds represented by formulas (2-1-1-1), (2-2-1-1), (2-2-1-2), (2-2-1-3), (2-2-1-4) and (2-2-1-5): wherein, in formulas (1-1-1-1), (1-1-1-2), (1-1-2-1), (1-1-2-2), (1-2-1-1), (1-2-1-2), (1-2-2-1), (1-2-3-1), (1-2-3-2), (2-1-1-1), (2-2-1-1), (2-2-1-2), (2-2-1-3), (2-2-1-4) and (2-2-1-5), Ra is independently alkyl or alkenyl; and Rb is independently alkyl, alkenyl or alkoxy.

8. The liquid crystal composition according to claim 7, wherein the liquid crystal composition comprises the first component comprising at least one compound selected from a group of compounds represented by formulas (1-1-1-1), (1-1-2-1), (1-2-1-1) and (1-2-3-1) and the second component comprising at least one compound selected from a group of compounds represented by formulas (2-1-1-1) and (2-2-1-1) to (2-2-1-5).

9. The liquid crystal composition according to claim 7, wherein the liquid crystal composition comprises the first component comprising at least one compound selected from a group of compounds represented by formulas (1-1-1-1), (1-1-1-2), (1-2-1-1), (1-2-1-2) and (1-2-2-1) and the second component comprising at least one compound selected from a group of compounds represented by formulas (2-1-1-1) and (2-2-1-1) to (2-2-1-5).

10.
The liquid crystal composition according to claim 7, wherein the liquid crystal composition comprises the first component comprising at least one compound selected from a group of compounds represented by formulas (1-1-1-1), (1-1-2-1), (1-2-1-1) and (1-2-3-1) and the second component comprising at least one compound selected from a group of compounds represented by formulas (2-1-1-1), (2-2-1-1) and (2-2-1-2).

11. The liquid crystal composition according to claim 7, wherein the liquid crystal composition comprises the first component comprising at least one compound selected from a group of compounds represented by formulas (1-1-2-1), (1-1-2-2), (1-2-3-1) and (1-2-3-2) and the second component comprising at least one compound selected from a group of compounds represented by formulas (2-1-1-1), (2-2-1-1) and (2-2-1-2).

12. The liquid crystal composition according to claim 7, wherein the liquid crystal composition comprises the first component comprising at least one compound selected from a group of compounds represented by formulas (1-1-2-1) and (1-2-3-1) and the second component comprising at least one compound selected from a group of compounds represented by formulas (2-1-1-1), (2-2-1-1) and (2-2-1-2).

13. The liquid crystal composition according to one of claims 7 to 12, wherein a content ratio of the first component is from 30% to 75% by weight, and a content ratio of the second component is from 25% to 70% by weight, based on the total weight of the liquid crystal compounds.

14.
A liquid crystal composition having a negative dielectric anisotropy and comprising a first component comprising at least one compound selected from a group of compounds represented by formulas (1-1-1-1), (1-1-1-2), (1-1-2-1), (1-1-2-2), (1-2-1-1), (1-2-1-2), (1-2-2-1), (1-2-3-1) and (1-2-3-2), a second component comprising at least one compound selected from a group of compounds represented by formulas (2-1-1-1), (2-2-1-1), (2-2-1-2), (2-2-1-3), (2-2-1-4) and (2-2-1-5), and a third component comprising at least one compound selected from a group of compounds represented by formulas (3-1-1), (3-1-2), (3-2-1), (3-3-1) and (3-3-2): wherein, in formulas (1-1-1-1), (1-1-1-2), (1-1-2-1), (1-1-2-2), (1-2-1-1), (1-2-1-2), (1-2-2-1), (1-2-3-1), (1-2-3-2), (2-1-1-1), (2-2-1-1), (2-2-1-2), (2-2-1-3), (2-2-1-4), (2-2-1-5), (3-1-1), (3-1-2), (3-2-1), (3-3-1) and (3-3-2), Ra is independently alkyl or alkenyl; Rb is independently alkyl, alkenyl or alkoxy; and Rc is independently alkyl, alkenyl, alkoxy or alkoxymethyl.

15. The liquid crystal composition according to claim 14, wherein the third component comprises at least one compound selected from a group of compounds represented by formulas (3-1-1), (3-1-2) and (3-2-1).

16. The liquid crystal composition according to claim 14, wherein the third component comprises at least one compound selected from a group of compounds represented by formula (3-2-1).

17. The liquid crystal composition according to one of claims 14 to 16, wherein a content ratio of the first component is from 10% to 65% by weight, a content ratio of the second component is from 25% to 60% by weight, and a content ratio of the third component is from 5% to 35% by weight, based on the total weight of the liquid crystal compounds.

18.
A liquid crystal composition having a negative dielectric anisotropy and consisting essentially of a first component comprising at least one compound selected from a group of compounds represented by formulas (1-1-1), (1-1-2), (1-2-1), (1-2-2), (1-2-3) and (1-3-1), a second component comprising at least one compound selected from a group of compounds represented by formulas (2-1-1), (2-2-1) and (2-3-1), and a third component comprising at least one compound selected from a group of compounds represented by formulas (3-1) to (3-3): wherein, in formulas (1-1-1), (1-1-2), (1-2-1), (1-2-2), (1-2-3), (1-3-1), (2-1-1), (2-2-1), (2-3-1) and (3-1) to (3-3), Ra is independently alkyl or alkenyl; Rb is independently alkyl, alkenyl or alkoxy; Rc is alkyl, alkenyl, alkoxy or alkoxymethyl; ring A21 is independently 2-fluoro-1,4-phenylene or 3-fluoro-1,4-phenylene; ring A22 is independently 1,4-cyclohexylene or 1,4-phenylene; ring A23 is 1,4-cyclohexylene or 1,4-phenylene, arbitrary hydrogen of which may be replaced by fluorine; Z21 is independently -CH2O- or -OCH2-; and Z22 is independently a single bond, -CH2O-, -OCH2- or -COO-.

19. The liquid crystal composition according to claim 2, wherein the liquid crystal composition consists essentially of a first component comprising at least one compound selected from a group of compounds represented by formulas (1-1-1), (1-1-2), (1-2-1), (1-2-2), (1-2-3) and (1-3-1) and a second component comprising at least one compound selected from a group of compounds represented by formulas (2-1-1), (2-2-1) and (2-3-1).

20. The liquid crystal composition according to one of claims 1 to 19, wherein the liquid crystal composition has a dielectric anisotropy in a range of from -6.5 to -2.0.

21. The liquid crystal composition according to one of claims 1 to 20, wherein the liquid crystal composition has an optical anisotropy in a range of from 0.080 to 0.120.

22.
A liquid crystal display device comprising the liquid crystal composition according to one of claims 1 to 21.

23. The liquid crystal display device according to claim 22, wherein the liquid crystal display device is driven in an active matrix mode and performs display in a VA mode or an IPS mode.

The present invention has the following. The liquid crystal composition of the invention contains a liquid crystal compound containing one fluorine and having a specific structure as a first component, and a liquid crystal compound containing two or more fluorines and having a specific structure as a second component, and also contains, depending on necessity, a liquid crystal compound having a specific structure as a third component. With respect to the first to third components, the structures of the compounds used in the components, the characteristics and effects of the components, and specific examples and preferred examples of the components are described below.

The liquid crystal compound as the first component of the liquid crystal composition of the invention is a liquid crystal compound represented by formulas (1-1) to (1-3). In formulas (1-1) to (1-3), R11, R12, ring A11, ring A12, Z11 and Z12 are independently defined as follows.

R11 and R12 are independently alkyl, alkenyl or alkoxy. Among the alkyl, alkyl having 1 to 20 carbons is preferred, alkyl having 1 to 10 carbons is more preferred, methyl, ethyl, propyl, butyl, pentyl, hexyl, heptyl and octyl are further preferred, and ethyl, propyl, butyl, pentyl and heptyl are particularly preferred. Among the alkenyl, alkenyl having 2 to 20 carbons is preferred, alkenyl having 2 to 10 carbons is more preferred, vinyl, 1-propenyl, 2-propenyl, 1-butenyl, 2-butenyl, 3-butenyl, 1-pentenyl, 2-pentenyl, 3-pentenyl, 4-pentenyl, 1-hexenyl, 2-hexenyl, 3-hexenyl, 4-hexenyl and 5-hexenyl are further preferred, and vinyl, 1-propenyl, 3-butenyl and 3-pentenyl are particularly preferred.
In the case where R11 or R12 is the alkenyl, the preferred steric configuration of -CH=CH- depends on the position of the double bond. In the case where R11 or R12 is a group having a double bond starting from a carbon with an odd position number, such as 1-propenyl, 1-butenyl, 1-pentenyl, 1-hexenyl, 3-pentenyl, 3-hexenyl or 5-hexenyl, a trans configuration is preferred. In the case where R11 or R12 is a group having a double bond starting from a carbon with an even position number, such as 2-propenyl, 2-butenyl, 2-pentenyl, 4-pentenyl, 2-hexenyl or 4-hexenyl, a cis configuration is preferred.

Among the alkoxy, alkoxy having 1 to 20 carbons is preferred, alkoxy having 1 to 10 carbons is more preferred, methoxy, ethoxy, propoxy, butoxy, pentyloxy, hexyloxy and heptyloxy are further preferred, and methoxy, ethoxy and butoxy are particularly preferred.

Ring A11 is independently 2-fluoro-1,4-phenylene or 3-fluoro-1,4-phenylene, ring A12 is independently 1,4-cyclohexylene or 1,4-phenylene, and Z11 and Z12 are independently a single bond, -C2H4-, -CH2O- or -OCH2-. In the case where the ring contained in the compound represented by formulas (1-1) to (1-3) is 1,4-cyclohexylene, the steric configuration thereof is preferably a trans configuration. A compound represented by formula (1-2), wherein ring A11 is 2-fluoro-1,4-phenylene, ring A12 is 1,4-phenylene, and Z11 and Z12 are single bonds, and a compound represented by formula (1-3), wherein ring A11 is 3-fluoro-1,4-phenylene, ring A12 is 1,4-phenylene, and Z11 and Z12 are single bonds, are excluded.

One of the characteristic features of the compound represented by formulas (1-1) to (1-3) resides in that the compound contains only one fluorine and has 2-fluoro-1,4-phenylene or 3-fluoro-1,4-phenylene as a fluorine-containing group.
Owing to the structure of the liquid crystal compound as the first component, the liquid crystal composition of the invention can have a negatively large dielectric anisotropy and a low minimum temperature of a nematic phase.

Among the liquid crystal compounds represented by formulas (1-1) to (1-3), compounds represented by formulas (1-1-1), (1-1-2), (1-2-1) to (1-2-3) and (1-3-1) are preferred. In formulas (1-1-1), (1-1-2), (1-2-1) to (1-2-3) and (1-3-1), Ra, Rb, ring A21, ring A22 and Z21 are independently defined as follows. Ra is alkyl or alkenyl. Preferred embodiments of the alkyl and alkenyl are the same as in R11. Rb is alkyl, alkenyl or alkoxy. Preferred embodiments of the alkyl, alkenyl and alkoxy are the same as in R11. Ring A21 is 2-fluoro-1,4-phenylene or 3-fluoro-1,4-phenylene. Ring A22 is 1,4-cyclohexylene or 1,4-phenylene. Z21 is -CH2O- or -OCH2-.

In the case where the liquid crystal compound as the first component of the liquid crystal composition of the invention is a compound represented by the aforementioned formulas, the liquid crystal composition can have a negatively larger dielectric anisotropy and can have a low minimum temperature of a nematic phase.

Among the compounds, the liquid crystal compound represented by formulas (1-1-1) and (1-1-2) has, as compared to an ordinary liquid crystal compound, such characteristics as a moderate viscosity, a moderate optical anisotropy, a moderate negative dielectric anisotropy, and a large specific resistance, while the maximum temperature of a nematic phase is not high. Among the compounds, the liquid crystal compound represented by formulas (1-2-1) and (1-2-2) has, as compared to an ordinary liquid crystal compound, such characteristics as a high maximum temperature of a nematic phase, a moderate viscosity, a moderate to large optical anisotropy, a moderate negative dielectric anisotropy, and a large specific resistance.
Among the compounds, the liquid crystal compound represented by formula (1-2-3) has, as compared to an ordinary liquid crystal compound, such characteristics as a high maximum temperature of a nematic phase, a moderate to large viscosity, a moderate optical anisotropy, a moderate negative dielectric anisotropy, and a large specific resistance. Among the compounds, the liquid crystal compound represented by formula (1-3-1) has, as compared to an ordinary liquid crystal compound, such characteristics as a high maximum temperature of a nematic phase, a moderate to large viscosity, a moderate to large optical anisotropy, a moderate negative dielectric anisotropy, and a large specific resistance.

Among the compounds represented by formulas (1-1-1), (1-1-2), (1-2-1) to (1-2-3) and (1-3-1), compounds represented by formulas (1-1-1), (1-1-2) and (1-2-1) to (1-2-3) are preferred. In the case where the liquid crystal compound as the first component is a compound represented by the aforementioned formulas, the temperature range of a nematic phase can be widely controlled, and the liquid crystal composition can have a negatively large dielectric anisotropy.

Among the compounds represented by formulas (1-1-1), (1-1-2), (1-2-1) to (1-2-3) and (1-3-1), compounds represented by formulas (1-1-1-1), (1-1-1-2), (1-1-2-1), (1-1-2-2), (1-2-1-1), (1-2-1-2), (1-2-2-1), (1-2-3-1) and (1-2-3-2) are more preferred. In formulas (1-1-1-1), (1-1-1-2), (1-1-2-1), (1-1-2-2), (1-2-1-1), (1-2-1-2), (1-2-2-1), (1-2-3-1) and (1-2-3-2), Ra is independently alkyl or alkenyl, and preferred embodiments of the alkyl and alkenyl are the same as in R11; Rb is independently alkyl, alkenyl or alkoxy, and preferred embodiments of the alkyl, alkenyl and alkoxy are the same as in R11.
In the case where the liquid crystal compound as the first component of the liquid crystal composition of the invention is a compound represented by the aforementioned formulas, the liquid crystal composition can have a negatively large dielectric anisotropy. Furthermore, the temperature range of a nematic phase can be easily changed by changing the content ratio of the first component with respect to the total weight of the liquid crystal composition.

Among the compounds, the liquid crystal compounds represented by formulas (1-1-1-1), (1-1-2-1), (1-2-1-1) and (1-2-3-1) are preferred, and the liquid crystal compounds represented by formulas (1-1-2-1) and (1-2-3-1) are more preferred. In the case where the first component is the aforementioned compound, in particular, the dielectric anisotropy of the liquid crystal composition can be negatively large. In the case where the first component is a liquid crystal compound represented by formulas (1-1-1-1), (1-1-1-2), (1-2-1-1), (1-2-1-2) and (1-2-2-1), the viscosity of the liquid crystal composition can be made smaller. The liquid crystal compounds may be used as the first component solely or as a combination of plural kinds thereof.

The liquid crystal compound as the second component of the liquid crystal composition of the invention is a liquid crystal compound represented by formulas (2-1) to (2-3). In formulas (2-1) to (2-3), R11, R12 and ring A12 are the same as in the compound represented by formulas (1-1) to (1-3) as the first component. In formulas (2-1) to (2-3), ring A13 is independently 1,4-cyclohexylene or 1,4-phenylene, arbitrary hydrogen of which may be replaced by fluorine or chlorine.

One of the characteristic features of the compound represented by formulas (2-1) to (2-3) resides in that the compound contains two or more fluorines and has 2,3-difluoro-1,4-phenylene as a fluorine-containing group.
Owing to the structure of the liquid crystal compound as the second component, the liquid crystal composition of the invention can have a negatively large dielectric anisotropy. Among the liquid crystal compounds represented by formulas (2-1) to (2-3), compounds represented by formulas (2-1-1), (2-2-1) and (2-3-1) are preferred. In formulas (2-1-1), (2-2-1) and (2-3-1), Ra, Rb and ring A22 are the same as in the compounds represented by formulas (1-1-1), (1-1-2), (1-2-1) to (1-2-3) and (1-3-1) as the first component. In formulas (2-1-1), (2-2-1) and (2-3-1), ring A23 is 1,4-cyclohexylene or 1,4-phenylene, arbitrary hydrogen of which may be replaced by fluorine or chlorine.

In the case where the liquid crystal compound as the second component of the liquid crystal composition of the invention is a compound represented by the aforementioned formulas, the liquid crystal composition can have a negatively larger dielectric anisotropy. Among the compounds, the liquid crystal compound represented by formula (2-1-1) has, as compared to an ordinary liquid crystal compound, such characteristics as a moderate viscosity, a moderate to relatively large optical anisotropy, a moderate to relatively large negative dielectric anisotropy, and a large specific resistance, while the maximum temperature of a nematic phase is not high. Among the compounds, the liquid crystal compound represented by formula (2-2-1) has, as compared to an ordinary liquid crystal compound, such characteristics as a moderate to high maximum temperature of a nematic phase, a moderate to large viscosity, a moderate to large optical anisotropy, a negatively large dielectric anisotropy, and a large specific resistance.
Among the compounds, the liquid crystal compound represented by formula (2-3-1) has, as compared to an ordinary liquid crystal compound, such characteristics as a moderate to high maximum temperature of a nematic phase, a moderate to large viscosity, a moderate to large optical anisotropy, a negatively large dielectric anisotropy, and a large specific resistance.

Among the compounds represented by formulas (2-1-1), (2-2-1) and (2-3-1), compounds represented by formulas (2-1-1-1), (2-2-1-1), (2-2-1-2), (2-2-1-3), (2-2-1-4) and (2-2-1-5) are more preferred. In the case where the liquid crystal compound as the second component of the liquid crystal composition of the invention is a compound represented by the aforementioned formulas, the liquid crystal composition can have a negatively large dielectric anisotropy. Furthermore, the optical anisotropy can be easily changed by changing the content ratio of the second component with respect to the total weight of the liquid crystal composition.

Among the compounds, the liquid crystal compounds represented by formulas (2-1-1-1), (2-2-1-1) and (2-2-1-2) are preferred. In the case where the second component is the aforementioned compound, the liquid crystal composition can have a higher maximum temperature of a nematic phase and a negatively large dielectric anisotropy. Furthermore, the optical anisotropy can be easily changed by changing the content ratio of the second component with respect to the total weight of the liquid crystal composition. The viscosity of the liquid crystal composition can be smaller than in the case of using the other second-component compounds, such as those represented by formulas (2-2-1-3), (2-2-1-4) and (2-2-1-5). The liquid crystal compounds may be used as the second component solely or as a combination of plural kinds thereof.
The liquid crystal compound as the third component of the liquid crystal composition of the invention is a liquid crystal compound represented by formulas (3-1) to (3-3). In formulas (3-1) to (3-3), Ra is independently alkyl or alkenyl, and preferred embodiments of the alkyl and alkenyl are the same as in R11 of the compound represented by formulas (1-1) to (1-3) as the first component. Rb is independently alkyl, alkenyl or alkoxy, and preferred embodiments of the alkyl, alkenyl and alkoxy are the same as in R11.

In formulas (3-1) to (3-3), Rc is alkyl, alkenyl, alkoxy or alkoxymethyl. Preferred embodiments of the alkyl, alkenyl and alkoxy are the same as in R11. Among the alkoxymethyl, alkoxymethyl having 2 to 20 carbons is preferred, alkoxymethyl having 2 to 10 carbons is more preferred, methoxymethyl, ethoxymethyl, propoxymethyl, butoxymethyl and pentyloxymethyl are further preferred, and methoxymethyl is particularly preferred.

In formulas (3-1) to (3-3), Z22 represents a single bond, -CH2O-, -OCH2- or -COO-. In formulas (3-1) to (3-3), plural rings A22 are independently 1,4-cyclohexylene or 1,4-phenylene, and ring A23 is independently 1,4-cyclohexylene or 1,4-phenylene, arbitrary hydrogen of which may be replaced by fluorine or chlorine.

Owing to the structure of the liquid crystal compound as the third component, the liquid crystal composition of the invention can have a small viscosity. Furthermore, the maximum temperature of a nematic phase and the optical anisotropy can be easily changed by changing the content ratio of the third component with respect to the total weight of the liquid crystal composition.
Among the compounds, the liquid crystal compound represented by formula (3-1) has, as compared to an ordinary liquid crystal compound, such characteristics as a small viscosity, a small to moderate optical anisotropy, an extremely small negative dielectric anisotropy, and a large specific resistance, while the maximum temperature of a nematic phase is not high. Among the compounds, the liquid crystal compound represented by formula (3-2) has, as compared to an ordinary liquid crystal compound, such characteristics as a moderate maximum temperature of a nematic phase, a small viscosity, a moderate optical anisotropy, an extremely small negative dielectric anisotropy, and a large specific resistance. Among the compounds, the liquid crystal compound represented by formula (3-3) has, as compared to an ordinary liquid crystal compound, such characteristics as an extremely high maximum temperature of a nematic phase, a moderate viscosity, a moderate to large optical anisotropy, an extremely small negative dielectric anisotropy, and a large specific resistance.

Among the liquid crystal compounds represented by formulas (3-1) to (3-3), compounds represented by formulas (3-1-1), (3-1-2), (3-2-1), (3-3-1) and (3-3-2) are preferred. In formulas (3-1-1), (3-1-2), (3-2-1), (3-3-1) and (3-3-2), Ra, Rb and Rc are the same as in the compounds represented by formulas (3-1) to (3-3). In the case where the liquid crystal compound as the third component is a compound represented by the aforementioned formulas, the liquid crystal composition can have a small viscosity. Among the compounds, the compounds represented by formulas (3-1-1), (3-1-2) and (3-2-1) are preferred, and the compound represented by formula (3-2-1) is more preferred. In the case where the third component is the compound, the liquid crystal composition can have a smaller viscosity. The liquid crystal compounds may be used as the third component solely or as a combination of plural kinds thereof.
Synthesis Method of Liquid Crystal Compounds

The preparation methods of the liquid crystal compounds as the first to third components will be explained. The compounds represented by formulas (2-1) and (2-2), represented by the compounds represented by formulas (2-1-1-1) and (2-2-1-1) to (2-2-1-5), can be synthesized by the methods disclosed in Japanese Patent No. 2811342. The compounds represented by formula (3-1), represented by the compounds represented by formula (3-1-1) and so forth, can be synthesized by the methods disclosed in JP S59-70624 A (1984) or JP S60-16940 A (1985). The compounds for which preparation methods were not described above can be prepared according to the methods described in Organic Syntheses (John Wiley & Sons, Inc.), Organic Reactions (John Wiley & Sons, Inc.), Comprehensive Organic Synthesis (Pergamon Press), New Experimental Chemistry Course (Shin Jikken Kagaku Kouza) (Maruzen, Inc.), and so forth.

Combinations of the components of the composition and preferred ratios of the components will be described. One of the characteristic features of the liquid crystal composition of the invention resides in the combination of the first component and the second component (hereinafter, sometimes referred to as a liquid crystal composition (1)). Owing to the combination of the two components, the dielectric anisotropy of the composition can be negatively large, and the minimum temperature of a nematic phase of the composition can be low. As compared to a liquid crystal composition containing only the second component and the third component, a liquid crystal composition obtained by replacing the third component with the first component while maintaining the content ratio of the second component can have a negatively large dielectric anisotropy.
Furthermore, a liquid crystal composition containing only the second component and the third component cannot have a negatively large dielectric anisotropy while maintaining the minimum temperature of a nematic phase in some cases. The content ratios of the first component and the second component in the liquid crystal composition (1) of the invention are not particularly limited. It is preferred that the content ratio of the first component is from 10% to 80% by weight, and the content ratio of the second component is from 20% to 90% by weight, and it is more preferred that the content ratio of the first component is from 30% to 75% by weight, and the content ratio of the second component is from 25% to 70% by weight, based on the total weight of the liquid crystal compounds in the liquid crystal composition (1). In the case where the content ratios of the first component and the second component are in the aforementioned ranges, the liquid crystal composition can have an enhanced temperature range of a nematic phase, a suitable optical anisotropy, a dielectric anisotropy in a suitable range, a small viscosity, and a large specific resistance. The liquid crystal composition of the invention preferably contains the third component in addition to the first and second components (hereinafter, sometimes referred to as a liquid crystal composition (2)). Owing to the combination of the components, the liquid crystal composition can have a negatively large dielectric anisotropy and an enhanced temperature range of a nematic phase. The content ratios of the first component, the second component and the third component in the liquid crystal composition (2) of the invention are not particularly limited. 
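As a purely numerical illustration of the content-ratio windows quoted above for the liquid crystal composition (1) (first component 10% to 80% by weight, second component 20% to 90% by weight, based on the total weight of the liquid crystal compounds), the following sketch checks a recipe against those windows. It is not part of the patent; the batch masses and all names are hypothetical.

```python
# Hedged sketch: verify that a hypothetical two-component recipe falls
# inside the preferred content-ratio windows for composition (1).
# Windows are from the text; batch masses are invented for illustration.

def ratios_in_range(weights_g, windows_pct):
    """True if every component's weight percent lies inside its window."""
    total = sum(weights_g.values())
    return all(lo <= 100.0 * weights_g[name] / total <= hi
               for name, (lo, hi) in windows_pct.items())

windows = {"first": (10.0, 80.0), "second": (20.0, 90.0)}

batch = {"first": 35.0, "second": 65.0}  # hypothetical masses in grams
print(ratios_in_range(batch, windows))   # 35 wt% and 65 wt% are both in range
```

The same check extends directly to the three-component composition (2) by adding a third entry to each dictionary.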
It is preferred that the content ratio of the first component is from 5% to 75% by weight, the content ratio of the second component is from 20% to 80% by weight, and the content ratio of the third component is from 5% to 45% by weight, and it is more preferred that the content ratio of the first component is from 10% to 65% by weight, the content ratio of the second component is from 25% to 60% by weight, and the content ratio of the third component is from 5% to 35% by weight, based on the total weight of the liquid crystal compounds in the liquid crystal composition (2). In the case where the content ratios of the first component, the second component and the third component of the liquid crystal composition (2) are in the aforementioned ranges, the liquid crystal composition can have an enhanced temperature range of a nematic phase, a suitable optical anisotropy, a dielectric anisotropy in a suitable range, a small viscosity, and a large specific resistance.

The liquid crystal composition of the invention may contain, in addition to the first and second components and the third component added depending on necessity, another liquid crystal compound in some cases for controlling the characteristics of the liquid crystal composition. From the standpoint, for example, of cost, the liquid crystal composition of the invention may also contain no liquid crystal compound other than the first and second components and the third component added depending on necessity.

The liquid crystal composition of the invention may further contain an additive, such as an optically active compound, a coloring matter, a defoaming agent, an ultraviolet ray absorbent and an antioxidant. In the case where an optically active compound is added to the liquid crystal composition of the invention, a helical structure can be induced in the liquid crystal to apply a twist angle thereto.
In the case where a coloring matter is added to the liquid crystal composition of the invention, the composition can be applied to a liquid crystal display device having a guest host (GH) mode. In the case where a defoaming agent is added to the liquid crystal composition of the invention, the composition can be prevented from foaming during transportation of the liquid crystal composition or during the production process of a liquid crystal display device with the liquid crystal composition. In the case where an ultraviolet ray absorbent or an antioxidant is added to the liquid crystal composition of the invention, the liquid crystal composition or a liquid crystal display device containing the liquid crystal composition can be prevented from deteriorating. For example, an antioxidant can suppress the specific resistance from decreasing upon heating the liquid crystal composition.

Examples of the ultraviolet ray absorbent include a benzophenone ultraviolet ray absorbent, a benzoate ultraviolet ray absorbent and a triazole ultraviolet ray absorbent. Specific examples of the benzophenone ultraviolet ray absorbent include 2-hydroxy-4-octoxybenzophenone. Specific examples of the benzoate ultraviolet ray absorbent include 2,4-di-tert-butylphenyl 3,5-di-tert-butyl-4-hydroxybenzoate. Specific examples of the triazole ultraviolet ray absorbent include 2-(2-hydroxy-5-methylphenyl)benzotriazole, 2-(2-hydroxy-3-(3,4,5,6-tetrahydroxyphthalimide-methyl)-5-methylphenyl)benzotriazole and 2-(3-tert-butyl-2-hydroxy-5-methylphenyl)-5-chlorobenzotriazole.

Examples of the antioxidant include a phenol antioxidant and an organic sulfur antioxidant. Specific examples of the phenol antioxidant include 3,5-di-tert-butyl-4-hydroxytoluene, 2,2'-methylenebis(6-tert-butyl-4-methylphenol), 4,4'-butylidenebis(6-tert-butyl-3-methylphenol), 2,6-di-tert-butyl-4-(2-octadecyloxycarbonyl)ethylphenol and pentaerythritol tetrakis(3-(3,5-di-tert-butyl-4-hydroxyphenyl)propionate).
Specific examples of the organic sulfur antioxidant include dilauryl-3,3'-thiopropionate, dimyristyl-3,3'-thiopropionate, distearyl-3,3'-thiopropionate, pentaerythritol tetrakis(3-laurylthiopropionate) and 2-mercaptobenzimidazole. The additives represented by an ultraviolet ray absorbent and an antioxidant can be used in such an amount range that the objects of the addition of the additives are attained, but the objects of the invention are not impaired. For example, in the case where the ultraviolet ray absorbent is added, the addition amount thereof is generally from 100 ppm to 1,000,000 ppm, preferably from 100 ppm to 10,000 ppm, and more preferably from 1,000 ppm to 10,000 ppm, based on the total weight of the liquid crystal compounds. For example, in the case where the antioxidant is added, the addition amount thereof is generally from 10 ppm to 500 ppm, preferably from 30 ppm to 300 ppm, and more preferably from 40 ppm to 200 ppm, based on the total weight of the liquid crystal compounds. The liquid crystal composition of the invention may contain, in some cases, impurities, such as a synthesis raw material, a by-product, a reaction solvent and a synthesis catalyst, that are mixed therein during the synthesis process of the compounds constituting the liquid crystal composition and the preparation process of the liquid crystal composition.

Production Method of Liquid Crystal Composition

The liquid crystal composition of the invention can be prepared, for example, by mixing by shaking the component compounds when the compounds are in a liquid state, or by mixing the compounds, which are then melted by heating, followed by shaking, when the compounds contain one in a solid state. The liquid crystal composition of the invention can also be prepared by other known methods. The liquid crystal composition of the invention generally has an optical anisotropy of from 0.080 to 0.120.
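The ppm figures above are relative to the total weight of the liquid crystal compounds, so the absolute additive amount scales with the batch size. A minimal sketch of that conversion (the function name and batch values are illustrative, not taken from the document):

```python
def additive_mass_mg(total_lc_mass_g, ppm):
    # ppm is parts per million by weight relative to the total
    # liquid crystal mass; convert grams -> milligrams at the end.
    return total_lc_mass_g * ppm / 1_000_000 * 1000

# A hypothetical 100 g batch with an antioxidant level of 140 ppm:
print(additive_mass_mg(100, 140))  # -> 14.0 (mg)
```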
The liquid crystal composition of the invention can have an optical anisotropy in a range of from 0.050 to 0.180 or in a range of from 0.050 to 0.200, by appropriately controlling the composition and so forth. The liquid crystal composition of the invention generally has a dielectric anisotropy of from -6.5 to -2.0, and preferably a liquid crystal composition having a dielectric anisotropy of from -5.0 to -2.5 can be obtained. A liquid crystal composition having a dielectric anisotropy in the aforementioned ranges can be preferably applied to liquid crystal display devices having an IPS mode and a VA mode. In the liquid crystal composition of the invention, such a liquid crystal composition can be generally obtained that has both an optical anisotropy in the aforementioned ranges and a dielectric anisotropy in the aforementioned ranges. In order to maximize the contrast ratio of a liquid crystal display device driven in an IPS mode or a VA mode, the device is designed in such a manner that the product (Δn·d) of the optical anisotropy (Δn) of the liquid crystal composition and the cell gap (d) of the liquid crystal display device is a constant value. In a VA mode, the value (Δn·d) is preferably in a range of from 0.30 µm to 0.35 µm, and in an IPS mode, the value (Δn·d) is preferably in a range of from 0.20 µm to 0.30 µm. The cell gap (d) is generally from 3 µm to 6 µm, and therefore, the optical anisotropy of the liquid crystal composition is preferably in a range of from 0.05 to 0.11 in order to maximize the contrast ratio. In the case where the cell gap (d) is 3 µm or less, the optical anisotropy of the liquid crystal composition preferably exceeds the range of from 0.10 to 0.11.

Liquid Crystal Display Device

The liquid crystal composition of the invention can be applied to a liquid crystal display device.
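The design rule above (keep Δn·d constant for a given mode) reduces to simple division when selecting a composition for a given cell gap. A small illustrative helper (the function name and sample values are ours, not the document's):

```python
def required_delta_n(target_product_um, cell_gap_um):
    # Optical anisotropy needed so that the retardation product
    # (delta-n * d) equals the target value for the display mode.
    return target_product_um / cell_gap_um

# VA-mode target of 0.32 um with a 4 um cell gap:
print(required_delta_n(0.32, 4.0))  # -> 0.08
```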
The liquid crystal display device of the invention may be driven in an AM mode or a passive matrix (PM) mode, and may be displayed in any display mode, such as a PC mode, a TN mode, an STN mode, an OCB mode, a VA mode and an IPS mode. The liquid crystal display device driven in an AM mode or a PM mode can be applied to a liquid crystal display of any type, i.e., a reflection type, a transmission type or a semi-transmission type. The liquid crystal composition of the invention can be applied to a dynamic scattering (DS) mode device using a liquid crystal composition containing an electroconductive agent, a nematic curvilinear aligned phase (NCAP) device prepared by microcapsulating a liquid crystal composition, and a polymer dispersed (PD) device in which a three-dimensional network polymer is formed in a liquid crystal composition, for example, a polymer network (PN) device. Owing to the aforementioned characteristics of the liquid crystal composition of the invention, the liquid crystal composition can be preferably applied to an AM mode liquid crystal display device driven in an operation mode utilizing negative dielectric anisotropy, such as a VA mode and an IPS mode, and particularly preferably applied to an AM mode liquid crystal display device driven in a VA mode. In a liquid crystal display device driven in a TN mode, a VA mode or the like, the direction of the electric field is perpendicular to the liquid crystal layer. In a liquid crystal display device driven in an IPS mode or the like, the direction of the electric field is parallel to the liquid crystal layer. The structure of the liquid crystal display device driven in a VA mode has been reported in K. Ohmuro, S. Kataoka, T. Sasaki and Y. Koie, SID '97 Digest of Technical Papers, vol. 28, p. 845 (1997), and the structure of the liquid crystal display device driven in an IPS mode has been reported in International Publication WO 91/10936 (1991) and US 5,576,867.
The invention will be explained in detail by way of Examples. The liquid crystal compounds used in the Examples are expressed by the symbols according to the definition in Table 1 below. The steric configuration of 1,4-cyclohexylene in Table 1 is a trans configuration. The ratio (percentage) of the liquid crystal compound is percentage by weight (% by weight) based on the total weight of liquid crystal compounds unless otherwise indicated. The characteristics of the composition are summarized at the end of the Examples. The numerals attached to the liquid crystal compounds used in the Examples correspond to the formula numbers representing the liquid crystal compounds used as the first, second and third components of the invention, and the case where no formula number is indicated but a symbol "-" is indicated means another liquid crystal compound that does not correspond to the components of the invention. The method of description of compounds using symbols is shown below.

[Table 1] Method for Description of Compound using Symbols: R-(A1)-Z1- ..... -Zn-(An)-X

1) Left Terminal Group R-: CnH2n+1- (symbol n-); CnH2n+1O- (symbol nO-); CnH2n+1OCmH2m- (symbol nOm-); CH2=CH- (symbol V-); CH2=CHCnH2n- (symbol Vn-)

2) Ring Structure -An- (symbols): H; B; B(2F); B(3F); B(2F,3F)

3) Bonding Group -Zn-: -CnH2n- (symbol n); -CH2O- (symbol 1O); -OCH2- (symbol O1); -CH=CH- (symbol V)

4) Right Terminal Group -X: -CnH2n+1 (symbol -n); -OCnH2n+1 (symbol -On); -CH=CH2 (symbol -V); -CnH2nCH=CH2 (symbol -nV); -F (symbol -F)

5) Examples of Description: Example 1: 3-HB(2F)-3; Example 2: 3-HB(2F,3F)-O2; Example 3: 3-HB(2F)B(2F,3F)-O2

Measurements of the characteristics were carried out according to the following methods. Most methods are described in the Standard of Electric Industries Association of Japan, EIAJ ED-2521A, or are modifications thereof. A sample was placed on a hot plate in a melting point apparatus equipped with a polarizing microscope and was heated at the rate of 1°C per minute. A temperature was measured when a part of the sample began to change from a nematic phase into an isotropic liquid. A higher limit of a temperature range of a nematic phase may be abbreviated to "a maximum temperature". A sample having a nematic phase was kept in a freezer at temperatures of 0°C, -10°C, -20°C, -30°C and -40°C for ten days, respectively, and the liquid crystal phase was observed.
For example, when the sample remained in a nematic phase at -20°C and changed to crystals or a smectic phase at -30°C, Tc was expressed as ≤ -20°C. A lower limit of a temperature range of a nematic phase may be abbreviated to "a minimum temperature". Measurement was carried out with an Abbe refractometer with a polarizing plate mounted on the ocular, using light at a wavelength of 589 nm. The surface of a main prism was rubbed in one direction, and then a sample was dropped on the main prism. The refractive index n∥ was measured when the direction of the polarized light was parallel to that of the rubbing, and the refractive index n⊥ was measured when the direction of the polarized light was perpendicular to that of the rubbing. A value (Δn) of optical anisotropy was calculated from the equation: Δn = n∥ − n⊥. The measurement of viscosity was carried out by using an E-type viscometer. A solution of octadecyltriethoxysilane (0.16 mL) dissolved in ethanol (20 mL) was coated on a glass substrate having been well cleaned. The glass substrate was rotated with a spinner and then heated at 150°C for 1 hour. A VA device having a distance (cell gap) of 20 µm was fabricated with two sheets of the glass substrates. A polyimide orientation film was prepared on a glass substrate in the same manner. The orientation film on the glass substrate was subjected to a rubbing treatment, and a TN device having a distance between two sheets of the glass substrates of 9 µm and a twisted angle of 80° was fabricated. A specimen was charged in the VA device, to which a voltage of 0.5 V (1 kHz, sine wave) was applied, and a dielectric constant (ε∥) in the major axis direction of the liquid crystal molecule was measured. A specimen was charged in the TN device, to which a voltage of 0.5 V (1 kHz, sine wave) was applied, and a dielectric constant (ε⊥) in the minor axis direction of the liquid crystal molecule was measured.
The dielectric anisotropy Δε was calculated by the equation: Δε = ε∥ − ε⊥. A composition having a negative value is a composition having a negative dielectric anisotropy. A specimen was charged in a TN device having a polyimide orientation film and having a distance between two glass substrates (cell gap) of 6 µm. A pulse voltage (60 µs at 5 V) was applied to the TN device at 25°C to charge the device. The waveform of the voltage applied to the TN device was observed with a cathode ray oscilloscope, and an area surrounded by the voltage curve and the abscissa per unit cycle (16.7 ms) was obtained. The area was obtained in the same manner from a waveform obtained after removing the TN device. The value of the voltage holding ratio (%) was calculated by the equation: voltage holding ratio = (area with TN device / area without TN device) × 100. The voltage holding ratio thus obtained was designated as VHR-1. Subsequently, the TN device was heated at 100°C for 250 hours. After cooling the TN device to 25°C, the voltage holding ratio was measured in the same manner. The voltage holding ratio obtained after the heating test was designated as VHR-2. The heating test is an accelerating test and corresponds to a long-term durability test of the TN device. Gas Chromatograph Model GC-14B made by Shimadzu Corp. or an equivalent thereof was used as a measuring apparatus. Capillary Column CBP1-M25-025 (length: 25 m, bore: 0.22 mm, film thickness: 0.25 µm, dimethylpolysiloxane as stationary phase, no polarity) made by Shimadzu Corp. was used as a column. Helium was used as a carrier gas, and the flow rate was controlled to 2 mL/min. The column was maintained at 200°C for 2 minutes and then further heated to 280°C at a rate of 5°C per minute. A specimen evaporating chamber and a detector (FID) were set up at 280°C and 300°C, respectively.
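The VHR computation described above, i.e., two waveform areas per cycle with and without the device, can be sketched as follows (synthetic waveform samples; trapezoid-rule integration is our choice for the area, not something the document specifies):

```python
def voltage_holding_ratio(t, v_with, v_without):
    # Trapezoid-rule area under each sampled voltage waveform over
    # one cycle, then the ratio of the two areas in percent.
    def area(v):
        return sum((v[i] + v[i + 1]) / 2.0 * (t[i + 1] - t[i])
                   for i in range(len(t) - 1))
    return area(v_with) / area(v_without) * 100.0

# Toy decay curve vs. an ideal (fully held) 5 V reference:
t = [0.0, 1.0, 2.0]  # sample times (arbitrary units)
print(voltage_holding_ratio(t, [5.0, 4.0, 3.0], [5.0, 5.0, 5.0]))  # -> 80.0
```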
A specimen was dissolved in acetone to prepare a solution of 0.1% by weight, and 1 µL of the resulting solution was injected into the specimen evaporating chamber. The recorder used was Chromatopac Model C-R5A made by Shimadzu Corp. or an equivalent thereof. The gas chromatogram obtained showed a retention time of a peak and a peak area corresponding to each component compound. The solvent for diluting the specimen may also be, for example, chloroform or hexane. The following capillary columns may also be used: DB-1 made by Agilent Technologies Inc. (length: 30 m, bore: 0.32 mm, film thickness: 0.25 µm), HP-1 made by Agilent Technologies Inc. (length: 30 m, bore: 0.32 mm, film thickness: 0.25 µm), Rtx-1 made by Restek Corp. (length: 30 m, bore: 0.32 mm, film thickness: 0.25 µm), and BP-1 made by SGE International Pty. Ltd. (length: 30 m, bore: 0.32 mm, film thickness: 0.25 µm). In order to prevent compound peaks from overlapping, a capillary column CBP1-M50-025 (length: 50 m, bore: 0.25 mm, film thickness: 0.25 µm) made by Shimadzu Corp. may be used. An area ratio of each peak in the gas chromatogram corresponds to the ratio of the component compounds. In general, the percentage by weight of the component compound is not completely identical to the area ratio of each peak. According to the invention, however, the percentage by weight of the component compound may be regarded as identical to the percentage by area of each peak, since the correction coefficient is substantially 1 when these capillary columns are used. This is because there is no significant difference in correction coefficient among the liquid crystal compounds as the component compounds. In order to obtain more precisely the compositional ratio of the liquid crystal compounds in the liquid crystal composition by gas chromatography, an internal reference method is applied to the gas chromatogram.
The liquid crystal compound components (components to be measured) having been precisely weighed and a standard liquid crystal compound (standard substance) are simultaneously measured by gas chromatography, and the relative intensity of the area ratio of the peaks of the components to be measured to the peak of the standard substance is calculated in advance. The compositional ratio of the liquid crystal compounds in the liquid crystal composition can then be precisely obtained by gas chromatography analysis by correcting the peak areas of the components using the relative intensity with respect to the standard substance. 3-HB (2F, 3F)-O2 (2-1-1-1) 14% 5-HB (2F, 3F)-O2 (2-1-1-1) 14% 3-HHB (2F, 3F)-O2 (2-2-1-1) 11% 5-HHB (2F, 3F)-O2 (2-2-1-1) 11% 2-HHB (2F, 3F)-1 (2-2-1-1) 10% 3-HHB (2F, 3F)-1 (2-2-1-1) 10% 3-HH-4 (3-1-1) 7% 3-HH-5 (3-1-1) 7% 3-HB-O1 (3-1-2) 8% 5-HB-3 (3-1-2) 8% The following composition containing the second component and the third component of the invention was prepared, and the characteristic values thereof were measured by the aforementioned methods. NI = 68.9°C; Tc ≤ -10°C; Δn = 0.081; Δε = -3.3; VHR-1 = 99.3% 3-HB (2F)-O2 (1-1-1-1) 8% 3-HB (3F)-O2 (1-1-1-2) 8% 3-HHB (2F)-O2 (1-2-1-1) 7% 3-HHB (3F)-O2 (1-2-1-2) 7% 3-HB (2F, 3F)-O2 (2-1-1-1) 14% 5-HB (2F, 3F)-O2 (2-1-1-1) 14% 3-HHB (2F, 3F)-O2 (2-2-1-1) 11% 5-HHB (2F, 3F)-O2 (2-2-1-1) 11% 2-HHB (2F, 3F)-1 (2-2-1-1) 10% 3-HHB (2F, 3F)-1 (2-2-1-1) 10% The following composition was prepared, and the characteristic values thereof were measured by the aforementioned methods. NI = 84.3°C; Tc ≤ -10°C; Δn = 0.096; Δε = -4.3; η = 38.8 mPa·s; VHR-1 = 99.3% The non-halogen-substituted liquid crystal compound as the third component in the composition of Comparative Example 1 was replaced by a mono-fluorine-substituted liquid crystal compound as the first component.
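The internal-reference correction described above amounts to scaling each measured peak area by its pre-calibrated relative intensity and renormalizing. A schematic sketch (the compound names and factor values are invented for illustration):

```python
def composition_wt_percent(peak_areas, relative_intensity):
    # Scale each raw GC peak area by the compound's relative intensity
    # (calibrated in advance against the standard substance), then
    # normalize the corrected areas to percentages by weight.
    corrected = {name: area * relative_intensity[name]
                 for name, area in peak_areas.items()}
    total = sum(corrected.values())
    return {name: 100.0 * a / total for name, a in corrected.items()}

# Two hypothetical components with slightly different detector response:
areas = {"compound A": 48.0, "compound B": 52.0}
factors = {"compound A": 1.02, "compound B": 0.98}
print(composition_wt_percent(areas, factors))
```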
As compared to Comparative Example 1, the composition of Example 1 had a high maximum temperature, a negatively large Δε and a large voltage holding ratio, while the minimum temperature was equivalent. The temperature range of a nematic phase could be enhanced, and Δε could be negatively large, owing to the combination of the first component and the second component. 3-HB (2F)-3 (1-1-1-1) 8% 3-HB (2F)-O2 (1-1-1-1) 8% 3-HHB (2F)-O2 (1-2-1-1) 6% 3-HHB (2F)-1 (1-2-1-1) 6% 3-HHB (3F)-O2 (1-2-1-2) 7% 3-HHB (3F)-1 (1-2-1-2) 8% 3-H1OB (2F)-O2 (1-1-2-1) 6% 3-H1OB (3F)-3 (1-1-2-2) 6% 3-H1OB (3F)-O2 (1-1-2-2) 6% 3-H1OB (2F)H-3 (1-3-1) 7% 3-H1OB (2F)H-O2 (1-3-1) 7% 3-HB (2F,3F)-O2 (2-1-1-1) 5% 3-HB (2F,3F)-O4 (2-1-1-1) 5% 5O-HHB (2F,3F)-1 (2-2-1-1) 5% 3-HB (2F)B(2F,3F)-O2 (2-2-1-3) 5% 3-HB (3F)B(2F, 3F)-O2 (2-2-1-4) 5% The following composition was prepared, and the characteristic values thereof were measured by the aforementioned methods. NI = 71.2°C; Tc ≤ -20°C; Δn = 0.087; Δε = -3.3; η = 28.4 mPa·s As compared to Comparative Example 1, the composition of Example 2 had a high maximum temperature and a low minimum temperature of ≤ -20°C to enhance the temperature range of a nematic phase. 3-HB (3F)-3 (1-1-1-2) 10% 3-HB (3F)-O2 (1-1-1-2) 10% 3-H1OB (2F)H-3 (1-3-1) 7% 3-H1OB (2F)H-O2 (1-3-1) 8% 3-H1OB (2F)B-3 (1-3-1) 8% 3-H1OB (2F)B-O2 (1-3-1) 7% 3-HH1OB (2F)-O2 (1-2-3-1) 5% 3-HH1OB (2F)-1 (1-2-3-1) 5% 5-HB (2F, 3F)-O2 (2-1-1-1) 5% 5-HB (2F,3F)-O4 (2-1-1-1) 5% 3-HHB (2F,3F)-1 (2-2-1-1) 5% 3-HHB (2F,3F)-2 (2-2-1-1) 5% 3-HHB (2F,3F)-O2 (2-2-1-1) 7% 5-HHB (2F, 3F)-O2 (2-2-1-1) 8% 5-HBB (2F,3F)-O2 (2-2-1-2) 5% The following composition was prepared, and the characteristic values thereof were measured by the aforementioned methods.
NI = 98.1°C; Tc ≤ -20°C; Δn = 0.108; Δε = -3.4; η = 32.5 mPa·s As compared to Comparative Example 1, the composition of Example 3 had a high maximum temperature and a low minimum temperature of ≤ -20°C to enhance the temperature range of a nematic phase. 3-HHB (2F)-O2 (1-2-1-1) 5% 3-HHB (3F)-1 (1-2-1-2) 5% 3-HHB (3F)-O2 (1-2-1-2) 5% 3-HH1OB (2F)-1 (1-2-3-1) 7% 3-HH1OB (2F)-O2 (1-2-3-1) 7% 2O-HHB (2F,3F)-1 (2-2-1-1) 7% 3-HHB (2F,3F)-O2 (2-2-1-1) 7% 5-HHB (2F, 3F)-O2 (2-2-1-1) 6% 5O-HHB (2F, 3F)-1 (2-2-1-1) 7% 3-HBB (2F, 3F)-O2 (2-2-1-2) 8% 5-HBB (2F, 3F)-O2 (2-2-1-2) 8% 3-HB-O2 (3-1-2) 7% 5-HB-O2 (3-1-2) 7% 5-HB-3 (3-1-2) 7% 7-HB-1 (3-1-2) 7% The following composition was prepared, and the characteristic values thereof were measured by the aforementioned methods. NI = 102.8°C; Tc ≤ -20°C; Δn = 0.098; Δε = -3.1; η = 27.7 mPa·s; VHR-1 = 99.2% As compared to Comparative Example 1, the composition of Example 4 had a high maximum temperature and a low minimum temperature of ≤ -20°C to enhance the temperature range of a nematic phase. The composition had a large voltage holding ratio. 3-HB (2F)-O2 (1-1-1-1) 7% 3-HHB (3F)-1 (1-2-1-2) 6% 3-H1OB (3F)-3 (1-1-2-2) 7% 3-H1OB (3F)-O2 (1-1-2-2) 6% 3-H1OB (2F)H-3 (1-3-1) 6% 3-H1OB (2F)B-3 (1-3-1) 5% 3-H1OB (2F)B-O2 (1-3-1) 5% 3-HH1OB (2F)-O2 (1-2-3-1) 5% 3-HH1OB (2F)-1 (1-2-3-1) 5% 3-HHB (2F,3F)-O1 (2-2-1-1) 5% 3-HHB (2F, 3F)-O2 (2-2-1-1) 7% 5-HHB (2F, 3F)-O2 (2-2-1-1) 8% 5-HBB (2F, 3F)-O2 (2-2-1-2) 8% 3-HH-4 (3-1-1) 7% 3-HB-O2 (3-1-2) 6% 5-HB-3 (3-1-2) 7% The following composition was prepared, and the characteristic values thereof were measured by the aforementioned methods. NI = 90.5°C; Tc ≤ -20°C; Δn = 0.098; Δε = -2.7; η = 27.5 mPa·s As compared to Comparative Example 1, the composition of Example 5 had a high maximum temperature and a low minimum temperature of ≤ -20°C to enhance the temperature range of a nematic phase.
3-HB (3F)-3 (1-1-1-2) 8% 3-HB (3F)-O2 (1-1-1-2) 8% 3-HHB (3F)-O2 (1-2-1-2) 5% 3-HHB (3F)-1 (1-2-1-2) 5% 3-H1OB (2F)-3 (1-1-2-1) 7% 3-H1OB (2F)H-3 (1-3-1) 7% 3-H1OB (2F)H-O2 (1-3-1) 7% 3-HH1OB (2F)-O2 (1-2-3-1) 5% 3-HH1OB (2F)-1 (1-2-3-1) 5% 3-HB1OB (2F)-O2 (1-2-3) 5% 3-HB (2F,3F)-O2 (2-1-1-1) 8% 3-HB (2F,3F)-O4 (2-1-1-1) 8% 3-HB (2F)B(2F,3F)-O2 (2-2-1-3) 7% 3-HB (3F)B(2F,3F)-O2 (2-2-1-4) 7% 1O1-HBBH-3 (3-3-1) 5% 1O1-HBBH-5 (3-3-1) 3% The following composition was prepared, and the characteristic values thereof were measured by the aforementioned methods. NI = 85.3°C; Tc ≤ -20°C; Δn = 0.100; Δε = -3.5; η = 32.4 mPa·s As compared to Comparative Example 1, the composition of Example 6 had a high maximum temperature and a low minimum temperature of ≤ -20°C to enhance the temperature range of a nematic phase. 3-HB (2F)-O2 (1-1-1-1) 5% 3-HHB (3F)-O2 (1-2-1-2) 3% 3-HBB (3F)-O2 (1-2-2-1) 5% 3-H1OB (3F)-3 (1-1-2-2) 6% 3-H1OB (2F)H-O2 (1-3-1) 5% 3-H1OB (2F)B-O2 (1-3-1) 6% 3-HB1OB (2F)-O2 (1-2-3) 5% 3-HB1OB (2F)-1 (1-2-3) 5% 3-HB (2F, 3F)-O2 (2-1-1-1) 5% 3-HB (2F, 3F)-O4 (2-1-1-1) 6% 2O-HHB (2F, 3F)-1 (2-2-1-1) 3% 3-HBB (2F,3F)-O2 (2-2-1-2) 7% 5-HBB (2F, 3F)-O2 (2-2-1-2) 7% 3-HB (3F)B(2F, 3F)-O2 (2-2-1-4) 7% 3-HH-4 (3-1-1) 8% 5-HB-3 (3-1-2) 7% 3-HHB-3 (3-2-1) 5% 3-HHB-O1 (3-2-1) 5% The following composition was prepared, and the characteristic values thereof were measured by the aforementioned methods. NI = 91.0°C; Tc ≤ -20°C; Δn = 0.108; Δε = -3.4; η = 28.1 mPa·s As compared to Comparative Example 1, the composition of Example 7 had a high maximum temperature and a low minimum temperature of ≤ -20°C to enhance the temperature range of a nematic phase.
3-HB (2F)-3 (1-1-1-1) 10% 3-HB (2F)-O2 (1-1-1-1) 10% 3-HHB (2F)-O2 (1-2-1-1) 7% 3-HB (2F,3F)-O2 (2-1-1-1) 10% 3-HB (2F,3F)-O4 (2-1-1-1) 10% 3-HB (2F)B(2F,3F)-O2 (2-2-1-3) 8% 3-HB (3F)B(2F,3F)-O2 (2-2-1-4) 7% 3-HB (2F,3F)B(2F,3F)-O2 (2-2-1-5) 7% 3-HB-O2 (3-1-2) 8% 5-HB-O2 (3-1-2) 8% 5-HBB (3F)B-2 (3-3-2) 5% 5-HBB (3F)B-O2 (3-3-2) 5% 5-HBB (3F)B-3 (3-3-2) 5% The following composition was prepared, and the characteristic values thereof were measured by the aforementioned methods. NI = 80.3°C; Tc ≤ -20°C; Δn = 0.116; Δε = -4.3; η = 34.1 mPa·s As compared to Comparative Example 1, the composition of Example 8 had a high maximum temperature and a low minimum temperature of ≤ -20°C, and had a negatively large Δε. 3-HB1OB (2F)-O2 (1-2-3) 6% 3-HB1OB (2F)-1 (1-2-3) 6% 3-HB (2F,3F)-O2 (2-1-1-1) 11% 3-HB (2F,3F)-O4 (2-1-1-1) 10% 5-HB (2F,3F)-O2 (2-1-1-1) 10% 5-HB (2F,3F)-O4 (2-1-1-1) 9% 3-HB (2F)B(2F,3F)-O2 (2-2-1-3) 7% 3-HB (3F)B(2F,3F)-O2 (2-2-1-4) 7% 3-HB (2F, 3F)B(2F, 3F)-O2 (2-2-1-5) 6% V-HHB-1 (3-2-1) 8% V2-HHB-1 (3-2-1) 8% 1O1-HBBH-4 (3-3-1) 5% 1O1-HBBH-5 (3-3-1) 4% 5-HBB(3F)B-2 (3-3-2) 3% The following composition was prepared, and the characteristic values thereof were measured by the aforementioned methods. NI = 91.8°C; Tc ≤ -20°C; Δn = 0.110; Δε = -4.1; η = 36.7 mPa·s As compared to Comparative Example 1, the composition of Example 9 had a high maximum temperature and a low minimum temperature of ≤ -20°C, and had a negatively large Δε.
3-HB (3F)-3 (1-1-1-2) 10% 3-HB (3F)-O2 (1-1-1-2) 8% 3-HHB (2F)-1 (1-2-1-1) 5% 3-HHB (3F)-O2 (1-2-1-2) 5% 3-HHB (3F)-1 (1-2-1-2) 5% 3-HB (2F,3F)-O2 (2-1-1-1) 5% 3-HB (2F, 3F)-O4 (2-1-1-1) 5% 3-HHB (2F,3F)-O2 (2-2-1-1) 5% 5-HHB (2F,3F)-O2 (2-2-1-1) 3% 3-HB (2F)B(2F,3F)-O2 (2-2-1-3) 8% 3-HB (3F)B(2F,3F)-O2 (2-2-1-4) 8% 2-HH-5 (3-1-1) 5% 3-HH-4 (3-1-1) 12% 3-HHB-3 (3-2-1) 5% 3-HHB-O1 (3-2-1) 3% 5-HBB (3F)B-2 (3-3-2) 3% 5-HBB (3F)B-O2 (3-3-2) 5% The following composition was prepared, and the characteristic values thereof were measured by the aforementioned methods. NI = 94.3°C; Tc ≤ -20°C; Δn = 0.096; Δε = -3.4; η = 31.5 mPa·s As compared to Comparative Example 1, the composition of Example 10 had a high maximum temperature and a low minimum temperature of ≤ -20°C to enhance the temperature range of a nematic phase. 3-HB (2F)-O2 (1-1-1-1) 13% 3-HB (3F)-O2 (1-1-1-2) 13% 3-HHB (2F)-O2 (1-2-1-1) 7% 5-HHB (2F)-O2 (1-2-1-1) 7% 3-HHB (3F)-O2 (1-2-1-2) 7% 5-HHB (3F)-O2 (1-2-1-2) 7% 3-HB (2F, 3F)-O2 (2-1-1-1) 13% 5-HB (2F,3F)-O2 (2-1-1-1) 13% 3-HHB (2F, 3F)-O2 (2-2-1-1) 4% 2-HHB (2F,3F)-1 (2-2-1-1) 8% 3-HHB (2F,3F)-1 (2-2-1-1) 8% The following composition was prepared, and the characteristic values thereof were measured by the aforementioned methods. NI = 71.2°C; Tc ≤ -20°C; Δn = 0.091; Δε = -3.1; η = 23.0 mPa·s As compared to Comparative Example 1, the composition of Example 11 had a high maximum temperature and a low minimum temperature of ≤ -20°C to enhance the temperature range of a nematic phase.
3-HB (2F)-O2 (1-1-1-1) 5% 5-HB (3F)-O1 (1-1-1-2) 5% 3-HHB (3F)-O2 (1-2-1-2) 4% 3-HBB (3F)-O2 (1-2-2-1) 4% 3-H1OB (3F)-3 (1-1-2-2) 7% 3-H1OB (2F)H-O2 (1-3-1) 5% 3-H1OB (2F)B-O2 (1-3-1) 5% 3-HB1OB (2F)-1 (1-2-3) 5% 3-HB (2F,3F)-O2 (2-1-1-1) 5% 3-HB (2F,3F)-O4 (2-1-1-1) 6% 3-HBB (2F,3F)-O2 (2-2-1-2) 7% 5-HBB (2F,3F)-O2 (2-2-1-2) 7% 3-HH-4 (3-1-1) 9% 3-HB-5 (3-1-2) 6% 3-HHB-3 (3-2-1) 6% 3-HHB-O1 (3-2-1) 4% 5-HHB (3F)-F ( - ) 5% 4-HBB (3F)-F ( - ) 5% The following composition was prepared, and the characteristic values thereof were measured by the aforementioned methods. NI = 84.7°C; Tc ≤ -20°C; Δn = 0.101; Δε = -2.4; η = 29.3 mPa·s As compared to Comparative Example 1, the composition of Example 12 had a high maximum temperature and a low minimum temperature of ≤ -20°C to enhance the temperature range of a nematic phase. 140 ppm of 3,5-di-tert-butyl-4-hydroxytoluene as an antioxidant was added to the composition of Example 4, and the composition had the following characteristic values. NI = 102.8°C; Tc ≤ -20°C; Δn = 0.098; Δε = -3.1; η = 27.7 mPa·s; VHR-1 = 99.2%
Q: How to center a bullet in itemize using LaTeX? I'm trying to create a list of bullets containing some math in LaTeX, but the default settings of LaTeX put the bullets at the beginning of the first line; due to the use of equation environments, the result is very messy. Here is a working example:

\documentclass{amsart}
\usepackage{amsmath}
\begin{document}
\begin{itemize}
\item $g^{(k)}(\psi)=\frac{(n^{2}-1)\delta}{(n-(k-1))((1-\psi)k-(n-1)\delta)}$
\item $f^{(k)}=(g^{(k)}\frac{1}{n})$
\item $\underbar{k}=\frac{n(1+\delta)}{2}$.
\item
\begin{equation*}
\underbar{f}=\begin{cases}
n\delta&\mbox{If }n<4\mbox{ or }n=4\&\delta>\frac{1}{8}\\
f^{\underbar{k}}&\mbox{ otherwise}
\end{cases}
\end{equation*}
\item
\begin{equation*}
\underbar{g}=\begin{cases}
n\delta&\mbox{If }n<4\mbox{ or }n=4\&\delta>\frac{1}{8}\\
g^{\underbar{k}}&\mbox{ otherwise}
\end{cases}
\end{equation*}
\end{itemize}
\end{document}

I need to move the last two bullets vertically to the middle:

A: Instead of trying to move the two final bullet points to the right, I'd drop the use of the equation* environments and move the math material to the left, so that all bullet points have the same alignment. (Better still, don't use an itemize environment at all. Look into, say, using an align* environment, with all equations aligned on the respective = symbols.) I can't help but comment on the use of the \underbar instruction: it typesets its argument in an upright font, and it doesn't adjust the position of the bar if the argument is a letter that has a descender component (such as the letter "g"). Can you maybe find a better notation (or at least a more suitable macro, say, \underline)?
\documentclass{amsart}
\begin{document}
\begin{itemize}
\item $g^{(k)}(\psi)=\dfrac{(n^{2}-1)\delta}{(n-(k-1))((1-\psi)k-(n-1)\delta)}$
\item $f^{(k)}=(g^{(k)}\frac{1}{n})$
\item $\underbar{k}=\dfrac{n(1+\delta)}{2}$
\item $\underbar{f}=\begin{cases}
    n\delta & \text{If $n<4$ or $n=4$ \& $\delta>\frac{1}{8}$}\\
    f^{\underbar{k}} & \text{otherwise}
  \end{cases}$
\item $\underbar{g}=\begin{cases}
    n\delta & \text{If $n<4$ or $n=4$ \& $\delta>\frac{1}{8}$}\\
    g^{\underbar{k}} & \text{otherwise}
  \end{cases}$
\end{itemize}
\end{document}
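For completeness, here is a sketch of the align* route mentioned in the answer, with \underline substituted for \underbar as also suggested there (this is our rendering of that suggestion, not code from the original post; amsart loads amsmath, so no extra package is needed):

```latex
\documentclass{amsart}
\begin{document}
\begin{align*}
g^{(k)}(\psi) &= \frac{(n^{2}-1)\delta}{(n-(k-1))((1-\psi)k-(n-1)\delta)}\\
f^{(k)}       &= \Bigl(g^{(k)}\tfrac{1}{n}\Bigr)\\
\underline{k} &= \frac{n(1+\delta)}{2}\\
\underline{f} &= \begin{cases}
    n\delta & \text{if $n<4$ or $n=4$ \& $\delta>\frac{1}{8}$}\\
    f^{\underline{k}} & \text{otherwise}
  \end{cases}\\
\underline{g} &= \begin{cases}
    n\delta & \text{if $n<4$ or $n=4$ \& $\delta>\frac{1}{8}$}\\
    g^{\underline{k}} & \text{otherwise}
  \end{cases}
\end{align*}
\end{document}
```

Every equation is then aligned on its = sign, so the question of where the bullet sits no longer arises.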
---
abstract: 'Population dynamics with spatial information is applied to understand the spread of pests. We introduce a model describing how pests spread in discrete space. The number of pest descendants at each site is controlled by local information such as temperature, precipitation, and the density of pine trees. Our simulation leads to a pest spreading pattern comparable to the real data for pine needle gall midge in the past. We also simulate the model under two different climate conditions based on two different representative concentration pathways scenarios for the future. We observe that after an initial stage of a slow spread of pests, a sudden change in the spreading speed occurs, which is soon followed by a large-scale outbreak. We find that future climate change causes the outbreak point to occur earlier and that the detailed spatio-temporal pattern of the spread depends on the source position from which the initial pest infection starts.'
author:
- Woo Seong Jo
- Beom Jun Kim
- 'Hwang-Yong Kim'
title: 'Climate Change Alters Diffusion of Forest Pest: A Model Study'
---

Introduction
============

Population dynamics has been established as one of the successful mathematical methods for describing the temporal dynamics of populations in physics [@PRE-extinction; @PRE-pattern; @PRE-extinction2] and biology [@MB-text; @MB-logistic; @Ecology-PD]. The methodology in population dynamics often takes different forms: full-mixing models (i.e., mean-field models in physics) with quantities averaged over all spatial locations, and models with spatial information explicitly taken into account (i.e., structured population models) have been widely used. The advantage of the mean-field approach is that the equation for population dynamics often becomes mathematically tractable, and one can clearly understand what happens in the long-time limit, i.e., whether species will go extinct or not.
However, such mathematical tractability comes at a cost: in the real world of nature, all species are locally embedded in a large-scale geographic space with finite dimensions of two or three. Living agents located in space cannot interact with all other agents in a finite time, and their behaviors are spatially and temporally limited. Such a spatial constraint leads to remarkably different results, and the solution from the mean-field approach often fails to explain empirical observations. For this reason, researchers have been trying to integrate spatial information into dynamics in the design of their models to mimic the real world of nature. Cellular Automata (CA) are often used as a tool for spatially explicit models. In the CA approach, space is approximated as a discrete lattice whose resolution needs to be fine enough to properly describe local information. Besides discretized space, the quantities describing the system are assigned to each of the sites. This is one of the differences between the CA and the individual-based model, where agents have their own properties regardless of their location. In the present work, we use a discrete lattice like CA, but use the population density as a local variable defined on each lattice point. The time evolution of the population density is given by a dynamic equation with a transition rate and local interactions as key ingredients, similarly to Ref. . A model defined on a discrete lattice is very efficient from a computational point of view because much spatial information can be integrated as spatially discrete variables. Such approaches have been utilized for various biological systems like vegetation dynamics [@CA-vegetation-Harada; @CA-vegetation-Hiebeler; @CA-vegetation-Ikegamia], epidemics [@CA-epidemics-Rhodes; @CA-epidemics-AIDS], and the spread of pests [@pest-newphysics; @CA-pest-Brockhurst; @CA-pest-Chon], as well as variant subjects in physics [@game1; @game2; @jkps-lattice].
The spread of pests in vegetation has been a critical issue because a very long time is needed for ruined vegetation to recover. In Japan and South Korea, the spread of the pine needle gall midge (PNGM) has been a serious problem since the early $1900$s. When a pine tree is parasitized by PNGMs, the infected tree withers and can eventually die. Ecological studies of the PNGM have been conducted to identify conditions for the PNGM to spread broadly, and the spreading patterns of withered forests have also been empirically investigated [@PNGM-1983; @PNGM-1985; @PNGM-2007]. Computational approaches with more available data have been developed, and it has become possible to use simulational methods for the study of PNGM spread [@CA-pest-Chon]. A machine-learning technique has been used to forecast the spread of damage [@ANN-PNGM-Chon; @ANN-PNGM-Chung], and images from satellites have also been analyzed [@satellite-PNGM]. Although population dynamics on a discrete lattice can be a very useful research framework due to its flexibility and computational efficiency, not many studies, with a few exceptions, have used this method [@CA-pest-Chon]. Accurate prediction of future climate change is very difficult. Even when the computational framework for the forecast is given, the forecast results can differ depending on the assumptions the model uses. In reality, the number of quantitative assumptions for future environmental conditions can be huge. If different studies use different assumptions among many possibilities, comparing the results of one study with those of other studies is difficult. Scholars in the research area of future climate change have agreed on a few official scenarios. The first scenario was introduced by the Intergovernmental Panel on Climate Change (IPCC) in $1992$ [@IS92]. In 2000, the IPCC published the second version of a scenario called the Special Report on Emissions Scenarios (SRES) [@SRES].
The SRES has four categories, which were discussed in two follow-up reports, the Third Assessment Report (TAR) and the Fourth Assessment Report (AR4). Each category specifies an amount of greenhouse gas emissions, which is determined by the speed of development, the human population, and other possible causes of gas generation. The latest generation of climate-change scenarios is called the Representative Concentration Pathways (RCPs) [@IPCC2014], an improved version of the SRES based on the latest climate data as of 2014. The RCPs contain four categories called RCP2.6, RCP4.5, RCP6.0, and RCP8.5. The first category, RCP2.6, assumes that the greenhouse effect can be reduced by nature itself, which seems doubtful at present. As the number in each category increases from 2.6 to 8.5, greenhouse gas emissions are assumed to increase more in the future. The RCP8.5 scenario assumes that the concentration of greenhouse gas will follow the current trend. Most studies on climate change use one of the RCP scenarios with different greenhouse gas concentrations for the future and try to predict local climate variables such as temperature and precipitation. In this paper, we introduce a model to describe the temporal dynamics and the spread of a population of PNGMs on a two-dimensional lattice model of South Korea. For consistency with real field data, small initial PNGM densities are assigned to three cities, Incheon, Mokpo, and Busan, and the spread patterns are predicted by using the climate scenarios RCP4.5 and RCP8.5. We observe that the resulting patterns of spread are comparable with the field data observed in the past. Furthermore, we also investigate how different climate scenarios affect the future prediction of the spread pattern of the PNGM. Method {#sec:method} ====== Climate Conditions ------------------ Future climate changes in South Korea have been estimated based on the RCP scenarios. 
Each individual estimate, from RegCM4 [@RegCM4], SNURCM [@SNURCM], GRIMs [@GRIMs], WRF [@WRF], or HadGEM3-RA [@HadGEM3], has limitations of its own and contains inevitable uncertainty about initial conditions. Later, MME5s [@MME5s], which uses the concept of an ensemble by combining the estimates from these five climate predictions, was suggested in order to overcome such limitations. In the present study, we use the future climate estimates of the MME5s from the Climate Information Portal [@CIP]. The climate data from MME5s have a fine spatial resolution of 1km $\times$ 1km for every month from 2021 to 2050. We choose climate data based on two RCP scenarios, RCP4.5 and RCP8.5, to compare our results based on different assumptions for the future concentration of greenhouse gas. Model {#subsec:model} ----- Once the spatial climate conditions are fixed from the data based on MME5s with RCP4.5 or RCP8.5, we apply our structured CA model for the spread of the PNGM. Hereafter, we will simply refer to a PNGM as a midge. Let $\rho_t({\bf r})$ be the density of adult midges at a discrete lattice point ${\bf r}$ and at a discrete time $t$ (we fix the unit of time as unity, which corresponds to one year, the life cycle of the PNGM). The adult midge density is modeled to evolve in time as $$\label{eq:rho_tr} {\rho}_{t} ({\bf r}) = \lambda({\bf r}; T, P, \psi) \overline{\rho}_t({\bf r}) \exp \left[-\frac{ \overline{\rho}_t({\bf r}) } {\psi_t({\bf r})}\right] ,$$ where $\overline{\rho}_t({\bf r})$ is the density of midge eggs to be explained below and $\lambda({\bf r}; T, P, \psi)$ is the position-dependent survival rate, which depends on the local values of the temperature $T$, the precipitation $P$, and the tree density $\psi$. The previous generation of adult midges at $t-1$ cannot survive for more than a year; thus, the adult midges at $t$ are all born from midge eggs left by adult midges of the previous generation at $t-1$. 
Accordingly, the midge density $\rho_t$ at $t$ does not directly depend on $\rho_{t-1}$; rather, it depends on the density $\overline{\rho}_t$ of midge eggs left by the previous generation of adult midges at time $t-1$. When $\overline{\rho}_t({\bf r}) / \psi_t({\bf r})$ is small, $\exp [-\overline{\rho}_t({\bf r}) / \psi_t({\bf r})] \approx 1-\overline{\rho}_t({\bf r}) / \psi_t({\bf r})$, and the right-hand side of Eq. (\[eq:rho\_tr\]) takes the form of the corresponding term in the standard logistic equation. In the original logistic equation, the population growth is controlled by the carrying capacity. The population can exceed the carrying capacity in the logistic equation, but then the growth rate becomes negative, reducing the population afterward. However, our growth model governed by Eq. (\[eq:rho\_tr\]) is somewhat different. If we replaced the exponential term by the standard form of the original logistic equation, the midge density could become negative, causing our growth model to fail. In order to avoid this catastrophe in the evolving dynamics, we have introduced the form $xe^{-x}$ instead of the quadratic form of the conventional logistic equation. The exponential form is equivalent to the standard logistic form $x(1-x)$ when $x$ is small, and it prevents the midge density from becoming negative when the egg density exceeds the tree density. Therefore, the advantage of our growth model (\[eq:rho\_tr\]) is twofold: it is consistent with the conventional logistic equation when the egg density is small, and it avoids the catastrophic failure of the model. One can then recognize that the tree density $\psi$ plays the role of the carrying capacity for midge eggs, which appears to be a reasonable approximation since adult midges lay eggs in pine trees. 
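The difference between the two growth forms is easy to check numerically. The following sketch (an illustration with our own function names, not the code used for the simulations) compares the quadratic logistic term with the $xe^{-x}$ form of Eq. (\[eq:rho\_tr\]) when the egg density exceeds the tree density:

```python
import math

def logistic_growth(egg, tree, lam=1.0):
    # quadratic logistic form: goes negative once egg > tree
    return lam * egg * (1.0 - egg / tree)

def ricker_growth(egg, tree, lam=1.0):
    # x e^{-x} form of Eq. (rho_tr): non-negative for any egg density
    return lam * egg * math.exp(-egg / tree)

for egg in (0.05, 0.5, 1.5):  # small, moderate, and above-capacity egg densities
    print(f"{egg:4.2f}  logistic={logistic_growth(egg, 1.0):+.4f}"
          f"  ricker={ricker_growth(egg, 1.0):+.4f}")
```

For a small egg density the two forms agree to first order, while for an egg density above the carrying capacity the logistic form yields an unphysical negative midge density and the $xe^{-x}$ form does not.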
From this reasoning, one can see that we normalize the egg density $\bar\rho$ with respect to the tree density $\psi$ in such a way that the maximum egg density is proportional to the tree density. The proportionality constant can be absorbed into the definition of the growth rate $\lambda$, giving us Eq. (\[eq:rho\_tr\]). The position-dependent survival rate $\lambda({\bf r})$ in Eq. (\[eq:rho\_tr\]) is affected by local climate conditions such as the temperature and the precipitation. In our model, one unit of time corresponds to one year; thus, the temperature $T$ and the precipitation $P$ need to be defined in some averaged sense for each year. We note that most midges emerge as adults from pupae and lay eggs in June; thus, we use the time-averaged temperature in June as $T$. In the existing literature on the PNGM [@ANN-PNGM-Chon], soil moisture has been shown to be one of the crucial factors for a midge to grow into an adult. We thus calculate the average precipitation from March to May and use it for $P$. The survival of midge larvae to adulthood is known to depend strongly on the temperature [@PNGM-2007]: The survival probability shows a slightly skewed bell shape over the temperature range between $12^{\circ}\text{C}$ and $30^{\circ}\text{C}$. We thus use this suitable temperature window for the PNGM to grow to adulthood and write the survival rate $\lambda({\bf r})$ as $$\label{eq:lambda} \lambda({\bf r}; T, P, \psi) = A \psi_t({\bf r}) [T_t({\bf r}) - T_{\rm min}] [T_{\rm max} - T_t({\bf r}) ] G[P_t({\bf r})],$$ where $A$ is the normalization constant that keeps $\lambda$ in the interval $[0,1]$ and $P_t({\bf r})$ is the above-mentioned time-averaged precipitation at ${\bf r}$. To mimic the suitable temperature range, we have chosen in Eq. (\[eq:lambda\]) a concave quadratic form to approximate the bell-shaped curve reported in Ref.  with $T_{\rm min} = 12^{\circ}\text{C}$ and $T_{\rm max} = 30^{\circ}\text{C}$. 
Although more rain has been reported to be better in Ref. , we suppose that the marginal gain must be very small once the precipitation becomes large. As a rough approximation of such a dependence on $P$, we write $$\label{eq:G} G[P_t({\bf r})] = \begin{cases} 0, & \text{for } P_t({\bf r}) < P_{\rm min}, \\ [P_t({\bf r}) - P_{\rm min}] / (P_{\rm max} - P_{\rm min} ), & \text{for } P_{\rm min} \leq P_t({\bf r}) \leq P_{\rm max}, \\ 1, & \text{for } P_t({\bf r}) > P_{\rm max}, \end{cases}$$ where $P_{\rm min} = 20$mm and $P_{\rm max} = 100$mm are suitably chosen based on the average precipitation from March to May in Gyeonggi province in Korea. Since $G(P)$ takes a value between 0 and 1, the normalization constant $A$ in Eq. (\[eq:lambda\]) is written as $4/(T_{\rm max} - T_{\rm min})^2$ to make the maximum survival rate unity. Once Eq. (\[eq:rho\_tr\]) combined with Eqs. (\[eq:lambda\]) and (\[eq:G\]) yields the midge density at every site at time $t$, we need to describe how adult midges lay eggs in space. A plausible assumption is that adult midges at ${\bf r}$ lay eggs on trees located not far from ${\bf r}$. Accordingly, the heterogeneity of the tree density must be taken into account in the spread pattern of eggs. In our notation, the adult midges at time $t$ come from the eggs at time $t$, which were laid by the adult midges at $t-1$. Accordingly, the egg density $\overline{\rho}_{t}$ at time $t$ must be related to the midge density $\rho_{t-1}$ at time $t-1$, and we write the relation in the form: $$\label{eq:rhobar} \overline{\rho}_{t}({\bf r}) = \sum_{{\bf r}' \in {\cal N}({\bf r})} \rho_{t-1} ({\bf r}') \omega_{t-1}({\bf r}', {\bf r}),$$ where ${\cal N}({\bf r})$ is the set of discrete lattice points within a distance of 5km from ${\bf r}$, since the speed of midge spread has been reported to be about 5$\sim$6km/year [@Park_thesis]. In Eq. 
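The survival rate of Eqs. (\[eq:lambda\]) and (\[eq:G\]) can be written down directly. The sketch below (plain Python with our own function names; the clipping of the quadratic to zero outside the temperature window is our assumption, which the equations leave implicit) uses the parameter values quoted above:

```python
T_MIN, T_MAX = 12.0, 30.0        # suitable temperature window (deg C)
P_MIN, P_MAX = 20.0, 100.0       # precipitation thresholds (mm)
A = 4.0 / (T_MAX - T_MIN) ** 2   # normalization: maximum survival rate is unity

def precip_factor(p):
    # piecewise-linear factor G[P] of Eq. (G)
    if p < P_MIN:
        return 0.0
    if p > P_MAX:
        return 1.0
    return (p - P_MIN) / (P_MAX - P_MIN)

def survival_rate(temp, precip, tree):
    # Eq. (lambda); outside the window the quadratic would be negative,
    # so we clip it to zero (an assumption, not stated in the paper)
    if temp <= T_MIN or temp >= T_MAX:
        return 0.0
    return A * tree * (temp - T_MIN) * (T_MAX - temp) * precip_factor(precip)

# at the midpoint temperature, saturated precipitation, and full tree density,
# the survival rate reaches its maximum of (numerically) unity
print(survival_rate(21.0, 120.0, 1.0))
```

One can check that $\lambda$ stays in $[0,1]$ for any temperature, precipitation, and tree density in $[0,1]$, as the normalization constant $A$ guarantees.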
(\[eq:rhobar\]), $\omega_{t-1}({\bf r}', {\bf r})$ controls how many eggs are laid at ${\bf r}$ by adult midges at ${\bf r}'$, and we assume the following form: $$\label{eq:omega} \omega_t({\bf r}', {\bf r}) = g_m \frac{\psi_t ({\bf r}) }{ \sum_{{\bf r}'' \in {\cal N}({\bf r}')} \psi_t ({\bf r}'')},$$ which implies that the adult midges at ${\bf r}'$ move to their local neighbors and lay eggs in proportion to the local tree density. In this process, we assume that all midges have the same reproductive capability; thus, the growth parameter of a midge, $g_m$ in Eq. (\[eq:omega\]), is set to a uniform value of $5.0$ in our simulations. Various pest species, including PNGMs, need host vegetation for reproduction. Once invaded by parasites, hosts become weak due to the lack of water and nutrients and can face fatal situations in harsh circumstances. The pine tree is the host vegetation for PNGMs, and once infected by PNGMs, it loses its leaves. Our equations, Eqs. (\[eq:rho\_tr\])-(\[eq:omega\]), so far describe how the midge density and the egg density evolve in time. The last ingredient in our model mimics the effect of midges on pine trees. In the absence of midges, the tree density increases at a constant rate every year until the maximum possible density is approached. On the other hand, a large midge density reduces the tree density. The density $\psi_t({\bf r})$ of pine trees is thus assumed to evolve in time as $$\label{eq:psi_tr} \psi_t({\bf r}) = \min[(1+g_p) \psi_{t-1}({\bf r}) - \rho_t({\bf r}), \psi_{\rm max} ] ,$$ where $g_p$ is the growth rate of pine trees per year and is set to 0.05 in the present work, and $\min(x,y) = x$ for $x < y$ and $\min(x,y) = y$ otherwise. The tree density in reality cannot grow indefinitely, which is reflected in the condition $\psi_t({\bf r}) \leq \psi_{\rm max}$. 
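One full update of the coupled densities, Eqs. (\[eq:rho\_tr\]), (\[eq:rhobar\]), (\[eq:omega\]), and (\[eq:psi\_tr\]), can be sketched as follows. This is a simplified illustration, not the code used for the paper's simulations: the 5-km dispersal disc is replaced by a small square Chebyshev neighborhood, the survival rate is passed in as a precomputed array, the previous tree density is used inside the exponential, and the tree density is clipped at zero; all of these are our assumptions where the equations leave the ordering implicit.

```python
import numpy as np

G_M = 5.0       # midge growth parameter g_m
G_P = 0.05      # tree growth rate g_p per year
PSI_MAX = 1.0   # maximum tree density

def step(eggs, psi, lam, radius=1):
    """Advance (egg density, tree density) by one year; return (rho, eggs, psi)."""
    # Eq. (rho_tr): adults hatch from the eggs left by the previous generation
    rho = lam * eggs * np.exp(-eggs / np.maximum(psi, 1e-12))
    # Eqs. (rhobar), (omega): adults disperse to neighboring sites and lay
    # eggs in proportion to the local tree density
    new_eggs = np.zeros_like(eggs)
    n, m = rho.shape
    for i in range(n):
        for j in range(m):
            sl = np.s_[max(i - radius, 0):i + radius + 1,
                       max(j - radius, 0):j + radius + 1]
            total = psi[sl].sum()
            if total > 0.0:
                new_eggs[sl] += G_M * rho[i, j] * psi[sl] / total
    # Eq. (psi_tr): trees grow, are damaged by midges, and saturate at PSI_MAX
    # (clipping at zero is our assumption)
    new_psi = np.clip((1.0 + G_P) * psi - rho, 0.0, PSI_MAX)
    return rho, new_eggs, new_psi

# seed a small uniform-forest lattice with eggs at a single source point
eggs = np.zeros((11, 11)); eggs[5, 5] = 0.005
psi = np.ones_like(eggs)
lam = np.full_like(eggs, 0.8)      # hypothetical uniform survival rate
for _ in range(5):
    rho, eggs, psi = step(eggs, psi, lam)
print(rho.sum(), eggs.sum(), psi.min())
```

Note that, as long as trees exist everywhere in the neighborhood of every occupied site, the egg-laying step conserves the total: the new egg density sums to $g_m$ times the total adult density, as Eq. (\[eq:omega\]) implies.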
When the tree density at a location approaches the upper limit $\psi_{\rm max}$, which is set to 1.0 in the present work, the tree density stops increasing. Simulation Procedure {#subsec:preset} -------------------- To summarize our model for the spatio-temporal evolution of PNGMs, we implement the growth dynamics of the midge density in Eq. (\[eq:rho\_tr\]) with the survival rate in Eq. (\[eq:lambda\]), which depends on climate conditions such as temperature and precipitation. The previous generation of midges lays eggs, depending on the local tree density, as in Eq. (\[eq:rhobar\]), and the growth of trees is affected by the midge density as in Eq. (\[eq:psi\_tr\]). Although our dynamics must be a rough approximation of reality, we have tried to use the known results reported in the existing literature. One advantage of our model is that we can try various future climate conditions through the survival rate in Eq. (\[eq:lambda\]), which depends on temperature and precipitation. The time evolution of the midge density, the egg density, and the tree density is governed by the framework presented in Sec. \[subsec:model\]. We use two future climate scenarios, RCP4.5 and RCP8.5, for this purpose and download the climate data from the Climate Information Portal [@CIP]. The original data have a temporal resolution of one month and a spatial resolution of 1km, and we average over time, as explained in Sec. \[subsec:model\], to get the temperature $T_t({\bf r})$ and the precipitation $P_t({\bf r})$. In our model simulations, we use a two-dimensional square lattice with a lattice constant of 1km. We use $\psi_{t=0}({\bf r}) = 1.0$ as the initial condition for the tree density. Of course, pine trees are not spread uniformly across South Korea; there are more trees in mountainous areas. For simplicity only, we use a uniform distribution of the tree density. 
We choose three harbor cities, Incheon, Mokpo, and Busan, and assume that the initial PNGM outbreak starts there. The reason is that the midge population enters Korea mostly through timber imported from abroad. We assume that all midge eggs hatch into midge worms, but that only part of the worm population becomes adult midges. Accordingly, we use the uniform initial condition $\rho_{t=0}({\bf r}) = 0$ everywhere, but the initial egg density $\overline{\rho}_{t=0}({\bf r}_i) = 0.005$ is assigned to one single lattice point ${\bf r}_i$ depending on which harbor city is the source position. Once the initial conditions for $\rho$, $\overline{\rho}$, and $\psi$ are given with all parameter values fixed, we first calculate the evolution of the midge density at each site from Eq. (\[eq:rho\_tr\]). In this procedure, the climate conditions and the density of trees are used to calculate the midge density \[see Eqs. (\[eq:lambda\]) and (\[eq:G\])\]. We then update the density of eggs and the density of trees by using Eq. (\[eq:rhobar\]) and Eq. (\[eq:psi\_tr\]), respectively. Adult midges are assumed to lay their eggs following Eq. (\[eq:omega\]). We assume that the initial outbreak starts from one of the above-mentioned three harbor cities in year 2021 and investigate the spreading pattern in later years until 2050. Results {#sec:results} ======= ![image](fig1.eps){width="95.00000%"} In the 1920s, PNGMs began to spread in Korea. The source locations of the spread were harbor cities like Incheon, Mokpo, and Busan, where imported pine timber was unloaded. If the timber was infected by PNGM worms, the initial spread of the PNGM could start from these harbor cities. We first study how PNGMs would spread in the future if infected timber were imported to one of the harbor cities in Korea. The key ingredient is the future climate conditions, and we use the predictions based on the scenarios RCP4.5 and RCP8.5. 
We emphasize that our main goal here is to study how future climate changes can alter the pattern of parasite spread in general. Even though our model parameters are fitted for the PNGM, the model can easily be generalized to other parasitic insects. In our simulations of the spread, we assume that the midge spread starts in year 2021 and compute how the midge density evolves in time until 2050. For simplicity, we set $t=0$ for the year 2021; thus, 2050 corresponds to $t=29$. For the initial condition of the midge density, we use $\rho_0({\bf r}) = 0.0$ for all locations; i.e., no adult midges exist at $t=0$. For the initial values of the egg density, we set $\overline{\rho}_0({\bf r}) = 0.005$ for one lattice point in the harbor city area where the initial spread occurs. Of course, any location, except for this source lattice point, is assigned $\overline{\rho}_0({\bf r}) = 0$. For the location of the source of the midge spread, we pick the three cities, Incheon, Mokpo, and Busan, based on what happened in the 1920s. We then simulate our model for all six ($= 3 \times 2$) different cases, i.e., three source locations and two climate conditions. The spread of midges turns out to be rather slow: even after almost 30 years from the initial outbreak, midges are found not to have spread across the country, as shown in Fig. \[fig:PNGM2050\], where we display midge densities in (a) and (c) together with tree densities in (b) and (d). For simplicity, we plot the results for the three different outbreak locations (Incheon, Mokpo, and Busan) together in one map of Korea for a given future climate scenario, RCP4.5 for (a) and (b) and RCP8.5 for (c) and (d). As is clearly seen, the midge density propagates in space like a wavefront, and the three density waves starting from the three locations have not yet met. In our model, midges tend to migrate to sites where the climate conditions are better and where more trees exist. 
The midge density propagates to locations far from the initial outbreak site as time goes on. Infected pine trees tend to die \[see Eq. (\[eq:psi\_tr\])\], and it takes a long time for pine trees to recover their initial level of tree density. Consequently, the locations where the midge densities are high tend to form a circle-like structure, the radius of which expands in time. When pine trees die of PNGM infection, PNGMs can hardly flourish afterward because of the lack of trees in which PNGM worms can survive. We thus expect the PNGM density eventually to spread from the initial source to the whole country, after which the midges become extinct. Without PNGMs, pine trees grow back and approach a suitable level of tree density. The circular shape of the wavefront of the midge density originates from the isotropy of the model. In reality, however, pine trees do not grow everywhere, and there exist regions where pine trees grow better or worse. We believe that our model has room for further improvement by making the maximum tree density $\psi_{\rm max}$ depend on ${\bf r}$. ![(a) The radius of spread in Eq. (\[eq:Rit\]) increases as time proceeds. The three harbor cities, Incheon, Mokpo, and Busan, are used as the initial outbreak sites of the PNGM spread, and the climate condition is based on the RCP4.5 scenario. After an early stage in which the radius increases very slowly, the radius kicks off beyond an outbreak point that depends on the source city. The radii of spread for Mokpo and Busan show sudden changes in the spreading speed at around 2026, whereas Incheon shows a much later outbreak point of 2038. (b) The midge density profile at different times, 2025, 2035, and 2045, for the RCP4.5-based climate condition. The distance $d$ from the source position, with Busan as the source city, is used for the horizontal axis. 
[]{data-label="fig:radius_density"}](fig2.eps){width="45.00000%"} We define the spread radius $R_i(t)$ for the midge density when the spread has started from the source city $i$ at position ${\bf r}_i$ as $$\label{eq:Rit} R_i(t) \equiv \sum_{\bf r} | {\bf r} - {\bf r}_i | \rho_t({\bf r}) .$$ For convenience, we choose ${\bf r}_i$ as the one lattice point in the harbor city $i \in \{$[Incheon, Mokpo, Busan]{}$\}$ where the initial condition $\overline{\rho}_0({\bf r}_i) = 0.005$ has been assigned. At $t=0$, $\rho_0({\bf r})$ is localized to this source location; thus, $R_i(t=0) = 0$. As time evolves, $\rho_t({\bf r})$ extends to cover a broader region; thus, $R_i(t)$ increases. Figure \[fig:radius\_density\](a) displays the temporal change of $R_i(t)$ for each source city when the climate scenario RCP4.5 is used. In the initial stage of the PNGM spread, the radius first increases very slowly and then suddenly kicks off after some years. The radius increases at a rate of 1km/year for the first five to six years with Mokpo or Busan as the source location. A similar stagnation is observed for Incheon as the source location, but it lasts for a much longer time, until around the year 2038, with a slower rate of increase of 0.6km/year for the spread radius. Interestingly, a similar stagnation behavior has been observed in reality in the past spread pattern of PNGMs [@Park_thesis]. In Ref. , the radius of spread was shown to kick off after a stagnation period, and the outbreak point coincides well with the instant when the population approaches the carrying capacity. When the midge density is small, the damage from parasites is soon repaired, which allows the midges to lay eggs uniformly around their current positions. In this case, the spatial midge density exhibits a unimodal shape centered at the source location. As the midge population grows further, the ruined tree density cannot recover in a short time, but keeps decreasing gradually due to the parasitizing midges. 
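The spread radius of Eq. (\[eq:Rit\]) is simply a density-weighted sum of distances from the source lattice point and can be computed in a few lines (an illustrative sketch with our own names; distances are in units of the 1km lattice constant):

```python
import numpy as np

def spread_radius(rho, source):
    # Eq. (Rit): sum over the lattice of |r - r_i| weighted by the midge density
    ii, jj = np.indices(rho.shape)
    dist = np.hypot(ii - source[0], jj - source[1])
    return float((dist * rho).sum())

rho = np.zeros((7, 7))
rho[3, 3] = 1.0                       # all density at the source
print(spread_radius(rho, (3, 3)))     # -> 0.0
rho[3, 5] = 0.5                       # extra density two lattice sites away
print(spread_radius(rho, (3, 3)))     # -> 1.0
```

With all density at the source the radius vanishes, and density placed farther away increases it, in line with the qualitative behavior described above.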
In such a case of high midge density, the diffusion of midges exhibits a bias toward the outgoing direction, where more trees still exist. Consequently, the location of the maximum midge density drifts away from the original source location, and the radius of spread increases faster. When the midge spread starts from Incheon, the initial stage of stagnation is found to be longer than it is for the other source cities, Mokpo and Busan. The difference appears to originate from the different climate conditions, the precipitation from March to May in particular. We observe that, according to the RCP4.5-based prediction, the precipitation around Incheon gradually increases after year 2035, so that $G(P)$ in Eq. (\[eq:G\]) takes almost its maximum value of unity in the years 2039 and 2040. This then leads to an increase in the midge density, which soon reduces the tree density. When the tree density becomes smaller around the Incheon area, the midges migrate outward, and the radius in Fig. \[fig:radius\_density\](a) kicks off at around year 2040. After the early stage of migration, the locations where the midge densities are larger begin to move away from the source position. When this happens, the speed of diffusion is observed to become faster. The rates of increase of the radius for Mokpo and Busan as source cities after the kickoff are about 3.5km/year and 3.8km/year, respectively, while the corresponding value for Incheon is 4.7km/year. In Fig. \[fig:radius\_density\](b), the density of midges with Busan as the source city is shown as a function of the distance from Busan at three different instants, 2025, 2035, and 2045. In the early stage of diffusion, at year 2025, the density shows a Gaussian-like shape with its maximum at the position of the source city (at zero distance), as expected. After the early stage of diffusion, the Gaussian-like shape with its maximum at the origin begins to change, and the maximum shifts away from the origin. 
This is due to the decrease in the tree density near the origin, which drives midges away in the outward direction, as explained above. Of course, once midges move away from the origin, the tree density near the source location can recover. However, the midges do not come back to the origin because they would have to cross the harsh region in which the tree density is lower. ![The radius of spread $R_i(t)$ in Eq. (\[eq:Rit\]) for Incheon as the source city with climate conditions based on RCP4.5 and RCP8.5. The sudden increase in the radius of spread occurs earlier for RCP8.5 than for RCP4.5, which can be explained by the sufficiently large value of the precipitation in RCP8.5 (see text for details). []{data-label="fig:RCP_45_85"}](fig3.eps){width="45.00000%"} We next investigate how different RCP scenarios affect the prediction of the midge density in the future. We use Incheon as the initial source position and compare the radii of spread obtained from RCP4.5 and RCP8.5 in Fig. \[fig:RCP\_45\_85\]. For RCP4.5, the radius kicks off at around 2039, while it kicks off at around 2031 for RCP8.5. As explained above, the sudden change of the slope in Fig. \[fig:RCP\_45\_85\] at these outbreak points originates from the competition between two growth rates, that of the trees and that of the midges. The difference between the outbreak points of RCP4.5 and RCP8.5 can be explained as follows: The precipitation in the northern part of South Korea increases significantly when greater greenhouse gas emissions are assumed. The increased precipitation yields an increase in the midge population \[see Eqs. (\[eq:lambda\]) and (\[eq:G\])\], which shifts the outbreak point to an earlier time. We note that the difference between the outbreak points of RCP4.5 and RCP8.5 is almost indiscernible for the other source cities, Mokpo and Busan. This suggests that the climate conditions in the southern cities are already suitable for fast growth of the midge density. 
Conclusion ========== We have proposed a spatio-temporal spread model of an insect species, such as the pine needle gall midge (PNGM), that parasitizes trees. The model includes the density of adult midges, the density of midge eggs, and the density of pine trees as dynamic variables and describes how their dynamics are coupled to each other. One of the main research goals of the present paper has been to investigate how future climate conditions can alter the spread pattern of insects. For this purpose, we have used climate predictions based on two standard scenarios, RCP4.5 and RCP8.5, with different future estimates of greenhouse gas emissions. We have downloaded the future climate data for each scenario, which contain grid data for temperature and precipitation with a temporal resolution of one month. The density of PNGMs is calculated by a growth equation similar to the logistic equation, in which we use a climate-dependent survival rate. The adult midges are modeled to lay eggs within a distance of 5km of their current positions, and the density of eggs depends on the local tree density. From our extensive simulations, we have observed that the radius of spread as a function of time has different rates of increase in the early and the late stages of diffusion: In the early stage, in which the midge density is still small, the diffusion of midges is slow, whereas after the outbreak point of the spread, the diffusion becomes much faster. We emphasize that a similar change in the diffusion speed has been found in field research [@Park_thesis]. For a variety of different pests, including the PNGM investigated in the present paper, the survival rate of the parasites is greatly influenced by the climate conditions: More midges survive to become adults when both the precipitation and the temperature are sufficiently high. 
We have observed in our simulations that when Mokpo and Busan (located along the southern coast of South Korea) are used as the source sites of the spread, the outbreak point occurs at a much earlier time than it does when Incheon (located on the mid-western coast of the Korean Peninsula) is the source site. We have investigated the reason for the difference between the northern and the southern source sites and have found that it originates from the different climate conditions, the precipitation in particular. The southern regions of South Korea have sufficiently high precipitation for the midge population to grow fast, which then significantly reduces the tree density near the source site. When this happens, midges tend to migrate to regions with a high tree density and thus spread faster in the radially outward direction from the source site. Consequently, the outbreak point for the faster diffusion occurs at an early stage if the precipitation near the source site is sufficiently high. As the greenhouse gas emissions are increased, the RCP scenario changes from RCP4.5 to RCP8.5, and future climate variables such as temperature and precipitation are greatly influenced. The two different climate conditions, RCP4.5 and RCP8.5, have been found to result in almost the same spread pattern, with a hardly recognizable change of the outbreak points, for Mokpo and Busan as source cities. In contrast, RCP8.5 yields a much earlier outbreak point than RCP4.5 for Incheon. We interpret this difference in diffusion behavior between the southern and the northern source cities in South Korea as originating from the difference in the precipitation in the two regions. In the southern part of South Korea, both RCP4.5 and RCP8.5 predict sufficiently high precipitation, and the spread kicks off early. However, for the northern part of South Korea, the precipitation predicted by RCP8.5 is higher than that predicted by RCP4.5, leading to a difference in the outbreak point for the case of Incheon as the source site. 
In more detail, an average precipitation higher than 50mm has been observed to induce an increase in the midge density over the successive $2 \sim 3$ years, which reduces the host tree density; the midges then spread faster in the outward direction, seeking regions with a high tree density. Such a condition of high precipitation near Incheon has been found to be well satisfied at early times in RCP8.5, but not in RCP4.5, shifting the outbreak point to an earlier time for RCP8.5. We believe that our spatio-temporal growth model for PNGM spread can easily be generalized to similar parasitic insects. The present model can also be generalized to mimic the spreading of parasites through road traffic. For the case in which infected timber is moved to other cities through ground transportation, changing our model equation to cover such long-distance spread is straightforward. Acknowledgment ============== This study was carried out with the support of the Research Program of the Rural Development Administration, Republic of Korea (Project No. PJ01156304). References ========== [10]{} B. Meerson and P. V. Sasorov, Phys. Rev. E [**83**]{}, 011129 (2011). M. G. Clerc, D. Escaff, and V. M. Kenkre, Phys. Rev. E [**72**]{}, 056217 (2005). C. Escudero, J. Buceta, F. J. de la Rubia, and K. Lindenberg, Phys. Rev. E [**69**]{}, 021908 (2004). J. D. Murray, [*Mathematical Biology*]{}, (Springer, Berlin, 1993). A. Tsoularis and J. Wallace, Math. Biosci. [**179**]{}, 21 (2002). E. E. Holmes, M. A. Lewis, J. E. Banks, and R. R. Veit, Ecology [**75**]{}, 17 (1994). H. Caswell and R. Etter, B. Math. Biol. [**61**]{}, 625 (1999). Y. Harada and Y. Iwasa, Res. Popul. Ecol. [**36**]{}, 237 (1994). D. E. Hiebeler and B. R. Morin, J. Theor. Biol. [**246**]{}, 136 (2007). M. Ikegamia, D. F. Whighamb, and M. J. A. Wergera, Ecol. Model. [**234**]{}, 51 (2012). C. J. Rhodes and R. M. Anderson, J. Theor. Biol. [**180**]{}, 125 (1996). H. Xuan, L. Xu, and L. Li, Ann. Oper. Res. [**168**]{}, 81 (2009). 
T.-S. Chon, S. D. Lee, and B.-Y. Lee, New Phys.: Sae Mulli [**38**]{}, 184 (1998). M. A. Brockhurst, A. Buckling, V. Poullain, and M. E. Hochberg, Evolution [**61**]{}, 1238 (2006). S. D. Lee, S. Park, Y.-S. Park, Y.-J. Chung, B.-Y. Lee, and T.-S. Chon, Ecol. Model. [**203**]{}, 157 (2007). G. Szabó and C. Tőke, Phys. Rev. E [**58**]{}, 69 (1998). P.-P. Li, J. Kea, L.-L. Jiang, X.-Z. Yuan, and Z. Lin, Eur. Phys. J. B [**86**]{}, 168 (2013). J.-H. Cho and S.-H. Lee, J. Korean Phys. Soc. [**64**]{}, 746 (2014). K. N. Park and J. S. Hyun, J. Korean For. Soc. [**61**]{}, 20 (1983). Y. Son, J.-H. Lee, and Y.-J. Chung, J. Appl. Entomol. [**131**]{}, 674 (2007). B. Y. Lee, T. Miura, and Y. Hirashima, ESAKIA [**23**]{}, 119 (1985). T.-S. Chon, Y.-S. Park, J.-M. Kim, B.-Y. Lee, Y.-J. Chung, and Y. Kim, Environ. Entomol. [**29**]{}, 1208 (2000). Y.-S. Park and Y.-J. Chung, Forest Ecol. Manag. [**222**]{}, 222 (2006). K.-W. Ahn, H.-S. Lee, D.-C. Seo, and S.-H. Shin, J. Korean Soc. Geosp. Inf. Syst. [**6**]{}, 105 (1998) (in Korean). IPCC IS92 Scenarios <http://sedac.ipcc-data.org/ddc/is92/>. N. Nakićenović, J. Alcamo, G. Davis, *et al.*, in [*Special Report on Emissions Scenarios: A special report of Working Group III of the Intergovernmental Panel on Climate Change*]{}, edited by N. Nakićenović and R. Swart, (Cambridge University Press, Cambridge, 2000). M. Collins, R. Knutti, J. Arblaster, J.-L. Dufresne, T. Fichefet, P. Friedlingstein, X. Gao, W. J. Gutowski, T. Johns, G. Krinner, M. Shongwe, C. Tebaldi, A. J. Weaver, and M. Wehner, in [*Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change*]{}, edited by T. F. Stocker, D. Qin, G.-K. Plattner, M. Tignor, S. K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex, and P. M. Midgley, (Cambridge University Press, Cambridge, 2013), Chap. 12. F. Giorgi, E. Coppola, F. Solmon, L. Mariotti, *et al.*, Clim. Res. 
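The spreading mechanism summarized above (precipitation-triggered growth, host-tree depletion, and outward diffusion toward regions where trees remain dense) can be sketched as a minimal one-dimensional toy model. Every name and parameter value below is an illustrative assumption, not the model actually fitted in this work:

```python
import numpy as np

# Toy 1D sketch: precipitation above ~50 mm boosts midge growth for a few
# seasons, local tree density falls, and midges diffuse outward toward
# regions with a high tree density. All parameters are invented.
L, T = 50, 30
midge = np.zeros(L); midge[L // 2] = 1.0      # initial infestation at the center
tree = np.ones(L)                              # normalized host-tree density
precip = np.full(T, 40.0); precip[:3] = 60.0   # wet early years trigger the outbreak

D, r_low, r_high = 0.2, 0.1, 0.8
for t in range(T):
    r = r_high if precip[t] > 50.0 else r_low        # precipitation-dependent growth rate
    growth = r * midge * tree                         # growth limited by remaining host trees
    lap = np.roll(midge, 1) + np.roll(midge, -1) - 2 * midge
    midge = np.clip(midge + growth + D * lap, 0.0, None)  # growth + outward diffusion
    tree = np.clip(tree - 0.05 * midge, 0.0, 1.0)         # midges deplete the host trees
```

After the run, the infestation front has moved several cells outward from the center while the central tree density has dropped, reproducing the qualitative picture described above.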
Show HN: No more “idea websites” – here's a problem solving website - aaronz8 https://www.thinkero.us/app

====== minimaxir

The first comment in your previous submission is accurate: [https://news.ycombinator.com/item?id=6077395](https://news.ycombinator.com/item?id=6077395)

Ideas are just ideas. Actually solving problems requires a different skillset that can't be accomplished with the "magic" of crowdsourcing. Calling this a "problem solving website" is incredibly misleading.

~~~ chrisjleaf

The entire point of the site is that solving issues comes first. That's why our algorithms rank ideas based on the importance of the issues they solve. This way the ideas that solve the most important problems rise to the top of the list. That is what makes Thinkerous different from other "idea list" websites: we target problems directly, while idea lists don't.

~~~ minimaxir

> _That's why our algorithms rank ideas based on the importance of the issues they solve._

What? Ranking by the number of likes/upvotes (which is the method I see when looking at the Hot view) is not "ranking ideas based on the importance of the issues they solve." I know CMU teaches actual algorithms for this type of ranking, because I went there. :P

~~~ chrisjleaf

Perhaps you're underestimating the complexity of the algorithm. The linking between ideas and issues is not superficial; the ranking of an idea is based directly upon the ranking of the issues it is linked to. Issues are a bit simpler in that they are mostly ranked based on activity related to them. However, doesn't it make sense that an issue that's popular would reflect the magnitude of its importance?

~~~ minimaxir

> _However, doesn't it make sense that an issue that's popular would reflect the magnitude of its importance?_

Not exactly. You're confusing correlation with causation.
On sites with Reddit-like rankings, the number of votes a submission receives is one of the primary _causes_ of the subsequent number of comments/activity on those submissions. And "importance" is not necessarily causal to submission ranking (aggregators are weird).

~~~ onedev

Hey, not sure if you're aware, but this isn't TechCrunch... heh.

~~~ minimaxir

Commenting is universal. :)

------ UXDork

Where are the "idea websites?"

~~~ fiatjaf

Good question.

------ fiatjaf

Here's something I've always wanted to build, done much better than I had imagined it. Here are some thoughts:

- Flagging can be good. Some things just don't fit, so they must be weeded out.

- Maybe you could make it easier to give negative feedback. Some buttons (customizable?) to just click: "I don't see this as a real problem" or "This is more a dream of yours than a problem". Could also help with flagging.

- For ideas, there could be an option to quickly point out the potential common problems an idea can suffer from (and also count the "votes" for these), such as: insufficient market; network effect; solution looking for a problem; this already exists; this has been tried.

- Pull requests. Let people at least try to modify others' ideas, add information, etc. Maybe this will help people feel like their feedback is valuable.

~~~ aaronz8

Thanks! We'll definitely look into 1-click feedback and flagging. Regarding pull requests - we're in the process of adding a "team" functionality, but maybe an intermediate level of involvement (between joining the team and just commenting) would be useful. Thanks again!

------ markbnj

At a glance many of the submissions seem to be problem statements, or simply maxims. Is that the idea? To list and rank problems? Or is the idea to have some sort of solution in mind as well?

~~~ chrisjleaf

We differentiate issues and ideas with icons (an umbrella and a light bulb, respectively).
Issues are typically statements of fact regarding problems that people have seen or experienced themselves. These issues are meant to be tied to an idea which solves them, and the two are separated on purpose (problems and solutions don't have one-to-one relationships). Ideas play an important role on the site - to provide a call to action to actually solve a problem. For this reason, if a user submits an issue they have a solution for, we encourage them to submit the problem and solution separately so other people can contribute other related submissions.

------ AndrewKemendo

_sharing ideas that solve impactful issues_

Except ideas don't solve impactful issues; groups of people applying time and resources to a problem solve impactful issues. I think it would be more accurate to say that the service is for "sharing ideas for how to have an impact on complex issues." I'm still not really sure what it is supposed to do.

~~~ aaronz8

Sorry, that is a bit unclear. Essentially, we want to make it easier for people to find issues that many people can identify with, and in the process, show people what some potential solutions are. By filtering out some of the noise, the ideas that could solve widespread issues can get more resources put behind them and hopefully a greater chance of succeeding. This might make more sense in the enterprise software realm, but we saw some interest in the startup community as well, so we wanted to test it out.

------ hkon

Looks like an idea site to me?

~~~ aaronz8

Maybe [https://www.thinkero.us/app/?type=issue](https://www.thinkero.us/app/?type=issue) will change your mind?

------ bsbechtel

Is this like Quirky for everything (not just products)?

------ aikah

Can you give some instances of what you call "idea websites"?

~~~ aaronz8

Any website where ideas and voting on ideas are the key focus of the platform, and which doesn't go one step further to find the underlying problems that these ideas are trying to solve.
Almost all innovation management and feature-request platforms are like this. For example:

[http://www.ideastorm.com/](http://www.ideastorm.com/)
[https://ideas.sap.com/](https://ideas.sap.com/)
[https://success.salesforce.com/ideaSearch](https://success.salesforce.com/ideaSearch)
[http://engagetacoma.mindmixer.com/activity](http://engagetacoma.mindmixer.com/activity)

------ perks

Getting a 504 Gateway Time-out over here

~~~ AnkhMorporkian

That's the first problem to be solved.
PROBLEM TO BE SOLVED: To provide a test tube stirrer capable of shaking the bottom of each test tube to effectively agitate the contents of the test tubes.

SOLUTION: A test tube stirrer comprises: a test tube rack 3 provided with a plurality of test tube holes arranged in the lengthwise and lateral directions for holding the test tubes 5 upright; test tube holding bodies 16, each provided under a test tube hole and made of an elastic member that holds the upper-end outer periphery of a test tube in a free-end state in which the lower end of the test tube in the hole is freely movable; oscillation means 52 arranged below the test tube rack 3 and provided with a plurality of junctions 51 arranged in a lateral line, the junctions 51 joining with the bottoms of all test tubes in a lateral line, line by line, when the oscillation means 52 is raised; forward and backward motion driving means M1, 32, 33, and 34 for moving the oscillation means forward and backward along the longitudinal direction of the test tubes 5 arranged in the lengthwise and lateral directions; and vertical motion driving means M2, 37, and 38 for raising and lowering the oscillation means 52, moved by the forward and backward motion driving means, with respect to the bottoms of the test tubes.

COPYRIGHT: (C)2012,JPO&INPIT
FIELD OF THE INVENTION

The present invention relates to a photoelectric-conversion-layer-stack-type color solid-state imaging device in which incident light of one of the three primary colors is detected by a photoelectric conversion layer laid on a semiconductor substrate and incident light of the other two colors that has passed through the photoelectric conversion layer is detected by photoelectric conversion elements (photodiodes) formed in the semiconductor substrate. In particular, the invention relates to a photoelectric-conversion-layer-stack-type color solid-state imaging device which is high in color separation performance and efficiency of light utilization.

BACKGROUND OF THE INVENTION

In single-plate color solid-state imaging devices, as typified by CCD image sensors and CMOS image sensors, three or four kinds of color filters are arranged in mosaic form on an arrangement of photoelectric conversion pixels. With this structure, color signals corresponding to the color filters are output from the respective pixels, and a color image is generated by performing signal processing on those color signals. However, color solid-state imaging devices in which color filters are arranged in mosaic form are low in efficiency of light utilization and in sensitivity, because ⅔ of the incident light is absorbed by the color filters when primary-color filters are used. The fact that each pixel produces a color signal of only one color also leads to low resolution; in particular, false colors appear noticeably.
To solve the above problems, imaging devices having a structure in which photoelectric conversion layers are stacked in three layers on a semiconductor substrate on which signal reading circuits are formed are being studied and developed (refer to JP-T-2002-502120 (the symbol "JP-T" as used herein means a published Japanese translation of a PCT patent application) (corresponding to U.S. Pat. No. 6,300,612) and JP-A-2002-83946, for example). For example, these imaging devices have a pixel structure in which photoelectric conversion layers which generate signal charges (electrons or holes) in response to blue (B) light, green (G) light, and red (R) light are laid in this order from the light incidence surface. Furthermore, these imaging devices are provided with signal reading circuits capable of independently reading, on a pixel-by-pixel basis, the signal charges generated by the photoelectric conversion layers. In imaging devices having the above structure, almost all of the incident light is photoelectrically converted into signal charges to be read, and hence the efficiency of utilization of visible light is close to 100%. Furthermore, since each pixel produces color signals of the three colors (R, G, and B), these imaging devices can generate good, high-resolution images (no false colors appear noticeably) with high sensitivity. In the imaging device disclosed in JP-T-2002-513145 (U.S. Pat. No. 5,965,875), triple wells (photodiodes) for detecting optical signals are formed in a silicon substrate, and signals having different spectra (i.e., having peaks at B (blue), G (green), and R (red) wavelengths in this order from the surface) are obtained so as to correspond to different depths in the silicon substrate. This utilizes the fact that the penetration depth of incident light into the silicon substrate depends on the wavelength. Like the imaging devices disclosed in JP-T-2002-502120 (corresponding to U.S. Pat. No.
6,300,612) and JP-A-2002-83946, this imaging device can produce good, high-resolution images (no false colors appear noticeably) with high sensitivity. However, in the imaging devices disclosed in JP-T-2002-502120 (corresponding to U.S. Pat. No. 6,300,612) and JP-A-2002-83946, it is necessary that photoelectric conversion layers be formed in order in three layers on a semiconductor substrate and that vertical interconnections be formed which transmit the R, G, and B signal charges generated in the respective photoelectric conversion layers to the signal reading circuits formed on the semiconductor substrate. As such, these imaging devices are difficult to manufacture and are costly because of low production yields. On the other hand, the imaging device disclosed in JP-T-2002-513145 (U.S. Pat. No. 5,965,875) is configured in such a manner that blue light is detected by the shallowest photodiodes, red light is detected by the deepest photodiodes, and green light is detected by the intermediate photodiodes. However, the shallowest photodiodes also generate photocharges when receiving green or red light, as a result of which the spectra of the R, G, and B signals are not separated sufficiently from each other. Therefore, to obtain true R, G, and B signals, it is necessary to perform addition/subtraction processing on the output signals of the photodiodes, which means a heavy computation load. Another problem is that the addition/subtraction processing lowers the S/N ratio of the image signal. The imaging device disclosed in JP-A-2003-332551 (FIGS. 5 and 6) has been proposed as one capable of solving the problems of the imaging devices of JP-T-2002-502120 (corresponding to U.S. Pat. No. 6,300,612), JP-A-2002-83946 and JP-T-2002-513145 (U.S. Pat. No. 5,965,875). This imaging device is a hybrid type of the imaging devices of JP-T-2002-502120 (corresponding to U.S. Pat. No. 6,300,612) and JP-A-2002-83946 and the imaging device of JP-T-2002-513145 (U.S. Pat. No.
5,965,875) and is configured as follows. Only a photoelectric conversion layer (one layer) that is sensitive to green (G) light is laid on a semiconductor substrate and, as in the conventional image sensors, incident light of blue (B) and red (R) that has passed through the photoelectric conversion layer is detected by two sets of photodiodes that are formed in the semiconductor substrate so as to be arranged in its depth direction. Since it is sufficient to form only one photoelectric conversion layer (one layer), the manufacturing process is simplified and cost increases or reductions in yield can be avoided. Furthermore, since green light, which is in the intermediate wavelength range, is absorbed by the photoelectric conversion layer, the separation between the spectral characteristics of the photodiodes for blue light and those for red light formed in the semiconductor substrate is improved, whereby the color reproduction performance is improved and the S/N ratio is increased. Although the color separation performance is improved, the above-described hybrid imaging device is still insufficient for taking high-quality color images because it attains red/blue separation relying on the wavelength dependence of the penetration depth of light into the semiconductor substrate.

SUMMARY OF THE INVENTION

An object of the present invention is to provide a hybrid photoelectric-conversion-layer-stack-type color solid-state imaging device having high color separation performance.
The invention provides a photoelectric-conversion-layer-stack-type color solid-state imaging device characterized by comprising a semiconductor substrate; a photoelectric conversion layer laid over the semiconductor substrate, for absorbing light of a first color among three primary colors and thereby generating photocharges; plural charge storage regions arranged in a surface layer of the semiconductor substrate, for storing the photocharges; plural first photodiodes arranged in the surface layer of the substrate, for detecting mixed light of second and third colors among the three primary colors that has passed through the photoelectric conversion layer and for storing generated photocharges; plural second photodiodes arranged in the surface layer of the semiconductor substrate, for detecting light of the second color of the mixed light that has passed through the photoelectric conversion layer and for storing generated photocharges; color filter layers formed over the second photodiodes, for interrupting light of the third color; and signal reading units for reading out amounts of the charges stored in the charge storage regions and the photodiodes, respectively. The photoelectric-conversion-layer-stack-type color solid-state imaging device according to a preferable embodiment of the invention is characterized in that the color filter layers are made of an inorganic material. The photoelectric-conversion-layer-stack-type color solid-state imaging device according to a preferable embodiment of the invention is characterized in that the inorganic material is amorphous silicon or polysilicon. The photoelectric-conversion-layer-stack-type color solid-state imaging device according to a preferable embodiment of the invention is characterized in that average transmittance of the inorganic material for light of the third color is less than or equal to ½ of that for light of the second color. 
The photoelectric-conversion-layer-stack-type color solid-state imaging device according to a preferable embodiment of the invention is characterized in that the first color is green, the second color is red, and the third color is blue. The photoelectric-conversion-layer-stack-type color solid-state imaging device according to a preferable embodiment of the invention is characterized in that each of the signal reading units comprises a MOS transistor or a charge-coupled device. The photoelectric-conversion-layer-stack-type color solid-state imaging device according to a preferable embodiment of the invention is characterized by further comprising microlenses for converging incident light on top portions of the photodiodes, respectively. According to the invention, the color separation performance is improved by the color filter layers, and the efficiency of light utilization is increased because no color filter layers are formed over the first photodiodes. Where the color filter layers are made of an inorganic material, an existing semiconductor integrated circuit manufacturing technology can be used for forming the layers under the photoelectric conversion layer, whereby the production yield can be increased.

DESCRIPTION OF SYMBOLS

10: Photoelectric-conversion-layer-stack-type color solid-state imaging device
12: Pixel
12a: Green (G) and red (R) detecting pixel
12b: Green (G) and magenta (Mg) detecting pixel
21: Semiconductor substrate
22: p-type well layer
23: n-type region
24: Surface p-type layer
25: Charge storage region
27: Transparent insulating layer
28: Pixel electrode layer
29, 53: Vertical interconnection
30: Green-sensitive photoelectric conversion layer
31: Common electrode layer (counter electrode layer)
33: Color filter layer made of inorganic material
41, 42, 43, 44: Signal reading circuit
52: Color filter layer made of organic material

DETAILED DESCRIPTION OF THE INVENTION

Embodiment 1

One embodiment of the present invention will be hereinafter described with reference to the drawings.
FIG. 1 schematically shows the surface of a photoelectric-conversion-layer-stack-type color solid-state imaging device 10 according to the embodiment of the invention. In the photoelectric-conversion-layer-stack-type color solid-state imaging device 10 according to the embodiment, plural pixels 12 are arranged in square lattice form on a photodetecting surface of a substrate 11. The pixels 12 are classified into two kinds of pixels, 12a and 12b. The pixels 12a and the pixels 12b are formed on the photodetecting surface in checkered form. Alternatively, rows (or columns) of pixels 12a arranged in stripes and rows (or columns) of pixels 12b arranged in stripes are arranged alternately. A row-selection scanning section 13 is provided adjacent to the left sideline of the substrate 11, and an image signal processing section 14 is provided adjacent to the bottom sideline. A control section 15 for generating timing pulses and control signals is provided at a proper position. Signal reading circuits (not shown) are provided for each pixel 12. The signal reading circuits for each pixel 12 are connected to the row-selection scanning section 13 via a reset signal line 16 and a row-selection signal line 17, and are connected to the image signal processing section 14 via two column signal lines 18 and 19. For example, the signal reading circuits may be transistor circuits having a 3-transistor or 4-transistor structure as used in existing CMOS image sensors. Likewise, the row-selection scanning section 13 and the image signal processing section 14 may be the same as used in existing CMOS image sensors.
Although the photoelectric-conversion-layer-stack-type color solid-state imaging device 10 of the illustrated example incorporates the MOS signal reading circuits, it may employ such a configuration that signal charges produced by the respective pixels 12 are read out by charge transfer channels (vertical charge transfer channels VCCDs and a horizontal charge transfer channel HCCD) as existing CCD (charge-coupled device) solid-state imaging devices do. FIG. 2 is a schematic sectional view of the two kinds of pixels 12a and 12b that are enclosed by a broken-line rectangle II in FIG. 1. A p-type well layer 22 is formed in a surface layer of an n-type semiconductor substrate 21 (denoted by symbol 11 in FIG. 1), and an n-type semiconductor layer (n-type region) 23 for detecting incident light is formed in a surface portion of the p-type well layer 22 in each of the pixels 12a and 12b. As a result, pn junctions, that is, photodiodes (photoelectric conversion elements), are formed. A surface p-type layer 24 for dark current suppression is formed on the surface side of each n-type semiconductor layer 23, as in known CCD image sensors and CMOS image sensors. A small-area charge storage region 25 is formed between each adjoining pair of n-type semiconductor layers 23 in the p-type well layer 22. Each charge storage region 25 is shielded from light by a shield layer (not shown) so that no light shines on it. A transparent insulating layer 27 is laid on the surface of the semiconductor substrate 21, and a transparent pixel electrode layer 28, which is divided so as to correspond to the respective pixels 12, is laid on the surface of the transparent insulating layer 27. Each section of the pixel electrode layer 28 is connected to the corresponding charge storage region 25 via a vertical interconnection 29.
A photoelectric conversion layer 30 which is sensitive to green light is laid on the pixel electrode layer 28 so as to cover all the pixels, and a transparent common electrode layer 31 (a counter electrode layer opposed to the pixel electrode layer 28) is laid on the photoelectric conversion layer 30. A transparent protective layer 32 is laid as a top layer. For example, each of the transparent electrode layers 28 and 31 may be an ITO layer or a thin metal layer. The common electrode layer 31 may be such that a single layer covers all the pixels, such that it is divided so as to correspond to the respective pixels and the sections are connected to each other by wiring, or such that it is divided into columns or rows which are connected to each other by wiring. The photoelectric conversion layer 30 may be made of an organic semiconductor material, Alq, or a quinacridone compound, or may be formed by laying nanosilicon having an optimum grain size. Any of these materials is laid on the pixel electrode layer 28 by sputtering, a laser ablation method, printing, spraying, or the like. The photoelectric-conversion-layer-stack-type color solid-state imaging device 10 according to the embodiment is characterized in that a polysilicon layer (or amorphous silicon layer) to serve as a color filter layer 33 is buried in that portion of the transparent insulating layer 27 which corresponds to the pixel 12a, and no color filter layer is provided in that portion of the transparent insulating layer 27 which corresponds to the pixel 12b. The color filter layer 33 is separated from the nearby vertical interconnection 29. This is because polysilicon is conductive, and hence signal charge flowing through the vertical interconnection 29 could flow into the color filter layer 33 if the color filter layer 33 were in contact with the vertical interconnection 29.
For example, the color filter layer 33 is made of a material having such a transmittance curve as to cut blue light and transmit red light but cut infrared light (see FIG. 3A), or of a material having such a transmittance curve as to cut blue light and transmit red light as well as infrared light (see FIG. 3B). The transmittance curve as shown in FIG. 3B is obtained if the color filter layer 33 is made of polysilicon or amorphous silicon. It is preferable that the average transmittance for red light R be two times or more higher than that for blue light B. If the selection ratio of red light to blue light is smaller than 2, the color reproduction performance or the S/N ratio may be lowered due to color contamination. In this embodiment, two signal reading circuits are provided for each pixel 12. Although the signal reading circuits are formed on the semiconductor substrate 21 by using an integrated circuit technology, the details of their formation process will not be described because it is the same as that of known CMOS image sensors. A first signal reading circuit 41 and a second signal reading circuit 42 are provided for the pixel 12a. The input terminal of the signal reading circuit 41 is connected to the charge storage region 25 of the pixel 12a, and its output terminal is connected to a column signal line 18. The input terminal of the signal reading circuit 42 is connected to the n-type semiconductor layer 23 of the pixel 12a, and its output terminal is connected to a column signal line 19. A third signal reading circuit 43 and a fourth signal reading circuit 44 are provided for the pixel 12b. The input terminal of the signal reading circuit 43 is connected to the charge storage region 25 of the pixel 12b, and its output terminal is connected to a column signal line 18. The input terminal of the signal reading circuit 44 is connected to the n-type semiconductor layer 23 of the pixel 12b, and its output terminal is connected to a column signal line 19.
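The selection-ratio criterion described above (average red transmittance at least twice the average blue transmittance) can be checked numerically. The transmittance values below are invented for illustration, not measured data for a real polysilicon layer:

```python
import numpy as np

# Hypothetical sampled transmittance curve of a color filter layer:
# wavelengths (nm) paired with transmittance values. All numbers are
# illustrative assumptions.
wavelengths = np.array([450, 470, 490, 620, 640, 660])
transmittance = np.array([0.10, 0.12, 0.15, 0.55, 0.60, 0.65])

blue = transmittance[wavelengths < 500].mean()  # average over the blue band
red = transmittance[wavelengths > 600].mean()   # average over the red band
selection_ratio = red / blue

# The embodiment prefers red transmittance >= 2x blue transmittance;
# below that, color contamination degrades reproduction and S/N.
meets_criterion = selection_ratio >= 2.0
```

With these sample values the ratio is well above 2, so such a filter would satisfy the stated preference.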
When light coming from an object shines on the photoelectric-conversion-layer-stack-type color solid-state imaging device 10 having the above configuration, green light of the incident light is absorbed by the sections of the photoelectric conversion layer 30 that correspond to the pixels 12a and 12b, and signal charges generated in the photoelectric conversion layer 30 flow into the charge storage regions 25 corresponding to the pixels 12a and 12b via the vertical interconnections 29. Blue light and red light of the incident light pass through the photoelectric conversion layer 30. In each pixel 12a, the blue light and the red light that have passed through the photoelectric conversion layer 30 enter the transparent insulating layer 27, but the shorter-wavelength blue light is absorbed by the polysilicon layer 33 and does not reach the n-type semiconductor layer 23. That is, the signal charge that is produced through photoelectric conversion by the n-type semiconductor layer 23 and stored there corresponds to the light quantity of the red light. In each pixel 12b, since no color filter layer is formed in the transparent insulating layer 27, both blue light and red light enter the n-type semiconductor layer 23 and are photoelectrically converted, and the generated charge is stored there. The quantity of this signal charge corresponds to the quantity of red/blue mixed light, that is, magenta (Mg) light. Signals corresponding to the charges stored in the charge storage regions 25 and the n-type semiconductor layers 23 of the pixels 12a and 12b are read by the signal reading circuits 41-44, processed by the image signal processing section 14, and then output as image data. Since the output image data are green (G) image data, red (R) image data, and magenta (Mg: red R plus blue B) image data, image data of the three primary colors (R, G, and B) can easily be obtained by signal processing.
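The signal processing mentioned above reduces, per pixel, to one subtraction: the magenta signal (R plus B) from a pixel 12b minus the red signal from a neighboring pixel 12a leaves the blue component. A minimal sketch with made-up signal values (array names and the use of co-sited rather than interpolated R values are simplifying assumptions):

```python
import numpy as np

# Illustrative per-pixel signal levels (arbitrary units):
G  = np.array([[100.0, 102.0], [ 98.0, 101.0]])  # from the photoelectric conversion layer
R  = np.array([[ 40.0,  41.0], [ 39.0,  42.0]])  # pixels 12a (behind the filter layer 33)
Mg = np.array([[ 70.0,  72.0], [ 69.0,  71.0]])  # pixels 12b (no filter): red + blue

B = Mg - R  # magenta minus red leaves the blue component

rgb = np.stack([R, G, B], axis=-1)  # three-primary image data per pixel
```

In a real sensor the R and Mg samples come from different pixel sites, so the R value would first be interpolated to the 12b site before the subtraction; the arithmetic is the same.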
In this embodiment, each pixel 12b is not provided with a color filter for cutting red light, and blue image data B is obtained by signal processing. This is to increase the efficiency of light utilization. Where color separation is performed by color filters, light that is cut by the color filters does not contribute to photoelectric conversion and hence is wasted, though the color separation performance is high. In contrast, in this embodiment, the color filter is provided for only one of the two kinds of pixels, which minimizes the amount of light that is rendered useless. In addition, since light of green (G), which is the intermediate color among the three primary colors R, G, and B, is separated by the photoelectric conversion layer 30, the material of the color filters 33 for separating red light R from magenta light Mg (red light R plus blue light B) can be selected easily. Alternatively, the color filters 33 may be made of a material which transmits blue light and cuts red light. Finely controlling the material components of the color filters 33 enables another configuration in which the photoelectric conversion layer 30 separates red light R and the color filters 33 cut blue light B or green light G of the cyan light Cy (blue light B plus green light G) that has passed through the photoelectric conversion layer 30. A further configuration is enabled in which the photoelectric conversion layer 30 separates blue light B and the color filters 33 cut red light R or green light G of the yellow light Ye (red light R plus green light G) that has passed through the photoelectric conversion layer 30. Exemplary materials of the photoelectric conversion layer for separating red light are inorganic materials such as GaAlAs and Si and organic materials such as ZnPc (zinc phthalocyanine)/Alq3 (quinolinol aluminum complex).
Exemplary materials of the photoelectric conversion layer for separating blue light are inorganic materials such as InAlP and organic materials such as C6/PHPPS (coumarin 6 (C6)-doped poly(m-hexoxyphenyl)phenylsilane). Where the photoelectric conversion layer 30 is made of an inorganic material, it is preferable to use electrons as signal charge because the electrons of the hole-electron pairs generated through absorption of light by the photoelectric conversion layer 30 have the higher mobility. This is because carriers having high mobility are low in the probability of extinction during transport as well as in the probability of being captured by trap states. On the other hand, where the photoelectric conversion layer 30 is made of an organic semiconductor material, it is preferable to use holes as signal charge because holes have the higher mobility. In this embodiment, the color filter layers 33 are made of an inorganic material such as amorphous silicon or polysilicon. Although in FIG. 2 the transparent insulating layer 27 in which the color filter layers 33 are buried is shown as a single layer, in practice it is a multilayer structure consisting of a silicon nitride layer and a silicon oxide layer, for example, and wiring layers for connecting the signal reading circuits 41-44 to the n-type semiconductor layers and the charge storage regions 25 are formed between those layers. The color filter layers 33 may be formed by sputtering or evaporation in forming one of those layers. Where the color filter layers 33 are made of an inorganic material as in the embodiment, an existing semiconductor integrated circuit manufacturing technology can be used as it is from the start to the step of forming the pixel electrode layer 28 (see FIG. 2) on the surface of the semiconductor substrate 21 (to the step of forming the protective layer 32 in the case where the photoelectric conversion layer 30 is made of an inorganic material), and the vertical interconnections 29 can be formed easily.
As a result, the production yield of the photoelectric-conversion-layer-stack-type color solid-state imaging device can be increased and hence its manufacturing cost can be reduced. In general, color filter layers made of an inorganic material can be made thinner than color filter layers made of an organic material because the former exhibit a larger light absorption coefficient. As a result, the overall height of the solid-state imaging device can be reduced and hence shading can be suppressed. The device can thus be miniaturized easily. In the above embodiment, no microlenses are provided. However, microlenses (top lenses) may be provided on those portions of the protective layer which are located in the pixels. Alternatively, microlenses (inner lenses) may be provided beneath those portions of the photoelectric conversion layer which are located in the pixels. The microlenses serve to converge incident light on the photodetecting surfaces of the n-type semiconductor layers. FIG. 4 is a schematic sectional view of a hybrid photoelectric-conversion-layer-stack-type color solid-state imaging device according to a second embodiment of the invention. The photoelectric-conversion-layer-stack-type color solid-state imaging device according to this embodiment has approximately the same configuration as that according to the first embodiment shown in FIG. 2 and is different from the latter only in that the color filter layers are made of an organic material. Therefore, the same layers etc. as shown in FIG. 2 are given the same symbols as the corresponding ones and will not be described below. Only different layers etc. will be described. In the photoelectric-conversion-layer-stack-type color solid-state imaging device according to this embodiment, a smooth layer made of an organic material is formed between the transparent insulating layer and the pixel electrode layer.
In each pixel, a color filter layer for transmission of red light which is made of an organic material is formed in the smooth layer. The color filter layers can be formed by using a color filter material and a forming method that are usually employed in manufacturing an existing CCD image sensor or CMOS image sensor. In this embodiment, an existing semiconductor integrated circuit manufacturing technology is used from the start to the step of forming the transparent insulating layer, and the organic material layers are formed thereon. Therefore, the overall thickness of the imaging device is larger than in the first embodiment. However, this embodiment is suitable for cost reduction because an existing manufacturing method and materials can be used. It is noted that vertical interconnections for connecting the vertical interconnections to the pixel electrode layer need to be formed in the organic material layer. Each of the above-described embodiments makes it possible to manufacture, at a low cost, a photoelectric-conversion-layer-stack-type color solid-state imaging device which is high in color separation performance and efficiency of light utilization. The hybrid photoelectric-conversion-layer-stack-type color solid-state imaging device according to the invention can take color images that are superior in color reproduction performance and high in sensitivity and resolution because the color separation performance of the plural photodiodes formed in the semiconductor substrate is improved. With the additional advantage that it can be manufactured at a low cost, it is useful when used in place of conventional CCD image sensors or CMOS image sensors. This application is based on Japanese Patent application JP 2006-139111, filed May 18, 2006, the entire content of which is hereby incorporated by reference, the same as if set forth at length. BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 schematically shows the surface of a photoelectric-conversion-layer-stack-type color solid-state imaging device according to a first embodiment of the present invention. FIG. 2 is a schematic sectional view of a part enclosed by a broken-line rectangle II in FIG. 1. FIGS. 3A and 3B show incident light wavelength vs. transmittance curves of a color filter layer shown in FIG. 2. FIG. 4 is a schematic sectional view of a photoelectric-conversion-layer-stack-type color solid-state imaging device according to a second embodiment of the invention.
Tips on How to Write Essays – 4 Easy Essay Writing Tips If you have been consistently struggling with how to write essays, this post will show you four easy steps to writing consistently high quality essays. The key things you need to focus on are the essay subject, the opening paragraph, the overall structure of the essay, and your essay content and analysis. This article also gives links to two excellent resources on essay writing. 1. Picking a Topic for Your Essay The first step when working out how to write essays is to decide what your topic or theme will be. Knowing the theme of your essay allows you to focus your efforts. You can immerse yourself in finding out all there is to know about a particular topic without any risk of getting distracted. If possible, choose a subject you are interested in, because this will make writing the essay much easier. Even if you are given a topic, try to find a good 'angle' on it that has some interest to you. Good resources for essay material include the internet, printed books or e-books, magazines and even interviews with people versed in your chosen subject. Once you have found your topic, the next thing to pay attention to is the structure of your essay. 2. Structuring Your Essay Part of learning how to write essays is to understand the importance of structure. Structure helps the reader to understand where your essay is going and what you are trying to tell them. Think of the structure as a 'framework' around which you can build your writing. Firstly, while researching your topic, write down the main points in dot point form, using only a few words – these will form the main structure for the essay. It doesn't matter much at this stage what order they are in – you can sort that out later.
Under each key point, jot down 2 or 3 sub-points that go into a bit more detail about that specific aspect of your essay. Once you have this basic structure in place, you can start thinking about how many words to write for each part of your essay. 3. Word Count in Your Essay This is a very important aspect of how to write essays. Let's say you have 2000 words to write for the whole essay and 5 main points, with 2 sub-points for each. Keep in mind that you will also need an introductory and a concluding paragraph, so that makes it about 12 paragraphs in total. This means you will need to write about 150-200 words per paragraph or sub-point. Once you learn to break it down in this way, you can see that learning how to write essays is not overwhelming – all you have to do is write a short piece of text for each of the ideas you are presenting. Once you have the framework written down in note form, with the number of words for each paragraph, you can start to work on the details of your essay content. 4. Essay Content and Analysis Look at what you have read for each of the main points of your essay and work out how you can talk about it in your own words, or in a more informative way. Look at your essay research notes and decide for yourself whether the writers have made claims which, in your opinion, lack substance. If necessary, compare and contrast different claims and write down which one is more valid, in your opinion, and explain why to your reader. Remember that each paragraph needs to lead into the next. This 'smooths out' the structure and helps the essay to 'flow' better.
Analysis can be a challenging thing to handle when you are first starting to learn how to write essays, but it is worth persevering with because it will make your essays much more valuable and readable. Summary In this article you have seen that there are only four steps to writing a great essay. Learning to write essays is a crucial part of improving your communication skills. It will be time well spent, and there are many tools available to make your task much simpler.
https://jiao186.com/2022/08/13/tips-on-how-to-write-essays-4-easy-dissertation-writing-tips-on-reddit/
Melting and Boiling Every type of matter can be found in one state or another. These states, which can also be referred to as phases, are solids, liquids, and gases. But how does matter change from one state to another? All you need to do is apply some heat and pressure. The amount of heat/pressure that needs to be applied to change a state of matter depends on what kind of matter you are working with. Some elements or matter don’t need a great deal of heat to change phases, while others require enormously high temperatures in order to move into the next state. Standard State If you ever hear a scientist mention the phrase “standard state”, you should know this refers to the state an element or piece of matter is most likely to be in at normal temperatures. Pretty much every element’s standard state is that of a solid. This is true for substances such as lead or tin. There are only two elements with a liquid standard state, which are mercury and bromine. Hydrogen, nitrogen, and oxygen are all gases in their standard state. Melting As you have read before, the only way to change the state of matter for any given substance is through heat. If you are talking about changing a solid to a liquid, then this is a process called melting. Every element and piece of matter in the world has a melting point, though some have a much higher melting temperature than others. When an element is heated, its molecules gain more energy and begin to move about more freely. This changes the element from a solid to a liquid. Boiling If an element or piece of matter is going to be changed from a liquid to a gas, even more heat will need to be applied. This will allow the substance to reach what is known as its boiling point. Once the substance has been heated enough to reach its boiling point, the molecules will begin to move about even more freely, turning the substance from a liquid to a gas. Every type of liquid has a boiling point, though some are much higher than others.
Evaporation Another way that a liquid can be changed to a gas is through a process known as evaporation. This type of change doesn’t need high temperatures, since it will not change all of the liquid at the same time. Instead, if a liquid is left under a small heat source, the molecules at the surface will heat up, allowing them to become a gas, but leaving the rest of the liquid intact. Freezing and Condensation This process can also work in reverse. If heat is removed from a substance, it can be changed from a liquid to a solid or a gas to a liquid. When a gas is changed to a liquid, the process is called condensation. If a liquid is turned back into a solid, it is called freezing. The temperatures for this to occur depend on what type of matter or element is being changed.
https://www.coolkidfacts.com/melting-boiling/
“Vasculitis” is the term used for several disorders related to blood vessel inflammation. Vasculitis is classified as an autoimmune disease, because blood vessels become inflamed when your immune system attacks and damages your arteries and veins. Vasculitis Symptoms and Diagnosis Symptoms of vasculitis may vary depending on where in your body the condition develops. The most common symptoms include: - Fever - Fatigue - Generalized pain and/or headaches - Shortness of breath and/or coughing (if lungs are affected) - Skin rashes - Numbness or pain in the hands or feet. Vasculitis can affect anyone, but it is not a common condition: only one to two new cases per 50,000 people are found each year, according to the Society for Vascular Surgery. There are three types of vasculitis: small-, medium- or large-vessel disease. Research has shown that both genetic factors (inheritance) and environmental factors (such as bacterial, viral and fungal infections) may be causes of vasculitis. Vasculitis is not always easy to diagnose, as symptoms could be a sign of many other conditions. If your doctor thinks you may have vasculitis, based on your symptoms and a physical exam, you will be referred to a vascular surgeon for further examination and testing. Tests that can help determine whether you have vasculitis, and the type, include: - Blood tests to check for inflammation and blood proteins that are common in vasculitis patients - Biopsy to check tissue that shows signs of being affected by vasculitis - Angiography, an X-ray of the blood vessels - MRI or CT scans Vasculitis can occur from time to time (episodic) or last a lifetime. Patients tend to have episodes, or flares, over several years at a time. Vasculitis Treatment at BIDMC The aim of treating vasculitis is to reduce inflammation by suppressing parts of the immune system. When vasculitis is more severe, treatment is also aimed at preventing damage to vital organs. 
Treatments include: - Medications, such as glucocorticoids (steroids) or newer immunosuppressant drugs can be prescribed to help suppress parts of the immune system - Plasmapheresis (blood filtration to remove specific proteins) - Angioplasty, if vessels are blocked - Bypass surgery, if an artery is severely blocked or narrowed Learn More Our expert vascular surgeons offer care and management, as well as a full range of treatment options, for vasculitis.
https://www.bidmc.org/conditions-and-treatments/heart-and-vascular/vasculitis
We observe the success of artificial neural networks in simulating human performance on a number of tasks, such as image recognition and natural language processing. However, there are limits to state-of-the-art AI that separate it from human-like intelligence. Humans can learn a new skill without forgetting what they have already learned, and they can improve their activity and gradually become better learners. Today’s AI algorithms are limited in how much previous knowledge they are able to keep through each new training phase and how much they can reuse. In practice, this means that you need to build a new algorithm for each new specific task. There is a domain called AGI where it may be possible to find solutions to this problem. Artificial general intelligence (AGI) describes research that aims to create machines capable of general intelligent action. “General” means that one AI program performs a number of different tasks and the same code can be used in many applications. We must focus on self-improvement techniques, e.g. reinforcement learning, and integrate them with deep learning and recurrent networks.
https://medias.ircam.fr/xea8923
Mix 4 cups flour, 1 cup Parmesan, yeast and salt in large bowl. Add water and oil; stir until mixture forms soft dough. Knead on lightly floured surface 5 minutes or until smooth and elastic, gradually adding remaining flour. - Step two Cut dough into 4 pieces. (Each piece is enough for a 12-inch pizza.) Spray 1 dough piece with cooking spray; cover loosely with plastic wrap. Let stand 15 minutes. See tip for how to store remaining dough for another use. - Step three TO MAKE ONE PIZZA: Heat oven to 400°F. Roll 1 dough piece (after standing 15 minutes) into 12-inch round on lightly floured surface; place on pizza pan. Bake 6 minutes. Top with tomato sauce, drained tomatoes, mozzarella, pepperoni and 1 tablespoon Parmesan. - Step four Bake 15 minutes or until mozzarella is melted and edge of crust is golden brown. Cut pizza into 6 slices. Tips How to Store Remaining Pizza Dough: Place each piece of remaining dough in separate freezer-weight resealable plastic bags sprayed with cooking spray; refrigerate up to 2 days or freeze up to 3 months before using to make additional pizzas. If freezing the dough, thaw overnight in refrigerator before using.
https://www.readyseteat.com/recipes-Cheesy-Pepperoni-Parmesan-Pizza-6941
Paving slabs are available in a range of shapes, sizes and designs. In this case we used 440x440x40*mm slabs in a sand colour and spaced 50mm apart. Laying a straight path using slabs is relatively easy – all you do is measure the total length to be covered (and total width if you are going to lay a single row or double up), add the spacing between each slab, and calculate the number of slabs you will need. If, however, you wish to build a curve into the path then it becomes a little more complicated – but not overly so. This is how you calculate the number of slabs required, let’s say, for a 90° curve. Let’s say each slab is 440mm along each edge and you wish to have their inner corners touching. You want a reasonably wide curve with a radius of, say, 3m. The circumference of a circle is 2πr (or πd) where π is pi – 3.142 – and ‘r’ is the radius. You can also calculate the circumference by multiplying the diameter by pi – hence πd, where ‘d’ is the diameter – they both amount to exactly the same. So, the circumference of a circle with a radius of 3m is 2x3x3.142 = 18.852m. Round that off to 18.85m, and then divide by 4 – because you want only ¼ of the circle. That comes to 4.713m, or 4713mm. Divide that by 440 = 10.7 slabs. What you want then, is to use 10 slabs, for a total length of the ¼ circle being 4400mm, or 4.4m. Now work it backwards… 4.4×4=17.6m. 17.6÷2÷3.142=2.8 – hence your required radius must now be 2.8m. And that’s all it is – simple substitution. We wanted the path to line up with the existing paved patio, so we used a builder’s line, tightly stretched along the junction between the slabs, to align the path. When laying the first slab, make sure that its surface is exactly level with the paving it joins – any difference in levels could cause someone to trip. Add and remove soil from under the slab and constantly check the slab is level on all axes.
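If you would like to check the arithmetic, or rerun it for different slab sizes and radii, the calculation above is easy to script. Here is a quick Python sketch using the same 440mm slabs and quarter-circle layout described above (the function names are just for illustration):

```python
import math

def slabs_on_quarter_circle(radius_mm, slab_edge_mm):
    """How many slabs fit along a quarter-circle arc of the given radius."""
    arc_mm = 2 * math.pi * radius_mm / 4   # a quarter of the circumference 2*pi*r
    return arc_mm / slab_edge_mm

def radius_for_slab_count(num_slabs, slab_edge_mm):
    """Work backwards: the radius that makes the arc a whole number of slabs."""
    arc_mm = num_slabs * slab_edge_mm      # total arc length, e.g. 10 x 440mm
    return arc_mm * 4 / (2 * math.pi)      # invert circumference/4 = 2*pi*r/4

# A 3m radius gives about 10.7 slabs of 440mm...
print(round(slabs_on_quarter_circle(3000, 440), 1))        # ≈ 10.7
# ...so round down to 10 slabs and adjust the radius to about 2.8m.
print(round(radius_for_slab_count(10, 440) / 1000, 1))     # ≈ 2.8
```

The small difference from the article's 2.8m figure (2801mm here versus 17.6÷2÷3.142) is only because the script uses the full value of pi rather than 3.142.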
Ensure that once you have it level, you lift it carefully and add soil to any areas where it seems the underside of the slab might be ‘hanging in midair’ (it is important that the base of each slab is fully supported to avoid any cracking when a person steps on it). This is how the slab is aligned with the line – note that it does not touch the line (if it does, the line will be pushed to one side and your path will be crooked). The first three slabs laid… note the use of 50mm timber offcuts as spacers. When removing soil, remove as little as possible; digging too deep means that the soil used to refill the space is uncompacted and will subside in time – as will the slab on it. Getting the radius correct. Here we used stout rope with very limited stretchability, but if you do not have any handy then use a length of chain. Avoid using string… in pulling it taut you can stretch it and your perfect curve will end up ovoid – something to avoid, pardon the pun. The way to get the curve is to push your pointer into the soil at the last corner of the straight section, then walk back, as close as you can to 90° from the straight section, and push your pivot into the soil. Now go back to your pointer and walk it around, using the point to score a line in the soil to where you want the curve to end. You should now have a perfect curve. Lay your slabs out along the curved line you have just scored. When satisfied, dig out their respective positions and place them in their final positions – taking care to do all the levelling and filling and backfilling detailed above. The completed first part of the path and curve in the background. …Taking the time to also take some of the lawn-grass runners you dug out to give the gaps an early start to filling with grass. After the curve, this path straightened out again. It met the existing brick paving on the other side at an angle of about 60°. Just step this way… a view back to the patio area.
Note that the soil excavated from the project has been used as topsoil for the surrounding lawn. Estimated time: Depends on the size of the project… length of path, number of slabs to be laid etc. These materials are available at Selected Mica Stores. To find out which is your closest Mica and whether or not they stock the items required, please go to our store locator HERE, find your store and call them. If your local Mica does not stock exactly what you need they will be able to order it for you or suggest an alternative product or a reputable source.
http://www.mica.co.za/paving-a-step-in-the-right-direction/
--- abstract: 'Natural language object retrieval is a highly useful yet challenging task for robots in human-centric environments. Previous work has primarily focused on commands specifying the desired object’s type such as “scissors" and/or visual attributes such as “red," thus limiting the robot to only known object classes. We develop a model to retrieve objects based on descriptions of their usage. The model takes in a language command containing a verb, for example “Hand me something to *cut*," and RGB images of candidate objects and selects the object that best satisfies the task specified by the verb. Our model directly predicts an object’s appearance from the object’s use specified by a verb phrase. We do not need to explicitly specify an object’s class label. Our approach allows us to predict high level concepts like an object’s utility based on the language query. Based on contextual information present in the language commands, our model can generalize to unseen object classes and unknown nouns in the commands. Our model correctly selects objects out of sets of five candidates to fulfill natural language commands, and achieves an average accuracy of 62.3% on a held-out test set of unseen ImageNet object classes and 53.0% on unseen object classes *and* unknown nouns. Our model also achieves an average accuracy of 54.7% on unseen YCB object classes, which have a different image distribution from ImageNet objects. We demonstrate our model on a KUKA LBR iiwa robot arm, enabling the robot to retrieve objects based on natural language descriptions of their usage[^1]. We also present a new dataset of 655 verb-object pairs denoting object usage over 50 verbs and 216 object classes[^2].' 
author: - | Thao Nguyen, Nakul Gopalan, Roma Patel, Matt Corsaro, Ellie Pavlick, Stefanie Tellex\ {thaonguyen, romapatel, matthew\_corsaro, ellie\_pavlick}@brown.edu\ nakul\[email protected], [email protected] bibliography: - 'ref.bib' title: | Robot Object Retrieval\ with Contextual Natural Language Queries --- Introduction ============ A key bottleneck in widespread deployment of robots in human-centric environments is the ability for non-expert users to communicate with robots. Natural language is one of the most popular communication modalities due to the familiarity and comfort it affords a majority of users. However, training a robot to understand open-ended natural language commands is challenging since humans will inevitably produce words that were never seen in the robot’s training data. These unknown words can come from paraphrasing such as using “saucer” instead of “plate,” or from novel object classes in the robot’s environments, for example a kitchen with a “rolling pin” when the robot has never seen a rolling pin before. ![Our robot receives segmented RGB images of the objects in the scene and a natural language command such as “Give me something to contain," and correctly retrieved the Minion (yellow cartoon character)-shaped container.[]{data-label="fig:robot1"}](figures/robot_0.jpg){width="1.0\linewidth"} We aim to develop a model that can handle open-ended commands with unknown words and object classes. As a first step in solving this challenging problem, we focus on the natural language object retrieval task — selecting the correct object based on an indirect natural language command with constraints on the functionality of the object. More specifically, our work focuses on fulfilling commands requesting an object for a task specified by a verb such as “Hand me a box cutter to **cut**." 
Being able to handle these types of commands is highly useful for a robot agent in human-centric environments, as people usually ask for an object with a specific usage in mind. The robot would be able to correctly fetch the desired object for the given task, such as cut, without needing to have seen the object, a box cutter for example, or the word representing the object, such as the noun “box cutter." In addition, the robot has the freedom to substitute objects as long as the selected object satisfies the specified task. This is particularly useful in cases where the robot cannot locate the specific object the human asked for but found another object that can satisfy the given task, such as a knife instead of a box cutter to cut. There has been much prior work on natural language object retrieval [@krishnamurthy2013jointly; @hu2016natural; @chen2018text2shape; @cohen2019grounding] and similar areas such as image captioning and image retrieval [@patterson2012sun; @mao2014deep; @vinyals2015show; @xu2015show]. However, previous work primarily focuses on natural language commands that either specify the object class such as “scissors" or describe the object’s visual attributes such as “red," “curved," “has handle,” and cannot handle unknown object classes or words. Our work, in contrast, anchors the desired object to its usage (specified by a verb) and reasons about the verb to handle unknown objects and nouns on-the-fly. Our work demonstrates that an object’s appearance provides sufficient signals to predict whether the object is suitable for a specific task, without needing to explicitly classify the object class and visual attributes. Our model takes in RGB images of objects and a natural language command containing a verb, generates embeddings of the input language command and images, and selects the image most similar to the given command in embedding space. 
The selected image should represent the object that best satisfies the task specified by the verb in the command. We train our model on natural language command-RGB image pairs. The evaluation task for the model is to retrieve the correct object from a set of five images, given a natural language command. We use ILSVRC2012 [@ILSVRC15] images and language commands generated from verb-object pairs extracted from Wikipedia for training and evaluation of our model. Our model achieves an average retrieval accuracy of 62.3% on a held-out test set of unseen ILSVRC2012 object classes and 53.0% on unseen object classes *and* unknown nouns. Our model also achieves an average accuracy of 54.7% on unseen YCB object classes. We also demonstrate our model on a KUKA LBR iiwa robot arm, enabling the robot to retrieve objects based on natural language commands, and present a new dataset of 655 verb-object pairs denoting object usage over 50 verbs and 216 object classes. Related Work ============ Natural language object retrieval refers to the task of finding and recovering an object specified by a human user using natural language. The computer vision and natural language grounding communities attempt to solve object retrieval by locating or *grounding* the object specified in an image using natural language [@krishnamurthy2013jointly; @hu2016natural]. @krishnamurthy2013jointly use a dataset of RGB images with segmented objects and their natural language descriptions to learn the grounding of words to objects in the image by exploiting repeated occurrences of segmented objects within images, along with their descriptions in natural language. @hu2016natural use a similar approach albeit using deep neural networks to avoid parsing and feature construction by hand. @chen2018text2shape learn joint embeddings of language descriptions and colored 3D objects for text-to-shape retrieval and generation of colored 3D shapes from natural language. 
@cohen2019grounding learn joint embeddings of language descriptions and segmented depth images of objects for object retrieval within instances of the same object class. Our work, in contrast, learns an embedding across object classes based on their suitability for a given task specified using natural language. Our object embeddings are not conditioned on the output class of objects, but on the relevancy of the object for the specified task. This, therefore, allows us to retrieve objects based on descriptions of their usage and importantly allows handling of unknown nouns and unseen object classes. Another relevant line of work is image captioning and image retrieval, which also aims to jointly model a natural language sequence and image content. The SUN scene attribute dataset [@patterson2012sun] maps images to attributes such as “hills," “houses," “bicycle racks," “sun," etc. Such understanding of image attributes provides scene category predictions and high level scene descriptions, for example “human hiking in a rainy field." Methods based on recurrent neural networks (RNNs) [@mao2014deep; @vinyals2015show; @xu2015show] trained to directly model the probability distribution of generating a word given previous words and an image have shown to be effective in image caption generation, natural language image retrieval, and image caption retrieval. Our work is most similar to earlier attribute based image retrieval work. However, these models are trained on attributes that are directly specified and not inferred from indirect task based queries. Implicit task-based object attributes are harder to learn but are also more general than directly specifiable object visual attributes, and are useful for a natural language object retrieval system to have in its toolbox. Also related to our work are methods on interactive object retrieval [@whitney2017reducing] and language grounding [@shridhar2018interactive; @hatori2018interactively]. 
These methods perform inference over dialogue to deduce the right object based on the specifications provided by the human user. They specifically use known object corpora and directly specify the object attributes. Our work, in contrast, retrieves objects based on contextual information about the task being specified by the natural language command. We are not performing inference over dialogue, but it is a natural next step for our work, where our joint embedding can prove useful in the case of novel objects. Similar to previous work, our work aims to learn joint object representations from visual and language information using RNNs. However, previous work primarily focuses on natural language commands specifying the object type and visual attributes, such as “scissors," “red," “curved," “has handle." In contrast, our work focuses on fulfilling commands requesting an object for a task specified by a verb, for example “Hand me something to **cut**." To handle such commands, a possible approach is to rely on accurate classification of the object type and visual attributes and an external knowledge base to query for valid verb-object or verb-attribute pairings. However, that approach would be limited to known objects and words. Our work, on the other hand, bypasses explicit classification of object type and attributes, and directly maps object use that is specified by the verb to object appearance that is captured by the image. Our work uses the context of the verb to implicitly infer object attributes that are required for the task, and can generalize to unseen object classes and unknown nouns in the language commands. Approach ======== To fulfill natural language commands requesting an object for a task specified by a verb, our model generates embeddings for the language command and candidate objects and selects the object that is closest to the command in embedding space.
Our model is trained using pairs of natural language object requests containing verbs and ground truth objects that satisfy the requests. We describe our model and data collection process in detail in Sections \[sec:model\] and \[sec:data\_collection\].

Model {#sec:model}
-----

![Diagram of the language-vision embedding model. The model encodes given natural language commands and RGB images, and minimizes the cosine embedding loss between the language and image embeddings during training. At inference time, the model calculates the cosine similarities between the embeddings of the language command and candidate images, and selects the image most similar to the command in embedding space.[]{data-label="fig:model"}](figures/model.jpg){width="1.0\linewidth"}

Given a natural language command and images of candidate objects, we want our model to correctly select the object that best satisfies the command. Our model does this by generating embeddings for the input natural language command and images, calculating the cosine similarities between the image embeddings and the language embedding, and selecting the image most similar to the command in embedding space. A diagram of our model is shown in Figure \[fig:model\].

Our model consists of separate image and language encoders, in charge of generating embeddings for the input RGB images and natural language commands, respectively. During training, our model minimizes the cosine embedding loss between the embeddings of language command-RGB image pairs, thus maximizing the likelihood of the target image given the command. We describe the component image and language encoders and our model training process below.

### Image Encoder

To encode each RGB image, we use the average pooling layer of a pretrained ResNet101 [@he2016deep]. We chose ResNet101 due to ResNet’s good performance on robots [@mallick2018deep].
The use of deep pretrained representations enables our model to leverage prior information of complex image features to allow for better encoding of the visual information from the images. We, therefore, get an embedding of size 2048 from the pretrained ResNet model for each RGB image.

### Language Encoder

To encode each natural language command, our language encoder consists of a recurrent neural network (RNN) [@rnn] followed by a fully connected layer. We randomly initialize word embeddings for each language command and train them from scratch. The model, therefore, produces an embedding vector that is the same size as the embedding produced by the image encoder.

### Training Process

We train our model to optimize an objective function that attempts to bring the corresponding language and image embeddings closer to each other in embedding space. We achieve this by reducing, during training, the cosine embedding loss between the low-dimensional embedding produced by the image encoder, which takes in an RGB image of the object, and the embedding produced by the language encoder, which takes in the referring natural language command. We describe the training data in Section \[sec:data\]. Positive training samples consist of pairs of natural language commands, each containing one verb, and RGB images of objects that can be paired with that verb. We obtain negative samples by randomly sampling an image of a different object that does not correspond with the verb and pairing the image with the language command, resulting in a dataset of equally balanced positive and negative samples. We use Adam [@kingma2014adam] as the optimizer with a learning rate of 0.0001 and train for 50 epochs until convergence.

Data Collection {#sec:data_collection}
---------------

To train and evaluate our model, we require pairs of natural language commands containing verbs and RGB images of objects.
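The cosine embedding loss used in the training process above can be sketched in plain Python. This mirrors the standard formulation (as in PyTorch's `CosineEmbeddingLoss`), shown here on toy vectors rather than the actual 2048-dimensional encoder outputs:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def cosine_embedding_loss(img_emb, lang_emb, label, margin=0.0):
    # label = +1 for a matching command-image pair (positive sample),
    # label = -1 for a randomly mismatched pair (negative sample).
    c = cosine(img_emb, lang_emb)
    if label == 1:
        return 1.0 - c               # pull matching pairs together
    return max(0.0, c - margin)      # push mismatched pairs apart
```

Minimizing this loss over the balanced positive and negative samples drives matching command-image pairs toward a cosine similarity of 1 and mismatched pairs below the margin.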
To obtain these command-image pairs, we need verb-object pairs denoting valid object usage, such as “cut" for a “knife." We also require RGB images for the objects. Since we are interested in testing our model’s generalization capability on unseen object classes and nouns, we require a large number of object classes that can be paired with the verbs, so that a sufficient number of object classes can be held out. To the best of our knowledge, no existing dataset of verb-object pairs met our requirements. @chao2015mining mine the web for the knowledge of semantic affordance — given an object, determining whether an action can be performed on it — resulting in a dataset of verb-noun combinations. However, their dataset covers only 20 object classes, and focuses on verbs denoting actions that can be performed on the objects, such as “hunt" a “bird," rather than the objects’ usage. Other works on semantic affordances [@myers2015affordance; @do2018affordancenet] also provide datasets of objects labeled with their affordances. However, these datasets cover fewer than 20 object classes and fewer than 10 affordances. We, therefore, decided to collect our own dataset of valid verb-object pairs and use it to generate natural language commands paired with RGB images for our model. We describe our data below.

  -------------------- ----------------- ---------------------
  contain – bucket     hit – hammer      wear – necklace
  contain – wardrobe   hit – racket      wear – suit
  cut – cleaver        play – baseball   wrap – cloak
  cut – hatchet        play – violin     wrap – handkerchief
  eat – banana         serve – plate     write – notebook
  eat – pizza          serve – tray      write – quill
  -------------------- ----------------- ---------------------

  : Example verb-object pairs from our dataset \[tab:verb-obj\]

### Vision Data

We use RGB images and object classes from the ILSVRC2012 validation set [@ILSVRC15].
We choose this dataset as it has 1000 object classes and a variety of images per object class, and we want our model to work on many different object classes and object instances. The ImageNet object classes, such as “violin,” “suit,” and “quill,” are usually nouns that occur frequently in textual data together with verbs such as “play,” “wear,” and “write.”

### Language Data

We extracted sentences from Wikipedia containing the ImageNet object classes and used spaCy [@spacy2] to parse the sentences and extract corresponding verb-object pairs. We originally sought to extract verb-object pairs from the common-sense knowledge base ConceptNet [@speer2017conceptnet], but it turned out to be too small and was missing many valid verb-object pairings. We then decided to use Wikipedia instead for its large text corpus. However, the resulting dataset of 20,198 verb-object pairs was highly noisy, containing many abstract verbs such as “name,” “feature,” and “use,” or nouns in the wrong word sense such as “suit” in “follow suit” and “file suit,” which were not relevant for the natural language object retrieval task we were interested in. Therefore, we manually annotated the verb-object pairs to retain only pairs that contain concrete verbs paired with nouns in the correct sense. This resulted in a dataset with 655 verb-object pairs over 50 verbs and 216 object classes. Example verb-object pairs from our dataset are shown in Table \[tab:verb-obj\].
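The annotation step above can be illustrated with a small sketch. The real pass was done by hand over all 20,198 pairs; the blocklists below only contain the examples named in the text and are illustrative, not the full annotation criteria:

```python
# Abstract verbs and wrong-sense verb-noun collocations to discard.
# These particular lists are examples only; the paper's filtering
# was a full manual annotation, not a fixed blocklist.
ABSTRACT_VERBS = {"name", "feature", "use"}
WRONG_SENSE = {("follow", "suit"), ("file", "suit")}

def filter_pairs(pairs):
    # Keep only concrete verbs paired with nouns in the correct sense.
    return [(v, o) for v, o in pairs
            if v not in ABSTRACT_VERBS and (v, o) not in WRONG_SENSE]
```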
  Verb-Object   Language Command                     Image
  ------------- ------------------------------------ --------------------------------------------------------------------------------
                *Give me an item that can contain*   ![image](figures/cup1.JPEG){width="0.15\linewidth"}
                *I need something to contain*        ![image](figures/cup2.JPEG){width="0.15\linewidth" height="0.12\linewidth"}
                *Hand me something to play*          ![image](figures/drum1.JPEG){width="0.15\linewidth"}
                *I want an object to play*           ![image](figures/drum2.JPEG){width="0.15\linewidth"}
                *An item to wear*                    ![image](figures/kimono1.JPEG){width="0.15\linewidth" height="0.16\linewidth"}
                *Give me something to wear*          ![image](figures/kimono2.JPEG){width="0.15\linewidth" height="0.13\linewidth"}
  ------------- ------------------------------------ --------------------------------------------------------------------------------

  : Example training data (command-image pairs) \[tab:train-data\]

### Training and Testing Data {#sec:data}

We use 80% of the 216 object classes and their corresponding verb-object pairs to generate our training data. The training data consist of natural language command-RGB image pairs. For each verb-object pair, language commands are generated from the pair using templates, such as $\texttt{<Hand me something to> <verb>}$, and then paired with different image instances of the object class. Examples of the training data are shown in Table \[tab:train-data\]. Rather than using only the verbs and/or nouns from the verb-object pairs as language data for our model, we generate and give our model natural language sentences so that it can handle more realistic language commands, as most people do not ask for objects with one or two-word commands such as “knife” or “knife cut.”

We hold out 20% of the object classes for testing. Test examples, each consisting of a language command paired with a set of five images, are randomly generated from objects in the test set and their corresponding verb-object pairs. The evaluation task is to select the correct object from a set of five images given a natural language command.
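The template-based command generation can be sketched as follows. The template strings are examples in the style of the commands above, not the paper's exact template list:

```python
import random

# Illustrative templates; the actual template set is not fully listed
# in the paper.
TEMPLATES = [
    "Hand me something to {verb}",
    "Give me an item that can {verb}",
    "I need something to {verb}",
    "An item to {verb}",
]

def generate_commands(verb, n, seed=0):
    # Pair a verb with several templated natural language requests,
    # to be matched with image instances of its valid object classes.
    rng = random.Random(seed)
    return [rng.choice(TEMPLATES).format(verb=verb) for _ in range(n)]
```

Each generated command is then paired with different image instances of the object class from the verb-object pair to augment the training set.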
Experiments and Results
=======================

The aim of our evaluation is to test our model’s ability to accurately select objects based on natural language descriptions of their usage specified by verbs, given unseen object classes and unknown nouns in the language commands. Generalization to unseen object classes is much more difficult than to unseen instances of known object classes, as different instances of the same object class, such as two bottles, would usually look more alike than instances from different object classes, such as a bottle and a bowl, even if those object classes can be used for similar tasks such as “contain."

The trained model is tested on natural language object retrieval tasks: retrieving the correct object from a set of five images of different objects, given a natural language command containing a verb that can only be paired with the correct object. The evaluation task models a typical retrieval task in the wild, where there are a few objects on a table and the robot has to pick the correct one. Retrieval examples consisting of a language command paired with a set of five images are randomly generated from objects in the test set and their corresponding verb-object pairs. We test our models on several different test sets and report average top-1 and top-2 retrieval accuracies. Top-1 accuracy means that the model’s top choice is the correct answer, and top-2 accuracy means that the correct answer is among the model’s top two choices.

  Model            Top-1 *(Std. Error)*   Top-2 *(Std. Error)*
  ---------------- ---------------------- ----------------------
  Random           20.0                   45.0
  Data size 535    51.0 *(4.50)*          71.0 *(2.84)*
  Data size 1070   53.7 *(0.86)*          74.0 *(1.60)*
  Data size 1605   58.7 *(3.85)*          77.0 *(2.13)*
  Data size 2140   58.8 *(3.31)*          77.0 *(2.50)*
  Data size 2675   59.0 *(1.68)*          76.9 *(3.55)*
  Data size 3210   58.2 *(1.70)*          77.8 *(2.30)*
  Data size 3745   **62.3 *(2.48)***      79.8 *(1.94)*
  Data size 4280   61.5 *(2.25)*          77.4 *(2.44)*
  Data size 4815   61.5 *(1.71)*          77.7 *(1.43)*
  Data size 5350   **62.3 *(2.18)***      **80.2 *(1.23)***
  Human baseline   78.0 *(1.72)*

  : Retrieval accuracies (%) on unseen object classes in the held-out object set \[tab:heldout\]

Held-out Object Set
-------------------

We first test our model on object sets held out from our dataset, which have a similar image distribution to that of the training set, as the images all come from the ILSVRC2012 validation set. We evaluate our model on two different test splits representing increasingly difficult scenarios. We describe the test splits and our model’s performance in each case below. In both cases, the test objects are held out, meaning our model has never seen any instances belonging to the test object classes during training.

### Unseen Object Classes {#sec:unseen}

We first train and test our model on natural language commands containing only the verbs from the verb-object pairs, such as “Hand me something to $\texttt{<verb>}$." This simplified setting, where the test commands look like those in the training data (for example “Give me something to contain”), helps us examine how well the model has learned to generalize the concepts associated with the verbs, such as that “contain” requires objects with convexity. It removes the additional challenges that might arise with unseen nouns in the commands, such as the fact that the embeddings for these nouns would be untrained. However, this is still a challenging problem because the object classes in the test set have never been seen by the model before.
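The top-1 and top-2 metrics reported in this section can be computed with a short sketch. We assume each retrieval task is represented as its candidates ranked by decreasing similarity to the command, with the ground-truth object identified:

```python
def top_k_accuracy(tasks, k):
    # tasks: list of (ranked_candidates, correct) tuples, where
    # ranked_candidates is sorted by decreasing cosine similarity
    # to the command embedding.
    hits = sum(1 for ranked, correct in tasks if correct in ranked[:k])
    return 100.0 * hits / len(tasks)
```

Top-1 counts a task as solved only when the model's single best choice is correct; top-2 also credits tasks where the correct object is the runner-up.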
With 20% of the objects and corresponding verb-object pairs in our dataset held out for testing, the training set contains 535 verb-object pairs. We trained separate models on increasing sizes of training data generated from the 535 verb-object pairs. Training data was augmented by generating different natural language commands containing the verbs, and pairing the commands with different images of the objects from the verb-object pairs in the training set. Examples of the training data are shown in Table \[tab:train-data\]. The test set, with 43 held-out objects and 120 corresponding verb-object pairs and retrieval examples, is fixed for all models.

We tested each model trained on different data sizes 5 times and report their average top-1 and top-2 retrieval accuracies and standard errors in Table \[tab:heldout\] and Figure \[fig:ret\]. Model performance generally increases with larger training size. All our models significantly outperform a random model (which has 20% top-1 and 45% top-2 retrieval accuracy), achieving accuracies in the 50%-62% range for top-1 and the 70%-80% range for top-2. Our best average retrieval accuracy is 62.3% for top-1 and 80.2% for top-2, with standard errors of 2.18% and 1.23%, respectively.

Our models were able to generalize to unseen object classes, selecting the correct object to satisfy the task specified by most verbs in our dataset, such as “contain," “write," “don," “rotate," and “hit." The objects paired with these verbs usually have similar visual appearances and attributes; for example, a gown and a suit can both be paired with “don" and are both made of fabric. However, our models performed imperfectly on more abstract verbs such as “play" and “protect," as it is less obvious which object attributes are required for the tasks specified by these verbs, and the objects that can satisfy the tasks come in a larger variety of visual appearances; for example, a harp and a volleyball can both be paired with “play."
Example success and failure cases for our best model are shown in Figures \[fig:ex11\] and \[fig:ex12\]. Our model correctly selected images of screws and goblets to satisfy the natural language commands “Hand me something to rotate" and “Give me something with which I can serve," respectively. The model failed to select the swing given the command “I want something to play," and did not select the shield in response to “An object with which I can protect." However, the instance of a shield in the object retrieval task shown in Figure \[fig:ex12\] does not look like a shield but more like a plate, and such object instance outliers can throw the model off.

### Unseen Object Classes and Unknown Nouns

Next, we train and test our model on natural language commands containing both verbs and objects, for example “Give me the $\texttt{<object>}$ to $\texttt{<verb>}$." The model is tested on object retrieval tasks with both unseen object classes *and* unknown nouns. Testing our model in this setting is necessary because when a deep network such as our model encounters an unknown word, it maps the word to a random, untrained embedding. That random embedding could completely throw off the model’s understanding; for example, the model might pick the image for whatever noun the random embedding happens to be closest to. We need to know how our model behaves with truly unknown words in the input to get a sense of how it would work in the real world. An example task in this setting is the model getting the command “Give me the dax to cut," with “dax" being an unknown word to the model, while also being shown objects it has never seen before. This task is more difficult than object retrieval with only unseen object classes, as the model has to figure out that unknown words such as “dax" add no information, and avoid being affected by the noise the unknown words introduce.
Other than the inclusion of nouns in the natural language commands, the setup for this experiment is the same as that with only unseen object classes, as described in Section \[sec:unseen\]. Each model trained on different data sizes was tested 5 times. Our best average retrieval accuracy is 53.0% for top-1 and 72.8% for top-2, with standard errors of 1.33% and 3.11%, respectively. Our models were indeed negatively affected by the unknown words in the language commands. However, a decline in performance is to be expected, as this is a more difficult task. Furthermore, our models’ performance still demonstrates at least some generalization to unseen object classes *and* unknown nouns. A way to better handle unknown words and boost model performance in this setting would be to use pretrained word embeddings such as Word2vec [@mikolov2013efficient] or GloVe [@pennington2014glove] instead of random untrained embeddings.

Example success and failure cases for our best model are shown in Figures \[fig:ex21\] and \[fig:ex22\]. Our model was able to correctly select the paintbrush when asked to “Bring me the paintbrush to write," and picked the hammer to satisfy the command “Get me the hammer to hit." Unfortunately, the model incorrectly selected the canoe and the hammer given the commands “Pass me the puck to play" and “I need the screw to insert," respectively. However, from the given image, canoeing does seem like a fun activity and maybe even something to “play." In addition, the image representing the hammer also includes a screwdriver, an object that can be used to “insert."

Human Retrieval Baseline
------------------------

![Amazon Mechanical Turk interface and example task for the human baseline experiment.[]{data-label="fig:amt"}](figures/amt.png){width="1.0\linewidth"}

We also compare our models’ performance to a human baseline for the retrieval task. Humans are experts in natural language understanding and object grounding.
The experiment was done on Amazon Mechanical Turk (AMT). We showed AMT workers five images and one language command, such as “Give me something to $\texttt{<verb>}$," and asked them to select the image with the object that best satisfies the command. The AMT interface and an example task for the experiment are shown in Figure \[fig:amt\]. We collected 5 answers for each of the 120 retrieval tasks. The average top-1 human retrieval accuracy is 78.0%, with a standard error of 1.72%, as shown in Table \[tab:heldout\] and Figure \[fig:ret1\]. Even human users are not perfect at this task: the given images of objects are not segmented, so it can sometimes be confusing as to what object the image is supposed to be capturing, or the image may only show a partial or low-quality view of the object. In addition, the object usage being asked for in the language command can occasionally be unconventional, such as using a “spoon" to “cut," and thus might not be obvious to the average AMT worker who is spending very little time on each task. Our models’ performance is not as good as the human baseline, but not far behind. Furthermore, the imperfect human performance underscores how difficult this task is and puts our models’ results in perspective.

YCB Object Set
--------------

Finally, we run an evaluation to test whether the proposed model can perform natural language object retrieval on objects commonly seen and interacted with by real robots. For this evaluation, we test our best model on images of objects from the YCB Object and Model Set [@calli2015ycb]. The YCB object set is designed for benchmarking robotic manipulation and consists of objects of daily life with different shapes, sizes, textures, etc. We did not use the YCB object set as our training image set because it has a much smaller number of object classes and only 1 instance per object class, in comparison to ImageNet’s 1000 object classes and 50 images per class.
  ----------- ------ -------- -------
  construct   eat    open     serve
  contain     grow   play     write
  cut         hit    rotate
  ----------- ------ -------- -------

  : Verbs annotated with YCB objects \[tab:ycb\]

  ------------------------- ---------------- ----------------------------
  construct – power drill   eat – apple      play – tennis ball
  contain – chips can       grow – pear      rotate – adjustable wrench
  contain – windex bottle   hit – spoon      serve – bowl
  cut – scissors            open – padlock   write – large marker
  ------------------------- ---------------- ----------------------------

  : Example annotated verb-object pairs for the YCB set \[tab:ycb-vo\]

Of the 65 object classes with corresponding RGB images in the YCB dataset, we select 33 object classes to test our model on, excluding classes our model has seen during training and picking only one class in the case of identical objects of differing sizes, such as “S clamp," “M clamp," “L clamp," and “XL clamp." Each object class in the YCB dataset is represented by one object instance, with corresponding RGB images of the object instance from multiple camera angles. We represent each selected object class by a single front-facing image of the object, taken from the YCB dataset. From the 50 verbs our model was trained on, we select 11 verbs (shown in Table \[tab:ycb\]) that are most compatible with the 33 YCB objects and annotate valid verb-object pairings among the selected objects and verbs, resulting in 64 verb-object pairs. Examples of the annotated verb-object pairs are shown in Table \[tab:ycb-vo\]. Natural language commands containing only the verbs were generated from the verb-object pairs using templates, and retrieval examples consisting of sets of five images paired with language commands were randomly generated from the annotated verb-object pairs.
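The random generation of retrieval examples can be sketched as below. The function and argument names are ours, and the real pipeline additionally samples images per object class; here each object is represented by a single identifier:

```python
import random

def make_retrieval_example(verb, valid_pairs, objects, n=5, seed=None):
    # valid_pairs: set of annotated (verb, object) pairs.
    # Sample one correct object for the verb plus n-1 distractors that
    # are NOT valid for that verb, shuffle, and record the answer index.
    rng = random.Random(seed)
    correct = rng.choice(sorted(o for v, o in valid_pairs if v == verb))
    distractors = sorted(o for o in objects
                         if o != correct and (verb, o) not in valid_pairs)
    candidates = [correct] + rng.sample(distractors, n - 1)
    rng.shuffle(candidates)
    return candidates, candidates.index(correct)
```

Because distractors are drawn only from objects not annotated with the verb, exactly one candidate in each example satisfies the command.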
Our model, without being retrained on images from the YCB dataset, was tested 5 times and achieved average retrieval accuracies of 54.7% and 71.9%, with standard errors of 1.99% and 2.40%, for top-1 and top-2, respectively. Although these results are far from perfect, they still demonstrate generalization on a dataset with a different distribution from the model’s training data. In addition, these results would enable the robot to significantly reduce its search space among the candidate objects and to employ strategies such as question asking to further disambiguate and retrieve the correct object. Notably, our model correctly identified a fork, a spoon, and scissors as objects that can be used to cut, while only having seen knife-like object classes such as cleavers and hatchets paired with the verb “cut" in its training data. In addition, our model selected a chips can and a mustard bottle when asked for something to “eat," which in retrospect are very reasonable pairings that we mistakenly left out of our verb-object pair annotations.

Robot Demonstrations
--------------------

![Natural language object retrieval tasks demonstrated on our robot. The given language commands are: “Give me something to wear" (top left), “Give me an item that can write" (top right), “Hand me something to eat" (bottom left), and “An object to contain" (bottom right). Solid boxes denote the robot’s top choice for each task. The dashed-line box denotes the robot’s second choice. Our robot retrieved the correct object for each task.[]{data-label="fig:robot2"}](figures/robot_1.jpg){width="1.0\linewidth"}

We implement our trained model on a KUKA LBR iiwa robot arm with a Robotiq 3-finger adaptive gripper. We pass a natural language command into our model along with manually segmented RGB images of objects in the scene, captured by an Intel RealSense camera. The robot then grasps the observed object whose embedding has the highest cosine similarity with the language command’s embedding.
We use object classes that our model has not seen during training for the demonstrations. We tested our robot on four object retrieval tasks. Images capturing the tasks are shown in Figure \[fig:robot2\]. The robot correctly selected the T-shirt for the task of “Give me something to wear." When asked to “Give me an item that can write," it was able to pick out the marker from other distracting objects that are also partly red and have slim bodies. Next, it accurately identified the pear and chips can as the top two items that would satisfy “Hand me something to eat." This is the only test case with more than one possible correct answer. Finally, when asked for “An object to contain," the robot selected the empty Minion-shaped bottle. Video recordings of the robot demonstrations can be found online[^3]. We mostly use YCB objects for the demonstrations with the exception of the Minion-shaped bottle in the last case, which was to test our model on an odd-looking object. While most of these are common objects we see in our daily life, not all of them belong to the COCO dataset of common objects in context [@lin2014microsoft]. The objects that are not part of the 91 object types in COCO are the T-shirt, marker, pear, clamp, lock, and of course Minion bottle. As it was trained on the COCO dataset, Mask R-CNN [@he2017mask], the state-of-the-art method for object segmentation and classification, was unable to correctly segment and classify these six objects. With such classification results, relying on accurate classification of objects and querying of an external knowledge base for valid verb-object pairs to select the object that satisfies the language command does not work in these cases. In contrast, our model was able to select the correct object based on the command without needing to explicitly classify the candidate objects or having seen the object classes. 
Conclusion
==========

Understanding open-ended natural language commands is a challenging but important problem. We address a sliver of the problem by focusing on object retrieval based on descriptions of the object’s usage. We propose an object retrieval model that learns from contextual information from both language and vision to generalize to unseen object classes and unknown nouns. Given natural language commands, our model correctly selects objects out of sets of five candidates, and achieves an average accuracy of 62.3% on a held-out set of unseen ImageNet object classes and 53.0% on unseen object classes *and* unknown nouns. Our model also achieves an accuracy of 54.7% on unseen YCB object classes. We demonstrate our model on a KUKA LBR iiwa robot arm, enabling the robot to retrieve objects based on natural language descriptions of their usage.

Along with our model, we also present a newly created dataset of 655 verb-object pairs denoting object usage over 50 verbs and 216 object classes, as well as the methods used to create this dataset. To the best of our knowledge, this is the first dataset built to perform this task, and it could potentially be used for a range of object retrieval tasks.

Our model currently allows us to reduce the problem of task-based object retrieval to an attribute classification problem. However, a much richer model would perform explicit inference to determine the desired object from oblique natural language. Incorporating dialogue into this framework to perform inference can be a way to incorporate human preference more directly and provide a more intuitive interface.

Acknowledgments
===============

The authors would like to thank Prof. James Tompkin for advice on selecting the image dataset and encoder, and Eric Rosen for help with video editing.
This work is supported by the National Science Foundation under award numbers IIS-1652561 and IIS-1717569, NASA under award number NNX16AR61G, and with support from the Hyundai NGV under the Hyundai-Brown Idea Incubation award and the Alfred P. Sloan Foundation. [^1]: Video recordings of the robot demonstrations can be found at <https://youtu.be/WMAdGhMmXEQ>. [^2]: The dataset and code for the project can be found at <https://github.com/Thaonguyen3095/affordance-language>. [^3]: <https://youtu.be/WMAdGhMmXEQ>
When managing a computer, we may want to automate some tasks so that they run periodically, or at the same time each day, week, or month. On a desktop, we may schedule update checks or virus scans. On a server, it's not uncommon for a myriad of checks and clean-up routines to be scheduled to ensure applications are running optimally.

In this article, we will look at the cron program. This utility is a task scheduler for Unix-like systems. After going over cron basics, we'll look at the crontab command to manage our scheduled tasks.

What is Cron? What is a Crontab?

To master cron and scheduling, it helps to have a grasp of the similar-sounding terminology involved. Cron is the program that schedules scripts or commands to run at user-specified times. A cron expression is a string that details the schedule on which to trigger a command. A cron table (crontab) is a configuration file containing shell commands, each line preceded by a cron expression.

Here's an image with a system crontab that references two external scripts:

Generally, cron tables look like this:

<cron-expression> <command>
<cron-expression> <command>

When your computer first starts up, cron searches for all crontabs in system-configured directories. The location varies from OS to OS, but it's typically /etc/crontab for the special system-wide crontab, plus a local directory for each logged-in user. These are all loaded into memory. The cron program wakes up every minute and checks the crontabs in memory to see if any commands need to be run at the current minute. It executes all scheduled commands and sleeps again.

There are two methods by which cron jobs can be scheduled:

- Edit the crontab directly

We can view the cron jobs we have scheduled by running this command:

$ crontab -l

If we would like to add or edit a cron job, we can then use this command:

$ crontab -e

This is the preferred method to edit crontabs because it selects the user crontab rather than the system one, and it also catches syntax errors.
- Copying the scripts to the /etc/cron.*directories** If scheduling the scripts for accurate time slots is not required, the scripts can also be moved inside certain pre-built cron schedule folders, for the crontab to pick them up for execution. These pre-defined directories that act as placeholders include: |Location||Execution Schedule| |/etc/cron.hourly/||Once every hour| |/etc/cron.daily/||Once per day, everyday| |/etc/cron.monthly/||Once per month, every month| |/etc/cron.weekly/||Once per week, every week| The users can copy their scripts to the appropriate directories depending on which frequency they need to be run. The crontab file present in /etc/crontab contains the cron expressions defined for each directory and it checks every minute if the time is right for the scripts are to be executed: Note that scripts placed in these special folders are system-wide cron jobs and not user-specific ones. Creating a Cron Expression Each entry of the crontab contains a cron expression, a user name, and the command to be executed separated with white spaces or tabs: * * * * * user-name command-or-file-to-be-executed # <-----------> <-------> <----------------------------> # Cron # Expression Note: The user-name column is used when editing the system-wide crontab. A user-specific crontab does not need it because that info is already known. The 5 asterisks ( * * * * *) in the cron expression can be substituted with numbers. Their positions indicate their value: |* Position||Description||Allowed numeric values| |1||Minute||0-59| |2||Hour||0-23| |3||Day of Month||1-31| |4||Month||0-12| |5||Day of Week [0 represents Sunday & Saturday represents 6]||0-6| Alongside numbers, there are special characters we can use when editing a crontab: |Characters||Description| |*||Defining this will run the command for each time frame. 
For example, * in the minute column will execute the script every minute.

| Character | Description |
| --- | --- |
| , | Lists multiple values for one field. For example, to run a script every 10 minutes, the minute column can be given as 0,10,20,30,40,50. |
| / | A step value, a simpler alternative to ",". To run a script every 10 minutes, the minute column can be given as */10. |
| - | Specifies a range of values. To run a script at every minute from 0 through 10, the minute column can be given as 0-10. |

That's a lot of options to take in, so it might be best to show how they work with some examples:

- Here's how to run echo "hello there!" on the 4th day of every week (i.e. on Thursday) at 12:10:

10 12 * * 4 root echo "hello there!"

- And here's a more complicated example that saves disk usage every 20 minutes on the first 10 days of the last three months of the year:

*/20 * 1-10 10,11,12 * df -h >> /tmp/diskusage

Cron expressions can be overwhelming at first sight. There are numerous tools available online to simplify creating them; crontab.guru (https://crontab.guru/) is one of the most popular. Let's use crontab.guru to check that our second expression is valid: https://crontab.guru/#*/20_*_1-10_10,11,12_*.

Hands-on Guide: Creating a Cron Job to Email Disk and Memory Usage

In an IT department, system administrators and other operators frequently monitor computer resources such as memory and disk usage. Crontab is commonly used to ensure monitoring scripts run continuously. We are going to create a cron job that monitors disk and RAM usage using Watchmon. If either metric is above a certain threshold, we will get an email alert. We'll put this script in a cron job that runs every minute. The following demo was tested on Ubuntu 20.04.1 LTS. This example is configured to send email to a Gmail account; you may need to tweak the configuration for other email providers.
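Before scheduling a command like the disk-usage example above, it's worth running it once by hand to confirm it behaves as expected. A quick sketch, using the same command and log path as the example:

```shell
# run the scheduled command once manually and confirm a report was appended
df -h >> /tmp/diskusage
tail -n 3 /tmp/diskusage
```

If the output looks right here, the only things left to get right in the crontab entry are the expression and the absolute paths.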
- You first need to install watchmon. Follow its README to learn how to set it up:

> git clone https://github.com/datawrangl3r/watchmon
> cd watchmon
> bash setup.sh

- The setup script starts the cron process and installs Mutt (http://www.mutt.org/), a command-line mail client, to send the emails. Once it is installed, open the mutt.rc file in your home directory with your text editor: /home/user/mutt/mutt.rc. Edit the file so it looks like this, replacing the template values with your own email account details:

set ssl_starttls=yes
set ssl_force_tls=yes
set imap_user = "change_this_user_name@gmail.com"
set imap_pass = "PASSWORD"
set from="change_this_user_name@gmail.com"
set realname="Your Name"
set folder = "imaps://imap.gmail.com/"
set spoolfile = "imaps://imap.gmail.com/INBOX"
set postponed="imaps://imap.gmail.com/[Gmail]/Drafts"
set header_cache = "~/.mutt/cache/headers"
set message_cachedir = "~/.mutt/cache/bodies"
set certificate_file = "~/.mutt/certificates"
set smtp_url = "smtps://change_this_user_name@gmail.com:PASSWORD@smtp.gmail.com:465/"
set move = no
set imap_keepalive = 900

- Since Google doesn't allow its services to be accessed by less secure apps by default, we need to enable access for the email account used in the SMTP settings. Visit https://myaccount.google.com/security and toggle "less secure app access" to ON.

- Now that our scripts and email account configurations are ready, we can set up our crontab.
Edit your crontab by invoking:

$ crontab -e

Now enter the corresponding cron expression and the path to the executable with its arguments. Note that the path on your machine will differ from what is shown below:

* * * * * /mnt/c/Users/sathy/Documents/datawrangler/watchmon/watchmon.sh -t=40 -e=sathyasarathi90@gmail.com

The cron utility makes sure to execute the script every minute, and as a result an email is sent from the machine to the recipients until the disk space or memory of the machine is cleared.

Cron Jobs & Dealing with Failures

Although crontabs are very powerful, they can't alert the user if an underlying job is not triggered or the scheduled executable encounters an error. The logs for each job run can be found in the /var/log/syslog file. Be sure to review your logs after activities are scheduled. Aside from cron and command errors, many crontabs are plagued by human error. When setting up cron jobs, here are some common reasons why a cron job may not run as you'd expect it to:

- The script doesn't have execute permission. Executables that are to be run as cron jobs should have permissions such as 755, i.e. rwxr-xr-x. The cron utility can only execute a script if its user has execute permission on it.
- The crontab is syntactically correct, but the wrong path was used for the job. It's important to specify the absolute path to both the executable and the command itself; relative paths should be avoided.
- The command or script is not being run with the environment variables it needs. Cron runs jobs with a minimal environment, so make sure the script sets up its required environment variables when running as a cron job.

Conclusion

In this article, we have looked at the cron program, cron expressions, and crontabs. We used the crontab command to list and edit cron jobs.
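To illustrate the execute-permission point, here is a small sketch that creates a throwaway script (a hypothetical stand-in for your cron job), marks it 755, and verifies both the mode and that it runs:

```shell
# create a throwaway script, mark it executable (755 = rwxr-xr-x), and verify
f=$(mktemp)
printf '#!/bin/sh\necho ok\n' > "$f"
chmod 755 "$f"
stat -c '%a' "$f"    # prints the octal mode: 755
"$f"                 # runs the script, printing: ok
rm -f "$f"
```

If `stat` shows a mode without the execute bits (for example 644), cron will silently skip the job, which is why checking this by hand saves debugging time later.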
Crontab is one of the best tools on Unix-like systems for automating repetitive tasks. What would you schedule with a cron job?
https://www.codevelop.art/how-to-use-the-crontab-command-in-unix.html
My favorite type of Pilot pen is the Pilot G-2 07, which comes in a big pack with four black pens and a whole bunch of colored pens. They are a little expensive. The pens last very long for pens like them. I use them every day and extensively too, taking pages and pages of notes with them (about 10-15 pages a day), and for the past two months, none of them have run out yet. Normally pens run out within a few weeks of use, or they dry out even though there is still plenty of ink inside, because the tip has gotten clogged or stopped up, or dried up somehow. These pens do not ...
https://www.reviewstream.com/reviews/?p=88504
When you are seriously injured in a Hartford motor vehicle accident, your entire world is turned upside down. This is the exact reason why we wrote the book "The Crash Course On Personal Injury Claims in Connecticut": so you can educate yourself on what you need to do to get better quicker and to help your attorney get more money for you. The Hartford Motor Vehicle Accident Lawyers conclude their closing statements, and now it is time for the trial judge to turn your case over to the jury for deliberations. Prior to the jury getting your case, the judge will have to instruct the jury on how to apply the law to the facts of your case. The way in which the judge does this is to "charge" the jury. The judge might give the following jury charge as it relates to the issue of Standard of Proof: 3.2-1 Standard of Proof-Revised to January 1, 2008: In order to meet (his/her) burden of proof, a party must satisfy you that (his/her) claims on an issue are more probable than not. You may have heard in criminal cases that proof must be beyond a reasonable doubt, but I must emphasize to you that this is not a criminal case, and you are not deciding criminal guilt or innocence. In civil cases such as this one, a different standard of proof applies. The party who asserts a claim has the burden of proving it by a fair preponderance of the evidence, that is, the better or weightier evidence must establish that, more probably than not, the assertion is true. In weighing the evidence, keep in mind that it is the quality and not the quantity of evidence that is important; one piece of believable evidence may weigh so heavily in your mind as to overcome a multitude of less credible evidence. The weight to be accorded each piece of evidence is for you to decide. As an example of what I mean, imagine in your mind the scales of justice. Put all the credible evidence on the scales regardless of which party offered it, separating the evidence favoring each side.
If the scales remain even, or if they tip against the party making the claim, then that party has failed to establish that assertion. Only if the scales incline, even slightly, in favor of the assertion may you find the assertion has been proved by a fair preponderance of the evidence. After your Hartford Motor Vehicle Accident case, it is important that you get accurate and prompt legal advice. You need answers to your important questions so you take the necessary steps to develop your case. Injured parties call us with many legal questions and accident related issues, which is why we wrote the book "The Crash Course on Personal Injury Claims in Connecticut". Contact us today or call us toll free at 888-244-5480 to get a FREE copy of our book. You have questions, we have the answers. Do not speak to anyone from the insurance company, do not hire an attorney and do not sign any papers until you read our free book. Don't delay. Get the information you need today.
https://www.hcwlaw.com/blog/2011/june/hartford-motor-vehicle-accident-standard-of-proo/
Bready & District Ulster-Scots Development Association facilitates the examination, exploration and appreciation by all of Ulster-Scots history, heritage, and culture in an open and positive manner, to encourage mutual understanding between peoples of different traditions without distinction of sex, age, race, colour, nationality, ethnic origin, or political, religious or other opinion, the object of which is to improve the quality of life of the said inhabitants. We promote dialogue and co-operation between associations, and liaise with a range of statutory, voluntary and public organisations in order to achieve these objectives and to represent the interests of all the community. We offer guidance on funding opportunities, governance, event planning and training needs, as well as assisting member groups to access funding from a range of sources. We also:

- Deliver over 30 talks and workshops to a wide range of audiences, including local history groups and community organisations, as well as heritage tours in the local area.
- Present interactive history, language and music workshops in schools.
- Produce educational material on a range of relevant topics, and maintain an informative and current website and social media presence.
- Provide, manage and resource local historical artifacts and databases that are used to assist visitors in researching their heritage and links with the area.
- Deliver outreach tuition programs in Ulster-Scots music and Highland dance to over 500 children in various schools and community groups in the North West, as well as programs for good relations departments in local councils, promoting Ulster-Scots culture.

"We are passionate about promoting the culture and heritage of the Ulster-Scots of Bready and the surrounding areas"
https://breadyulsterscots.com/about/
Q: Laplace Transforms and third-order derivatives The question is to calculate the Laplace transform of $(1 + te^{-t})^3$. I know that this can be done using a property for problems of the form $t f(t)$. However, I seem to be messing up the third-order derivative. This is a DIY exercise from the textbook and hence has no worked-out solution. I expanded the bracket as $(1 + t^3e^{-3t} + 3te^{-t} + 3t^2e^{-2t})$ and I mess up the third-order derivative for $t^3 e^{-3t}$. Can someone please show me how that is done? A: Hints: $$(1+t e^{-t})^3 = 1 + 3 t e^{-t} + 3 t^2 e^{-2 t} + t^3 e^{-3 t}$$ $$\int_0^{\infty} dt \, t^k \, e^{-s t} = (-1)^k \frac{d^k}{ds^k} \frac{1}{s} = \frac{k!}{s^{k+1}}$$ Now work out $$\int_0^{\infty} dt \, e^{-s t} + 3 \int_0^{\infty} dt\,t \, e^{-(s+1) t}+ 3 \int_0^{\infty} dt\,t^2 \, e^{-(s+2) t}+ \int_0^{\infty} dt\,t^3 \, e^{-(s+3) t}$$
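Carrying the hint through to the end (a worked completion, not part of the original answer): applying $\int_0^\infty t^k e^{-a t}\,dt = k!/a^{k+1}$ term by term, with $a = s,\ s+1,\ s+2,\ s+3$ and $k = 0, 1, 2, 3$ respectively, gives

```latex
\mathcal{L}\{(1+te^{-t})^3\}(s)
  = \frac{1}{s} + \frac{3 \cdot 1!}{(s+1)^2} + \frac{3 \cdot 2!}{(s+2)^3} + \frac{3!}{(s+3)^4}
  = \frac{1}{s} + \frac{3}{(s+1)^2} + \frac{6}{(s+2)^3} + \frac{6}{(s+3)^4}, \qquad s > 0.
```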
There are many, many distinctions that can be drawn at the group level, political, financial, religious, tribal, on the right side of the line in the diagram below — but every one of them is constituted of individuals, for each of whom the considerations on the left side would, to a greater or lesser extent, apply: ** How is this multiplied? Each time a man stands up for an ideal, or acts to improve the lot of others, or strikes out against injustice, he sends forth a tiny ripple of hope, and crossing each other from a million different centers of energy and daring, those ripples build a current that can sweep down the mightiest walls of oppression and resistance. Nice words, nice indeed. Bobby Kennedy's words. But did the darker aspects of "the complex compound of pure and impure impulses" — in every individual, every "tiny ripple" — get lost in the niceness of the words? I ask because for optimists it so often does, and that's the great weakness of positive movements. And because for the cynical, so often, no glimpse of the positive aspects can make it through their fog. ** Consider this, for each individual: As every man goes through life he fills in a number of forms for the record, each containing a number of questions... There are thus hundreds of little threads radiating from every man, millions of threads in all. If these threads were suddenly to become visible, the whole sky would look like a spider's web, and if they materialized as rubber bands, buses, trams and even people would all lose the ability to move, and the wind would be unable to carry torn-up newspapers or autumn leaves along the streets of the city. They are not visible, they are not material, but every man is constantly aware of their existence…. Each man, permanently aware of his own invisible threads, naturally develops a respect for the people who manipulate the threads.
That's not somebody writing about Cambridge Analytica or Facebook — that's Alexander Solzhenitsyn, in Cancer Ward, 1968, courtesy Bruce Schneier. I may add further one-and-many instances and illuminations in the comments section as I find them — you are invited to do the same.
https://zenpundit.com/?cat=131
IXL's eighth-grade skills will be aligned to the Tennessee Academic Standards soon! Until then, you can view a complete list of eighth-grade standards below. 8.NS.A.1 Know that numbers that are not rational are called irrational. Understand informally that every number has a decimal expansion; for rational numbers show that the decimal expansion repeats eventually or terminates, and convert a decimal expansion which repeats eventually or terminates into a rational number. 8.NS.A.2 Use rational approximations of irrational numbers to compare the size of irrational numbers locating them approximately on a number line diagram. Estimate the value of irrational expressions such as π². 8.EE.A.2 Use square root and cube root symbols to represent solutions to equations of the form x² = p and x³ = p, where p is a positive rational number. Evaluate square roots of small perfect squares and cube roots of small perfect cubes. Know that √2 is irrational. 8.EE.A.3 Use numbers expressed in the form of a single digit times an integer power of 10 to estimate very large or very small quantities and to express how many times as much one is than the other. 8.EE.B.6 Use similar triangles to explain why the slope m is the same between any two distinct points on a non-vertical line in the coordinate plane; know and derive the equation y = mx for a line through the origin and the equation y = mx + b for a line intercepting the vertical axis at b. 8.EE.C Analyze and solve linear equations and systems of two linear equations. 8.EE.C.7.a Give examples of linear equations in one variable with one solution, infinitely many solutions, or no solutions. Show which of these possibilities is the case by successively transforming the given equation into simpler forms, until an equivalent equation of the form x = a, a = a, or a = b results (where a and b are different numbers). 
8.EE.C.7.b Solve linear equations with rational number coefficients, including equations whose solutions require expanding expressions using the distributive property and collecting like terms. 8.EE.C.8 Analyze and solve systems of two linear equations. 8.EE.C.8.a Understand that solutions to a system of two linear equations in two variables correspond to points of intersection of their graphs, because points of intersection satisfy both equations simultaneously. 8.EE.C.8.b Solve systems of two linear equations in two variables algebraically, and estimate solutions by graphing the equations. Solve simple cases by inspection. 8.EE.C.8.c Solve real-world and mathematical problems leading to two linear equations in two variables. 8.F.A.3 Know and interpret the equation y = mx + b as defining a linear function, whose graph is a straight line; give examples of functions that are not linear. 8.F.B.4 Construct a function to model a linear relationship between two quantities. Determine the rate of change and initial value of the function from a description of a relationship or from two (x, y) values, including reading these from a table or from a graph. Interpret the rate of change and initial value of a linear function in terms of the situation it models and in terms of its graph or a table of values. 8.G.A Understand and describe the effects of transformations on two-dimensional figures and use informal arguments to establish facts about angles. 8.G.A.1.a Lines are taken to lines, and line segments to line segments of the same length. 8.G.A.1.b Angles are taken to angles of the same measure. 8.G.A.1.c Parallel lines are taken to parallel lines. 8.G.A.2 Describe the effect of dilations, translations, rotations, and reflections on two-dimensional figures using coordinates. 
8.G.A.3 Use informal arguments to establish facts about the angle sum and exterior angle of triangles, about the angles created when parallel lines are cut by a transversal, and the angle-angle criterion for similarity of triangles. 8.G.B.4 Explain a proof of the Pythagorean Theorem and its converse. 8.G.B.5 Apply the Pythagorean Theorem to determine unknown side lengths in right triangles in real-world and mathematical problems in two and three dimensions. 8.G.B.6 Apply the Pythagorean Theorem to find the distance between two points in a coordinate system. 8.G.C.7 Know and understand the formulas for the volumes of cones, cylinders, and spheres, and use them to solve real-world and mathematical problems. 8.SP.A.2 Know that straight lines are widely used to model relationships between two quantitative variables. For scatter plots that suggest a linear association, informally fit a straight line and informally assess the model fit by judging the closeness of the data points to the line. 8.SP.B.4 Find probabilities of compound events using organized lists, tables, tree diagrams, and simulation. Understand that, just as with simple events, the probability of a compound event is the fraction of outcomes in the sample space for which the compound event occurs. Represent sample spaces for compound events using methods such as organized lists, tables, and tree diagrams. For an event described in everyday language (e.g., "rolling double sixes"), identify the outcomes in the sample space which compose the event.
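As a worked illustration of standard 8.G.B.6 above (an added example, not part of the standards text): the distance formula follows directly from the Pythagorean Theorem, with the horizontal and vertical displacements as the legs of a right triangle:

```latex
d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}, \qquad
d\big((1,2),(4,6)\big) = \sqrt{3^2 + 4^2} = \sqrt{25} = 5.
```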
https://www.ixl.com/standards/tennessee/math/grade-8
Abstract: Blockchain is an innovative technology that disrupts different industries and offers decentralized, secure, and immutable platforms. Its first appearance is connected with monetary cryptocurrency transactions, followed by adaptation in several domains. We believe that blockchain can provide a reliable environment by utilizing its unique characteristics to offer a more secure, low-cost, and robust mechanism suitable for a voting application. Although the technology has captured the interest of governments worldwide, blockchain as a service is still limited due to a lack of application development experience, the technology's complexity, and the absence of standardized design, architecture, and best practices. Therefore, this study aims to build an empirical example of a blockchain electronic voting (e-voting) application using digital identity management, fulfilling the immutable, transparent, and secure distributed features of blockchain. The paper reviews the current types of e-voting systems and discusses the standard processes. We propose a conceptual design for a blockchain providing a digital identity management service to secure the e-voting application's results. The blockchain development process implemented in this study follows the Proof of Concept approach to verify the e-voting application's function, illustrating the architecture and a description of the application's business process model. The development is based on the Ethereum platform, which allows the implementation of the Proof of Work consensus algorithm. The developed e-voting application saves time, requires fewer processes, and results in higher accuracy, more transparency, considerable voter privacy, and accountable system management. We expect that the e-voting blockchain application will impact governmental processes during elections, reduce spending, support digital transformation, and ensure fairness of results.
Keywords: Blockchain; election; e-voting; Ethereum; smart contracts; solidity; truffle

1 Introduction

The expense of voting, including electronic voting (e-voting), increases as the population grows, and voting involves a high level of integrity and security requirements. Traditional voting systems are conducted either manually, based on paper ballots verified by the individual's presence in a particular location, or sent by traditional mail in a secured envelope. Such a system requires substantial manual work, time, and cost and is vulnerable to fraud. Therefore, a reliable technology that can solve such issues is urgently needed. Blockchain technology is becoming one of the world's key emergent technologies in the wake of Bitcoin's growing acceptability and success. Many studies have supported the use of blockchain technology in several domains, including education, healthcare, IoT, shipping, and government. Furthermore, using blockchain technology enables digital authentication and secure storage for data and information, such as marriage or death certificates, assets or bank account books, and medical records. The blockchain's distributed, unalterable, indisputable public ledger enhances the voting process for all eligible citizens, supporting a thriving social democracy. Using blockchain technology to develop the e-voting application increases security and eliminates many vulnerabilities, such as a single point of failure and denial-of-service attacks in a regular electronic system. Unlike traditional voting, the blockchain provides better data transparency and traceability because the blockchain network's operation is based on the peer-to-peer principle. The distributed control ledger also prevents any mode of failure or integrity tampering given that each new block references the previous one, and a network consensus protocol controls the entry of any new block.
Smart contracts' availability allows the execution of reliable, immutable, and trackable transactions without third parties [9–12]. Finally, it ensures that the blockchain network remains permanent and uneditable and that transactions remain protected against corruption and deletion. Therefore, continuing efforts are made to improve the election process and develop e-voting applications by leveraging blockchain's unique features, such as immutability, transparency, and anonymity, to provide a better voting environment. However, implementing and deploying a blockchain-based e-voting application is challenging because of the complexity of this technology, the lack of development experience, and the professional skills it requires of users. Thus, this study investigates the features needed to build blockchain-based e-voting applications, addressing the research question "Do we need all the blockchain features to build all different types of applications?" In other words, "What are the main features that make the voting process less complicated for development and adoption?" We build an e-voting application for municipal council elections based on Proof of Concept (PoC) to answer these questions. In some places, most municipal council elections are still held via a traditional mechanism that must deal with vulnerabilities related to fraud, security, and transparency. Therefore, this study's primary motivation was to fully utilize the blockchain features for improving municipal elections by creating an e-voting application. Overall, the key contributions of this study can be summarized as follows: 1. Highlighting the blockchain features needed to empower e-voting applications and maintain the fairness and fitness of the voting process. 2.
Designing a conceptual model of an e-voting application on the blockchain, with an application architecture and business process model, that worked successfully for developing an e-voting application based on PoC for municipal council elections using the Ethereum platform. 3. Developing an e-voting application based on PoC for municipal council elections using the Ethereum platform. 4. Evaluating the proposed application to ensure that it has utilized the required blockchain features of the e-voting process. 5. Producing a state-of-the-art working application with an Arabic interface. The rest of the paper is organized as follows. Section 2 provides an overview of blockchain technology and its features and platforms. Section 3 summarizes the work related to the blockchain features used to build an e-voting application. In Section 4, the current voting process used for municipal council elections is presented. Sections 5 and 6 introduce the proposed application and present its architecture, design, and implementation stages. Section 7 evaluates the proposed application and compares it against others. Section 8 contains the implementation, limitations, and future work. Section 9 presents the conclusion.

2 Blockchain Technology

The blockchain is a public distributed ledger that is shared and secure, wherein a growing list of transaction records (blocks) can be stored without being erased or changed. This technology was introduced in 2008 as a platform for a digital cryptocurrency called Bitcoin by Nakamoto. All computers or nodes that run blockchain's protocol store a copy of recorded transactions, enabling P2P transactions without an intermediary through machine consensus [9,10].

2.1 Blockchain Features

By nature, blockchain provides a secure platform that is composed of architectural elements and processes/logic. The architectural elements are two: decentralization and cryptographic hashes.
Decentralization means that no third party or authority is needed to authenticate the transactions. Thus, data are controlled by a decentralized network. The blockchain network operates on the peer-to-peer principle, providing better data transparency and traceability. Meanwhile, a cryptographic hash is an algorithm that takes an input and produces an output called a hash [14,15]. It is used for hashing the transactions and in some types of consensus algorithms, such as Proof of Work (PoW). Blockchain processes and logic comprise four processes: the consensus algorithm, smart contracts, data authentication, and digital signatures. A consensus algorithm is a decision-making process by which a group of active nodes reaches agreement quickly. However, the rules of consensus can be modified to fit several circumstances. In other words, consensus is similar to a voting system, where the majority wins and the minority has to support it. A smart contract is a digital contract designed to facilitate, validate, or execute the performance or negotiation of a contract. It allows the performance of reliable, immutable, and trackable transactions without third parties. Moreover, it is executed by nodes within the network; all nodes must derive the same execution results, and these results are recorded on the blockchain. Data authentication is a process of sending the real identity with a pseudonym to the authentication center to secure the pseudonym's signature. This process is useful for conferring everyone with a pseudonym. Finally, a digital signature is a process used to verify the data's authenticity. The users own a pair of private and public keys to access and sign the transaction. For example, when the sender signs the transaction, the hash is generated and encrypted using the private key (the sign phase).
Then, the receiver obtains the encrypted hash associated with the original data and validates the transaction by comparing the hash decrypted with the sender's public key against the hash computed from the received data. These architectural elements and processes of blockchain provide the following features: a) Immutability: This feature means something that cannot be changed or modified. It helps ensure that the blockchain network will remain permanent and unalterable, in addition to protecting all transactions against corruption and deletion. A piece of information could only be altered by changing it on every node in the blockchain network, which is practically impossible. b) Transparency and Data Integrity: This feature is automatic, whereby network nodes can check and trace the transaction. Thus, the transaction cannot be changed unless all network nodes reach a consensus on that precise change. c) Persistency: Because blocks must confirm and sign every transaction, any change to or misuse of a transaction becomes impossible. d) Anonymity: This feature is used to reduce the possibility of tracking the sender and the transaction recipient. e) Data Validation: This feature is the process of ensuring that the transactions are eligible. In the following parts of the paper, we will emphasize the importance of these features in developing a blockchain-based e-voting application. We will also state the way some of these features have been employed in the developed application.

2.2 Blockchain Platforms

Many platforms with different features are available for blockchain. Tab. 1 provides a comparison of these features. As mentioned previously in the blockchain technology section, the architectural elements and processes of blockchain provide many features. However, this study aims to highlight the features adequate to build an e-voting application that makes the public voting process easier, faster, and more transparent.
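The sign-and-verify flow described above can be sketched outside a blockchain with ordinary tooling. The following uses OpenSSL; the key files and the message string are illustrative assumptions, not artifacts of the paper's system:

```shell
# generate a key pair, sign a message's hash with the private key,
# then verify the signature with the corresponding public key
tmp=$(mktemp -d)
openssl genpkey -algorithm RSA -out "$tmp/priv.pem" 2>/dev/null
openssl pkey -in "$tmp/priv.pem" -pubout -out "$tmp/pub.pem"
printf 'vote:candidate-42\n' > "$tmp/msg.txt"
openssl dgst -sha256 -sign "$tmp/priv.pem" -out "$tmp/msg.sig" "$tmp/msg.txt"
openssl dgst -sha256 -verify "$tmp/pub.pem" -signature "$tmp/msg.sig" "$tmp/msg.txt"
# the last command prints "Verified OK" when the signature matches;
# altering a single byte of msg.txt would make verification fail
rm -rf "$tmp"
```

Blockchain platforms typically use elliptic-curve rather than RSA signatures, but the sign/verify structure is the same.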
3 Existing E-Voting Solutions

Two categories of voting systems have been addressed in prior studies: the direct recording electronic system (e-system) and the Internet voting system (I-voting). An e-voting system, rather than the traditional paper ballot system, is utilized in polling stations. Most e-voting systems that have been developed aim to reduce the cost of the election process and guarantee the election's integrity by fulfilling the requirements of security and privacy [21–23]. However, e-voting elections can be impacted by manipulation in some cases [22–24]. Although I-voting is better than the e-voting system in terms of specific security aspects, the I-voting system still has drawbacks concerning transparency, security, and credibility [21–25]. The I-voting system's centralization makes it vulnerable to attacks that imperil election results or voter information. Thus, the e-voting and I-voting systems cannot address stringent privacy and security requirements. All the previously proposed systems suffer from challenges or trade-offs between some blockchain features and others, such as low processing speed, voting time linearly proportional to the ring size [26,27], and centralized security servers. Thus, we attempt to build a more effective and efficient e-voting application by adding the features that more completely satisfy the e-voting process requirements and provide solutions to avoid those challenges in the implementation and deployment phases. For example, one study proposed using zero-knowledge proofs in an e-voting application. The system progresses through four stages and consists of two entities: the voter and the administrator. The first stage is the preparation, which consists of three phases. In the first phase, the voter creates an account and registers a bitcoin address to obtain the right to vote.
In the second phase, the voter receives the voting cost from the administrator at his or her bitcoin address, exchanges the received cost for a secure digital commitment of Zerocoin, and registers this commitment with the administrative system. Thereafter, the administrator announces the list of Zerocoin commitments. In the third phase, the voter can exchange the Zerocoin for Bitcoin and deposit the declared commitment in the zero-knowledge proof. In the voting stage, the voter performs the voting and initiates a commitment to prevent voting data leakage; the voter thus establishes a transaction using the open return part of the protocol. In the counting stage, the administrator checks all the transactions to identify valid Zerocoin commitments while exchanging Zerocoin for Bitcoin. In the publishing stage, the administrator counts the votes and publishes the results. Eliminating the traceable link between the votes and the voters is important. However, utilizing this system is difficult for an election with many voters because its processing speed is slow. Furthermore, another study proposed an e-voting system based on the Ethereum blockchain that uses a one-time ring signature mechanism to ensure that a voter with one key pair cannot sign a vote more than once. A particular group verifies the vote, and anyone who has the right to access the network can obtain the voting result without a third party, which reduces the election's cost. The ring signature is applied in the system to ensure that the relationships between voters and their ballots are not revealed. However, the time voters spend signing their ballots with a one-time ring signature scheme is linearly proportional to the ring size, which is considered an acceptable cost for maintaining voter anonymity.
Another study proposed a solution to the personal authentication problem by developing extensions to standard security protocols to include user privacy and anonymity. It relied on using crypto credentials as anonymous identities. These extended security protocols can enhance user privacy, anonymity, and security (authentication and validation of transactions) in blockchain applications. The protocols were implemented on a federated security architecture that contains secure proxy servers and security protocols. This solution's strengths reside in providing the highest level of assurance (level 4), keeping all sensitive data under strong protection, and ensuring user control over the data. However, the protocols still rely on centralized security servers. All the proposed systems suffer from challenges or trade-offs between blockchain features. Thus, we attempt to build a more effective and efficient e-voting application by adding the features that more completely satisfy the requirements of the e-voting process and by providing solutions that avoid challenges in the implementation and deployment phases.

4 Current Voting Process in Municipal Councils

Applicants for the country's administrative positions are assigned through local elections, in which all citizens participate in the decision making. Municipal council elections are conducted every four years, and the election process consists of five stages. The first stage involves submitting all the required information on the participating citizens. Electoral rolls are then published for corrections and contestations. Failure to register in the defined registration phase results in losing the right to candidacy or to voting on the voting days. The second stage is concurrent with the first and concerns finalizing the voter and candidate profiles. Thereafter, the third stage starts with the publication of the final candidate names.
A candidate cannot declare his or her candidacy or start an electoral campaign before the final list of candidate names is published. The last stage is the ballot day, or voting day, which represents the most crucial part of the whole election process; all the previous stages are preparatory to it. Voters select their preferred candidates, and the list of successful candidates and the final results are published upon completion of the vote-counting process. Fig. 1 shows the BPMN model of the current voting process.

5 Design of Proposed E-Voting Application Based on Blockchain

We build a blockchain-based e-voting application and add it as a new e-voting service to the election application, as a subsystem deployed as an Ethereum smart contract. This study identifies the features that can help build an e-voting application, as shown in Tab. 2. The proposed application consists of two different scenarios, namely, the ordinary user and the admin (Fig. 2). Initially, the user creates his or her voting document with the required inputs, such as name, national ID, phone number, and other mandatory personal details. Then, an admin validates these applications against the desired policy. Only valid users can complete the voting process, while the system synchronously creates the blockchain blocks. Figs. 1 and 2 show the difference between the voting process in the current and the proposed applications. The existing application lacks an e-voting service, so voters must go to the electoral centers to participate in the election and cast their votes. By contrast, the proposed application employs a private Ethereum blockchain to automate the voting process and provide a better voting environment by leveraging blockchain technology features. In the architecture and workflow of the proposed application, the voter logs in to the application by submitting ID information. If the voter is authorized, then Ethereum coins are released, and the voter can vote.
After voting, the application communicates with the smart contract to show the candidates' list and the total number of votes for each candidate (Fig. 3). Every voter (using a browser) connects with the application directly. The benefit of decentralization is that individuals do not need to rely on a single central server, which might disappear at any point. Any transaction performed in the Ethereum network is recorded into blocks, and every block is linked to the following block.

6 Implementation of the Proposed Application

The application was built with the following tools:

1. Node Package Manager (NPM): It allows downloading and using many free packages.
2. Truffle Framework: It provides a development environment, asset pipeline, and testing environment using the Ethereum virtual machine. It allows building distributed applications, offers different kits for writing smart contracts in the Solidity programming language, and has built-in smart contract compilation.
3. Ganache: It is a local in-memory blockchain that provides ten external accounts with associated addresses on the local Ethereum blockchain.
4. MetaMask: It provides an extension for Google Chrome to link to the local Ethereum network with a personal account and to interact with the smart contract we built.

The primary function of this application is voting, whereby the voters select a candidate and then pay some Ether to cast their votes. The application ensures that all voters are legal and authorized. It also prevents any voter from voting for more than one candidate, guarantees that all the votes are counted correctly, and ensures the confidentiality of voting, whereby no one can know which candidate the voter selected. The voting process in the application involves two main functions, which are discussed as follows. Adding the Council Candidates: Herein, the smart contract is first migrated to the local Ethereum blockchain (Ganache). Then, we declare the candidate and a function to add a candidate, using code obtained from the cited source. Fig.
4 shows the election page before the voting process. Casting a Vote: This function enables the voters to vote. It takes the candidate ID as an argument and adds the voter's account to the voters mapping to track it. Moreover, it applies conditions to ensure that the voter ID is valid and that the voter has not voted before. A voter who has sufficient funds can select a candidate by clicking the “Vote” button. When the voter selects the desired candidate, his or her vote is either confirmed or rejected by MetaMask (Figs. 5 and 6), and the funds of that account are decreased. If the voter tries to vote again, the application rejects the vote. The election page is updated with each candidate's results, as shown in Fig. 7, when the voting is completed successfully.

7 Evaluation of the Proposed Approach

Regarding the features used in the proposed e-voting system, in terms of anonymity (reducing the possibility of tracking the voters and their votes through the wallet address), we found that this approach does not provide a high degree of anonymity. Specifically, the voting process requires a coin wallet for each voter, and the voters then transfer their coins to the candidates of their choice. Similarly, a previous study proposed using zero-knowledge proof in the e-voting application to solve the problem of anonymity and privacy; however, as mentioned in the related work, utilizing this system with many voters is difficult because the processing speed is relatively slow. In terms of privacy and transparency, the voting process of the proposed system has high transparency, given that the application provides the voter with election results after each voting process. This step ensures that each voter's vote has been counted. However, data privacy tends to be lower because transparency and privacy are inversely related.
Using an intermediate unit between the voters and the candidates to convert the transferred coins into different coins, together with a one-time ring signature, would be a reasonable solution for anonymity and would make tracking the voters more difficult [26–32]. Tab. 3 compares this application and other blockchain voting applications. The current practice of physically conducting elections has some vulnerabilities related to security and transparency. Thus, the primary purpose of this study was to utilize blockchain technology to improve the mechanism for municipal elections through a suitable application [33–40]. A blockchain-based e-voting application was built to fully utilize the blockchain features and demonstrate the benefits of a blockchain-based application compared with the existing application. The application was designed and implemented using a private Ethereum blockchain. As mentioned previously, the Truffle Framework, Ganache, and the Solidity language were used as the development environment to build the application [41–48]. A block contains three data cells: (1) the block header, which contains information about the voter identity as a hash value produced by the PoW, the source IP address, the target IP address, and the voter actions; (2) the timestamp, which indicates the voting time; and (3) the transactions used for the voting process. During block validation, the transactional knowledgebase (TK) uses a private key (Pn) with the source IP address and the SHA-256 hash algorithm, as in Eq. 1. In the abovementioned equations, H and Vc denote the header and voter action, respectively; the miner node computes the PoW using Eqs. 2, 3, and 4. In Eq. 2, Mrk denotes the Merkle root, t is a timestamp, and N is the nonce generated by the PoW. The value of the PoW is saved to the distributed ledger nodes. This procedure is repeated to generate all validated blocks, producing a blockchain.
8 Conclusion

We proposed an e-voting blockchain application to improve the traditional voting process and make it more secure, tamper-resistant, cheaper, and faster. Studying the issues of previous voting systems confirmed the need for developing an improved, reliable e-voting application. The work sheds light on blockchain limitations and on features that enhance e-voting applications. We introduced a conceptual design for a blockchain e-voting application, which follows the PoC and PoW consensus algorithms. We described the architecture and verified the processes using the Ethereum platform with an Arabic-language interface, allowing hundreds of transactions to be processed at once using computational techniques. We successfully developed a blockchain e-voting application for the Arabic language based on the Ethereum platform that reduces processing, cost, and time with enhanced security. The developed application allows voters to select a candidate. The application ensures that all the voters are legal and authorized to be part of the election. It also prevents voters from voting for more than one candidate and ensures the confidentiality of voting, whereby no one can know which candidate the voter selected. The main limitation of this study is that the developed e-voting application is a PoC and is not linked to the official system of the elections' council. In other words, the proposed application is built as a private local blockchain and has been tested locally without real-time network deployment. The Quorum blockchain can be integrated with the Ethereum platform in future work to allow people to easily cast their votes via their smart devices. Blockchain will evolve with fog computing, smartphones, and other smart devices, and its limitations will also be reduced with artificial intelligence, IoT, teleportation, and 6G technology in the future.
Funding Statement: This work is funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (DF-618-165-1441). The authors, therefore, gratefully acknowledge DSR technical and financial support.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References
1. N. Weaver. (2016). “Secure the vote today,” [Online]. Available: https://www.lawfareblog.com/secure-vote-today.
2. S. Nakamoto. (2008). “Bitcoin: A peer-to-peer electronic cash system,” [Online]. Available: https://bitcoin.org/bitcoin.pdf.
3. A. Grech and A. F. Camilleri. (2017). Blockchain in Education. Luxembourg: Publications Office of the European Union.
4. A. Ekblaw, A. Azaria, J. D. Halamka and A. Lippman. (2016). “A case study for blockchain in healthcare: MedRec prototype for electronic health records and medical research data,” in Proc. OBD, Vienna, Austria, p. 13.
5. M. A. Walker, A. Dubey, A. Laszka and D. C. Schmidt. (2017). “Platibart: A platform for transactive IoT blockchain applications with repeatable testing,” in Proc. M4IoT, USA, pp. 17–22.
6. N. Hackius and M. Petersen. (2017). “Blockchain in logistics and supply chain: Trick or treat?,” in Proc. HICL, Berlin, Germany, pp. 3–18.
7. R. Osgood. (2016). “The future of democracy: Blockchain voting,” COMP116: Information Security, pp. 1–21.
8. G. Wood. (2014). “Ethereum: A secure decentralized generalized transaction ledger,” Ethereum Project Yellow Paper, vol. 151, no. 2014, pp. 1–32.
9. Z. Zheng, S. Xie, H. Dai, X. Chen and H. Wang. (2017). “An overview of blockchain technology: Architecture, consensus, and future trends,” in Proc. BigData Congress, USA, pp. 557–564.
10. Q. F. Hassan. (2018). “Blockchain-based security solutions for IoT systems,” in IEEE Internet of Things A to Z: Technologies and Applications, pp. 255–274.
11. S. Wang, Y. Yuan, X. Wang, J. Li and R. Qin. (2018).
“An overview of smart contract: Architecture, applications, and future trends,” in 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, pp. 108–113.
12. D. Yaga, P. Mell, N. Roby and K. Scarfone. (2019). Blockchain Technology Overview, Gaithersburg, MD: NIST, U.S. Department of Commerce.
13. S. Kawther, A. Wali, D. Alahmadi, A. Babour and F. Al Qahtani. (2019). “Building a blockchain application: A showcase for healthcare providers and insurance companies,” in Proc. FTC, San Francisco, CA, pp. 785–801.
14. Z. Zheng, S. Xie, H. N. Dai, X. Chen and H. Wang. (2018). “Blockchain challenges and opportunities: A survey,” International Journal of Web and Grid Services, vol. 14, no. 4, pp. 352–375.
15. A. Alketbi, Q. Nasir and M. A. Talib. (2018). “Blockchain for government services—Use cases, security benefits, and challenges,” in Proc. L&T, KSA, pp. 112–119.
16. S. Wu and D. Galindo. (2018). “Evaluation and improvement of two blockchain-based e-voting systems: Agora and Proof of Vote (Master's thesis),” Birmingham University, Birmingham, UK.
17. T.-T. Kuo, H. Z. Rojas and L. Ohno-Machado. (2019). “Comparison of blockchain platforms: A systematic review and healthcare examples,” Journal of the American Medical Informatics Association, vol. 26, no. 5, pp. 462–478.
18. D. Vujičić, D. Jagodić and S. Ranđić. (2018). “Blockchain technology, bitcoin, and ethereum: A brief overview,” in 17th Int. Sym. Infoteh-Jahorina (INFOTEH), East Sarajevo, pp. 1–6.
19. Etherscan. (2020). “The Ethereum blockchain explorer,” [Online]. Available: https://etherscan.io/.
20. J. Filiba. (2017). “Ethereum breaks one million transactions in a single day,” Coinsquare News, [Online]. Available: https://news.coinsquare.com/digital-currency/ethereum-one-million-transaction-day/.
21. A. B. Ayed. (2017). “A conceptual secure blockchain-based electronic voting system,” International Journal of Network Security & Its Applications, vol. 9, no. 3, pp. 1–9.
22. S. M. Anggriane, S. M. Nasution and F. Azmi. (2016).
“Advanced e-voting system using Paillier homomorphic encryption algorithm,” in Proc. ICIC, Indonesia, pp. 338–342.
23. California Secretary of State. (2007). “Top-to-bottom review,” [Online]. Available: http://www.sos.ca.gov/elections/voting-systems/oversight/top-bottom-review/.
24. P. Paillier. (1999). “Public-key cryptosystems based on composite degree residuosity classes,” in Proc. Eurocrypt, Prague, pp. 223–238.
25. Ministry of Local Government and Modernisation. (2014). “Internet voting pilot to be discontinued,” Government.no, [Online]. Available: https://www.regjeringen.no/en/aktuelt/Internet-voting-pilot-tobediscontinued/id764300/.
26. Y. Takabatake, D. Kotani and Y. Okabe. (2016). “An anonymous distributed electronic voting system using Zerocoin,” IEICE Technical Report, vol. 54, no. 11, pp. 127–131.
27. W. J. Lai, Y. C. Hsieh, C. W. Hsueh and J. L. Wu. (2018). “Date: A decentralized, anonymous, and transparent e-voting system,” in Proc. HotICN, China, pp. 24–29.
28. N. bin Abdullah and S. Muftic. (2015). “Security protocols with privacy and anonymity of users,” Universal Journal of Communications and Networks, vol. 3, no. 4, pp. 89–98.
29. S. E-Government. (2029). “Saudi—National Portal—Elections in the Kingdom of Saudi Arabia,” [Online]. Available: https://www.saudi.gov.sa/wps/portal/snp/pages/electionsInTheKingdomOfSaudiArabia.
30. S. Pareek, A. Upadhyay, S. Doulani, S. Tyagi and A. Varma. (2018). “E-voting using ethereum blockchain,” International Journal for Research Trends and Innovation, vol. 3, no. 11, pp. 30–34.
31. G. McCubbin. (2018). “The ultimate ethereum dapp tutorial, how to build a full stack decentralized application step-by-step,” [Online]. Available: http://www.dappuniversity.com/articles/the-ultimate-ethereum-dapp-tutorial.
32. W. Lai and J. Wu. (2018). “An efficient and effective decentralized anonymous voting system,” arXiv preprint arXiv:1804.06674.
33. Y. Liu and Q. Wang. (2017).
“An e-voting protocol based on blockchain,” IACR Cryptology ePrint Archive, vol. 2017, p. 1043.
34. R. Hanifatunnisa and B. Rahardjo. (2017). “Blockchain-based e-voting recording system design,” in Proc. TSSA, Indonesia, pp. 1–6.
35. F. S. Hardwick, A. Gioulis, R. N. Akram and K. Markantonakis. (2018). “E-voting with blockchain: An e-voting protocol with decentralization and voter privacy,” in Proc. iThings and GreenCom and CPSCom and SmartData, Canada, pp. 1561–1567.
36. F. R. Batubara, J. Ubacht and M. Janssen. (2018). “Challenges of blockchain technology adoption for e-government: A systematic literature review,” in Proc. DG.O, Netherlands, p. 76.
37. H. V. Patil, K. G. Rathi and M. V. Tribhuwan. (2018). “A study on decentralized e-voting system using blockchain technology,” International Research Journal of Engineering and Technology (IRJET), vol. 5, no. 11, pp. 48–53.
38. C. Sullivan and E. Burger. (2017). “E-residency and blockchain,” Computer Law & Security Review, vol. 33, no. 4, pp. 470–481.
39. S. Ølnes and A. Jansen. (2017). “Blockchain technology as a support infrastructure in e-government,” in Proc. ICEG, Russia, pp. 215–227.
40. M. Sharples and J. Domingue. (2016). “The blockchain and kudos: A distributed system for the educational record, reputation and reward,” in Proc. EC-TEL, France, pp. 490–496.
41. S. Ølnes. (2016). “Beyond bitcoin enabling smart government using blockchain technology,” in Proc. ICEG, Portugal, pp. 253–264.
42. N. Kshetri and J. Voas. (2018). “Blockchain-enabled e-voting,” IEEE Software, vol. 35, no. 4, pp. 95–99.
43. A. A. Mutlag, M. Khanapi Abd Ghani, M. A. Mohammed, M. S. Maashi, O. Mohd et al. (2020). “MAFC: Multi-agent fog computing model for healthcare critical tasks management,” Sensors, vol. 20, no. 7, p. 1853.
44. R. Krishnamurthy, G. Rathee and N. Jaglan. (2020). “An enhanced security mechanism through blockchain for e-polling/counting process using IoT devices,” Wireless Networks, vol. 26, no. 4, pp. 2391–2402.
45.
S. A. Mostafa, S. S. Gunasekaran, A. Mustapha, M. A. Mohammed and W. M. Abduallah. (2019). “Modelling an adjustable autonomous multi-agent internet of things system for elderly smart home,” in Proc. AHFE, Washington, pp. 301–311.
46. O. A. Mahdi, Y. R. B. Al-Mayouf, A. B. Ghazi, A. W. A. Wahab and M. Y. I. B. Idris. (2018). “An energy-aware and load-balancing routing scheme for wireless sensor networks,” Indonesian Journal of Electrical Engineering and Computer Science, vol. 12, no. 3, pp. 1312–1319.
47. K. H. Abdulkareem, M. A. Mohammed, S. S. Gunasekaran, M. N. Al-Mhiqani, A. A. Mutlag et al. (2019). “A review of fog computing and machine learning: Concepts, applications, challenges, and open issues,” IEEE Access, vol. 7, pp. 153123–153140.
48. A. A. Mutlag, M. K. Abd Ghani, N. Arunkumar, M. A. Mohammed and O. Mohd. (2019). “Enabling technologies for fog computing in healthcare IoT systems,” Future Generation Computer Systems, vol. 90, pp. 62–78.
This invention relates to a pair of ski brakes and a pair of skis, which ski brakes, when mounted on the skis, hold the skis together by means of braking arms when the running surfaces of the skis of said pair are brought into contact with each other, wherein the braking arms of at least one brake are each provided on the inside with a shoulder for overlying edges of the respective other ski. From US-A-4,213,629 a ski brake mechanism is known having braking arms on which a structure is provided for facilitating a connection of two skis together in a position wherein the running surfaces thereof engage one another at the tips and tails thereof. The structure on the arms of the ski brake is a projection which is received into a crevasse or notch on the opposite side thereof. In previously known interlocking braking structures, attempts have been made to provide ski brakes characterized by a certain elasticity relative to the transverse movement of their braking arms with respect to the skis' longitudinal axis. However, this approach has had the disadvantage that the braking arms of the ski brakes on adjacent skis tend to abrade the skis' edges as the braking arms are moved from their standby position to their operative position, and vice versa. The resulting abrasion is naturally undesirable for a number of reasons, including the fact that the skis suffer damage as a consequence thereof. Another disadvantage of such an approach resides in the fact that the skis must be offset along their longitudinal axis to accommodate their interlocking, a disposition that interferes with the skis' encasement for storage or transportation. Ski brakes capable of interlocking skis are also known, for example, from US-A-4,062,553.
Because the braking arms of such brakes desirably should not protrude from the sides of the skis during skiing, that is, their "overhang" should be minimized, and since the width of each ski of a pair is identical, a problem has heretofore existed as to how skis can be interlocked and held together with adequate reliability and without interference of their braking arms when the skis are placed next to each other. It is the object of the invention to provide an arrangement for holding skis together in pairs without objectionable offset along their longitudinal axis. This object is solved by a pair of ski brakes and a pair of skis according to the first part of the main claim, which is modified according to the features of the characterizing part of the main claim. According to this inventive solution, the shoulders of said braking arms of said one brake can be held by said overlying braking arms, against the force of a spring, in engagement with the edges of the ski which carries the other brake when the skis contact each other. In view of the preceding, therefore, it is a first aspect of this invention to hold skis together in interlocked pairs. A second aspect of this invention is to allow skis of a pair to be held together by means of their attached ski brakes. Another aspect of this invention is to allow skis arranged in pairs to be held together without additional components or cost. Yet another aspect of this invention is to hold skis of a pair together without damaging the edges thereof due to abrasion of the ski edges by the braking arms of the ski brakes.
The foregoing and other aspects of the invention are provided by a ski brake that can cooperate with another such brake to interlock skis of a pair provided with such brakes, comprising: two shaped members; spring means; and a base plate, wherein said shaped members include an actuator arm portion and a braking arm portion and are rotatably mounted in said base plate, the free ends of said actuator arm portions being connected by said spring means, and said braking arm portions being provided with engaging means adapted to lockingly engage skis, whereby when the running surfaces of the skis of said pair are brought into contact with each other, the braking arm portions of one of said ski brakes mounted on a first of said skis overlie the braking arm portions of another of said ski brakes mounted on a second of said skis, causing the engaging means of the ski brake on said second ski to lockingly engage the first ski. The invention will be better understood when reference is had to the following drawings, in which like numbers refer to like parts, and in which: FIG. 1 is a partial side elevation of two skis held together by the improved ski brakes of the invention. FIG. 2 is a cross-section of two skis held together by ski brakes of the invention, taken through the point of attachment of the ski brakes to the skis. The interlocking of skis according to the invention is accomplished by providing the braking arms of the ski braking mechanisms on at least one of the skis with locking shoulders. During the interlocking action, the shoulders on the braking arms of the ski brake of one of the skis are brought into engaging contact with the edges of the other ski, interlocking the two skis together.
In their interlocked position, the braking arms of the brake on a first of the skis overlie the braking arms of the brake on the second ski, forcing the braking arms of the second ski against the first ski in a position in which the shoulders on the braking arms of the second ski brake are brought into locking contact with the edges of the running surfaces of the first ski. In carrying out the interlocking procedure, described more fully in the following, the running surfaces of the skis are placed together so that the somewhat divergent braking arms of the ski brake that are not intended to interlock with a ski will overlie and press against the braking arms of the ski brake that is intended to interlock with a ski. This causes the shoulders of the latter braking arms to engage and press against the edges of the latter ski, holding the two skis securely together with no longitudinal offset. The interlocked condition described permits the skis to be readily transported or stored, for example, in carrying bags, boxes and the like, when the skis are not in use. The shoulders of the braking arms described are desirably fabricated at an oblique angle, relative to the longitudinal axis of the braking arms, so that the shoulders are positioned substantially parallel to the longitudinal axis of the ski when the braking arms are disposed in their active, or braking mode. This position allows the shoulders to be disposed substantially parallel to the edges of the adjacent ski, permitting the shoulders to engage the ski edges contiguously in the desired interlocking position. Figure 1 illustrates a partial side elevation of two skis that are held together by the ski brakes of the invention, while Figure 2 shows a cross-section of two skis held together by the ski brakes of the invention, the cross-section being taken through the point of attachment of the ski brakes to the skis. 
In the Figures, ski brakes 1 and 2 are mounted respectively on skis 3 and 4, by means of base plates 5 and 6, respectively. If desired, the base plates may also have ski binding parts attached thereto. Each of the ski brakes shown includes two shaped members, 7 and 8, disposed in a position exhibiting mirror image symmetry with respect to each other. The shaped members are typically fabricated from round wire or rods configured in the shape that can better be seen in Fig. 2. In effect, the shaped members 7 and 8 function as two-armed lever members that revolve about their intermediate portions 9 and 10, respectively. Portions 11 and 12 of the shaped members comprise the actuating arms of the ski brakes, and portions 13 and 14 their braking arms. Each of the braking arms 13 and 14 is provided with a sheath, desirably fabricated from plastic, 15 and 16 respectively. The free ends of the actuating arms 11 and 12 are angled toward each other and interconnected by a coil spring 17. Each of the sheaths is provided with locking shoulders 18 and 19, for respectively engaging edges 20 and 21 of a ski. When the ski brakes 1 and 2 are in their operative braking position, the coil spring urges the free ends of the actuating arms 11 and 12 together, causing the braking arms 13 and 14 to diverge, as is shown with respect to the ski brake of the lower ski 4 of Fig. 2. When the brake is moved to its inactive, or standby, position, the braking arms 13 and 14 are retracted toward each other against the force of the coil spring 17, the change being effected without contact between the braking arms and the sides of the skis. As shown in Fig. 2, the braking arms 13 and 14 of the upper ski brake 1 have been forced into an innermost position by the pressure of the overlying braking arms of the ski brake of ski 4, which are in their outermost position.
In such innermost position, the shoulders 18 and 19 of ski brake 1 are brought into locking engagement, respectively, with edges 20 and 21 of ski 4, securely holding the skis 3 and 4 to each other. When the ski brakes are in their operative braking position, the braking arms assume a position intermediate between the innermost and outermost positions described in connection with Fig. 2. In such intermediate position, sheaths 15 and 16 are urged apart by spring 17 so that shoulders 18 and 19 are disposed spaced from the edges of the ski, allowing the braking arms to be moved freely from their operative, braking position to their standby position. In effecting interlocking of the skis, the running surfaces of the skis are placed face-to-face with each other, and the ski whose edges are to be engaged is moved slightly longitudinally relative to the other ski, in the direction of the front of the skis. The moved ski is then moved backward toward the rear of the skis, causing the braking arms of the moved ski to overlie the braking arms of the other, stationary ski. The force thus generated by the overlying braking arms on the braking arms of the stationary ski causes the shoulders 18 and 19 of the braking arms of the stationary ski to move to their innermost position and to engage edges 20 and 21 of the moved ski, resulting in the skis being interlocked together. When the running surfaces of the skis are juxtaposed to each other in the position described, the camber of the skis will cause the running surfaces of the skis to be spaced apart adjacent to their ski brakes, as shown in Fig. 2. As a consequence, and although more than one shoulder can be provided in each of sheaths 15 and 16 in order to accommodate the interlocking of skis of different thicknesses, multiple shoulders are not necessary when the distance between the running surfaces of the skis can be sufficiently varied by forcing the skis together against their somewhat elastic camber.
While the invention has been disclosed in relation to the locking engagement between the shoulders of the sheaths and the protruding steel edges 20 and 21, other edges can be used for the interlocking, for example, top edges provided in the skis, or the edges of a plate mounted on the skis.
Q: Why do the conversion specifiers, %o and %x, work differently for printf() and scanf() in C? I am learning C from the book "C Primer Plus" by Stephen Prata. In chapter 4, the author states that in printf(), %o and %x denote unsigned octal integers and unsigned hexadecimal integers respectively, but in scanf(), %o and %x interpret signed octal integers and signed hexadecimal integers respectively. Why is this so? I wrote the following program in VS 2015 to check the author's statement:

#include <stdio.h>

#pragma warning(disable : 4996)

int main(void)
{
    int a, b, c;
    printf("Enter number: ");
    scanf("%x %x", &a, &b);
    c = a + b;
    printf("Answer = %x\n", c);
    while (getchar() != EOF)
        getchar();
    return 0;
}

The code confirmed the author's claim. If the input was a pair of integers where the absolute value of the positive integer was bigger than the absolute value of the negative integer, then everything worked fine. But if the input was a pair of integers where the absolute value of the positive integer was smaller than the absolute value of the negative integer, then the output was what you would expect from unsigned two's-complement arithmetic. For example:

Enter number: -5 6
Answer = 1

and

Enter number: -6 5
Answer = ffffffff

A: The C standard says that for printf-like functions (7.21.6.1 fprintf):

o,u,x,X The unsigned int argument is converted to unsigned octal (o), unsigned decimal (u), or unsigned hexadecimal notation (x or X)

While for scanf-like functions it says (7.21.6.2 fscanf):

x Matches an optionally signed hexadecimal integer, whose format is the same as expected for the subject sequence of the strtoul function with the value 16 for the base argument. The corresponding argument shall be a pointer to unsigned integer.

So as an extra feature, you can write a negative hex number and scanf will convert it to the corresponding unsigned number in the system's format (two's complement).
For example:

unsigned int x;
scanf("%x", &x); // enter -1
printf("%x", x); // will print ffffffff

Why they felt scanf needed this mildly useful feature, I have no idea. Perhaps it is there for consistency with other conversion specifiers. However, the book seems to be using the function incorrectly, since the standard explicitly states that you must pass a pointer to unsigned int. If you pass a pointer to a signed int, as the program above does, you are formally invoking undefined behavior.
King Size Snickers are on their way out: Mars Inc., the company that makes chocolate candy bars like Twix and Milky Way, plans to halt production of any chocolate product with more than 250 calories per serving by the end of 2013. Spokeswoman Marlene Machut said the plan to stop shipping any chocolate product that exceeded 250 calories per portion by the end of 2013 was part of Mars’ “broad-based commitment to health and nutrition.” As part of a “broader push for responsible snacking,” the company has also vowed to reduce sodium by 25 percent in all its products by 2015.
Q: Intuition behind "kernel is trivial implies homomorphism is injective". Let $G$ and $H$ be groups, and let $\pi : G \rightarrow H$ be a homomorphism. If $\pi$ is injective, then $\ker(\pi) = \{e_G\}$. If my understanding is correct, this is because identities are mapped to identities by a homomorphism, and if $\pi$ is injective, there is only one element in the preimage of $\{e_H\}$, namely $e_G$. Is there a similar intuition to explain why, if $\ker(\pi) = \{e_G\}$, then $\pi$ is injective? I can prove this symbolically, but I feel like I have no clue why it should be true. A: It should not be surprising that if a homomorphism $\phi$ is injective, then its kernel is trivial. After all, injectivity requires that the preimage of every element of the image be unique. (And the homomorphism property requires that the preimage of the identity contain the identity, in particular.) It is the converse -- that triviality of the kernel is sufficient for injectivity -- that is less obvious. Mark Bennett has given the core idea: symbolically, if $\phi(a)$ equals $\phi(b)$, then their (group) inverses are equal too, and thus $$\phi(ab^{-1})=\phi(a)\phi(b^{-1})=\phi(a)\phi(b)^{-1}=e$$ The first and second steps of this calculation work only because of the homomorphism property of $\phi$. Now, if the kernel is trivial, we conclude that $ab^{-1}=e$, so $a=b$, and injectivity follows. Here is a nonstandard but alternative way to think about this result; it is good for intuition and illustrates another Big Idea. Say a function $f$ is injective at $y$, an element of the image, if the preimage $f^{-1}(y)$ has only one element. This is a "local" property; a function might be injective at $y_1$ but not at $y_2$. For general functions between sets, injectivity at a point is not sufficient to deduce global injectivity -- that is, injectivity at every point.
But if $f$ is a group homomorphism, not just a bare function between sets, then a special type of local injectivity, namely injectivity at the identity element, is sufficient to deduce global injectivity. The extra structure provided by the homomorphism property and the definition of the identity allow us to parlay a local phenomenon into a global one. This is a taste of a general class of "local-to-global" results that show how local phenomena can be used to deduce global structure. A: If two elements of $G$ map to the same thing, their ratio maps to the identity, and is therefore in the kernel. This ratio is the identity only if the two elements are equal.
Q: How to remove a pair from a dictionary for a specified key? I want to write a function with a parameter that is compared against the keys of a dictionary. The function iterates over a collection and checks whether each case has a pair with this key. If it does, I want to remove that pair, leave the others in that case, and move on to the next case. I've created a function filterAndExtract(). However, it only iterates and does nothing: comparing the parameter with the keys in each case doesn't work as expected. I want to know how to identify a key in a pair, so I can do stuff with the cases in the collection. Thanks in advance!

enum Tags: String {
    case one = "One"
    case two = "Two"
    case three = "Three"
}

struct Example {
    var title: String
    var pair: [Tags: String]
}

let cases = [
    Example(title: "Random example One", pair: [Tags.one: "First preview", Tags.two: "Second preview"]),
    Example(title: "Random example Two", pair: [Tags.two: "Third preview", Tags.three: "Fourth preview"]),
    Example(title: "Random example Three", pair: [Tags.three: "Fifth preview", Tags.one: "Sixth preview"])
]

func filterAndExtract(collection: [Example], tag: Tags) {
    for var item in collection {
        let keys = item.pair.keys
        for key in keys {
            if key == tag {
                item.pair.removeValue(forKey: key)
            }
        }
    }
    for i in collection {
        print("\(i.title) and \(i.pair.values) \nNEXT TURN--------------------------------------------------\n")
    }
}

//Results:
//Random example One and ["Second preview", "First preview"]
//NEXT TURN--------------------------------------------------
//Random example Two and ["Third preview", "Fourth preview"]
//NEXT TURN--------------------------------------------------
//Random example Three and ["Fifth preview", "Sixth preview"]
//NEXT TURN--------------------------------------------------

//Solution (how I want it to look at the end):
for var i in cases {
    i.pair.removeValue(forKey: .three)
    print("\(i.title) and \(i.pair.values) \nNEXT TURN--------------------------------------------------\n")
}
//Random example One and ["Second preview", "First preview"]
//NEXT TURN--------------------------------------------------
//Random example Two and ["Third preview"]
//NEXT TURN--------------------------------------------------
//Random example Three and ["Sixth preview"]
//NEXT TURN--------------------------------------------------

A: Swift collections are value types: assigning a collection, or one of its elements, to a variable gives you a copy of the value. The loop variable `item` in your function is such a copy, so `item.pair.removeValue(forKey:)` mutates the copy and leaves the collection untouched. To modify the passed-in collection, make a mutable copy of the parameter and mutate its elements in place through their indices:

func filterAndExtract(collection: [Example], tag: Tags) {
    var collection = collection
    for (index, item) in collection.enumerated() {
        let keys = item.pair.keys
        for key in keys {
            if key == tag {
                collection[index].pair.removeValue(forKey: key)
            }
        }
    }
    for i in collection {
        print("\(i.title) and \(i.pair.values) \nNEXT TURN--------------------------------------------------\n")
    }
}