DETAILED DESCRIPTION

Hereinafter, exemplary implementations of the present disclosure will be described in detail with reference to the accompanying drawings. The disclosure may, however, be implemented in many different forms and should not be construed as being limited to the implementations set forth herein; rather, alternative implementations falling within the spirit and scope of the present disclosure can readily be derived through additions, alterations, and omissions, and the implementations set forth herein will fully convey the concept of the disclosure to those skilled in the art.

FIG. 1 is a perspective view of a refrigerator according to an implementation of the present disclosure. And FIG. 2 is a front view illustrating a state in which all doors of the refrigerator are opened. And FIG. 3 is a perspective view illustrating a state in which a sub-door of the refrigerator is opened.

As illustrated in the drawings, an external appearance of a refrigerator 1 according to an implementation of the present disclosure may be formed by a cabinet 10 which forms a storage space and a door which opens and closes the storage space. An inside of the cabinet 10 may be divided into upper and lower portions by a barrier 11, and a refrigerator compartment 12 may be formed at an upper portion of the cabinet 10, and a freezer compartment 13 may be formed at a lower portion of the cabinet 10. And various accommodation members 121 such as a shelf, a drawer and a basket may be provided inside the refrigerator compartment 12. If necessary, the accommodation members 121 may be inserted and withdrawn while the door is opened, and food may be accommodated and stored by inserting and withdrawing them.

A main lighting unit 85 which illuminates the refrigerator compartment 12 may be provided at the refrigerator compartment 12. The main lighting unit 85 may also be disposed at the freezer compartment 13, and may also be disposed at any position on an inner wall surface of the refrigerator 1. A drawer type freezer compartment accommodation member 131 which is inserted and withdrawn may be mainly disposed inside the freezer compartment 13. The freezer compartment accommodation member 131 may be formed to be inserted and withdrawn, interlocking with opening of a freezer compartment door 30. And a first detection device 31 which detects a user's body may be provided at a front surface of the freezer compartment door 30. The first detection device 31 will be described in detail below.

The door may include a refrigerator compartment door 20 and the freezer compartment door 30. The refrigerator compartment door 20 serves to open and close an opened front surface of the refrigerator compartment 12 by rotation, and the freezer compartment door 30 serves to open and close an opened front surface of the freezer compartment 13 by rotation. One pair of refrigerator compartment doors 20 and one pair of freezer compartment doors 30 may be provided left and right to shield the refrigerator compartment 12 and the freezer compartment 13, respectively. A plurality of door baskets may be provided at the refrigerator compartment door 20 and the freezer compartment door 30. The door baskets may be provided so as not to interfere with the accommodation members 121 and 131 while the refrigerator compartment door 20 and the freezer compartment door 30 are closed. The refrigerator compartment door 20 and the freezer compartment door 30 may form an entire exterior when seen from the front.
The exterior of each of the refrigerator compartment door 20 and the freezer compartment door 30 may be formed of a metallic material, and the entire refrigerator 1 may have a metallic texture. In some cases, a dispenser which dispenses water or ice may be provided at the refrigerator compartment door 20. While the implementations described in this application may refer to an example in which a French type door opening and closing one space by rotating one pair of doors is applied to a bottom freezer type refrigerator having the freezer compartment provided at a lower side thereof, the present disclosure may be applied to all types of refrigerators having a door.

In some cases, a right one (in FIG. 1) of the pair of refrigerator compartment doors 20 may be formed to be doubly opened and closed. Specifically, the right refrigerator compartment door 20 may include a main door 40 which may be formed of the metallic material to open and close the refrigerator compartment 12, and a sub-door 50 which may be rotatably disposed inside the main door 40 to open and close an opening of the main door 40. The main door 40 may be formed to have the same size as that of a left one (in FIG. 1) of the pair of refrigerator compartment doors 20, may be rotatably installed at the cabinet 10 by a main hinge 401 and a middle hinge 402, and thus may open and close a part of the refrigerator compartment 12.

An opening part 403 may be formed at the main door 40. A door basket 404 may be installed at a rear surface of the main door 40 including an inside of the opening part 403. Therefore, a user may have access to the door basket 404 through the opening part 403 without opening the main door 40. A size of the opening part 403 may correspond to most of a front surface of the main door 40 except, for example, a part of a perimeter of the main door 40.

The sub-door 50 may be rotatably installed inside the opening part 403, and may open and close the opening part 403. At least a part of the sub-door 50 may be formed of a transparent material like glass. Therefore, access to the opening part 403 can be allowed through opening of the sub-door 50, and even while the sub-door 50 is closed, it is also possible to see into the opening part 403. The sub-door 50 may be referred to as a see-through door. In some cases, the glass material forming the sub-door 50 may be formed to be selectively changed into a transparent or opaque state by controlling a light transmittance and a reflectivity thereof according to a user's operation. Therefore, the glass material can become transparent so that an inside of the refrigerator 1 is visible only when the user wants, and otherwise can be maintained in the opaque state.

FIG. 4 is a front view illustrating a state in which the sub-door is opaque.

As illustrated in the drawing, when no operation is input to the refrigerator 1 while both the main door 40 and the sub-door 50 are closed, the sub-door 50 may have an opaque black color or may be in a state like a mirror surface. Therefore, the sub-door 50 may not enable an internal space of the sub-door 50, i.e., an accommodation space of the main door 40 and an internal space of the refrigerator compartment 12, to be visible. Therefore, the sub-door 50 may be maintained in a state having the black color, and thus may provide a beautiful and simple exterior having a mirror-like texture to the refrigerator 1.
Also, the exterior may harmonize with the metallic texture of the main door 40, the refrigerator compartment door 20 and the freezer compartment door 30, and thus may provide a more luxurious appearance.

FIG. 5 is a front view illustrating a state in which the sub-door is transparent.

As illustrated in the drawing, in a state in which both the main door 40 and the sub-door 50 are closed, the sub-door 50 may be made transparent by a certain user operation. When the sub-door 50 is in the transparent state, the accommodation space of the main door 40 and the internal space of the refrigerator compartment 12 may be visible. Therefore, the user may confirm an accommodation state of food in the accommodation space of the main door 40 and the internal space of the refrigerator compartment 12 without opening the main door 40 and the sub-door 50. Also, when the sub-door 50 is in the transparent state, a display unit 60 disposed at a rear of the sub-door 50 may be in a visible state, and an operation state of the refrigerator 1 may be displayed to the outside. An exemplary operating method and configuration for enabling the accommodation space of the main door 40 and the internal space of the refrigerator compartment 12 to be visible will be described below in detail.

FIG. 6 is a perspective view illustrating a state in which the main door and the sub-door of the refrigerator are coupled to each other. And FIG. 7 is an exploded perspective view illustrating a state in which the main door and the sub-door are separated. And FIG. 8 is an exploded perspective view of the main door.

As illustrated in the drawings, an external appearance of the main door 40 may be formed by an outer plate 41 which may be formed of a metallic material, a door liner 42 which is coupled to the outer plate 41, and door cap decorations 45 and 46 which are provided at upper and lower ends of the outer plate 41 and the door liner 42. The outer plate 41 may be formed of a plate-shaped stainless material, and may be formed to be bent and thus to form a part of a front surface and a perimeter surface of the main door 40. The door liner 42 may be injection-molded with a plastic material, and forms the rear surface of the main door 40. And the door liner 42 may also be formed so that an area thereof corresponding to the opening part 403 is opened. The opening part 403 may have a plurality of uneven structures so that the door basket 404 can be installed. A rear gasket 44 may be provided at a perimeter of a rear surface of the door liner 42. The rear gasket 44 is in close contact with a perimeter of the cabinet 10, and prevents a leak of cooling air between the main door 40 and the cabinet 10.

In some cases, a door lighting unit 49 which illuminates the inside of the opening part 403 may be provided at an upper surface of the door liner 42. The door lighting unit 49 may emit light downward from an upper side of the opening part 403, and thus may illuminate the entire opening part 403 including the door basket 404, and may also enable the sub-door 50 to be in the transparent state. The cap decorations 45 and 46 may form an upper surface and a lower surface of the main door 40, and a hinge installation part 451 which enables the main door 40 to be rotatably installed at the cabinet 10 may be formed at each of the cap decorations 45 and 46. An upper end of the main door 40 may be coupled to the main hinge 401, and a lower end of the main door 40 may be coupled to the middle hinge 402, and thus the upper and lower ends of the main door 40 may be rotatably supported.
A door handle 462 may be formed to be recessed from the lower surface of the main door 40, i.e., the cap decoration 46. For example, the user may put a hand into the door handle 462, may rotate the main door 40, and thus may open and close the refrigerator compartment 12. In some cases, a door frame 43 may be further provided between the outer plate 41 and the door liner 42. The door frame 43 may be coupled between the outer plate 41 and the door liner 42, and may form a perimeter of the opening part 403. In a state in which the outer plate 41, the door liner 42, the door frame 43, and the cap decorations 45 and 46 are coupled with each other, a foaming solution may be filled inside an internal space of the main door 40, and thus an insulation may be formed therein. That is, the insulation may be disposed at a perimeter area of the opening part 403, and thus isolate a space inside the refrigerator 1 from a space outside the refrigerator 1. The door frame 43 may be injection-molded with a plastic material which is different from that of the door liner 42. In some cases, the door frame 43 may be integrally formed with the door liner 42, and may be directly coupled to the outer plate 41.

A frame stepped part 431 which protrudes inward may be formed at an inner surface of the door frame 43. Therefore, when the sub-door 50 is closed, the frame stepped part 431 may support the sub-door 50. A front gasket 434 may be provided at the frame stepped part 431. The front gasket 434 may be in contact with a rear surface of the sub-door 50 when the sub-door 50 is closed to thereby provide a seal between the main door 40 and the sub-door 50. Of course, the front gasket 434 may be omitted in some cases. Also, the front gasket 434 may be formed in a sheet shape formed of a metallic material, and may be held in close contact, by magnetic force, with a sub-door gasket 591 having a magnetic force. A frame heater 4321 may be provided at a rear surface of the frame stepped part 431. The frame heater 4321 is disposed along the frame stepped part 431, and heats the frame stepped part 431. The frame stepped part 431 may have a relatively low surface temperature due to an influence of cooling air in the refrigerator 1. Therefore, dew condensation may occur on a surface of the frame stepped part 431. The dew condensation may be prevented by driving of the frame heater 4321.

A hinge hole 433 in which each of sub-hinges 51 and 52 for installing the sub-door 50 is installed may be formed at each of both sides of the door frame 43. The hinge hole 433 may be formed at a position which faces a side surface of the sub-door 50, and is also formed so that each of the sub-hinges 51 and 52 is inserted therein. In some cases, a hinge case 47 may be provided at the inner surface of the door frame 43 (which is in contact with the insulation) corresponding to the hinge hole 433. The hinge case 47 may be formed by vertically coupling a first case 471 and a second case 472 to each other. The hinge case 47 can form a space which rotatably accommodates a part of each of the sub-hinges 51 and 52 inserted through the hinge hole 433 when the first case 471 and the second case 472 are coupled to each other. A hinge installation member 473 may be provided at a recessed space of the hinge case 47. The hinge installation member 473 may be fixed by the coupling of the first case 471 and the second case 472. The hinge installation member 473 may be formed of a steel material, and may have a shaft insertion part 4731 in which a hinge shaft of each of the sub-hinges 51 and 52 is inserted.
The hinge case 47 may be installed at the hinge hole 433 which may be formed at each of upper and lower portions of the door frame 43. And the hinge cases 47 which are disposed up and down may be formed to have the same structure and shape. In some cases, a hinge frame 48 may be provided at an outside of the door frame 43. The hinge frame 48 may be formed to extend vertically, and fixes the hinge cases 47 which are disposed up and down. For instance, the hinge frame 48 may be formed of a metallic material or a plastic material having excellent strength, may be formed in a plate shape, and may be formed to extend vertically. An upper end 482 and a lower end 483 of the hinge frame 48 may be bent, and then may be coupled and fixed to the cap decorations 45 and 46 provided at the upper and lower ends of the main door 40. That is, the upper end 482 and the lower end 483 of the hinge frame 48 may be fixed to the cap decorations 45 and 46, and thus an installation position thereof may be maintained. Moreover, the hinge frame 48 may indirectly support the sub-hinges 51 and 52.

A case fixing part 481 may be formed at each of upper and lower portions of the hinge frame 48. The case fixing part 481 may be formed by cutting away a part of the hinge frame 48. Therefore, a portion of the hinge case 47 which forms the recessed space may be accommodated and fixed into the cut-away case fixing part 481 of the hinge frame 48. The hinge case 47 may be coupled to the hinge frame 48 by a separate fastening member such as a screw. A frame reinforcing part 484 may be formed, recessed, between the case fixing parts 481 which are formed at the upper and lower portions of the hinge frame 48. And a plurality of frame openings 485 may be formed at the frame reinforcing part 484. The frame reinforcing part 484 may reinforce the strength of the hinge frame 48, may prevent the hinge frame 48 from being bent or deformed, and may also maintain an installation position of the hinge case 47. When the foaming solution is injected into the main door 40, the surface area it contacts is increased, and thus adhesion with the foaming solution is enhanced. Also, the foaming solution may pass through the frame openings 485, and thus flowability of the foaming solution may be improved. When the insulation is molded, the hinge frame 48 may be buried and fixed in the insulation.

The sub-hinges 51 and 52 may include an upper hinge 51 which is installed at an upper end of the sub-door 50 and a lower hinge 52 which is installed at a lower end of the sub-door 50. And the upper hinge 51 and the lower hinge 52 may extend laterally toward the hinge hole 433, and may be coupled at an inside of the main door 40. Therefore, the sub-hinges 51 and 52 may be installed at accurate positions, and may have a structure which extends laterally. Accordingly, since no structure interferes with the sub-hinges 51 and 52 in the gap between the main door 40 and the sub-door 50, the distance between the main door 40 and the sub-door 50 may be maintained in a very narrow state, and the exterior may be further enhanced. Also, since the distance between the main door 40 and the sub-door 50 is maintained in the very narrow state, and deflection of the sub-door 50 is effectively prevented, interference with the main door 40 upon rotation of the sub-door 50 may be prevented. A hinge cover 53 which shields the upper hinge 51 and guides access of an electric wire of the sub-door 50 may be further provided at an upper side of the upper hinge 51.

FIG. 9 is an exploded perspective view of the main door and the display unit.
And FIGS. 10A and 10B are partial perspective views illustrating an installed state of the display unit.

As illustrated in the drawings, the display unit 60 may be provided at the opening part 403 of the main door 40. The display unit 60 serves to display an operation state of the refrigerator 1 and also to operate the refrigerator 1, and may be formed so that the user can see it through the sub-door 50 from the outside when the sub-door 50 is in the transparent state. That is, the display unit 60 may not be visible from the outside while the sub-door 50 is in the opaque state, but may present a variety of information to the outside while the sub-door 50 is in the transparent state. The display unit 60 may include a display 61 which displays state information of the refrigerator 1, and various operating buttons 62 which set the operation of the refrigerator 1. The refrigerator 1 may be operated by the operating buttons 62.

The display unit 60 may be separably provided at a lower end of the opening part 403. Therefore, when it is necessary to check or repair the display unit 60, the display unit 60 may be separated. And after the main door 40 is assembled, the display unit 60 which is assembled as a separate module may be simply installed. Also, the display unit 60 which has a necessary function according to a specification of the refrigerator 1 may be selectively installed. To install and separate the display unit 60, a display installing protrusion 435 may be formed at both inner side surfaces of the opening part 403. And a display connection part 436 for electrical connection with the display unit 60 may be provided at the lower end of the opening part 403.

The display installing protrusion 435 may be formed by protruding a side surface of the opening part 403, more specifically, a part of the door liner 42 and a part of the door frame 43. That is, the display installing protrusion 435 may be formed by coupling a liner side installation part 4352 and a frame side installation part 4351 to each other, and may be formed in a protrusion shape having a circular cross section. Therefore, when the display unit 60 is installed, the display installing protrusion 435 is maintained in an installed state, and thus the coupling between the door liner 42 and the door frame 43 may be more firmly maintained. A plurality of display installing protrusions 435 may be formed and may be arranged vertically. The display installing protrusion 435 has a structure which is matched with a display guide 634 formed at both of left and right side surfaces of the display unit 60. The display guide 634 has a structure which is opened downward. Therefore, when the display unit 60 is moved downward from an upper side, the display installing protrusion 435 and the display guide 634 are coupled to each other. And in a state in which the display unit 60 is installed, the display unit 60 may be seated and fixed to the lower end of the opening part 403.

The display connection part 436 may be formed at a bottom surface of the door liner 42. The display connection part 436 may be formed to be recessed or stepped downward, and may be formed so that at least a part of the display unit 60 is inserted therein when the display unit 60 is installed. And a door connector 4361 may be provided at the display connection part 436. The door connector 4361 may be connected with an electric wire which supplies electric power for an operation of the display unit 60 and transmits a signal, and may be electrically connected with the display unit 60 by a separable structure of the display 61.
That is, the door connector 4361 may protrude upward from a bottom surface of the display connection part 436, and may be coupled and electrically connected to a display connector 651 provided at a bottom of the display unit 60 when the display unit 60 is installed. A plurality of door connectors 4361 may be provided, and may be formed separately according to functions of the display unit 60. That is, the door connectors 4361 may be independently formed corresponding to the display 61 and the operating buttons 62 of the display unit 60, and may also be formed so that the separate electric power and signal are transmitted to each of them. In some cases, a case extension part 635 may be formed at a lower end of a rear surface of the display unit 60. Also, a screw hole 6351 in which a screw is fastened may be formed at the case extension part 635, and thus the display unit 60 may be maintained in a coupled state to the main door 40.

FIG. 11 is a cross-sectional view illustrating an installed state of the display unit. And FIG. 12 is an exploded perspective view of a display assembly.

As illustrated in the drawings, the display unit 60 may include an outer case 63 which forms an external appearance, an inner case 64 which is provided inside the outer case 63, a display PCB 65 and a display cover 66. The outer case 63 may form an entire exterior of the display unit 60, and can have an accommodation space formed therein to accommodate the inner case 64. The accommodation space is opened forward, and a connector opening 631 for coupling to the door connector 4361, through which the electric wire connected to the display connector 651 passes, may be formed at a bottom surface of the accommodation space. The display connector 651 may be provided at a lower side of the connector opening 631, and in some cases, the display connector 651 may be fixed to the connector opening 631. Therefore, when the display unit 60 is installed at the opening part 403 of the main door 40, the display connector 651 and the door connector 4361 may be coupled and connected to each other by moving the display unit 60 up and down. By such a connection, the power supplying and the signal transmitting to the display unit 60 may be enabled.

A plurality of case coupling protrusions 632 which protrude to be coupled to the inner case 64 are formed at inner upper and lower ends of the accommodation space. The case coupling protrusions 632 may be formed at an opened entrance side of the accommodation space, and may be formed at regular intervals. A case support part 633 which supports the inner case 64 may be formed to protrude inward from both of left and right sides of an inner surface of the accommodation space. A screw hole 6331 in which a screw is inserted may be further formed at the case support part 633, and the inner case 64 may be installed and fixed to the case support part 633. The display guide 634 may be formed at both of left and right side surfaces of the outer case 63. The display guide 634 may be formed in a rib shape which protrudes from both of the left and right side surfaces of the outer case 63. And the display guide 634 may be formed to be opened downward, and the display installing protrusion 435 may be inserted through an opened lower side thereof. The display guide 634 may be formed so that a width thereof becomes narrower upward from an opened entrance 6343 thereof. An upper end 6341 of the display guide 634 may be formed to have the same size as a diameter of the display installing protrusion 435.
Therefore, the display installing protrusion 435 may be easily inserted into the display guide 634, and may be restricted by the upper end 6341 of the display guide 634. Also, a fixing part 6342 which protrudes inward may be further formed at the display guide 634. A distance between the fixing parts 6342 may be somewhat smaller than the diameter of the display installing protrusion 435. Therefore, the display guide 634 may be elastically deformed while the display installing protrusion 435 passes the fixing parts 6342, and the display installing protrusion 435 may be fitted and fixed when it is moved to the upper end 6341 of the display guide 634.

The inner case 64 may be injection-molded with a plastic material, and may provide a space in which the display PCB 65 is installed. A center of the inner case 64 may be formed to be recessed with a size corresponding to the display PCB 65, and a plurality of case coupling grooves 641 are formed at a perimeter of the inner case 64, and the case coupling protrusions 632 are coupled therein. A case seating part 642 which extends laterally and is seated on the case support part 633 may be formed at both side surfaces of the inner case 64. The inner case 64 is coupled to the outer case 63 by a screw fastened into a screw hole 6421 of the case seating part 642. A case hole 643 may be formed at one side surface of the inner case 64. The case hole 643 serves as a passage for the electric wires connected to the display PCB 65, and the electric wires may pass through the case hole 643, and may be connected to the display connector 651 through the connector opening 631.

The display PCB 65 may be accommodated in the space formed inside the inner case 64. The display 61 and the plurality of operating buttons 62 may be installed at the display PCB 65 in the form of a module. And elements on the display PCB 65 may be covered and sealed with a resin material for waterproofing and moisture-proofing. The display 61 may be formed as a panel type which displays the operation state and operation information of the refrigerator 1. And the plurality of operating buttons 62 may be provided at both of left and right sides of the display 61, and may be formed to be operated by a user's operation which pushes the display cover 66. When the display PCB 65 is installed at the inner case 64, the inner case 64 is accommodated inside the outer case 63, and the display 61 may be coupled so as to shield an opening of the outer case 63. Therefore, the display PCB 65 and the inner case 64 may be shielded by the display 61.

The display cover 66 may be formed to have a size corresponding to the opened front surface of the outer case 63. Therefore, the display cover 66 may form an exterior of a front surface of the display unit 60. And a center of the display cover 66 may be formed so that information output from the display 61 is projected therethrough. The display 61 may be exposed through an opening of the display cover 66, or may be exposed to the outside by forming a part of the display cover 66 to be transparent. The plurality of operating buttons 62 may be provided at both of the left and right sides of the display 61. The plurality of operating buttons 62 may also be correspondingly indicated on both sides of the display cover 66. The operating button marks indicated on the display cover 66 are not the actual operating buttons 62, but are indicated at the corresponding positions, and may be touched or pushed by the user. A case fixing member 661 which installs and fixes the display cover 66 may be formed to protrude from both of left and right side ends of the display cover 66.
An end of the case fixing member 661 may be formed in a hook shape, and may be hooked and restricted by a case restricting groove 636 formed at both side surfaces of the outer case 63, and thus the display cover 66 may be installed and fixed.

FIG. 13 is a cross-sectional view taken along line 13-13′ of FIG. 1.

As illustrated in the drawing, the door lighting unit 49 may be provided at an upper portion of the main door 40. The door lighting unit 49 may be formed at a space between the door liner 42 and the door frame 43. Of course, an installation position of the door lighting unit 49 is not limited thereto; it may be formed at one of the door liner 42 and the door frame 43, and may be disposed at any position which illuminates the inside of the opening part 403. The door lighting unit 49 may include a lamp case 491 which is installed inside the main door 40, a lamp PCB 492 which is provided at one side of the lamp case 491 and at which a plurality of LEDs 4921 are disposed, and a lamp cover 493 which shields an opened surface of the lamp case 491 and is exposed through the opening part 403. The lamp case 491 may be formed to extend long along the door liner 42, and includes a recessed part 4914 which forms a recess space therein to accommodate the lamp PCB 492. Specifically, a surface of the recessed part 4914 which faces the lamp PCB 492 may be formed to be rounded, and light emitted from the lamp PCB 492 is reflected by a rounded surface 4915 having a predetermined curvature and directed to the lamp cover 493. A film which increases the reflectivity of the light may be attached to or coated on an inner surface of the recessed part 4914, particularly the rounded surface 4915.

A lamp PCB installation part 4913 at which the lamp PCB 492 is installed may be formed at one surface which faces the rounded surface 4915. The lamp PCB installation part 4913 enables the lamp PCB 492 to be installed and fixed in a direction perpendicular to the lamp cover 493. The lamp PCB installation part 4913 and the lamp PCB 492 are located above the door frame 43 so as to be covered by an end of the door frame 43 when seen from a lower side. Therefore, the LEDs 4921 may be covered by the end of the door frame 43 without an additional bezel, and thus a phenomenon in which the light appears to clump into bright spots may be prevented.

A first case installation part 4911 and a second case installation part 4912 may be formed at both ends of the recessed part 4914. The first case installation part 4911 and the second case installation part 4912 may be installed to be in surface contact with inner side surfaces of the door liner 42 and the door frame 43, respectively, and thus the lamp case 491 may be hooked and restricted or adhered inside the main door 40. Cover insertion grooves 4916 and 4917 may be formed at the first case installation part 4911 and the second case installation part 4912. The cover insertion grooves 4916 and 4917 may be formed to be stepped, and thus a space in which both ends of the lamp cover 493 are inserted when the lamp case 491 is installed may be formed between the first case installation part 4911 and the door liner 42 and between the second case installation part 4912 and the door frame 43. The lamp cover 493 may be formed so that the light reflected by the rounded surface 4915 of the recessed part 4914 is transmitted therethrough. The lamp cover 493 serves to shield an opening of the recessed part 4914 and also to shield the space between the door liner 42 and the door frame 43.
The lamp cover 493 may be formed to be transparent or translucent, such that the light reflected by the rounded surface 4915 and uniformly spread is transmitted therethrough. Therefore, the light passing through the lamp cover 493 can illuminate the inside of the refrigerator 1 via an indirect illumination method, and can have an effect like surface emission. To effectively diffuse the light, a film may be attached to or coated on the lamp cover 493. And in some cases, when the lamp cover 493 is injection-molded, particles or a material for diffusing the light may be added. In some cases, cover fixing parts 4931 and 4932 which are inserted into the cover insertion grooves 4916 and 4917 may be formed to protrude from both ends of the lamp cover 493 so that the lamp cover 493 is installed and fixed. The cover fixing parts 4931 and 4932 formed at both sides of the lamp cover 493 may be coupled or fitted inside the cover insertion grooves 4916 and 4917 in the form of a hook, and thus the lamp cover 493 may be installed and fixed.

The door lighting unit 49 may be selectively turned on and off by a user's operation. When the door lighting unit 49 is turned on, the rear surface of the sub-door 50 and the opening part 403 become bright. When the inside of the refrigerator 1 is brighter than the outside of the refrigerator 1 owing to the turned-on door lighting unit 49, the light emitted by the door lighting unit 49 is transmitted through the sub-door 50. Therefore, the sub-door 50 may be seen as transparent by the user, and thus the accommodation space inside the main door 40 may be seen from the outside through the sub-door 50. In some cases, the main lighting unit 85 may be separately provided inside the refrigerator compartment 12. When the main lighting unit 85 is turned on, the space inside the refrigerator 1 may be seen from the outside through the sub-door 50. The main lighting unit 85 provided inside the refrigerator compartment 12 may be turned on and off together with the door lighting unit 49, or may be turned on and off independently.

A heater support part 432 which protrudes backward may be formed at the rear surface of the frame stepped part 431. The heater support part 432 may be formed along a perimeter of the frame stepped part 431, and may be formed to protrude backward. And a protruding position of the heater support part 432 is located at an outside (an upper side in FIG. 13) of the frame stepped part 431 so that the frame heater 4321 is located at an outer end of the frame stepped part 431. The frame heater 4321 can heat a corner of the frame stepped part 431 at which there is a high possibility of dew condensation. The corner of the frame stepped part 431 is a portion which is in contact with an outer portion of the sub-door gasket 591, has a relatively low temperature, and is in contact with external air, and thus has the high possibility of dew condensation. Therefore, the outside of the frame stepped part 431 is heated by the frame heater 4321, and the dew condensation can be prevented.

In some cases, door restricting members, such as magnets, may be provided at corresponding positions of the main door 40 and the sub-door 50, respectively. The door restricting members can enable the sub-door 50 itself to be restricted to the main door 40 without a separate restricting structure, and thus prevent the sub-door 50 from being undesirably opened by an inertial force generated when the main door 40 is rotated.
For example, a first magnet installation part 430 may be formed at an inner side surface of the door frame 43 which forms an upper surface of the opening part 403, and a first magnet 4301 may be installed and fixed to the first magnet installation part 430. A second magnet installation part 572 may be formed at an upper portion of the sub-door 50 corresponding to the first magnet installation part 430, and a second magnet 5721 may be installed and fixed to the second magnet installation part 572. The second magnet installation part 572 may be formed at an inner side surface of an upper cap decoration 57 which forms an upper surface of the sub-door 50, and thus the second magnet 5721 is not exposed to the outside. When the sub-door 50 is closed, the first magnet 4301 and the second magnet 5721 are located at positions which face each other, and are also disposed so that facing surfaces thereof have different polarities from each other. Therefore, the sub-door 50 can be maintained in a closed state by an attraction between the first magnet 4301 and the second magnet 5721. Of course, when a rotating force applied to the sub-door 50 by a user's operation is larger than the magnetic force between the first magnet 4301 and the second magnet 5721, the sub-door 50 may be rotated.

When the first magnet 4301 and the second magnet 5721 are located on the same extension line, the magnetic force may be applied strongly. The arrangement of the first magnet 4301 and the second magnet 5721 is in parallel with an extending direction of a rotating axis of the sub-door 50. Therefore, when the sub-door 50 starts to be opened, the first magnet 4301 and the second magnet 5721 cross each other, and thus the magnetic force may be considerably weakened. Accordingly, after the sub-door 50 is rotated by a predetermined angle, opening of the sub-door 50 may be smoothly performed.

In some cases, the cap decoration 45 may be provided at the upper end of the main door 40. The foaming solution may be injected into an internal space formed by the outer plate 41, the door liner 42, the door frame 43 and the cap decoration 45, and thus the insulation may be formed therein. An opening device accommodation part 452 may be formed at the cap decoration 45 to be recessed downward. The opening device accommodation part 452 may be shielded by a cap decoration cover 453.

FIG. 14 is an exploded perspective view of an installation structure of a door opening device according to the implementation of the present disclosure. And FIG. 15 is a view illustrating an operation state of the door opening device.

As illustrated in the drawings, the opening device accommodation part 452 may be formed at the cap decoration 45 on an upper surface of the main door 40. And a door opening device 70 may be provided inside the opening device accommodation part 452. An opened upper surface of the opening device accommodation part 452 is shielded by the cap decoration cover 453. The door opening device 70 for automatically opening the main door 40 may include a driving motor 72 which is provided inside an opening device case 71, a push rod 74 which pushes and opens the main door 40, and gears 73 which transmit power of the driving motor 72 to the push rod 74. A rack gear 741 which is engaged with the gears 73 may be formed at an outer surface of the push rod 74, and thus the push rod 74 may be inserted and withdrawn through a rod hole 4511 formed at the rear surface of the main door 40. In some cases, the push rod 74 may be formed to have a predetermined curvature.
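For illustration only, the kinematics of such a rack-and-pinion drive can be sketched numerically. The pinion radius, gear reduction, and lever arm used below are hypothetical values chosen for the example and are not part of this description, which specifies only the resulting gap of about 90 mm and an opening angle of around 24° to 26°.

```python
import math

# Minimal sketch of rack-and-pinion kinematics for a door opening device
# like the one described above. All dimensions are hypothetical
# illustration values, not figures from the patent description.

PINION_RADIUS_MM = 8.0   # hypothetical radius of the final gear driving the rack
GEAR_REDUCTION = 50.0    # hypothetical reduction from driving motor to pinion
LEVER_ARM_MM = 100.0     # hypothetical distance from the main hinge axis to the
                         # point where the push rod presses on the cabinet front

def rod_extension_mm(motor_turns: float) -> float:
    """Rack travel produced by a given number of motor revolutions."""
    pinion_angle_rad = motor_turns * 2.0 * math.pi / GEAR_REDUCTION
    return PINION_RADIUS_MM * pinion_angle_rad

def door_angle_deg(extension_mm: float) -> float:
    """Door opening angle for a given rod extension.

    Because the rod is curved so that its tip stays in contact with the
    cabinet front, its extension is approximated here as the arc swept
    at the lever arm radius.
    """
    return math.degrees(extension_mm / LEVER_ARM_MM)

if __name__ == "__main__":
    # Stroke needed for the roughly 24-26 degree opening angle mentioned above.
    target_deg = 25.0
    stroke = math.radians(target_deg) * LEVER_ARM_MM
    turns = stroke / (PINION_RADIUS_MM * 2.0 * math.pi / GEAR_REDUCTION)
    print(f"stroke for {target_deg} deg: {stroke:.1f} mm")      # ~43.6 mm
    print(f"door angle at that stroke: {door_angle_deg(stroke):.1f} deg")
    print(f"motor turns required: {turns:.1f}")
```

With these toy numbers, a stroke of roughly 44 mm produces a 25° opening, which is consistent with the concern noted below that the limited width of the cap decoration 45 bounds the usable length of the push rod 74.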
Because the push rod 74 is curved in this way, even when the main door 40 is rotated, a front end of the push rod 74 may continuously push the cabinet 10 while being maintained in stable contact with a front surface of the cabinet 10, and thus may open the main door 40. In a state in which the user is holding food and thus cannot use his/her hands, the main door 40 may be rotated by a predetermined angle by the door opening device 70, and thus the user may put a part of his/her body, like an elbow, therein, and may open the main door 40 further. For example, by the operation of the door opening device 70, the main door 40 may be opened so that a distance D between the main door 40 and the adjacent refrigerator compartment door 20 is about 90 mm. A rotating angle of the main door 40 may be around 24° to 26°. When the refrigerator compartment door 20 is automatically opened by the distance D, the user may put an elbow or another part of his/her body into the opened gap of the refrigerator compartment door 20, and may additionally open the refrigerator compartment door 20 even while holding an object and thus being unable to use his/her hands.

Of course, since the door opening device 70 is disposed inside the cap decoration 45 having a limited width, a length of the push rod 74 which is inserted and withdrawn may be limited. Therefore, to minimize the length of the push rod 74, the door opening device 70 may be located at a position as close as possible to a rotating axis of the main hinge 401 so that a force for opening the main door 40 may be effectively transmitted. And to ensure an opening angle of the main door 40, the gears 73 may be combined and arranged so that the push rod 74 having the predetermined length is maximally withdrawn. The door opening device 70 may be installed at the opening device accommodation part 452 by a screw. The door opening device 70 may be supported at an inside of the opening device accommodation part 452 by a shock absorbing member through which the screw passes, and thus vibration and noise generated when the door opening device 70 is operated may be prevented. In some cases, the door opening device 70 may be selectively driven by the user's operation, and may rotate the main door 40 by an operation of the driving motor 72 when a door opening signal is input by the user. Since the user's hands cannot be used, an operation input of the door opening device 70 may be performed by a position detecting method or a motion detecting method, instead of a direct input method requiring the user's body contact. This will be described again below in detail.

FIG. 16 is a cross-sectional view taken along line 16-16′ of FIG. 1.

As illustrated in the drawing, in the main door 40, an external appearance formed at both sides of the opening part 403 may be formed by coupling the outer plate 41, the door frame 43 and the door liner 42. A front support part 437 which is bent to support the outer plate 41 may be formed at a front end of the door frame 43. A front accommodation part 4371 in which an end of the outer plate 41 is introduced in a bent state may be formed at an end of the front support part 437. The end of the outer plate 41 which is located at the front accommodation part 4371 forms a multi-bent part 411 which is continuously bent several times. The multi-bent part 411 forms one end of the opening part 403. The one end of the opening part 403 at which the multi-bent part 411 is located is close to a handle 561 formed at a second side frame 56 of the sub-door 50. The multi-bent part 411 is bent at a portion forming the front surface of the main door 40 to have a predetermined slope, and forms a first bent part 4111.
An inclined surface of the first bent part 4111 may be formed to be directed toward the opening part 403, and an end of the first bent part 4111 forms one end of the opening part 403. A second bent part 4112 which is bent in a direction opposite to the first bent part 4111 may be formed at the end of the first bent part 4111. And a third bent part 4113 which is bent in parallel with the front surface of the main door 40 may be formed at an extending end of the second bent part 4112. The second bent part 4112 and the third bent part 4113 may be located inside the front accommodation part 4371, and may be in close contact with and supported by the front support part 437. Therefore, the one end of the opening part 403 at which the multi-bent part 411 is formed is a portion at which the handle 561 of the sub-door 50 is located, and the user's hand comes in and out frequently there. In the process in which the user's hand comes in and out, the user's hand may come into contact with the one end of the opening part 403. Here, the user's hand may smoothly come in and out without being caught or scratched, owing to the inclined surface of the first bent part 4111. At the same time, strength may be reinforced by the multi-bent part 411, and the outer plate 41 may be prevented from being deformed by shocks generated while the user's hand comes in and out frequently. The handle 561 forms one side surface of the sub-door 50, may be formed to be long vertically, and is also formed to have a predetermined space between the one side surface of the sub-door 50 and the one end of the opening part 403, such that the user can put his/her hand therein and then pull. In some cases, the frame heater 4321 and the heater support part 432 may be formed to protrude from the rear surface of the frame stepped part 431 of the door frame 43 and thus to heat the frame stepped part 431, thereby preventing the dew condensation.

FIG. 17 is a perspective view of the sub-door. And FIG. 18 is an exploded perspective view of the sub-door when seen from a front. And FIG. 19 is an exploded perspective view of the sub-door when seen from a rear.

As illustrated in the drawings, the sub-door 50 may be formed in a shape corresponding to that of the opening part 403. The sub-door 50 may include a panel assembly 54 which may be formed by stacking a plurality of glass layers at regular intervals, side frames 55 and 56 which form both side surfaces of the sub-door 50, a sub-door liner 59 which forms a perimeter of the rear surface of the sub-door 50, and an upper cap decoration 57 and a lower cap decoration 58 which form an upper surface and a lower surface of the sub-door 50. The panel assembly 54 may form an entire front surface of the sub-door 50. The panel assembly 54 may include a front panel 541 which forms an exterior of a front surface thereof, and an insulation panel 542 which may be formed to be spaced apart from a rear surface of the front panel 541. A plurality of insulation panels 542 may be provided, and a spacer bar 543 is provided between the front panel 541 and the insulation panel 542 and between the plurality of insulation panels 542. The front panel 541 and the insulation panel 542 may be formed of glass or a see-through material, and thus the inside of the refrigerator 1 may be selectively seen through them. And the front panel 541 and the insulation panel 542 may have an insulating material or an insulating structure, and may be formed to prevent a leak of cooling air in the refrigerator 1. A configuration of the panel assembly 54 will be described below in detail.
The side frames 55 and 56 may form both of left and right side surfaces of the sub-door 50. The side frames 55 and 56 may be formed of a metallic material, and serve to connect the panel assembly 54 with the sub-door liner 59. The side frames 55 and 56 may include a first side frame 55 forming one surface at which the sub-hinges 51 and 52 are installed, and a second side frame 56 at which the handle 561 enabling the user to perform a rotating operation is formed. The first side frame 55 may be formed to be long vertically, and is also formed to connect the upper hinge 51 and the lower hinge 52. Specifically, hinge insertion parts 551 and 552 in which the upper hinge 51 and the lower hinge 52 are inserted are formed at upper and lower ends of the first side frame 55, respectively. The hinge insertion parts 551 and 552 are formed at the upper and lower ends of the first side frame 55 to be recessed, and may be formed to have a corresponding shape, such that a part of each of the upper hinge 51 and the lower hinge 52 is matched therewith. The first side frame 55 may be formed of a metallic material such as aluminum or a material having high strength, and may enable the upper hinge 51 and the lower hinge 52 to be maintained at accurate installation positions, such that the installation positions are not changed by the weight of the sub-door 50. Therefore, the sub-door 50 may maintain its initial installation position at the main door 40, and an outer end of the sub-door 50 and the opening part 403 of the main door 40 may not interfere with each other when rotated, and may maintain a very closely contacting state with each other. Like the first side frame 55, the second side frame 56 may be formed of the metallic material or the material having high strength. The second side frame 56 may be formed to extend from the upper end of the sub-door 50 to the lower end thereof, and may have the handle 561 which is recessed to allow the user to put his/her hand therein.

The upper cap decoration 57 forms the upper surface of the sub-door 50, connects upper ends of the first side frame 55 and the second side frame 56, and is also coupled to an upper end of the panel assembly 54 and an upper end of the sub-door liner 59. An upper hinge installation part 571 may be formed at one end of the upper cap decoration 57. The upper hinge installation part 571 may be recessed so that the upper hinge 51 and the hinge cover 53 are installed therein, and upper surfaces of the hinge cover 53 and the upper cap decoration 57 may form the same plane while the hinge cover 53 is installed. The lower cap decoration 58 may form the lower surface of the sub-door 50, may connect lower ends of the first side frame 55 and the second side frame 56, and is also coupled to a lower end of the panel assembly 54 and a lower end of the sub-door liner 59. A lower hinge installation part 581 may be formed at one end of the lower cap decoration 58. The lower hinge installation part 581 can be recessed so that the lower hinge 52 is installed therein. A detection device accommodation part (or detection device accommodation bracket) 582 in which a second detection device 81 and a knock detection device 82 are installed may be formed at the lower cap decoration 58. The detection device accommodation part 582 may be shielded by an accommodation part cover (or accommodation groove cover) 583. The second detection device 81 which is installed at the lower cap decoration 58 is a device which detects a user's approach, and the knock detection device 82 is a device which detects a user's knocking operation on the sub-door 50.
The second detection device 81 and the knock detection device 82 may be attached to the rear surface of the front panel 541, and may be provided at a lower end of the front panel 541 close to the second side frame 56. By means of the second detection device 81 and the knock detection device 82, the sub-door 50 may selectively become transparent, and thus the inside of the sub-door 50 may be seen through it. Detailed structures of the second detection device 81 and the knock detection device 82 will be described below.

The sub-door liner 59 forms the shape of a perimeter of the rear surface of the sub-door 50, and may be injection-molded with a plastic material. The sub-door liner 59 is coupled to the first side frame 55, the second side frame 56, the upper cap decoration 57 and the lower cap decoration 58. And the foaming solution is injected into an internal space of the perimeter of the sub-door 50 formed by the sub-door liner 59, and the insulation may be filled therein, and thus an insulation structure of the perimeter of the sub-door 50 can be provided. That is, the insulation structure may be formed at a center portion of the sub-door 50 by the insulation panel 542 forming the panel assembly 54, and a perimeter of the panel assembly 54 may have the insulation structure by the insulation. The sub-door gasket 591 is provided at a rear surface of the sub-door liner 59. The sub-door gasket 591 may be formed to be in close contact with the main door 40 when the sub-door 50 is closed. Therefore, the leak of the cooling air between the main door 40 and the sub-door 50 may be prevented.

FIG. 20 is a cut-away perspective view taken along line 20-20′ of FIG. 17. And FIG. 21 is an exploded perspective view of the panel assembly according to the implementation of the present disclosure.

As illustrated in the drawings, an entire exterior of the sub-door 50 may be formed by the panel assembly 54, and the first side frame 55 and the second side frame 56 are coupled to both ends of the panel assembly 54. And the foaming solution is filled in a space formed by the panel assembly 54, the first side frame 55 and the second side frame 56, and forms the insulation. The panel assembly 54 may include the front panel 541 which forms the entire front surface of the sub-door 50, one or more insulation panels 542 which are disposed at a rear of the front panel 541, and the spacer bar 543 which provides support between the front panel 541 and the insulation panel 542 and between the plurality of insulation panels 542.

The front panel 541 may be formed of a glass material which is selectively seen through according to its light transmittance and reflectivity, and thus may be referred to as a half mirror. The front panel 541 may be formed so that the rear of the sub-door 50 is selectively seen through it according to the ON/OFF state of the main lighting unit 85 or the door lighting unit 49 in the refrigerator 1. That is, in a state in which the door lighting unit 49 is turned on, light inside the refrigerator 1 penetrates the front panel 541, and thus the front panel 541 looks transparent. Therefore, a space inside the refrigerator 1 located at the rear of the sub-door 50 or the accommodation space formed at the main door 40 may be seen from the outside while the sub-door 50 is closed. In a state in which the door lighting unit 49 is turned off, the light may not penetrate the front panel 541, but rather be reflected, and thus the front panel 541 can serve as a mirror surface. In this state, the space inside the refrigerator located at the rear of the sub-door 50 or the accommodation space formed at the main door 40 may not be seen from the outside.
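Taken together, the passages above describe a control problem as much as an optical one: the half-mirror front panel 541 looks transparent exactly while the space behind it is lit. The sketch below ties the second detection device 81, the knock detection device 82, the door lighting unit 49 and the main lighting unit 85 into one plausible control flow. It is an illustration only, with hypothetical timing values and sensor/actuator interfaces; the description does not specify this exact logic.

```python
import time

# Simplified sketch of a see-through sub-door controller, assuming
# hypothetical sensor/actuator interfaces. Flow: the approach sensor
# (second detection device 81) gates the knock sensor (knock detection
# device 82); a detected knock turns on the door lighting unit 49, which
# makes the half-mirror front panel look transparent; after a timeout
# the lighting turns off and the panel reverts to its mirror-like state.

TRANSPARENT_TIMEOUT_S = 10.0  # hypothetical: how long the panel stays lit

class SubDoorController:
    def __init__(self, approach_sensor, knock_sensor, door_lighting, main_lighting):
        self.approach_sensor = approach_sensor  # second detection device 81
        self.knock_sensor = knock_sensor        # knock detection device 82
        self.door_lighting = door_lighting      # door lighting unit 49
        self.main_lighting = main_lighting      # main lighting unit 85
        self.lit_until = 0.0                    # monotonic deadline for lights-off

    def poll(self) -> None:
        now = time.monotonic()
        # Only honor knocks while a user is nearby, so that stray
        # vibrations far from the door are not misread as knocks.
        if self.approach_sensor.user_nearby() and self.knock_sensor.knock_detected():
            self.door_lighting.on()
            self.main_lighting.on()  # may also be switched independently
            self.lit_until = now + TRANSPARENT_TIMEOUT_S
        elif now >= self.lit_until:
            self.door_lighting.off()
            self.main_lighting.off()
```

Gating the knock sensor on the approach sensor, as sketched here, is one way to reconcile the two devices the description pairs together: the approach detection supplies context, and the knock supplies intent.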
A bezel 5411 may be formed along a perimeter of the rear surface of the front panel 541. The bezel 5411 may be formed so that the light is not transmitted therethrough, and thus the side frames 55 and 56, the upper cap decoration 57, the lower cap decoration 58 and the spacer bar 543 which are coupled to the front panel 541 are prevented from being exposed forward through the front panel 541. The second detection device 81 and the knock detection device 82 may be disposed at the bezel 5411 which is formed at the lower end of the front panel 541, and the knock detection device 82 is disposed so as to be covered. In some cases, in the bezel 5411 which may be formed at the lower end of the front panel 541, a penetration part 5412 may be formed at a position corresponding to the second detection device 81. The penetration part 5412 may be formed in a shape corresponding to a front surface of the second detection device 81, and the bezel 5411 is not printed thereon. That is, the bezel 5411 having a predetermined width may be printed along a perimeter of the front panel 541, except for the penetration part 5412. The penetration part 5412 can enable the light emitted from the second detection device 81 to not interfere with the bezel 5411, but rather to pass through the front panel 541 and thus to be transmitted and received. The front surface of the second detection device 81 which is in contact with the penetration part 5412 may be formed to have the same color as that of the bezel 5411. Therefore, even in a state in which the front surface of the second detection device 81 is exposed by the penetration part 5412, the area of the penetration part 5412 may not be easily noticeable, and may have a sense of unity with the front panel 541.

In some cases, the first side frame 55 and the second side frame 56 may be installed at the rear surface of the front panel 541. The first side frame 55 and the second side frame 56 may be adhered to both side ends of the rear surface of the front panel 541, respectively, and may be adhered within the area of the bezel 5411. The spacer bar 543 may be formed at the perimeter of the rear surface of the front panel 541. The spacer bar 543 can enable the front panel 541 and the insulation panel 542 to be spaced apart from each other, and also serves to seal therebetween. The spacer bar 543 may also be disposed between the plurality of insulation panels 542. The front panel 541, the insulation panel 542 and the plurality of spacer bars 543 may be bonded to each other by an adhesive, and a sealant may be coated to seal among the front panel 541, the insulation panel 542 and the spacer bar 543.

The insulation panel 542 may be formed to have a size smaller than that of the front panel 541, and may be located within an internal area of the front panel 541. And the insulation panel 542 may be chemical strengthening glass in which glass is soaked in an electrolyte solution at a glass transition temperature or more, and thus chemically strengthened. A low-radiation coating layer for reducing heat transfer into the storage compartment due to radiation may be formed at a rear surface of the insulation panel 542. Glass on which the low-radiation coating layer is formed is referred to as low-E glass. The low-radiation coating layer may be formed by sputtering silver or the like on a surface of the glass. A sealed space between the front panel 541 and the insulation panel 542 and a sealed space between the plurality of insulation panels 542 which are formed by the spacer bar 543 may be kept in a vacuum state so as to provide insulation.
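As a rough aside, the insulating value of such sealed gaps can be estimated with a simple series-resistance model. The sketch below uses textbook-level conductivity values and a conduction-only model, ignoring the radiative exchange that the low-radiation coating suppresses; none of the numbers come from the description.

```python
# Rough center-of-glass heat-flow estimate for a multi-pane panel
# assembly, using a simple series thermal-resistance model.
# Approximate textbook property values; illustration only.

K_GLASS = 1.0    # W/(m*K), soda-lime glass (approximate)
K_AIR   = 0.026  # W/(m*K), still air near room temperature
K_ARGON = 0.018  # W/(m*K), argon near room temperature

def u_value(gap_mm: float, gas_k: float, panes: int = 2, glass_mm: float = 4.0) -> float:
    """Conduction-only U-value (W/m^2K) for `panes` glass sheets
    separated by (panes - 1) gas-filled gaps."""
    r_glass = panes * (glass_mm / 1000.0) / K_GLASS       # resistance of the glass
    r_gaps = (panes - 1) * (gap_mm / 1000.0) / gas_k      # resistance of the gaps
    return 1.0 / (r_glass + r_gaps)

if __name__ == "__main__":
    print(f"air-filled 12 mm gap:   U ~ {u_value(12.0, K_AIR):.2f} W/m2K")
    print(f"argon-filled 12 mm gap: U ~ {u_value(12.0, K_ARGON):.2f} W/m2K")
```

Even this crude model reproduces the ordering the description relies on: an argon fill (introduced next) conducts less heat than air, and an evacuated gap less still.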
In some cases, an inert gas for insulation, such as argon, may be filled in the sealed space between the front panel 541 and the insulation panel 542 and in the sealed space between the plurality of insulation panels 542. An inert gas generally has a better insulation property than air. Therefore, insulation performance may be ensured by forming a predetermined space, filled with the inert gas, between the front panel 541 and the insulation panel 542 and between the plurality of insulation panels 542. The insulation panel 542 may be formed as a single panel, and may be installed to be spaced apart from the front panel 541. In some cases, two or more insulation panels 542 may be provided to be spaced apart from each other. Hereinafter, a structure of the front panel 541 having various applicable types of half mirror structures will be described.

FIG. 22 is a cross-sectional view schematically illustrating an example of a front panel of the panel assembly.

As illustrated in the drawing, the front panel 541 according to an example implementation may include a glass layer 5413 which forms an exterior, a vacuum deposition layer 5414 which may be formed at a rear surface of the glass layer 5413, a bezel print layer 5415 which may be formed at a rear surface of the vacuum deposition layer 5414, and a transparent print layer 5416 which is formed over the entire rear surfaces of the bezel print layer 5415 and the vacuum deposition layer 5414. Specifically, the glass layer 5413 may be formed of green glass which is widely used as transparent glass, and can form an entire surface of the front panel 541. Of course, various other transparent glass materials, such as white glass, may be used instead of the green glass. The vacuum deposition layer 5414 can give the front panel 541 a half mirror property, and may be formed at the rear surface of the glass layer 5413 by vacuum-depositing a titanium compound (e.g., TiO2). That is, the vacuum deposition layer 5414 may be formed at the entire rear surface of the glass layer 5413. While the door lighting unit 49 is not turned on, the light may be reflected by the vacuum deposition layer 5414, and thus the front panel 541 can look like a mirror when seen from the front.

The bezel print layer 5415 may form the perimeter of the rear surface of the front panel 541, and the bezel 5411 may be formed by the bezel print layer 5415. The bezel print layer 5415 may be formed so that the light is not transmitted therethrough even while the door lighting unit 49 is turned on, and thus elements which are disposed along the perimeter of the rear surface of the front panel 541 may be shielded. The transparent print layer 5416 may be formed at the entire rear surface of the front panel 541 including the bezel print layer 5415 and the vacuum deposition layer 5414. The transparent print layer 5416 may be formed to be transparent, such that the light is transmitted therethrough, and serves to protect the front panel 541 while the front panel 541 or the panel assembly 54 is processed. In particular, the transparent print layer 5416 can prevent the vacuum deposition layer 5414 from being damaged. For coupling with the insulation panel 542, the front panel 541 may be formed so that the spacer bar 543 or the like is attached thereto. The front panel 541 may be manufactured separately from the insulation panel 542, and then transported. In this process, if the transparent print layer 5416 is not provided, the vacuum deposition layer 5414 may be damaged, and thus may not perform its half mirror function.
Therefore, in a structure in which the vacuum deposition layer5414is formed at the rear surface of the glass layer5413, the transparent print layer5416should be provided.

FIG.23is a cross-sectional view schematically illustrating another example of the front panel of the panel assembly.

As illustrated in the drawing, the front panel541according to another example implementation may include a glass layer5413which forms an exterior, a ceramic print layer5417which may be formed at a front surface of the glass layer5413, and a bezel print layer5415which may be formed at a rear surface of the glass layer5413.

Specifically, the glass layer5413may be formed of a glass material through which the light is transmitted, and also which is seen through. A glass material called dark gray glass, which has a subtle dark gray tint while remaining transparent, may be used. When the door lighting unit49is not turned on, and thus the front panel541is in a mirror-like state, the dark gray color of the glass layer5413subsidiarily provides a color sense which gives the front panel541the texture of an actual mirror.

The ceramic print layer5417may be formed at the entire front surface of the glass layer5413, and may be formed by silk screen printing using a reflectance ink which reflects the light. The reflectance ink can include the titanium compound (TiO2) as a main component, a viscosity-controlling resin, an organic solvent, and an additive. The reflectance ink may be manufactured to have a predetermined viscosity for the silk screen printing. The ceramic print layer5417may be formed to have a thickness of approximately 40 to 400 nm. The ceramic print layer5417may have flatness similar to a mirror surface through the silk screen printing using the reflectance ink, and may also be formed like the mirror surface when being reinforced by heating.

The ceramic print layer5417can be separately formed on the surface of the glass layer5413, and can have a different refractive index from that of the glass. Therefore, some of the light incident from the outside of the refrigerator1to the front panel541may be reflected by the ceramic print layer5417, and the rest may be reflected by the glass layer5413, and the front panel541may have a mirror-like effect due to an interference effect of the reflected light. That is, due to the interference effect of the light which is reflected at a boundary surface of another medium having a different refractive index, the front panel541may look like the mirror when being seen from an outside. However, when the door lighting unit49is turned on, the light is emitted from the inside of the refrigerator1toward the glass layer5413, and the light transmitted through the glass layer5413passes through the ceramic print layer5417. Therefore, the front panel541may look transparent when being seen from the outside of the refrigerator1, and the space in the refrigerator1may be visible.

The ceramic print layer5417may be formed so that the transmittance of the front panel541is about 20% to 30%. When the transmittance is 20% or less, it can be difficult to see through the space in the refrigerator1due to a low transparency of the front panel541even while the door lighting unit49is turned on. And when the transmittance is 30% or more, the space in the refrigerator1may be visible even while the door lighting unit49is turned off, and thus the surface effect like the mirror may not be expected.
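The interference behind this mirror-like reflection can be summarized with the standard thin-film relation; the following is only an illustrative sketch, and the refractive index used (roughly $n_1 \approx 2.4$ for a TiO2-based layer) is an assumption, not a value stated in this disclosure. For near-normal incidence on a print layer of thickness $d$, the optical path difference between light reflected at the air/layer boundary and at the layer/glass boundary is

$$\Delta = 2 n_1 d \cos\theta_t \approx 2 n_1 d,$$

so with $d$ in the stated 40 to 400 nm range, $2 n_1 d$ spans roughly 190 to 1900 nm and sweeps across the visible band (about 380 to 750 nm). Wavelengths for which the two reflections add in phase are reinforced, which is why the layered surface reads as a mirror when viewed from the brighter side.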
Therefore, for the half mirror effect, it is preferable that the transmittance of the front panel541be about 20% to 30%. And to form a surface having a high brightness, such as the mirror surface, the ceramic print layer5417can be reinforced by heating to a predetermined temperature. An organic component may be completely removed through the heating, and the titanium compound (TiO2) may be calcined on the glass layer5413.

In some cases, when the front panel541is heated after the ceramic print layer5417is printed by the silk screen printing, the heating may be performed at a high temperature so that the organic component of the reflectance ink is completely removed, and the titanium compound is calcined. However, when the heating is performed at an excessively high temperature, bending may occur. Therefore, it is preferable that the heating be performed within a range at which the surface is not deformed. And for removal of the organic component and calcination of the titanium compound, the front panel541may be heated in stages at different temperatures.

The bezel print layer5415may form the perimeter of the rear surface of the front panel541, and the bezel5411may be formed by the bezel print layer5415. The bezel print layer5415may be formed so that the light is not transmitted therethrough even while the door lighting unit49is turned on, and thus may shield the elements which are disposed along the perimeter of the rear surface of the front panel541.

In some cases, the bezel print layer5415may be formed in an inorganic printing method (glass printing). The bezel print layer5415may be printed using an ink whose main component is a ceramic pigment in which frit, an inorganic pigment and oil are mixed. Therefore, in the bezel print layer5415, the resin can be decomposed and volatilized by the heating in the glass reinforcing process, and the frit melts, covers the pigment, and then adheres to the surface of the glass layer5413. Such an inorganic printing method produces smaller fragments and provides higher durability than an organic printing method. And a glass component may melt and may be integrally molded with the glass layer5413, and thus in a multi-layering process with the additional insulation panel542, it may be possible to reduce heat loss and also to provide an excellent adhesive property.

FIG.24is a cross-sectional view schematically illustrating still another example of the front panel of the panel assembly.

As illustrated in the drawing, the front panel541according to still another example implementation may include a glass layer5413which forms an exterior, a hard coating layer5418which may be formed at a front surface of the glass layer5413, and a bezel print layer5415which may be formed at a rear surface of the glass layer5413.

Specifically, the glass layer5413may be formed of a glass material through which the light is transmitted, and also which is seen through. A glass material called gray glass, which has a subtle gray tint while remaining transparent, may be used. The gray glass can have a somewhat brighter color than the dark gray glass described in the above-described example implementation. This difference may be caused by a difference between the ceramic print layer5417and the hard coating layer5418which are formed on the glass layer5413.
When the door lighting unit49is not turned on, and thus the front panel541is in the mirror-like state, the gray color of the glass layer5413can serve to subsidiarily provide a color sense which enables the front panel541to have a texture which looks like the actual mirror.

The hard coating layer5418may be formed at the entire front surface of the glass layer5413, and also formed to have a light transmittance of 25 to 50% and a reflectivity of 45 to 65%, and thus to have a half mirror property in which both the transmittance and the reflectivity are relatively high. The hard coating layer5418may be formed in a thickness of about 30 to 80 nm, and may be configured with triple layers of iron, cobalt and chrome. Of course, one or two layers of the triple layers may be omitted, considering the transmittance, the reflectivity and a color difference. The hard coating layer5418may be formed in an atmospheric pressure chemical vapor deposition (APCVD) method in which a vaporized coating substance is deposited on the entire surface of the glass layer5413, or in a spraying method in which a liquid coating material is sprayed.

The hard coating layer5418may be separately formed on the surface of the glass layer5413, and can have a different refractive index from that of the glass layer5413. Therefore, some of the light incident from the outside of the refrigerator1to the front panel541may be reflected by the hard coating layer5418, and the rest may be reflected by the glass layer5413. Therefore, the front panel541may have an effect like the mirror due to an interference effect of the light which is reflected. That is, due to the interference effect of the light which is reflected at a boundary surface of another medium having a different refractive index, the front panel541may look like the mirror when being seen from an outside. However, when the door lighting unit49is turned on, the light is emitted from the inside of the refrigerator1toward the glass layer5413, and the light transmitted through the glass layer5413passes through the hard coating layer5418. Therefore, the front panel541may look transparent when being seen from the outside of the refrigerator1, and the space in the refrigerator1may be visible.

The hard coating layer5418may be formed so that the transmittance of the front panel541is about 20% to 30%. When the transmittance is 20% or less, it is difficult to see through the space in the refrigerator1due to a low transparency of the front panel541even while the door lighting unit49is turned on. And when the transmittance is 30% or more, the space in the refrigerator1may be visible even while the door lighting unit49is turned off, and thus the surface effect like the mirror may not be expected. Therefore, for the half mirror effect, it is preferable that the transmittance of the front panel541be between about 20% and 30%.

The bezel print layer5415forms the perimeter of the rear surface of the front panel541, and the bezel5411may be formed by the bezel print layer5415. The bezel print layer5415may be formed so that the light is not transmitted therethrough even while the door lighting unit49is turned on, and thus may shield the elements which are disposed along the perimeter of the rear surface of the front panel541. The bezel print layer5415may be formed in the inorganic printing method.

FIG.25is a cross-sectional view of the sub-door.

As illustrated in the drawing, the side frames55and56are provided at both sides of the panel assembly54.
The side frames55and56may be attached and fixed to the front panel541, may be coupled to the sub-door liner59so as to form a space in which the insulation is accommodated, and may also insulate the perimeter of the sub-door50.

A first front bent part553and a first rear bent part554may be formed at both ends of the first side frame55. The first front bent part553may be formed to be bent and thus to be in contact with the rear surface of the front panel541, and may extend to a position of the spacer bar543. Therefore, a temperature outside the sub-door50may be transferred to the rear surface of the front panel541along the first side frame55formed of the metallic material, and thus the dew condensation at one side of the front panel541which is in contact with the first front bent part553may be prevented.

And a first heater installation groove5531at which a sub-door heater502is installed may be further formed at the first side frame55. The first heater installation groove5531may be formed at an end of the first front bent part553so that the sub-door heater502is disposed at a position close to the spacer bar543. Therefore, the sub-door heater502may be disposed to extend vertically along the first side frame55. Due to a property of the first side frame55formed of the metallic material, the dew condensation at the front panel541may be prevented by heating the rear surface of the front panel541which is in contact with the first front bent part553.

The first rear bent part554may be bent from a rear end of the first side frame55, and coupled to the sub-door liner59. The first rear bent part554may be formed to support the sub-door liner59, and may be formed to support a load transmitted through the sub-door gasket591when the sub-door50is closed.

The second side frame56is provided at a position which faces the first side frame55, and may be configured to form another side surface of the sub-door50. The second side frame56may be formed to be located at a position close to one surface of the opening part403of the main door40. And a second front bent part562and a second rear bent part563may be formed at both ends of the second side frame56.

The second front bent part562may extend from an end of the second side frame56, and may be recessed to form the handle561into which the user's hand is put. The handle561may be formed to be recessed toward a lateral side of the panel assembly54. Therefore, the handle561may not be exposed, and only a part of the second side frame56may be exposed forward when being seen from a front. And the second front bent part562can form the handle561, and may be formed to extend from one end of the second side frame56and to be in contact with the rear surface of the front panel541. Therefore, a temperature outside the sub-door50may be transferred to the rear surface of the front panel541along the second side frame56formed of the metallic material, and thus the dew condensation at one side of the front panel541which is in contact with the second front bent part562may be prevented. Specifically, the second front bent part562may be recessed toward the front panel541from a position outward of the front panel541, and a recessed end thereof may be located inward of an outer end of the front panel541.
And the second front bent part562may be located at a rear of the front panel541, and thus the user may put his/her hand into the handle561formed by the second front bent part562, and then may rotate the sub-door50.

And a second heater installation groove5621at which the sub-door heater502is installed may be further formed at the second front bent part562. The second heater installation groove5621enables the sub-door heater502to be disposed at a position close to the spacer bar543. Therefore, the sub-door heater502may be disposed to extend vertically along the second side frame56. Due to a property of the second side frame56formed of the metallic material, the dew condensation at the front panel541may be prevented by heating the rear surface of the front panel541which is in contact with the second front bent part562. A portion of an inner side surface of the second front bent part562which is in contact with the front panel541may be formed to be rounded, and thus may allow the user to easily grip the portion and pull it forward.

The second rear bent part563may be bent from a rear end of the second side frame56, and coupled to the sub-door liner59. The second rear bent part563may be formed to support the sub-door liner59, and may be formed to support the load transmitted through the sub-door gasket591when the sub-door50is closed.

FIG.26is an exploded perspective view illustrating a coupling structure of the sub-door and the upper hinge. AndFIG.27is a partial perspective view illustrating an installed state of the upper hinge.

As illustrated in the drawings, the upper hinge installation part571which is recessed so that the upper hinge51and the hinge cover53are installed therein may be formed at the upper cap decoration57of the sub-door50. The upper hinge installation part571may be formed at an upper end of the upper cap decoration57, and may be formed to be connected to the adjacent first side frame55. That is, the hinge insertion part551formed at an upper end of the first side frame55and the upper hinge installation part571of the upper cap decoration57may be connected to each other, and thus the upper hinge51may be installed at a corner of the sub-door50to which the upper hinge installation part571and the hinge insertion part551are connected. In some cases, the lower cap decoration58provided at the lower end of the sub-door50may have the same structure, and thus the lower hinge52may be installed at a corner of the sub-door50.

A hinge accommodation part5711which is recessed to have a shape corresponding to the upper hinge51may be formed at the upper hinge installation part571. And a hinge fixing hole5712in which the screw passing through the upper hinge51is fastened may be formed at the hinge accommodation part5711. And an electric wire guide part5714and an electric wire hole5713through which an electric wire L disposed at the upper hinge51passes may be formed at one side of the upper hinge installation part571. The electric wire L guided through the electric wire guide part5714is connected to the second detection device81and the knock detection device82, and guided to the upper cap decoration57via the lower cap decoration58and the second side frame56. Then, the electric wire L may be introduced into the electric wire guide part5714through the electric wire hole5713formed at the upper hinge installation part571, and may be guided to an outside of the sub-door50through the electric wire guide part5714.
The electric wire L guided along the electric wire guide part5714is guided in an extending direction of the upper hinge51, and introduced into the main door40through the hinge hole433of the main door40together with one side of the upper hinge51.

In some cases, the upper hinge51may be installed and fixed to the upper hinge installation part571by a screw, and may include a door installation part511which is fixed to the sub-door50, and a rotary coupling part512which is rotatably coupled to the main door40. And the door installation part511may include a horizontal part5111which is fixed to the upper hinge installation part571, and a vertical part5112which is fixed to the hinge insertion part551of the first side frame55. The horizontal part5111and the vertical part5112are formed perpendicularly to each other, and thus the upper hinge51may be maintained in a state fixed to a corner of the upper end of the sub-door50.

The rotary coupling part512may be formed to extend from an end of the horizontal part5111toward the outside of the sub-door50. The rotary coupling part512may be formed to be bent in one direction, and a hinge shaft5121may be formed at an extending end thereof. The hinge shaft5121may be formed to extend downward from the plate-shaped rotary coupling part512. And a cut-away part5122may be formed at the rotary coupling part512, following the laterally bent shape of the rotary coupling part512. The cut-away part5122may be formed to be recessed inward from one side at which the hinge shaft5121is formed. And the rotary coupling part512may be cut so as to be rounded in a rotating radius direction of the sub-door50when the sub-door50is opened and closed. Therefore, when the sub-door50is rotated to be opened while the upper hinge51is coupled to the main door40, one end of the door frame43forming the hinge hole433is inserted into the cut-away part5122.

And a flange5123which prevents a deformation of the rotary coupling part512and reinforces strength may be formed along an outer end of the rotary coupling part512. The flange5123may be formed to extend in a direction which perpendicularly intersects with the rotary coupling part512.

A stopper5124may be further formed at one end of the rotary coupling part512. The stopper5124may be formed at one side of the rotary coupling part512close to the hinge shaft5121, and extends downward so as to interfere with one side of the main door40or the hinge hole433while the sub-door50is rotated to be completely opened, and thus prevents the sub-door50from being further opened.

The hinge cover53may be formed to shield an opening of the upper hinge installation part571and also to shield the upper hinge51from an upper side thereof. The hinge cover53may include a cap decoration shielding part531which shields the upper hinge installation part571, and a hinge shielding part532which shields the rotary coupling part512of the upper hinge51. The cap decoration shielding part531may have a shape corresponding to the upper hinge installation part571, and may also have a plurality of screw holes5311so that a screw is directly fastened to the upper cap decoration57, or the screw passing through the door installation part511is moved in and out. The hinge shielding part532may be formed to extend along a shape of the rotary coupling part512of the upper hinge51, and may also be formed to cover the rotary coupling part512from an upper side thereof.
And the hinge shielding part532is disposed to be somewhat spaced apart from the rotary coupling part512so as to form a space therebetween, and thus the electric wire L passing through the electric wire guide part5714may be guided through the space between the hinge shielding part532and the rotary coupling part512.

FIG.28is a longitudinal cross-sectional view illustrating a coupling structure of the upper hinge.

As illustrated in the drawing, the upper hinge51has a structure which is installed and fixed to the upper hinge installation part571of the upper cap decoration57, and shielded by the hinge cover53. And while the sub-door50is installed at the main door40, the upper hinge51is inserted into the hinge hole433, and the rotary coupling part512of the upper hinge51is located inside the main door40. In this state, the hinge shaft5121of the upper hinge51may be inserted into a shaft installation part438of the main door40. The shaft installation part438may be fixed to the inside of the main door40by a separate member, or may be integrally formed with the door frame43forming the main door40. The shaft installation part438may form a space into which the hinge shaft5121is inserted, so that the hinge shaft5121may rotate while inserted in the shaft installation part438.

And when the sub-door50is rotated to be opened while the upper hinge51is coupled to the main door40, the upper hinge51is also rotated with rotation of the sub-door50. At this point, a side end of the hinge hole433is inserted into the cut-away part5122of the upper hinge51, and thus interference may be prevented. Due to such a structure of the upper hinge51, the sub-door50may be rotatably disposed inside the opening part403of the main door40while the sub-door50is closed. And the upper hinge51extends laterally, and is rotatably coupled to the inside of the main door40, and thus the interference of the upper hinge51is prevented while the sub-door50is closed. Therefore, an outer surface of the sub-door50and an inner surface of the opening part403may be formed to be in close contact with each other, and even when the sub-door50is rotated, the sub-door50does not sag or deform, owing to the stable supporting structure of the upper hinge51.

And the electric wire L introduced through the electric wire guide part5714of the upper cap decoration57may pass through the hinge hole433via the hinge shielding part532of the hinge cover53, and may be guided to the inside of the main door40. Therefore, even while the sub-door50is being rotated, the electric wire L is not exposed to the outside, and is guided to the inside of the main door40while being shielded by the hinge cover53.

FIG.29is a longitudinal cross-sectional view illustrating a coupling structure of the sub-door and the lower hinge.

As illustrated in the drawing, the lower hinge52has the same structure as that of the upper hinge51, except that its bending direction is upward. To install the lower hinge52, the lower hinge installation part581may be formed at the lower cap decoration58to be recessed, and the lower hinge52may be installed and fixed to the lower hinge installation part581and the hinge insertion part552of the first side frame55. That is, the lower hinge52has a structure which is installed and fixed to a corner of the lower end of the sub-door50. In some cases, each of the upper hinge51and the lower hinge52has a structure which is inserted and fixed by the first side frame55.
Due to a property of the first side frame55formed of the metallic material, the first side frame55may stably support the upper hinge51and the lower hinge52, and may stably fix the sub-door50without sagging or deformation of the sub-door50even in an environment in which the load is applied. Accordingly, a space between the sub-door50and the main door40may be designed and maintained to be very narrow, and thus the external appearance may be enhanced.

The lower hinge52may include a door installation part521which is installed and fixed to the lower hinge installation part581by a screw, and a rotary coupling part522which is rotatably coupled to the main door40. The door installation part521may include a horizontal part5211which is fixed to the lower hinge installation part581, and a vertical part5212which is fixed to the hinge insertion part552of the first side frame55. And the rotary coupling part522may extend from an end of the horizontal part5211so as to pass through the hinge hole433of the main door40, and a hinge shaft5221may be formed at one extending end. The hinge shaft5221may be inserted into a shaft installation part439formed inside the main door40, and thus the lower hinge52may be rotatably coupled. And a cut-away part5222may be formed at the rotary coupling part522so that one side end of the hinge hole433is inserted therein when the sub-door50is rotated. And a stopper5224which restricts rotation of the sub-door50may be further formed at the rotary coupling part522.

In a similar manner, the sub-door50may be rotatably installed at the main door40by the upper hinge51and the lower hinge52which extend laterally from upper and lower ends of one side surface thereof. The sub-door50which has a relatively heavy weight due to the provided panel assembly54may be stably fixed to the inside of the opening part403.

FIG.30is an exploded perspective view illustrating a coupling structure of the knock detection device and the second detection device of the sub-door when being seen from a front. AndFIG.31is an exploded perspective view illustrating a coupling structure of the knock detection device and the second detection device of the sub-door when being seen from a lower side.

As illustrated in the drawings, the second detection device81and the knock detection device82may be provided at the lower end of the sub-door50. The second detection device81serves to detect a user's position, and to check whether the user stands in front of the refrigerator1to operate the refrigerator1. The second detection device81may be located on an extension line of the first detection device31, and may be arranged vertically with the first detection device31. And an installation height of the second detection device81corresponds to the lower end of the sub-door50, and thus an ordinary adult may be detected, but a child of small height, an animal, or other objects lower than the height of the second detection device81may not be detected.

And the knock detection device82may be formed to recognize whether the user knocks on the front panel541of the sub-door50. A certain operation of the refrigerator1may be designated by a knocking operation detected by the knock detection device82. For example, the door lighting unit49may be turned on by the user's knocking operation, and thus the sub-door50may become transparent; a minimal sketch of this mapping is given below. A specific structure of the second detection device81and the knock detection device82will be described below in detail.
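Functionally, the knock interaction just described reduces to a small event mapping; the sketch below is illustrative only, and the DoorLighting class and function names are hypothetical stand-ins rather than anything defined in this disclosure.

```python
# Hedged sketch: a validated knock-on event toggles the door lighting
# unit, which makes the half-mirror panel look transparent or mirror-like.
class DoorLighting:
    """Hypothetical stand-in for the door lighting unit49."""

    def __init__(self):
        self.on = False  # panel reads as a mirror while the light is off

    def toggle(self):
        self.on = not self.on  # on: interior visible; off: mirror-like


def on_knock_event(lighting: DoorLighting, is_valid_knock: bool) -> None:
    # Only a knock-on signal already validated by the detection device PCB
    # should reach this point; spurious vibrations are filtered earlier.
    if is_valid_knock:
        lighting.toggle()
```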
The lower hinge52may be installed at the lower cap decoration58which forms the lower surface of the sub-door50, and the detection device accommodation part582may be formed to be recessed at one side distant from the lower hinge52, i.e., at one side close to the second side frame56. The detection device accommodation part582may be formed to have a size which accommodates the second detection device81and the knock detection device82. And an opened lower surface of the detection device accommodation part582may be shielded by the accommodation part cover583.

The case fixing part481to which a screw for fixing the accommodation part cover583to the lower cap decoration58is fastened may be formed at one side of the accommodation part cover583. An injection port cover part5831is further formed at the other side of the accommodation part cover583. The injection port cover part5831may be formed on the lower cap decoration58, and also formed to shield a first injection port5824through which the foaming solution for molding the insulation501is injected. And a plurality of hook parts5832are formed at an upper surface of the injection port cover part5831to be fitted into the first injection port5824. Therefore, the injection port cover part5831is fitted into the first injection port5824, the case fixing part481is fixed to the lower cap decoration58by fastening the screw, and thus the entire accommodation part cover583is installed and fixed to the lower cap decoration58. When the accommodation part cover583is installed at the lower cap decoration58, the detection device accommodation part582may be shielded, and the first injection port5824may also be shielded.

And a PCB installation part5833may further be formed at the accommodation part cover583. A detection device PCB83for processing a signal of the second detection device81and the knock detection device82is installed at the PCB installation part5833. The detection device PCB83is connected to the second detection device81and/or the knock detection device82, and may be seated at the PCB installation part5833. The detection device PCB83serves to process the signals of the second detection device81and/or the knock detection device82, and is located at a position close to both devices.

If the detection device PCB83for processing the signal were located at a distance, the noise added while the signal is transferred through a signal line would increase. However, since the detection device PCB83is located at a position at which the second detection device81and the knock detection device82are installed, a main control part2receives only a valid knock-on signal. Accordingly, the noise due to the signal line between the main control part2and the detection device PCB83may be minimized. That is, the main control part2may receive the signal of which the noise is minimized through the detection device PCB83. Therefore, it may be possible to ensure an accurate recognition rate. In particular, in the case of the knock detection device82, a signal output through a microphone8211is on the order of mV, but the main control part2which controls an entire operation of the refrigerator1generally receives a signal on the order of V.
Therefore, due to the scale difference in the physical signal, it is not preferable for the main control part2to determine whether the knock-on signal is normal. The refrigerator1is an electronic appliance using a high voltage and a high current, and therefore a relatively large amount of electrical noise is generated. This means that the mV-order signal output from the microphone8211may be more vulnerable to the electrical noise. Therefore, since the detection device PCB83is located close to the knock detection device82, the noise may be remarkably reduced, and thus the recognition rate may be enhanced.

In some cases, a second injection port584through which the foaming solution is injected may further be formed at one side of the lower cap decoration58close to the lower hinge52. The second injection port584may be shielded by a separate injection port cover5841. And a plurality of hook parts5842are formed at an upper surface of the injection port cover5841to be fitted into the second injection port584.

A first boss5821to which a screw for fixing the second detection device81is fastened, and a second boss5822for fixing the knock detection device82are respectively formed at a bottom surface of the detection device accommodation part582. And an electric wire hole5823may be formed at one surface of the detection device accommodation part582. The electric wire L which is connected to the detection device PCB83, the second detection device81and the knock detection device82may be guided to the outside of the sub-door50through the electric wire hole5823. In some cases, a through part (or opening)5825which is opened so that the second detection device81and the knock detection device82are in close contact with the front panel541may be formed at a front surface of the detection device accommodation part582which is in contact with the front panel541.

FIG.32is an exploded perspective view of the knock detection device. AndFIG.33is a cross-sectional view taken along line33-33′ ofFIG.17. AndFIG.34is a cross-sectional view of a microphone module of the knock detection device.

A structure of the knock detection device82will be described in detail with reference to the drawings. The knock detection device82may include a microphone module821which detects the knock-on signal, a holder823which accommodates the microphone module821, an elastic member824which presses the holder823and the microphone module821toward the front panel541so that the holder823and the microphone module821are in close contact with the front panel541, and a support member825which supports the elastic member824and the holder823.

The microphone module821can include the microphone8211which directly senses a sound wave, and a microphone accommodation part (or microphone accommodation housing)8212which accommodates the microphone8211. The microphone8211serves to directly sense the sound wave, may be formed in a circular shape having a predetermined thickness, and installed and fixed into the microphone module821. One surface of the microphone8211may be referred to as a sound wave receiving part8213which receives the sound wave, and the sound wave receiving part8213is disposed toward an opening8214of the microphone accommodation part8212. And the other side of the microphone8211may be connected to a signal line8216, and the signal line8216may also be connected to the detection device PCB83. The microphone accommodation part8212may be formed of an elastic material such as rubber, and also formed to be in close contact with the front panel541.
To this end, the opening8214may be formed at one side of the microphone accommodation part8212close to the microphone8211installed in the microphone accommodation part8212, and a circular protrusion8215may be formed at a circumference of the opening8214. And the protrusion8215serves to enable the microphone accommodation part8212not to be inclined in one direction when the microphone accommodation part8212is in close contact with the front panel541, and also to enable the entire opened front surface of the opening8214to be maintained in close contact with the front panel541.

A predetermined sealed space may be formed between the opening8214and the sound wave receiving part8213which are in close contact with each other by the protrusion8215. Therefore, a front of the closely contacting space is sealed by a medium, i.e., the front panel541. Accordingly, vibration transmitted through an inside of the medium vibrates air in the predetermined space, and the sound wave due to the vibration may be received by the microphone8211. Due to such a sealing process, introduction of external noise or vibration into the predetermined space may be minimized. Thus, an error in determining a knocking operation or a malfunction due to the external noise may be considerably reduced, and a very accurate recognition rate may be ensured. That is, accuracy in determining the knocking operation when a knock-on input is applied may be remarkably increased.

A module seating part8231in which the microphone module821is accommodated and which is opened toward the front panel541may be formed at the holder823. The microphone module821may be formed so that at least the protrusion8215protrudes further than a front surface of the holder823while the microphone module821is seated on the module seating part8231. A holder slot8232through which the signal line connected to the microphone8211passes may be formed at the holder823. The holder slot8232may be formed to be opened at one side of the module seating part8231. Also, a first elastic member fixing part8233which protrudes so that the elastic member824is installed and fixed thereto may be formed at a rear surface of the holder823. The first elastic member fixing part8233may be formed to extend and to pass through one end of the elastic member824having a coil shape.

A holder coupling part8234which may be formed in a hook shape and coupled to the support member825may be formed at both sides of the holder823. Due to the holder coupling part8234, the holder823is coupled so as not to be separated from the support member825. And also, due to the hook shape of the holder coupling part8234, movement of the holder823in the direction in which it is inserted into the support member825is not restricted.

A front surface of the support member825may be formed to be opened, and also formed so that the holder823is inserted through the opened front surface thereof. And a second elastic member fixing part8251which protrudes so that the elastic member824is installed and fixed thereto may be formed at an inside of the support member825. The second elastic member fixing part8251may be located on an extension line of the first elastic member fixing part8233, and may be inserted so as to pass through one end of the elastic member824. Therefore, even though the elastic member824is compressed to press the holder823, the elastic member824may stably press the holder823toward the front panel541without buckling.
By the elastic member824, the microphone module821may be maintained in close contact with the front panel541, and particularly, may always be maintained in close contact with the front panel541without a position change of the microphone module821due to a shock generated when the main door40and the sub-door50are closed and opened or an inertial force generated when the main door40and the sub-door50are rotated.

A support member slot8252may be formed at one side of the support member825. The support member slot8252may be formed on an extension line of the holder slot8232. Therefore, the signal line passing through the holder slot8232may pass through the support member slot8252, and may be connected to the detection device PCB83. A support member fixing part8253may be formed at the other side of the support member825. The support member fixing part8253extends outward, and is seated on the second boss5822which protrudes from the detection device accommodation part582. And the screw passes through a screw hole8254of the support member fixing part8253and is fastened to the second boss5822, so that the support member825is installed and fixed on the lower cap decoration58.

In some cases, the knock detection device82may be installed at the area of the bezel5411of the front panel541, and thus the knock detection device82is not exposed to the outside when being seen from an outside of the front panel541. The knock detection device82may be located at an edge of the front panel541, but an effective input part for the user's knocking operation is not limited thereto. In a state in which the knock detection device82is in close contact with the medium, even when the knocking operation is applied at any position, the sound wave may be transmitted through the same continuous medium due to a property of the microphone8211, which detects the sound wave generated by the vibration instead of the vibration itself, and thus may be effectively detected.

Therefore, the knock detection device82may be disposed at one end where the electric wires may be arranged and where the visible area of the sub-door50may be maximized. At the same time, even though the user knocks on any point of the front panel541, the sound wave may be detected through the microphone8211which is in close contact with the same medium. Specifically, an area to which a user's knocking input is applied may be an entire area which is defined by the front surface of the front panel541. Most of the front panel541except a boundary portion thereof is substantially a see-through area which selectively becomes transparent, and the knock detection device82may not be disposed thereat. Therefore, it is preferable that the knock detection device82be located at the area of the bezel5411in the front panel541. In particular, the bezel5411located at an upper end and left and right sides of the front panel541may be minimized by locating the knock detection device82at the lower end of the front panel541rather than at the left and right sides thereof. By such a shape of the bezel5411, the see-through area may be expanded. Since the knock detection device82is located at the lower end of the front panel541, on which a user's eyes are relatively less focused, the wider see-through area may be provided to the user.
Since the knock detection device82is located at the area of the bezel5411, is not exposed to an outside, and has a structure which is in close contact with the front panel541, the user's knocking operation may be detected even though the user knocks on any position of the front panel541.

In some cases, there may be environmental factors other than the knocking operation in which vibrations are exerted on the front surface of the front panel541. The front surface of the panel assembly54may be vibrated by the shock generated when the main door40and the sub-door50are opened and closed, an external loud noise or the like, and such an input due to the external environment might be misrecognized as a knock signal. Therefore, the detection device PCB83may be set so that a user's operation of knocking the front surface of the sub-door50several times is recognized as a normal knock input. More specifically, the user's operation of knocking the front surface of the sub-door50several times at predetermined time intervals may be recognized as the normal knock input. For example, when the user knocks the front surface of the sub-door50twice within a predetermined time, it may be recognized as the normal knock input.

When a general user's knock pattern is analyzed, it may be understood that a time interval between a first knock and a second knock is less than about 600 ms. That is, considering that 1 second (s) is 1,000 ms, a case in which the first knock and the second knock are performed at a time interval of less than 1 second may be recognized as the normal knock input. Therefore, by setting the time interval, an abnormal input may be largely prevented from being misrecognized as the knock signal.

In some cases, there may be a deviation in knock intensity according to the user. However, since the medium is the same, it may be understood that while the deviation in the knock intensity may be large, a deviation in the vibration pattern is very small. Therefore, the deviation in the knock intensity may be offset through an algorithm, and the normal knock input may be effectively recognized using a knock input pattern and the time interval between the knocks as factors; a minimal sketch of such a timing rule is given below.

FIG.35is an exploded perspective view illustrating a coupling structure of the second detection device. AndFIG.36is a partial perspective view illustrating an installed state of the second detection device.

As illustrated in the drawings, the second detection device81may be located inside the detection device accommodation part582, and may be located at a lateral side of the knock detection device82. The second detection device81is a device which detects a user's approach, and a position sensing device (PSD) may be used as the second detection device81. That is, the second detection device81includes a light emitting part811and a light receiving part812, and may be formed so that the infrared light is emitted from the light emitting part811, an angle of the reflected light is measured by the light receiving part812, and thus a position of the user is recognized. An approach distance which is detected by the PSD may be set, and a detectable distance of the second detection device81is set to less than 1 m; thus, when the user is located within a distance of 1 m from the front surface of the refrigerator1, it may be recognized that the user is located in front of the refrigerator1to operate the refrigerator1.
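As a hedged illustration of the timing rule described above (two knocks within the quoted window, validated locally on the detection device PCB), the sketch below uses assumed names and thresholds (KNOCK_WINDOW_MS, REQUIRED_KNOCKS, is_normal_knock_input) that do not come from this disclosure:

```python
# Assumed thresholds, chosen to match the figures quoted in the text.
KNOCK_WINDOW_MS = 600    # typical interval between first and second knock
REQUIRED_KNOCKS = 2      # two knocks are treated as one knock-on input


def is_normal_knock_input(peak_times_ms):
    """Return True when successive knock peaks form a valid knock-on.

    peak_times_ms: ascending timestamps (ms) of peaks already extracted
    from the mV-order microphone signal on the detection device PCB, so
    that only a validated event, not raw audio, reaches the main control
    part. Intensity deviation is ignored; only the pattern and the time
    interval between knocks are used as factors, as described above.
    """
    if len(peak_times_ms) < REQUIRED_KNOCKS:
        return False
    # Every gap between consecutive knocks must fit inside the window.
    return all(later - earlier <= KNOCK_WINDOW_MS
               for earlier, later in zip(peak_times_ms, peak_times_ms[1:]))


# Example: knocks 400 ms apart pass; knocks 900 ms apart do not.
assert is_normal_knock_input([0, 400])
assert not is_normal_knock_input([0, 900])
```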
Like the knock detection device82, the second detection device81is installed at the lower end of the sub-door50, which is located at an upper portion of the refrigerator. Since the installation position corresponds to a height of about 1 m from the floor, a child of small height or other low objects may not be detected.

A pressing member813may be further provided at a rear of the second detection device81. The pressing member813may be formed to press the second detection device81so that the second detection device81is installed and fixed to the detection device accommodation part582and is also in close contact with the front panel541. Specifically, a detection device fixing part8131which is fixed to a rear surface of the second detection device81may be formed at the pressing member813. The detection device fixing part8131is coupled to both side ends of the second detection device81, and thus the pressing member813and the second detection device81may be integrally coupled to each other. And an elastic part8132which protrudes backward in a rounded shape may be formed between the detection device fixing parts8131. The elastic part8132may be elastically deformed by a pressure, and an end of the elastic part8132which protrudes while the second detection device81is installed may be in close contact with a wall surface of the detection device accommodation part582, and elastically deformed. Therefore, the second detection device81may be in close contact with the front panel541by an elastic restoring force of the elastic part8132. Thus, the light emitting part811and the light receiving part812may be completely in close contact with the rear surface of the front panel541.

The front surface of the second detection device81may pass through the through part5825formed at the front surface of the detection device accommodation part582, and may be disposed at the area of the transparent penetration part5412formed at the bezel5411. Therefore, the second detection device81has a structure which is actually exposed to the outside through the penetration part5412. However, the second detection device81may have a black color or a dark gray color which is the same as or similar to a color of the front panel541having a half mirror structure, and thus may not be easily seen when being seen from an outside. That is, the light emitted from the second detection device81does not interfere with the bezel5411, and the second detection device81is prevented from being noticeably exposed, and thus the external appearance is prevented from being degraded.

In some cases, a pressing member fixing part8133may be formed at one side of the pressing member813. The pressing member fixing part8133may be formed to extend outward, and seated at the first boss5821which protrudes from the detection device accommodation part582. And the screw passing through a screw hole8134of the pressing member fixing part8133is fastened to the first boss5821, and thus the pressing member813is installed and fixed on the lower cap decoration58.

FIG.37is a view illustrating an electric wire arrangement inside the sub-door.

As illustrated in the drawing, in the sub-door50, while the second detection device81and the knock detection device82are assembled, the detection device accommodation part582is shielded by the accommodation part cover583.
At this point, the detection device PCB83is installed at an inner surface of the accommodation part cover583, and the electric wire L which is connected to the second detection device81, the knock detection device82and the detection device PCB83is guided to an outside of the detection device accommodation part582through the electric wire hole5823.

In the sub-door50, a space in which the insulation501is formed may be provided at an outer perimeter of the panel assembly54, i.e., an internal area of the upper cap decoration57, the lower cap decoration58, the first side frame55and the second side frame56. Therefore, an empty space may be formed before the foaming solution for molding the insulation501is injected, and the electric wire L passing through the electric wire hole5823of the detection device accommodation part582may be guided along a space formed by the second side frame56and the upper cap decoration57. And the electric wire L guided to the upper hinge installation part571through the electric wire hole5713of the upper hinge installation part571may be covered by the hinge cover53. And the electric wire L is guided to the inside of the main door40through a space between the hinge cover53and the upper hinge51, and is not exposed to the outside even while the sub-door50is being rotated.

In some cases, the first injection port5824and the second injection port584are formed at the lower cap decoration58, and may be shielded by the injection port cover part5831formed at the accommodation part cover583and by the injection port cover5841, respectively. The first injection port5824may be located at a lateral side of the detection device accommodation part582, and may be located at a position close to the second side frame56. The first injection port5824may be formed as far outward as possible. When the first injection port5824is formed at a position which at least partially overlaps the space between the panel assembly54and the second side frame56, it is easy to inject the foaming solution between the panel assembly54and the second side frame56. However, since interference may occur due to a shape of the handle561formed at the second side frame56, it is preferable that the first injection port5824be formed as far outward as possible. A foaming solution guide part585, which is rounded from the inside of the first injection port5824toward the second side frame56, may be formed inside the lower cap decoration58. Therefore, when the foaming solution is injected through the first injection port5824, the foaming solution may naturally flow to the space between the second side frame56and the panel assembly54.

The second injection port584may be formed on the lower cap decoration58close to the lower hinge installation part581. The second injection port584is located to avoid the interference with the lower hinge installation part581. At this point, the second injection port584may be formed at a position which is spaced laterally further than a space formed by the first side frame55and the panel assembly54. A width of the space between the first side frame55and the panel assembly54is narrow, and thus the foaming solution may overflow when the foaming solution is directly injected. To solve the problem, the foaming solution may primarily be injected into a relatively wide space formed by the lower cap decoration58and the panel assembly54, where it can then naturally flow to the space formed by the first side frame55and the panel assembly54.
There may be a difference in fluidity of the foaming solution according to positions of the first injection port5824and the second injection port584. The foaming solution may be simultaneously injected at both of the first injection port5824and the second injection port584, and may be filled at the perimeter of the sub-door50. FIG.38is a perspective view illustrating a state in which the foaming solution is injected into the sub-door. AndFIG.39is a view illustrating an arrangement of a vent hole of the sub-door. Referring to the drawings, in a state in which the accommodation part cover583and the injection port cover5841are opened, the foaming solution is injected toward the first injection port5824and the second injection port584. At this point, a pressure of the foaming solution injected to each of the first injection port5824and the second injection port584may be set differently. That is, the foaming solution which is injected to the first injection port5824having a relatively wide flowing space may be injected at a relatively high pressure. A flowing path of the foaming solution will be described with reference toFIG.38. The foaming solution injected to the first injection port5824is introduced into a space formed by the second side frame56and the panel assembly54through the foaming solution guide part585. Then, the foaming solution flows continuously to a space formed by the upper cap decoration57and the panel assembly54. The foaming solution injected to the second injection port584is first injected into the space formed by the lower cap decoration58and the panel assembly54, and then flows continuously to the space between the first side frame55and the panel assembly54. The foaming solution which is simultaneously injected to both of the first injection port5824and the second injection port584is combined at an area A of the upper cap decoration57or an area B of the first side frame55. Then, the foaming solution is fully filled in a space formed by the upper cap decoration57, the first side frame55and the second side frame56, and then finally filled in the space formed by the lower cap decoration58and the panel assembly54. After the filling of the foaming solution is completed, the first injection port5824and the second injection port584are shielded by the accommodation part cover583and the injection port cover5841. Meanwhile, a vent hole5921through which air remaining in the sub-door50is discharged when the foaming solution is injected may be formed at the sub-door liner59. The vent hole5921may be formed at a gasket installation groove592at which the sub-door gasket591formed along the sub-door liner59is installed. The gasket installation groove592may be formed to be recessed along a perimeter of the sub-door liner59, and the vent hole5921may be formed in the gasket installation groove592at regular intervals. And after the foaming solution is fully filled, the sub-door gasket591is installed at the gasket installation groove592. Therefore, the vent hole5921may be covered by the sub-door gasket591, and may not be exposed to an outside. Meanwhile, the vent hole5921may be formed at a partial section of the entire gasket installation groove592. The vent hole5921may be formed at regular intervals along areas A and B at which the upper cap decoration57and the first side frame55are disposed, and particularly, may be formed at regular intervals based on a corner at which the upper cap decoration57and the first side frame55meet. 
Therefore, the air in the sub-door50may be discharged at an area close to a point at which the foaming solutions injected through the first injection port5824and the second injection port584are combined. The air may be continuously discharged until the foaming solution is completely filled.

FIG.40is a perspective view illustrating an operation state of a projector of the refrigerator. AndFIG.41is a cut-away perspective view illustrating an internal structure of a freezer compartment of the refrigerator.

As illustrated in the drawings, the freezer compartment13may be opened and closed by one pair of the freezer compartment doors30. And the first detection device31and a projector32may be provided at a right one (inFIG.40) of the pair of freezer compartment doors30. It is preferable that the first detection device31and the projector32be provided at the right one of the pair of freezer compartment doors30, on the side at which the sub-door50is located. And the first detection device31may be vertically disposed on an extension line of the second detection device81.

An inclined surface331which is formed to be inclined downward toward an inside may be formed at a lower portion of the freezer compartment door30. And the first detection device31and the projector32may be provided at the inclined surface331. The projector32serves to project light on a floor surface located in front of the refrigerator1. An image P such as a design or a character may be projected through the projector32. For example, when the projector32is turned on, the image P including a word like “Door open” may be displayed on the floor surface located in front of the refrigerator1.

Meanwhile, the first detection device31may be disposed at a lower side of the projector32. The projector32and the first detection device31may be formed in one module, and may be installed together at the inclined surface331. The first detection device31may be configured with a kind of proximity sensor which detects a position, and may be provided at the lower side of the projector32, and may detect whether an object is located at a position of the image P projected by the projector32. That is, when the user places a part of his/her body, such as a foot, on the image P projected by the projector32, the first detection device31may detect the body. A PSD sensor or an ultrasonic sensor may be used as the first detection device31, and various kinds of proximity sensors which recognize a distance of about 10 to 20 cm may be used.

The projector32and the first detection device31may be installed on the inclined surface331to project the image right in front of the refrigerator1or at a lower side of the inclined surface331and to detect the object. Therefore, an erroneous detection due to a person or an animal which just passes by the refrigerator1, a cleaning device in operation, or the like is prevented. That is, the user stands at a position close to the refrigerator1to be detected by the first detection device31. At this point, when the user's foot is located right in front of the inclined surface331or at the lower side of the inclined surface331, the foot is detected by the first detection device31. Detection by the first detection device31may include a motion of covering at least a part of the image P projected by the projector32for a preset time, a motion of passing through an area of the image P, or another motion which may be recognized by the first detection device31.
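The dwell-based variant of this detection (covering the image P for a preset time) can be sketched as follows; the sensing range, dwell time and the read_distance_cm callable are assumptions for illustration, not values or interfaces defined in this disclosure.

```python
import time

DETECT_RANGE_CM = 20.0   # assumed: object nearer than this covers the image P
DWELL_TIME_S = 0.5       # assumed stand-in for the "preset time" in the text
POLL_INTERVAL_S = 0.05


def wait_for_foot_gesture(read_distance_cm, timeout_s=10.0):
    """Return True when an object stays within range for DWELL_TIME_S.

    read_distance_cm is a hypothetical callable wrapping the proximity
    sensor (PSD or ultrasonic) and returning a distance in centimeters.
    """
    covered_since = None
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_distance_cm() <= DETECT_RANGE_CM:
            if covered_since is None:
                covered_since = time.monotonic()      # image just covered
            elif time.monotonic() - covered_since >= DWELL_TIME_S:
                return True                           # covered long enough
        else:
            covered_since = None                      # reset the dwell timer
        time.sleep(POLL_INTERVAL_S)
    return False
```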
In addition, it may be set that positioning of the user is recognized as a user's operation for operating the refrigerator1only when the positioning is simultaneously detected by a combination of the first detection device31and the second detection device81, and thus malfunction may be minimized. To this end, when the user is detected by the second detection device81, the projector32may be operated, and a detection value of the first detection device31may be treated as valid. Like this, when both of the first detection device31and the second detection device81validly perform a detection operation, the door opening device70may be operated to open the main door40. The implementation of the present disclosure has described an example in which the main door40is opened by the door opening device70. However, the sub-door50or the freezer compartment door30may be opened according to a position of the door opening device70. Meanwhile, the user may grip a freezer compartment handle, and then may rotate the freezer compartment door30, and thus the freezer compartment13may be opened and closed by rotation of the freezer compartment door30. An opening and closing detection device302may be provided at a freezer compartment door hinge301which rotatably supports the freezer compartment door30, and whether or not the freezer compartment door30is opened may be determined by the opening and closing detection device302. And when the freezer compartment door30is opened at a preset angle or more, and the freezer compartment accommodation member131provided inside the freezer compartment door30is in a state in which it may be withdrawn, the freezer compartment accommodation member131may be automatically withdrawn forward by driving of an accommodation member withdrawing device34. To this end, the freezer compartment accommodation member131having a drawer or basket shape may be supported by a sliding rail1311so as to be inserted into or withdrawn from the freezer compartment13. And the accommodation member withdrawing device34provided inside the freezer compartment13may be formed so that an inserting and withdrawing rod341is inserted and withdrawn by driving of a motor and a gear assembly. The inserting and withdrawing rod341may be connected to the freezer compartment accommodation member131, and thus the freezer compartment accommodation member131may be automatically withdrawn by driving of the accommodation member withdrawing device34. At this time, even when a plurality of freezer compartment accommodation members131are provided, the inserting and withdrawing rod341may be connected to all of the plurality of freezer compartment accommodation members131through a connection member342, and thus the plurality of freezer compartment accommodation members131may be inserted and withdrawn at the same time. When the freezer compartment door30is rotated to be closed, and then it is determined that the freezer compartment door30is rotated at a predetermined angle or more before being in contact with the freezer compartment accommodation member131, the accommodation member withdrawing device34is reversely rotated, and the inserting and withdrawing rod341is inserted, and thus the freezer compartment accommodation member131may be slid and inserted to an initial position.
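The door-angle rule in the preceding paragraphs reduces to two thresholds: withdraw the accommodation member once the opening angle exceeds a preset angle, and retract it while the door is closing before the door would contact it. A minimal sketch under assumed angle values; the motor interface is a placeholder, and the specific angles are not given in the text.

```python
WITHDRAW_ANGLE_DEG = 90.0   # assumed preset opening angle for auto-withdraw
RETRACT_ANGLE_DEG = 40.0    # assumed closing angle at which retraction starts

class AccommodationMemberWithdrawer:
    def __init__(self, drive_rod):
        """drive_rod: callable taking +1 (withdraw) or -1 (retract),
        standing in for the motor and gear assembly driving the rod."""
        self._drive_rod = drive_rod
        self._withdrawn = False

    def on_door_angle(self, angle_deg, closing):
        """Feed door-angle updates from the opening and closing detection device."""
        if not closing and not self._withdrawn and angle_deg >= WITHDRAW_ANGLE_DEG:
            self._drive_rod(+1)        # rod pushes the accommodation member out
            self._withdrawn = True
        elif closing and self._withdrawn and angle_deg <= RETRACT_ANGLE_DEG:
            self._drive_rod(-1)        # reverse rotation slides it back in
            self._withdrawn = False
```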
Hereinafter, an operation of the sub-door of the refrigerator according to the implementation of the present disclosure having the above-described structure will be described. FIG.42is a block diagram illustrating a flow of a control signal of the refrigerator. AndFIG.43is a flowchart sequentially illustrating an operation of the sub-door of the refrigerator. As illustrated in the drawings, the refrigerator1includes the main control part2which controls the operation of the refrigerator1, and the main control part2may be connected to a door switch21. The door switch21may be provided at the cabinet10, and may detect opening of the refrigerator compartment door20or the main door40, and may also be provided at the main door40, and may detect opening of the sub-door50. And the main control part2may be connected to the main lighting unit85provided inside the cabinet10, and may illuminate the inside of the refrigerator1when the refrigerator compartment door20or the main door40is opened. And the main control part2may be connected to the door lighting unit49, and may enable the door lighting unit49to be turned on when the sub-door50is opened or the knock-on signal is input. And the main control part2may be connected to the display unit60, and may control an operation of the display unit60, and may receive an operating signal through the display unit60. Also, the main control part2may be connected to the door opening device70and the accommodation member withdrawing device34, and may control operations of the door opening device70and the accommodation member withdrawing device34. The main control part2may be connected to a communication module84. The communication module84serves to transmit and receive data such as state information of the refrigerator1, program updating, and transmitting of a using pattern, and may be configured with a device which allows short range communication such as NFC, WiFi and Bluetooth. And setting of the communication module84may be performed at the display unit60. The main control part2may be directly or indirectly connected to the first detection device31, the second detection device81, the knock detection device82and the projector32, and may receive the operating signals thereof or may control the operations thereof. And when the detection device PCB83is connected to the knock detection device82and/or the first detection device31, the detection device PCB83may be connected to the main control part2. And the knock detection device82and the detection device PCB83may be integrally formed with each other. In a general state in which a separate operation is not applied to the refrigerator1having the above-described configuration, the sub-door50is in the opaque state like the mirror surface, as illustrated inFIG.4. In this state, it may not be possible to see through the inside of the refrigerator1. And in this state, the first detection device31, the second detection device81and the knock detection device82are maintained in an activated state in which the user may input the operation anytime [S110]. In this state, when the user stands in front of the refrigerator1to open the main door40or the sub-door50of the refrigerator1, the second detection device81detects the user's position. At this time, when the user is not an ordinary adult, but a child, the user may not be detected due to the installed position of the second detection device81. When a height of an object which performs a cleaning operation or passes by is lower than that of the second detection device81, the object may not be detected, and thus the malfunction may be prevented. Meanwhile, the detecting of the second detection device81is not essential, and thus may be selectively set by the user's operation [S120].
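Steps S110-S120 amount to an always-armed knock detector with an optional presence gate in front of it. The sketch below is one way to express that gating; the class name and the idea of caching the latest presence reading are assumptions, and the signal processing itself lives elsewhere.

```python
class KnockOnGate:
    """S110: detectors stay armed; S120: presence gating is user-selectable."""

    def __init__(self, require_presence=True):
        self.require_presence = require_presence   # optional per the text
        self.user_present = False
        self.door_open = False

    def on_presence(self, detected):
        """Latest reading from the second detection device81."""
        self.user_present = detected

    def accept_knock(self):
        """Return True if a knock event should even be considered."""
        if self.door_open:
            return False          # inputs are ignored while a door stands open
        return self.user_present or not self.require_presence
```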
Then, when the user performs a knocking operation which knocks on the front surface of the sub-door50, i.e., the front panel541, the knock detection device82may detect the knocking operation, and the detection device PCB83determines whether the knocking operation is valid. Specifically, when the user knocks on the front panel541, the sound wave due to the vibration generated at this point is transmitted along the front panel541formed of the same medium, and the microphone8211which is in close contact with the front panel541receives the sound wave. The received sound wave is filtered and amplified while passing through a filter and an amplifier, and transmitted to the detection device PCB83. The detection device PCB83collects and analyzes the transmitted signal to determine whether it is a knock signal. That is, in the case of a sound wave which is generated by a noise or a shock inside or outside the refrigerator1, its properties differ from those of the sound wave generated by the knocking operation, and thus the detection device PCB83determines whether the user performs the knocking operation through the signal corresponding to the property of the knock signal. Of course, in a certain situation, a signal similar to the knock signal may be generated, or a shock similar to the knock may be applied to the front panel541due to the user's carelessness or inexperienced operation, or the external noise may be recognized as a signal similar to a wavelength of the knock signal. To prevent misrecognition in the certain situation, the detection device PCB83can confirm whether the knock signal is continuously generated in a preset pattern, and also determine whether the pattern is formed within a preset time. For example, it may be set that, when a signal which is recognized as the knock is generated twice within one second, the signal may be detected as the valid knock-on signal. An analysis of general users' knock patterns shows that, when the knock is performed continuously twice, the time interval is less than one second. Therefore, when a signal recognition condition is set as described above, the misrecognition in the certain situation may be prevented, and also the user's knocking operation may be accurately recognized. Of course, the number of knock signals and the set time necessary to be recognized as the valid knock-on signal may be changed variously.
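The validity rule just described — for example, two recognized knocks within one second — is a small sliding-window check. A minimal sketch using the example numbers from the text; the waveform analysis that decides whether a sound is a knock at all is assumed to have already happened upstream.

```python
import time

REQUIRED_KNOCKS = 2   # example from the text
WINDOW_S = 1.0        # example from the text: two knocks within one second

class KnockPatternValidator:
    def __init__(self):
        self._knock_times = []

    def on_recognized_knock(self):
        """Feed each filtered/amplified signal classified as a knock.

        Returns True when the preset pattern completes (a valid knock-on).
        """
        now = time.monotonic()
        self._knock_times.append(now)
        # drop knocks that fell out of the sliding window
        self._knock_times = [t for t in self._knock_times if now - t <= WINDOW_S]
        if len(self._knock_times) >= REQUIRED_KNOCKS:
            self._knock_times.clear()   # pattern consumed; wait for a fresh one
            return True
        return False
```

On a True return, the detection device PCB83would hand the valid signal on to the main control part2.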
When a detecting signal is not detected by the second detection device81, or it is determined through the knock detection device82that the valid knock-on signal is not generated, the main control part2does not perform a separate control operation, and is maintained in a standby state. And while the main door40or the sub-door50is opened, the second detection device81and the knock detection device82may be inactivated, or may ignore the input signal, and thus the malfunction may be prevented [S130]. Meanwhile, when the valid knock-on signal is detected, and the detection device PCB83transmits the valid signal to the main control part2, the main control part2turns on the main lighting unit85or the door lighting unit49. When the main lighting unit85or the door lighting unit49is turned on, the inside of the refrigerator1becomes bright, and the light inside the refrigerator1passes through the panel assembly54. In particular, when the light passes through the front panel541, the front panel541becomes transparent, and thus the inside thereof may be seen through, as illustrated inFIG.5. When the sub-door50becomes transparent, the user may confirm the accommodation space inside the main door40or the space inside the refrigerator1, and thus may open the sub-door50to store the food, or may perform a necessary operation. At this time, the display unit60may also be turned on, and may display operation information of the refrigerator1. Therefore, the user may check the information output from the display61disposed inside the main door40through the sub-door50[S140]. The turned-on main lighting unit85or the door lighting unit49may be maintained in a turned-on state for a preset time, e.g., 10 seconds, and thus may allow the user to sufficiently confirm an internal state of the refrigerator1. Of course, the display unit60may also be maintained in a turned-on state for a preset time. And it is determined whether the preset time passed while the main lighting unit85or the door lighting unit49is turned on. When the preset time passes, the main lighting unit85or the door lighting unit49is turned off [S150]. And while the main lighting unit85or the door lighting unit49is turned on, a valid knocking operation signal may be input by the user before the preset time passes. That is, when the user performs the knocking operation to confirm the inside of the refrigerator1, but a separate operation is not needed, the main lighting unit85or the door lighting unit49may be turned off before the preset time passes. For example, in a state in which the user confirms an accommodation state inside the refrigerator1within 5 seconds after the main lighting unit85or the door lighting unit49is turned on, or confirms the information displayed on the display unit60, when it is intended that the sub-door50becomes opaque, the knocking operation may be performed again on the front surface of the sub-door50, i.e., the front panel541. At this point, when it is determined that the knocking operation is valid, the main lighting unit85or the door lighting unit49may be turned off before the preset time passes, and the display unit60may also terminate an output of the information. Of course, validity determination of the knocking operation may be set to be the same as the operation S130, and in some cases, may be set to another knock input pattern [S160]. When the preset time passes after the main lighting unit85or the door lighting unit49is turned on, or the valid knock-on signal is input, the main lighting unit85or the door lighting unit49may be turned off. When the main lighting unit85or the door lighting unit49is turned off, the inside of the refrigerator1becomes dark, and the outside thereof is in a bright state. In this state, the light outside the refrigerator1is reflected by the front panel541, and thus the front surface of the sub-door50is in the mirror-like state, and the user may not see through the inside thereof. Therefore, the sub-door50is maintained in the opaque state until a new operation is input [S170].
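Steps S140-S170 behave like a retriggerable timer: a valid knock turns the lighting on, and either the preset time (10 seconds in the example above) or a second valid knock turns it off again. A minimal sketch; the use of threading.Timer and the callback signature are assumptions, not the patent's implementation.

```python
import threading

PRESET_ON_TIME_S = 10.0   # preset on-time quoted in the text

class SeeThroughController:
    def __init__(self, set_lighting):
        """set_lighting: callable(bool) driving the lighting unit(s) and display."""
        self._set_lighting = set_lighting
        self._timer = None
        self._lit = False

    def on_valid_knock(self):
        if self._lit:
            self._turn_off()                     # S160: early dismissal by knock
        else:
            self._set_lighting(True)             # S140: panel reads as transparent
            self._lit = True
            self._timer = threading.Timer(PRESET_ON_TIME_S, self._turn_off)
            self._timer.start()

    def _turn_off(self):                         # S150/S170: back to the mirror state
        if self._timer is not None:
            self._timer.cancel()
        self._set_lighting(False)
        self._lit = False
```

For instance, SeeThroughController(lambda on: print("lighting", on)) exercises the on/off cycle without real hardware.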
Hereinafter, an operation of the display unit60will be described with reference to the drawings. FIG.44is a perspective view illustrating an installed state of the display unit. AndFIG.45is a view illustrating a configuration of a front surface of the display unit. As illustrated in the drawings, the display unit60is provided at a lower end of the opening part403of the main door40. And when the main lighting unit85or the door lighting unit49is turned on so that the sub-door50becomes transparent, the display unit60may also be turned on together, and thus the user may confirm the information of the display unit60through the sub-door50even while the sub-door50is closed. The display unit60may be turned on while the sub-door50is opened. The user may open the sub-door50to operate the display unit60, and when the opening of the sub-door50is detected by the door switch21, the display unit60may be activated. The display61may be provided at a center of a front surface of the display unit60, and the plurality of operating buttons62may be provided at both of left and right sides of the display61. The display61may be a screen through which the operation information of the refrigerator1is output, and may be selectively turned on and off according to the knocking operation on the front panel541or the opening and closing of the sub-door50. The operating buttons62serve to set the operation of the refrigerator1, and may include a communication button621, a lock button622, an auto-door button623, an auto-drawer button624, a refrigerator compartment temperature fixing button625, a freezer compartment temperature fixing button626, an air cleaning button627, and a quick freezing button628. A combination of the operating buttons62is just an example for convenience of explanation, and is not limited thereto. FIG.46is a view illustrating a change in a display state of the display unit according to a knocking operation. As illustrated in the drawing, the display61is maintained in an OFF state until the knocking operation on the front panel541is performed. And when the user knocks on the front panel541, the display61is turned on. At this point, a first screen611or a second screen612which outputs a temperature in the refrigerator1and a present operating function may be output on the display61. Since the main lighting unit85or the door lighting unit49is turned on, and the sub-door50becomes transparent, the information of the display61may be indicated even while the sub-door50is closed. When the preset time passes after the display unit60is turned on, or the user knocks again on the front panel541, the display61is turned off. At this time, the main lighting unit85or the door lighting unit49is also turned off, and the sub-door50is in the opaque state, and thus the display61is not visible from the outside. FIG.47is a view illustrating the change in the display state when the sub-door is opened and closed. As illustrated in the drawing, while the sub-door50is closed, the display61is turned off. And when the sub-door50is opened, the opening of the sub-door50is detected by the door switch21, and the main control part2turns on the display61. When the display61is turned on, the operation information of the refrigerator1is displayed on the first screen611, and the first screen611is changed into the second screen612after the preset time passes, and another operation information of the refrigerator1is displayed on the second screen612. At this point, the information displayed on the first screen611and the second screen612may be set by the user's operation. For example, the first screen611may display all of the temperatures of the refrigerator compartment12and the freezer compartment13, and may also display the present operating function. And the second screen612may display the temperature of one storage space of the refrigerator compartment12or the freezer compartment13and the present operating function in the corresponding storage space.
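The door-open behavior of FIG.47 — first screen on opening, second screen after a preset dwell, off on closing — can be sketched as follows. The 5-second dwell and the render callback are assumptions; the screen contents follow the example just given.

```python
import threading

SCREEN_DWELL_S = 5.0   # assumed preset time before screen 1 yields to screen 2

class DoorOpenDisplay:
    def __init__(self, render):
        """render: callable taking the screen payload to show (None = off)."""
        self._render = render
        self._timer = None

    def on_sub_door_opened(self):                 # reported by the door switch
        self._render("screen 1: refrigerator/freezer temperatures + function")
        self._timer = threading.Timer(
            SCREEN_DWELL_S, self._render,
            args=("screen 2: one compartment's temperature + function",))
        self._timer.start()

    def on_sub_door_closed(self):
        if self._timer is not None:
            self._timer.cancel()                  # stop a pending screen change
        self._render(None)                        # display off with the door shut
```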
Meanwhile, when the sub-door50is closed, closing of the sub-door50is detected by the door switch21, and the main control part2turns off the display61. FIG.48is a view illustrating the change in the display state of the display unit when an auto-door function is set. As illustrated in the drawing, in a state in which the sub-door50is opened and the display61is turned on, when the user pushes the auto-door button623, the display61displays a third screen613which indicates an activated state of the door opening device70when the door opening device70is activated. And when the door opening device70is not activated, the display61displays a fourth screen614which indicates an inactivated state of the door opening device70. And when the user operates again the auto-door button623while the display61displays the third screen613or the fourth screen614, the third screen613and the fourth screen614may be converted to each other, and a state of the door opening device70may also be substantially changed. That is, when it is intended that the user does not use the door opening device70, it may be set through operating of the auto-door button623. And in this state, an operation of the door opening device70is not performed. Meanwhile, when the user's operation is not applied for a preset time or more in a state in which it is converted to the third screen613or the fourth screen614, the display61is converted to the first screen611or the second screen612which indicates the temperature in the refrigerator1. At this time, when the door opening device70is activated, the auto-door button623may be in an ON state, and when the door opening device70is inactivated, the auto-door button623may be in an OFF state. FIG.49is a view illustrating the change in the display state of the display unit when an auto-drawer function is set. As illustrated in the drawing, when the user pushes the auto-drawer button624while the sub-door50is opened and the display61is turned on, the display61displays a fifth screen615which indicates an activated state of the accommodation member withdrawing device34when the accommodation member withdrawing device34is activated. And when the accommodation member withdrawing device34is inactivated, the display61displays a sixth screen616which indicates an inactivated state of the accommodation member withdrawing device34. And when the user operates again the auto-drawer button624while the display61displays the fifth screen615or the sixth screen616, the fifth screen615and the sixth screen616may be converted to each other, and a state of the accommodation member withdrawing device34may also be substantially changed. That is, when it is intended that the user does not use the accommodation member withdrawing device34, it may be set through operating of the auto-drawer button624. And in this state, an operation of the accommodation member withdrawing device34is not performed. Meanwhile, when the user's operation is not applied for a preset time or more in a state in which it is converted to the fifth screen615or the sixth screen616, the display61is converted to the first screen611or the second screen612which indicates the temperature in the refrigerator1. At this time, when the accommodation member withdrawing device34is activated, the auto-drawer button624may be in an ON state, and when the accommodation member withdrawing device34is inactivated, the auto-drawer button624may be in an OFF state.
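The auto-door and auto-drawer buttons follow one pattern: each press flips the feature and shows its activated/inactivated screen, and an idle timeout returns the display to the temperature screens. A generic sketch under an assumed 5-second idle timeout; per the following paragraphs, the same pattern also serves the temperature fixing, air cleaning, and quick freezing buttons.

```python
import time

IDLE_REVERT_S = 5.0   # assumed idle time before reverting to screen 1/2

class FeatureButton:
    """One settings button, e.g. auto-door (screens 3/4) or auto-drawer (5/6)."""

    def __init__(self, name, apply_state, show):
        self.name = name
        self.enabled = False
        self._apply_state = apply_state   # enables/disables the actual device
        self._show = show                 # renders a screen on the display
        self._last_press = None

    def press(self):
        self.enabled = not self.enabled
        self._apply_state(self.enabled)   # the device state substantially changes
        state = "activated" if self.enabled else "inactivated"
        self._show(f"{self.name} {state}")
        self._last_press = time.monotonic()

    def tick(self):
        """Poll from the display loop; revert to the temperature screen on idle."""
        if (self._last_press is not None
                and time.monotonic() - self._last_press >= IDLE_REVERT_S):
            self._last_press = None
            self._show("temperature screen (screen 1/2)")
```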
FIG.50is a view illustrating the change in the display state of the display unit when the temperature fixing function is set. As illustrated in the drawing, in a state in which the sub-door50is opened and the display61is turned on, when the user pushes the refrigerator compartment temperature fixing button625, the main control part2may control the operation of the refrigerator1so that the temperature in the refrigerator1is maintained at a preset temperature, and a seventh screen617which indicates such a state is displayed. And when a refrigerator compartment temperature fixing mode is not set, the display61displays an eighth screen618which indicates an inactivated state of the refrigerator compartment temperature fixing mode. And when the user operates again the refrigerator compartment temperature fixing button625while the display61displays the seventh screen617or the eighth screen618, the seventh screen617and the eighth screen618may be converted to each other, and an operation mode of the refrigerator1may also be substantially changed. That is, when it is intended that the user does not use the refrigerator compartment temperature fixing mode, it may be set through operating of the refrigerator compartment temperature fixing button625. And in this state, an operation of the refrigerator compartment temperature fixing mode is not performed. Meanwhile, when the user's operation is not applied for a preset time or more in a state in which it is converted to the seventh screen617or the eighth screen618, the display61is converted to the first screen611or the second screen612which indicates the temperature in the refrigerator1. At this time, when the refrigerator compartment temperature fixing mode is activated, the refrigerator compartment temperature fixing button625may be in an ON state, and when the refrigerator compartment temperature fixing mode is inactivated, the refrigerator compartment temperature fixing button625may be in an OFF state. Also, in an operation of the freezer compartment temperature fixing button626, the air cleaning button627, the quick freezing button628and the communication button621, a state of the display61is changed in the above-described manner, except for the contents of the screen, and thus detailed description thereof will be omitted. The refrigerator and the control method thereof according to the proposed implementation of the present disclosure have the following effects. In the refrigerator according to the implementation of the present disclosure, the panel assembly which selectively transmits or reflects the light is provided at a part of the door, and the lighting unit which is turned on or off by the user's operation is provided inside the door, and the lighting unit can be turned on by the user's operation while the door is closed, and thus it may be possible to see through the inside of the refrigerator. Therefore, even while the door is not opened, the user can confirm the space inside the refrigerator, and also can check the position of the food, and thus the user convenience can be enhanced. Also, the door can be prevented from being unnecessarily opened and closed, and loss of the cooling air can be prevented, and thus it may be possible to reduce power consumption and also to enhance storage performance.
And the panel assembly has a structure like a half glass which is seen through while the lighting unit is turned on, and functions as a mirror while the lighting unit is not turned on, and thus an exterior of the refrigerator door can be enhanced. And the microphone which detects a sound generated by the vibration upon the user's knocking operation on the panel assembly can be provided at the rear surface of the panel assembly. Therefore, the lighting unit can be turned on or off by the user's knocking operation, and thus the panel assembly can be selectively transparent. Therefore, since the panel assembly can become transparent by the simple operation, and the sound of the vibration transmitted through the same medium is the same even though the user knocks on any position of the front surface of the panel assembly, the operation can be easily performed and effectively detected. In some cases, when the panel assembly becomes transparent so that the inside of the panel assembly is visible, the display unit provided inside the refrigerator may be turned on to output an operation state of the inside of the refrigerator. Therefore, the user may confirm the display by a knocking operation on the panel assembly. Therefore, in a state in which the usability for displaying the operation state of the refrigerator is maintained, the display is normally not visible, so that the external appearance may be made simpler and more luxurious. In some cases, the display unit may have a detachable structure, and a display unit of various specifications may be mounted according to a model or function of the refrigerator. Therefore, various display units may be selectively installed without changing the structure of the refrigerator door. In some cases, the display unit may output a first screen in a state in which the display unit is first turned on, and output a second screen in which other information is displayed after the elapse of the set time. Therefore, since various operation states of the refrigerator can be confirmed through the change of the screen of the display of the inside of the refrigerator while the door is closed without any additional operation, improved usability may result. In some cases, the display unit may be activated when the door is opened, so that the operating button may be operated. And, various functions of the refrigerator may be set by the operation of the operating button, and the set function may be displayed through the display. User convenience may be further improved as a result. In some cases, the display PCB of the display unit may be surrounded by a resin material. This way, it may be possible to prevent the display PCB from being damaged due to humidity or moisture of the inside of the refrigerator. In some implementations, a display unit may be provided inside the refrigerator at a position corresponding to the panel assembly. The display unit may become visible when the lighting unit is turned on and may, for example, display an operation state of the refrigerator. In some cases, the display unit may be detachably provided in the opening part of the door. For example, display installing protrusions may be formed in the opening part, and display guides in which the installing protrusions are inserted may be formed on both side surfaces of the display unit. The installing protrusions may be provided on both side surfaces of the opening part, and the display guides may be opened downward, so that the installing protrusions may be inserted and mounted.
In some cases, a door connector connected with a main control part controlling the operation of the refrigerator may be provided in the opening part, and a display connector electrically connected with the door connector may be provided on a lower surface of the display. The display unit may be seated on a lower end of the opening part, and both side ends of the display may be detachably coupled to an inner side surface of the opening part. Also, the display unit may include an outer case that forms an outer shape and may be detachably coupled to the inside of the refrigerator, an inner case seated inside the outer case and accommodating a display PCB in which a display and an operating button are disposed, and a display cover which is in close contact with a front surface of the display PCB and shields an opened front surface of the outer case. Elements mounted on the display PCB may be formed to be surrounded by a resin material for moisture proofing. The display may be exposed to a center of the display cover, and a plurality of operating buttons may be disposed on both left and right sides of the display. The display cover may be formed to be inclined rearward toward an upper portion. A lower end of the display cover may be extended to a position corresponding to a rear surface of the sub-door when the sub-door is closed. In some cases, a door switch detecting the opening and closing of the sub-door may be provided in the main door, and when the door switch detects that the sub-door is opened, the display unit may be activated and turned on. Additionally, after the sub-door is opened, when the door switch detects that the sub-door is closed, the display unit may be turned off. The display unit may be turned on and off together with the lighting unit. In some cases, the detection device may be a knock detecting device provided on a rear surface of the panel assembly and detecting a knocking operation of the panel assembly by the user. A bezel that does not transmit light may be formed on a rear surface edge of the panel assembly, and the knock detection device may be disposed in an area within the bezel. In some cases, the panel assembly may include a front panel that forms a front surface of the sub-door and is formed of a half mirror that reflects a part of light and transmits a part and selectively becomes transparent, a plurality of insulation panels spaced apart from the front panel and formed of a transparent tempered glass, and a spacer disposed between the front panel and the insulation panel and between the plurality of insulation panels, and separating and sealing between the front panel and the insulation panel and between the plurality of insulation panels. The front panel may take up an entire front surface of the door, and the insulation panel may have a smaller area than the front panel, and may be disposed in an inner area of the front panel. The detection device may be disposed at an edge of the front panel. The display unit may include a display displaying an operation state of the refrigerator and an operating button for setting and operating the operation of the refrigerator. When the door is closed, and when the user's operation is detected by the detection device, the display and operating button may be activated. A door opening device driven by the user's operation and pushing the cabinet to open the door by a predetermined angle may be provided on the door, and an operating button which may selectively activate the door opening device may be provided on the display.
An accommodation member withdrawing device for detecting the opening of the door and allowing an accommodation member to be withdrawn may be provided in the inner side of the cabinet. Moreover, an operating button which may selectively activate the accommodation member withdrawing device may be provided in the display. A control method of a refrigerator according to one implementation may include a step in which a knock detection device that may detect a knocking operation of a user from an outside while the door is closed is activated, a step of determining whether an input signal of the knock detection device is valid, and a visualization step in which a lighting unit disposed inside the refrigerator and a display unit disposed inside the refrigerator and displaying an operation state of the refrigerator are turned on by a main control part and a state may be determined from an outside of the refrigerator through the panel assembly in the case in which the user's operation is determined as valid. The display unit may display the temperature and function of the inside of the refrigerator which is already set in the visualization step. In the visualization step, the display unit may be controlled to output a first screen displaying initial temperatures of a refrigerator compartment and a freezer compartment, and controlled to output a second screen displaying a screen of either the refrigerator compartment or freezer compartment when a set time has elapsed. When the door is opened, the display unit may be activated to enable input of an operating button on the display. When the operating button is operated while the door is opened, the display may output a screen displaying a function, and when the set time has elapsed after the screen displaying the function is output, a screen displaying the already set temperature of the inside of the refrigerator may be output. It may be possible to activate or deactivate the setting function through the continuous operation of the operating button, and an activated or deactivated state may be output through the display. In some cases, the knock detection device may be mounted on a rear surface of a panel assembly forming an outer side surface of the door and it may detect sound when a surface of the panel assembly is vibrated. A detection device for detecting proximity of the user may be provided on the door, and the main control part may turn on the lighting unit in the case in which the input signal of the knock detection device is valid and the detection device detects the proximity of the user at the same time. In some implementations, a module PCB that processes the signal of the knock detection device mounted together with the knock detection device may be provided on the door. It may be possible to determine the validity of a knock signal by the module PCB. In the case in which the knock signal input from the knock detection device is input a plurality of times within the set time, the module PCB may determine the knock signal as valid. When the knock signal is determined as valid by the module PCB, the valid signal may be transmitted to the main control part. The main control part may turn off the display unit when the set time has elapsed after the display unit is turned on. When the valid signal is input from the knock detection device again before the set time has elapsed after the display unit is turned on, the main control part may turn off the display unit.
The main control part may control the display unit and the lighting unit to be turned on/off together. Even though all the elements of the implementations are coupled into one or operated in the combined state, the present disclosure is not limited to such an implementation. That is, all the elements may be selectively combined with each other without departing from the scope of the disclosure. Furthermore, when it is described that one comprises (or includes or has) some elements, it should be understood that it may comprise (or include or have) only those elements, or it may comprise (or include or have) other elements as well as those elements if there is no specific limitation. Unless otherwise specifically defined herein, all terms comprising technical or scientific terms are to be given meanings understood by those skilled in the art. Like terms defined in dictionaries, generally used terms need to be construed as having meanings consistent with their technical context, and are not to be construed as having ideal or excessively formal meanings unless otherwise clearly defined herein. Although implementations have been described with reference to a number of illustrative implementations thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims. Therefore, the preferred implementations should be considered in a descriptive sense only and not for purposes of limitation, and also the technical scope of the disclosure is not limited to the implementations. Furthermore, the present disclosure is defined not by the detailed description of the disclosure but by the appended claims, and all differences within the scope will be construed as being comprised in the present disclosure. The present disclosure is directed to a refrigerator which enables at least a part of a refrigerator door to be selectively transparent by a user's operation, such that the user may see through an inside of the refrigerator while the refrigerator door is closed, and a control method thereof. Also, the present disclosure is directed to a refrigerator in which at least a part of a front surface of a refrigerator door is formed of half glass, and a lighting unit in the refrigerator is turned on/off by a user's operation, and thus the user may selectively see through an inside of the refrigerator while the refrigerator door is closed, and a control method thereof. Also, the present disclosure is directed to a refrigerator in which a refrigerator door may be selectively transparent by a knocking operation on a refrigerator door, and thus an inside of the refrigerator becomes visible, and a control method thereof. Also, the present disclosure is directed to a refrigerator which is able to enhance recognition performance and reliability of an operation for selectively enabling an inside of the refrigerator to be visible through a panel assembly while a refrigerator door is closed, and a control method thereof. Also, the present disclosure is directed to a refrigerator having a structure which is provided at a refrigerator door, enables a user to see through an inside of the refrigerator even while the refrigerator door is closed, and also insulates the refrigerator door.
Also, the present disclosure is directed to a refrigerator which enables an inside of the refrigerator to be seen through while a lighting unit is turned on by a user's operation, and forms a mirror surface while the lighting unit is turned off, thereby forming an exterior of a refrigerator door. Also, the present disclosure is directed to a refrigerator in which a door is made selectively transparent by an operation of a user so that a display unit provided inside the refrigerator may be selectively visible, and a control method thereof. Also, the present disclosure is directed to a refrigerator in which a display unit is provided on an opening part of a main door and configured to be shielded by a sub-door to improve the appearance of the refrigerator. Also, the present disclosure is directed to a refrigerator in which a display unit is configured to be selectively detachable inside the refrigerator, so that a suitable display unit may be selectively mounted according to functions and models without any structural change. Also, the present disclosure is directed to a refrigerator which allows a user to easily check various states thereof without separately operating a display, and a control method thereof. Also, the present disclosure is directed to a refrigerator which prevents a display unit provided inside the refrigerator from being abnormally operated due to humidity and moisture of the inside of the refrigerator. According to an implementation of the present disclosure, there is provided a refrigerator including a cabinet forming a storage space; a main door opening and closing the storage space and having an opening part communicated with the storage space formed therein; a sub-door rotatably mounted on the main door to open and close the opening part and having a panel assembly for selectively viewing an inside of the opening part; a detection device provided in the sub-door and detecting a user's operation; a lighting unit provided inside the refrigerator and turned on according to the operating detection of the detection device when the sub-door is closed to make the panel assembly look transparent; and a display unit provided inside the refrigerator corresponding to the panel assembly, and being visible when the lighting unit is turned on to display an operation state of the refrigerator. The display unit may be detachably provided in the opening part. Display installing protrusions may be formed on the opening part, and display guides in which the installing protrusions are inserted may be formed on both side surfaces of the display unit. The installing protrusions may be provided on both side surfaces of the opening part, and the display guides may be opened downward so that the installing protrusions may be inserted and mounted. A door connector connected to a main control part controlling the operation of the refrigerator may be provided in the opening part, and a display connector electrically connected to the door connector may be provided on a lower surface of the display. The display unit may be seated on a lower end of the opening part, and both side ends of the display may be detachably coupled to an inner side surface of the opening part.
The display unit may include an outer case forming an external appearance and being detachably coupled to the inside of the refrigerator; an inner case seated inside the outer case and accommodating a display PCB in which a display and an operating button are disposed; and a display cover which is in close contact with a front surface of the display PCB and shields an opened front surface of the outer case. Elements mounted on the display PCB may be formed to be surrounded by a resin material for moisture proofing. The display may be exposed to a center of the display cover, and a plurality of operating buttons may be disposed on both left and right sides of the display. The display cover may be formed to be inclined rearward toward an upper portion. A lower end of the display cover may be extended to a position corresponding to a rear surface of the sub-door when the sub-door is closed. A door switch detecting the opening and closing of the sub-door may be provided in the main door, and the display unit may be activated and turned on when the opening of the sub-door is detected by the door switch. When the door switch detects that the sub-door is closed after the sub-door is opened, the display unit may be turned off. The display unit may be turned on/off together with the lighting unit. Also, a refrigerator according to an implementation of the present disclosure includes a door opening and closing a storage space and having an opening part communicated with the storage space formed therein; a panel assembly provided in the opening part and forming an exterior front surface of the door to selectively view an inside of the opening part; a detection device provided in the door and detecting a user's operation; a lighting unit provided inside the refrigerator and turned on according to the operating detection of the detection device while the door is closed, and allowing the panel assembly to look transparent; and a display unit provided inside the refrigerator corresponding to the panel assembly and being visible when the lighting unit is turned on to display an operation state of the refrigerator. The detection device may be a knock detection device which is provided on a rear surface of the panel assembly and detects a knocking operation of the panel assembly by the user. The display unit may include a display displaying an operation state of the refrigerator and an operating button for setting and operating the operation of the refrigerator. The display and operating button may be activated when the user's operation is detected by the detection device when the door is closed. Also, a control method of a refrigerator according to an implementation of the present disclosure including a panel assembly of a half mirror material mounted on an opening part of a door communicated with an inside of the refrigerator so as to selectively view the inside of the refrigerator, the method includes a step in which a knock detection device which may detect a knocking operation of a user from an outside of the door while the door is closed is activated; a step of determining whether an input signal of the knock detection device is valid; and a visualization step in which a lighting unit disposed inside the refrigerator and a display unit disposed inside the refrigerator and displaying an operating state of the refrigerator are turned on by a main control part and a status may be confirmed from an outside of the refrigerator through the panel assembly in the case in which the user's operation is determined as valid.
When the door is opened, the display unit may be activated and an input of the operating button on the display may be possible. In certain implementations, a refrigerator may comprise: a cabinet to define a storage chamber therein; a lighting device configured to illuminate the storage chamber; and a door connected to the cabinet and configured to open and close the storage chamber, wherein the door includes: a door frame having an opening; a panel assembly configured to cover the opening of the door frame; a display provided at a rear of the panel assembly; a sensor provided in the door frame, and configured to detect an input applied to the panel assembly; and a processor configured to control the lighting device and the display according to whether the sensor detects the input, and wherein the door frame and the panel assembly define an insulation space in which an insulating material is positioned, an injection port through which a foaming solution is injected to mold the insulating material is formed on a lower side of the door frame, and the injection port is provided at a position on a lower side of the door frame which overlaps a space between a lateral side of the panel assembly and the door frame. The panel assembly may include: a front panel; an insulating panel provided at a rear of the front panel; and a spacer provided between the front panel and the insulating panel, wherein the front panel and the insulating panel have a viewing area. The sensor may include a microphone configured to detect sound waves from the input when the input is applied to the front panel of the panel assembly. The front panel may include a bezel printed along an edge of a rear surface of the front panel, and the microphone may be provided to face the bezel of the front panel. The sensor may further include a microphone accommodation housing which accommodates the microphone, and the microphone accommodation housing may be positioned to be in contact with the bezel of the front panel. The door frame may include a detection device accommodation bracket to receive the sensor. The door frame may further include an accommodation bracket cover to cover the detection device accommodation bracket. The detection device accommodation bracket may include an opening through which the sensor contacts the front panel when the sensor is received in the detection device accommodation bracket. An electric wire that connects the sensor to the processor may extend along the insulation space toward a hinge of the door.
In certain implementations, a refrigerator may comprise: a cabinet to define a storage chamber therein; a lighting device configured to illuminate the storage chamber; a first door connected to the cabinet and configured to open and close the storage chamber and having an opening; a second door configured to open and close the opening of the first door and including a panel assembly, the panel assembly including a front panel, an insulating panel provided at a rear of the front panel, and a spacer provided between the front panel and the insulating panel; a display provided at a rear of the panel assembly; a sensor positioned to be in contact with the front panel of the panel assembly, and configured to detect an input applied to the front panel of the panel assembly; and a processor configured to control the lighting device and the display based on whether the sensor detects the input, wherein the second door further includes an insulation space in which an insulating material is received, an injection port through which a foaming solution is injected to mold the insulating material is formed on a lower side of the second door, and the injection port is provided on a lower side of the second door at a position which overlaps a space corresponding to a lateral side of the panel assembly. The sensor may include a microphone configured to detect sound waves associated with the input when the input is applied to the front panel of the panel assembly. The front panel may include a bezel printed along an edge of a rear surface of the front panel, and the sensor may be provided to face the bezel of the front panel. The sensor may further include a microphone accommodation housing which accommodates the microphone, and the microphone accommodation housing may be positioned to be in contact with the bezel of the front panel. The display may be installed in the first door and provided at a rear of the insulating panel when the second door closes the opening. The display may be provided in the opening of the first door. The second door may further include a detection device accommodation bracket to receive the sensor. The detection device accommodation bracket may include an opening through which the sensor contacts the front panel when the sensor is received in the detection device accommodation bracket. An electric wire that connects the sensor to the processor may extend along the insulation space toward a hinge of the second door. In certain implementations, a refrigerator comprises: a cabinet; a door to open and close the cabinet; and a light provided at one of the door or the cabinet, wherein the door includes: a door frame having an opening; at least one hinge provided adjacent to a first vertical side surface of the door frame; at least one clear panel covering the opening of the door frame; a display provided to be visible through the at least one clear panel; a sensor provided in the door frame, and configured to detect an input to the door; a processor configured to control at least one of the light or the display based on the input; and an injection port through which a foaming solution is received to insulate an interior of the door frame, wherein the injection port is provided at a position on a lower side surface to be positioned closer to a second vertical side surface of the door frame that is opposite to the first vertical side surface of the door frame.
The door frame may include a bracket having an inner space to receive the sensor, and wherein the injection port is positioned at an exterior surface of the bracket. It will be understood that when an element or layer is referred to as being “on” another element or layer, the element or layer can be directly on the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on” another element or layer, there are no intervening elements or layers present. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. It will be understood that, although the terms first, second, third, etc., may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section could be termed a second element, component, region, layer or section without departing from the teachings of the present invention. Spatially relative terms, such as “lower”, “upper” and the like, may be used herein for ease of description to describe the relationship of one element or feature to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation, in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “lower” relative to other elements or features would then be oriented “upper” relative to the other elements or features. Thus, the exemplary term “lower” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Embodiments of the disclosure are described herein with reference to cross-section illustrations that are schematic illustrations of idealized embodiments (and intermediate structures) of the disclosure. As such, variations from the shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. Thus, embodiments of the disclosure should not be construed as limited to the particular shapes of regions illustrated herein but are to include deviations in shapes that result, for example, from manufacturing. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Any reference in this specification to “one embodiment,” “an embodiment,” “example embodiment,” etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with any embodiment, it is submitted that it is within the purview of one skilled in the art to effect such feature, structure, or characteristic in connection with other ones of the embodiments. Although embodiments have been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art.
DETAILED DESCRIPTION OF THE EMBODIMENTS Hereinafter, detailed embodiments will be described in detail with reference to the accompanying drawings. However, the scope of the present disclosure is not limited to proposed embodiments of the present disclosure, and other regressive inventions or other embodiments included in the scope of the present disclosure may be easily proposed through addition, change, deletion, and the like of other elements. FIG.1is a perspective view of a refrigerator according to an embodiment. Referring toFIGS.1and2, a refrigerator1according to a first embodiment of the present disclosure includes a cabinet10defining a storage space and a door that opens or closes the storage space. Here, an outer appearance of the refrigerator1may be defined by the cabinet10and the door. The inside of the cabinet10is partitioned into upper and lower portions by a barrier (seeFIG.11). A refrigerating compartment12may be defined in the upper portion of the cabinet10, and a freezing compartment13may be defined in the lower portion of the cabinet10. Also, a control unit14for controlling an overall operation of the refrigerator1may be disposed on a top surface of the cabinet10. The control unit14may be configured to control a cooling operation of the refrigerator as well as electric components for selective viewing and screen output of a see-through part611. The door may include a refrigerating compartment door20and a freezing compartment door30. The refrigerating compartment door20may open and close an opened front surface of the refrigerating compartment12by rotation, and the freezing compartment door30may open and close an opened front surface of the freezing compartment13by rotation. Also, the refrigerating compartment door20may be provided in a pair of left and right doors. Thus, the refrigerating compartment12is shielded by the pair of doors. The freezing compartment door30may be provided in a pair of left and right doors. Thus, the freezing compartment13may be opened and closed by the pair of doors. Alternatively, the freezing compartment door30may be withdrawable in a drawer type as necessary and provided as one or more doors. Although this embodiment describes, as an example, a refrigerator in which a French type door, in which a pair of doors rotate to open and close one space, is applied to a bottom freezer type refrigerator in which the freezing compartment13is provided at a lower portion, the present disclosure may be applied to all types of refrigerators including a door, without being limited to shapes of the refrigerators. At least one door may be provided so that the inside of the refrigerator is visible through the door. A see-through part611, which is an area through which the storage space in the rear surface of the door and/or the inside of the refrigerator is seen, may be provided in the refrigerating compartment door20. The see-through part611may constitute at least a portion of a front surface of the refrigerating compartment door20. The see-through part611may be selectively transparent or opaque according to user's manipulation. Thus, foods accommodated in the refrigerator may be identified through the see-through part611.
Also, although the structure in which the see-through part611is provided in the refrigerating compartment door20is described as an example in this embodiment, the see-through part611may be provided in various different types of refrigerator doors such as the freezing compartment door30according to a structure and configuration of the refrigerator. FIG.2is a perspective view of the refrigerator with a sub-door opened. Also,FIG.3is a perspective view of the refrigerator with a main door opened. As illustrated inFIGS.2and3, the refrigerating compartment door20, which is disposed at the right side (when viewed inFIG.3), of the pair of refrigerating compartment doors20may be doubly opened and closed. In detail, the refrigerating compartment door20, which is disposed at the right side, may include a main door40that opens and closes the refrigerating compartment12and a sub-door50rotatably disposed on the main door40to open and close an opening defined in the main door40. The main door40may have the same size as that of the refrigerating compartment door20, which is disposed at the left side (when viewed inFIG.1), of the pair of refrigerating compartment doors20. The main door40may be rotatably mounted on the cabinet10by an upper hinge401and a lower hinge402to open at least a portion of the refrigerating compartment12. Also, an opening41that is opened with a predetermined size is defined in the main door40. A door basket431may be mounted on the rear surface of the main door40as well as the inside of the opening41. Here, the opening41may have a size that occupies most of the front surface of the main door40except for a portion of a circumference of the main door40. A storage case43may be provided on the rear surface of the main door40. A plurality of door baskets may be disposed in the storage case43. When the sub-door50is opened, the storage case43may have a structure that is accessible through the opening41. Also, the storage case43may be provided with a case door to access the inside of the storage case from the rear surface of the main door40. Also, a main gasket45may be disposed on a circumference of the rear surface of the main door40to prevent cool air within an internal space of the cabinet10from leaking when the main door40is closed. The sub-door50may be rotatably mounted on the front surface of the main door40to open and close the opening41. Thus, the sub-door50may be opened to expose the opening41. The sub-door50may have the same size as the main door40to cover the entire front surface of the main door40. Also, when the sub-door50is closed, the main door40and the sub-door50may be coupled to each other to provide the same size and configuration as those of the left refrigerating compartment door20. Also, a sub gasket58may be disposed on the rear surface of the sub-door50to seal a gap between the main door40and the sub-door50. A panel assembly60, through which the inside of the refrigerator is selectively visible and on which a screen is capable of being output, is provided at a center of the sub-door50. Thus, even though the sub-door50is closed, the inside of the opening41may be selectively visible, and also an image may be output on the panel assembly60. The see-through part611may be a portion of the sub-door50through which the inside of the refrigerator1is visible. However, the see-through part611may not necessarily match the entirety of the panel assembly60. The panel assembly60may be configured to be selectively transparent or opaque according to user's manipulation.
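This selectable transparent/opaque/screen behavior amounts to a small state machine. The patent does not disclose any control firmware, so the following Python sketch is purely illustrative; all names (PanelState, PanelController, set_state) are hypothetical, and the coupling to an interior door light simply mirrors the auxiliary-lighting behavior the description attributes to the door light56further below.

```python
from enum import Enum, auto

class PanelState(Enum):
    OPAQUE = auto()        # default: interior hidden
    TRANSPARENT = auto()   # interior visible through the see-through part
    SCREEN = auto()        # transparent display outputs a screen

class PanelController:
    """Hypothetical controller for the selectable states of the panel assembly."""

    def __init__(self):
        self.state = PanelState.OPAQUE
        self.door_light_on = False   # interior light behind the panel
        self.display_on = False      # transparent display module

    def set_state(self, state: PanelState) -> None:
        self.state = state
        # The interior appears through the panel only when it is lit more
        # brightly than the surroundings, so the light tracks the state.
        self.door_light_on = state is PanelState.TRANSPARENT
        self.display_on = state is PanelState.SCREEN

if __name__ == "__main__":
    panel = PanelController()
    panel.set_state(PanelState.TRANSPARENT)  # e.g., triggered by user manipulation
    print(panel.state, panel.door_light_on, panel.display_on)
```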
In other words, the panel assembly60may become transparent, so that the inside of the refrigerator1is visible, only when the user desires, and may otherwise be maintained in the opaque state. Also, the panel assembly60may output a screen in the transparent or opaque state. In the embodiment, the panel assembly60is configured to shield an opened portion of the sub-door50. However, according to the type of the door, even when one door is configured like the right door20of the refrigerating compartment12, an opening may be formed in the door20, and the transparent panel assembly may be mounted to shield the opening of the door20. That is, it is noted that the panel assembly60may be applied to all types of doors in which an opening is formed, regardless of the shape of the refrigerator and the shape of the door. A sub upper hinge501and a sub lower hinge502may be respectively provided on upper and lower ends of the sub-door50so that the sub-door50is rotatably mounted on the front surface of the main door40. Also, a restraint device591may be provided on the sub-door50. A locking unit42may be provided on the main door40to correspond to the restraint device591. Thus, the sub-door50may be maintained in the closed state by the coupling between the restraint device591and the locking unit42. When the coupling between the restraint device591and the locking unit42is released by manipulation of a manipulation device592provided at a lower end of the door, the sub-door50may be opened. Hereinafter, a structure of the sub-door50will be described in more detail with reference to the accompanying drawings. FIG.4is a front perspective view of the sub-door. Also,FIG.5is a perspective view of the sub-door when viewed from a rear side. Also,FIG.6is an exploded perspective view of the sub-door. As illustrated in the drawings, the sub-door50may include an outer plate51defining an outer appearance of the sub-door50, a door liner52mounted to be spaced apart from the outer plate51, the panel assembly60mounted on an opening of the outer plate51and the door liner52, and upper and lower cap decorations54and55defining the top and bottom surfaces of the sub-door50. The above-described constituents may be coupled to define the whole outer appearance of the sub-door50. The outer plate51may be made of a stainless steel material and may constitute the outer appearance of the front surface of the sub-door50as well as a portion of a peripheral surface of the sub-door50. Also, the outer plate51may be made of the same material as the front surface of each of the refrigerating compartment door20and the freezing compartment door30. Various surface treatments, such as coating or film attachment for realizing an anti-fingerprint finish, hairlines, colors, or patterns, may be performed on the front surface of the outer plate51. Also, a plate opening511may be defined at a center of the outer plate51. Here, the plate opening511may be shielded by the panel assembly60. Also, since the inside of the refrigerator1is visible through the panel assembly60that shields the plate opening511, an internal region of the plate opening511may be referred to as the see-through part611. A bent plate part512that is bent backward may be disposed on a peripheral surface of the plate opening511.
The bent plate part512may be disposed along a circumference of the plate opening511and extend by a predetermined length so as to be inserted into and fixed to a plate accommodating groove703of a frame70to be described below. Both surfaces of the outer plate51may be bent to define an outer appearance of a side surface of the sub-door50. Both ends of the outer plate51may be coupled to the door liner52. Also, upper and lower ends of the outer plate51may be coupled to the upper cap decoration54and the lower cap decoration55, respectively. An insulator53may fill the space defined by the outer plate51, the door liner52, the upper cap decoration54, and the lower cap decoration55. The door liner52defines the rear surface of the sub-door50and has a door liner opening521in the area on which the panel assembly60is disposed. Also, a sub gasket58for sealing a gap between the sub-door50and the main door40may be mounted on the rear surface of the door liner52. Also, a door light56may be provided on each of both sides of the door liner opening521. The door light56may illuminate the rear surface of the sub-door50and a rear side of the panel assembly60. Thus, when the door light56is turned on, the inside of the storage case43may be brightened, and thus, the inside of the refrigerator may become brighter than the outside of the refrigerator so that a rear space of the sub-door50is visible through the panel assembly60. Also, if the door light56is turned on when the panel assembly60outputs the screen, the door light56may function as an auxiliary backlight to allow the screen to be clearer. The door light56may be mounted on a light mounting part523disposed on the rear surface of the sub-door50. The light mounting part523may be disposed on the door liner52to protrude rearward along each of both left and right ends of the door liner opening521. Here, the light mounting part523may be disposed further behind the panel assembly60, protrude backward, and pass through the opening41in a state in which the sub-door50is closed so that the light mounting part523is accommodated in the storage case43. Also, the light mounting parts523may be opened in a direction facing each other, and the door lights56may be mounted inside the opened sides to irradiate light in the direction facing each other. The upper cap decoration54may define a top surface of the sub-door50and be coupled to upper ends of the outer plate51and the door liner52. The top surface of the upper cap decoration54is opened so that a decoration opening541communicating with an upper space of the panel assembly60is formed, and the decoration opening541is shielded by a decoration cover57. The decoration cover57may include a shielding part571that shields the decoration opening541and a PCB mounting part572extending downward from a bottom surface of the shielding part571. The PCB mounting part572may be mounted with PCBs573and574for operation of the panel assembly60and electrical components inside the sub-door50. The PCBs573and574may be configured in at least one module form and may be provided in a PCB accommodating space710in the upper end of the sub-door50. Here, the inner space of the sub-door50except for the PCB accommodating space710communicating with the decoration opening541may be filled with the insulator53. The lower cap decoration55may define a bottom surface of the sub-door50and be coupled to lower ends of the outer plate51and the door liner52. Also, the lower cap decoration55may be provided with a manipulation device592that opens the sub-door50.
Also, the lower cap decoration55may be further provided with a handle groove that is recessed upward and into which a user's hand is inserted during the rotation operation for the opening of the sub-door50. The panel assembly60may be disposed between the outer plate51and the door liner52. Also, the panel assembly60may be configured to shield the plate opening511and the door liner opening521. Also, the panel assembly60may be selectively switched by the user to one of transparent, translucent, opaque, and screen output states. Thus, the user may selectively see through the panel assembly60into the space behind the sub-door50and see the screen output through the panel assembly60. The frame70configured to support the panel assembly60is mounted on a circumference of the plate opening511of the outer plate51. The panel assembly60may be maintained in the fixed and mounted state by the frame70. Particularly, a front surface of the outer plate51and a front surface of the panel assembly60may be disposed on the same extension line so that a front surface of the sub-door50has a sense of unity. A frame opening701is defined at a center of the frame70. The frame opening701has a size somewhat less than that of the plate opening511and has a structure in which the panel assembly60is seated thereon. In the state in which the panel assembly60is mounted on the frame70, the front surface of the panel assembly60may shield the plate opening511and be exposed forward. A rear surface of the panel assembly60may shield the door liner opening521and be exposed backward. Also, the frame70may have a coupling structure with the outer plate51. Here, the outer plate51and an end of the panel assembly60may be mounted on the frame70in a state in which the outer plate51and the end of the panel assembly60are closely attached to each other. Thus, when the sub-door50is viewed from the front side, an end of the outer plate51and a periphery of the panel assembly60are in close contact with each other, so that a gap between the outer plate51and the panel assembly60is rarely seen or is seen in the form of a line, and the outer appearance of the front surface may be seen as having a sense of continuity and unity. The panel assembly60may have a size that is enough to cover the plate opening511and the door liner opening521inside the sub-door50. Also, the see-through part611may be provided in the transparent panel assembly60so that the inner space of the refrigerator is selectively visible, and a screen is outputted. Also, the front surface of the panel assembly60, which is exposed at the front side through the outer plate51, may include the see-through part611, through which the inside behind the panel assembly60is visible and on which a screen including an image and/or video is output, and a bezel613provided to be opaque along a circumference of the see-through part611. The panel assembly60may further include a transmission part612through which light can pass. The transmission part612is provided at a lower side of the panel assembly60, in particular at a lower side of the see-through part611. In detail, the bezel613may be disposed on a circumference of a front panel61defining the front surface of the panel assembly60. The bezel613may be printed with an opaque material having a color such as black. Components disposed behind the front panel61may thus be covered so as not to be exposed to the outside. On a central portion or area of the front panel61, the see-through part611is provided. This part is transparent or semi-transparent.
The see-through part611is a part on which the bezel613is not disposed. The see-through part611may have a size and position corresponding to those of a first display63. Thus, the see-through part611defines an area through which the inside of the refrigerator is visible and defines an area on which the screen is output when the first display63operates. Thus, the see-through part611may be referred to as an output part, a visualization part, or a visualization area. The transmission part612, which is narrow in a vertical direction and extends long in a horizontal direction, may be disposed below the see-through part611. The transmission part612may be a transparent or semi-transparent part which may be realized as a horizontal stripe or as a plurality of very small transparent or semi-transparent spots arranged in a horizontal bar shape. The transmission part612may also be provided to allow light to be transmitted because the bezel613or the opaque material of the bezel613is not provided in the area of the transmission part612. Thus, information may be displayed. This information may include an operation state of the refrigerator1displayed by the transmitted light. For example, the information provided by the transmission part612may be at least one of a voice recognition state, a touch or note operation input state, an internal temperature, a time setting state, and the like and/or may be displayed as a partial emission area such as a bar graph. In addition, while the partial emission area moves, the transmission part612may be dynamically displayed. Since the transmission part612is displayed in the form of a line, the transmission part612may be referred to as a display. Hereinafter, a structure of the panel assembly will be described in more detail with reference to the accompanying drawings. FIG.7is an exploded perspective view of the panel assembly that is one component of the sub-door. Also,FIG.8is a rear perspective view of the panel assembly. Also,FIG.9is a cross-sectional view illustrating an upper end of the panel assembly. Also,FIG.10is a cross-sectional view illustrating one end of the panel assembly. As shown in the drawings, the panel assembly60may be constituted by a plurality of plate-shaped panels, and the panels may be spaced a predetermined interval from each other by at least one spacer to constitute one assembly. In detail, the panel assembly60may have an outer appearance that is defined by the front panel61and the rear panel65, which define the front and rear surfaces of the transparent panel assembly60. The panel assembly60may further include an outer frame67connecting the front panel61to the rear panel65. The front panel61may be made of a transparent material (e.g., blue glass) that defines an outer appearance of the front surface of the panel assembly60. The front panel61may have a size corresponding to that of the plate opening511and/or may have a size greater than that of the frame opening701. Thus, the rear surface of the front panel61may be supported by the frame70. In a state in which the panel assembly60is mounted, an end of the front panel61may be in contact with an end of the plate opening511, and the plate opening511and a circumference of the front panel61may be in contact with each other. In detail, the circumference of the front panel61may further protrude outward than the rear panel65.
Thus, the circumference of the front panel61defining the front surface of the panel assembly60may further extend to the outside of the frame opening701and thus may be stably supported by the frame70. The rear panel65as well as the outer frame67may be inserted into the frame opening701. Also, the frame70may be coupled to the panel assembly60by a coupling member such as a screw coupling the outer frame67to the frame70. Thus, the circumference of the panel assembly60may be supported by the frame70, and simultaneously, the frame70may be coupled to the outer frame67so that the heavy panel assembly60is maintained in a stably fixed and mounted state even when the sub-door50is opened and closed. A touch screen62(touch screen bonding, TSB) may be disposed on the rear surface of the front panel61. The touch screen62may have a transparent film shape and be attached to the rear surface of the front panel61. Thus, even when information is displayed in the area of the see-through part611, or the screen is output on the first display63, the touch screen62may not affect the output of the screen. The touch screen62may be configured to sense user's touch manipulation and may be referred to as a touch sensing device or a touch sensor. The touch screen62may have a size that is at least equal to or larger than that of the see-through part611or the first display63. Thus, when the user touches the area of the see-through part611, i.e., the screen output area of the first display63on the front panel61, the touch may be sensed by the touch screen62, and thus, information may be input and displayed according to the sensed position. A touch cable621connected to the touch screen62may be disposed on an outer end of the front panel61. The touch cable621may connect the touch screen62to the PCB573above the sub-door50. That is, the PCB573spaced apart from the touch screen62and the touch screen62may be connected to each other by the touch cable621. Also, the touch cable621may be provided as a flexible film type cable such as a flexible flat cable (FFC) or a flexible printed cable or flexible printed circuit board (FPC). A printed circuit may be printed on the touch cable621to constitute at least a portion of the PCB573. The touch cable621may be connected to the touch screen62to extend upward. Also, the touch cable621may be configured so that a wire is disposed on a base made of a resin material such as a film and may extend upward along the rear surface of the front panel61. The touch cable621may be flexibly bent so that the touch cable621has a thin thickness and a wide width like a sheet. Also, the touch cable621may have a shape such as a film or a sheet and thus may have a structure in which an end of the touch cable621is easily connected to a connector573aof the PCB573. In addition, the touch cable621may be disposed along the rear surface of the front panel61and along a wall surface of the inner space of the sub-door50to efficiently arrange the space inside the sub-door50. In addition, not only the touch cable621but also the first display cable632connected to the display63and the light cable642connected to the display light641may have the same structure. All of the cables621,632, and642, each of which has a flat cable shape with a thin thickness and a wide width as described above, may extend up to an upper end of the panel assembly60and may be guided to the PCB accommodating space710defined in the upper end of the sub-door50.
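Stepping back from the cabling, the touch path itself is simple: since the touch screen62covers at least the see-through part611, a sensed coordinate is hit-tested against the screen output area before being treated as input. The following Python sketch is a hypothetical illustration of that mapping; the rectangle geometry, names, and coordinate units are assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float  # left edge in panel coordinates (mm, assumed)
    y: float  # top edge in panel coordinates (mm, assumed)
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

# Hypothetical geometry: the screen output area is a sub-rectangle
# of the larger touch-sensed area of the front panel.
SEE_THROUGH_AREA = Rect(x=40.0, y=120.0, w=380.0, h=560.0)

def on_touch(px: float, py: float) -> None:
    """Dispatch a sensed touch only when it lands inside the screen output area."""
    if SEE_THROUGH_AREA.contains(px, py):
        # Convert to display-local coordinates before handing off to the UI layer.
        local = (px - SEE_THROUGH_AREA.x, py - SEE_THROUGH_AREA.y)
        print(f"touch routed to display at {local}")
    else:
        print("touch outside the see-through part; ignored")

on_touch(200.0, 400.0)
on_touch(10.0, 10.0)
```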
In addition, the flat shape of the cables provides a simple structure for connection to the PCB573disposed in the upper portion of the sub-door50. The first display63may be disposed on the rear surface of the front panel61. The first display63may be configured to output a picture or an image through the see-through part611and may have a size corresponding to that of the see-through part611. The first display63may be provided in the form of a module on which a screen is capable of being output. Also, the display63may be transparent so that the user sees the inside through the display63when the screen is not outputted. Thus, the first display63may be referred to as a transparent display and may have various shapes. Also, the first display63may be referred to as a main display63so as to be distinguished from the second display90. A source board631may be disposed on one of both left and right ends of the first display63. The source board631may be configured to output a screen through the first display63and may be connected to the first display63and thus provided in an assembled state. Also, a portion of the source board631may also have a flexible film type cable structure. Also, the source board631may be disposed inside the outer frame67. The source board631may be disposed inside a side part671that defines each of left and right sides of the panel assembly60in the outer frame67. Thus, the source board631may be disposed so as not to be exposed through the see-through part611. The source board631may be connected to the display cable632. The display cable632may have a flexible and flat structure like the touch cable621and also have a structure that is freely bendable. The display cable632may be bent to extend along the circumferential surface of the panel assembly60, i.e., be bent so that an end thereof extends upward from the transparent panel assembly60. Thus, the display cable632may be coupled to the PCB573inside the PCB accommodating space defined in the upper end of the sub-door50. A first spacer643may be provided on each of both left and right sides of the first display63. The first spacer643may allow the first display63and the light guide plate64to be maintained at a set distance. Also, the first spacer643may have a rod shape extending from an upper end to a lower end of the first display63and may be made of aluminum. The light guide plate64may be disposed behind the first display63and be seated on the first spacer643so as to be spaced a predetermined distance from the display63. The light guide plate64is configured so that light irradiated from the display light641is diffused or scattered to illuminate the first display63from the rear side. For this, the light guide plate64may have a plate shape having a size equal to or somewhat greater than that of the first display63. The display light641may be disposed at a position corresponding to at least one of upper and lower ends of the light guide plate64or each of the upper and lower ends of the light guide plate64. The rear panel65may include a rear panel651and a heat insulation panel652. The rear panel651may be disposed at a rear side of the light guide plate64. The rear panel651may define the rear surface of the panel assembly60and have a size greater than that of the light guide plate64and less than that of the front panel61. Also, the rear panel651may have a size greater than that of the door liner opening521to cover the door liner opening521. A pair of second spacers66,661and662may be disposed between the rear panel651and the light guide plate64.
Each of the second spacers66,661and662may have a rectangular frame shape and be disposed along a circumference of each of the insulation panel652and the rear panel651. The insulation panel652for heat insulation may be provided between the pair of second spacers661and662. The insulation panel652may be maintained to be spaced a set interval from each of the light guide plate64and the rear panel651by the pair of second spacers661and662. A double-layered insulating space may be defined by the pair of second spacers661and662, the insulation panel652, and the rear panel651. In detail, the second spacer662disposed at the front side may support each of a rear surface of the light guide plate64and a front surface of the insulation panel652. In this case, the second spacer662may simply support the light guide plate64so that the light guide plate64, which is expanded and contracted, is effectively supported. In addition, the second spacer661disposed at the rear side may support each of a rear surface of the heat insulation panel652and a front surface of the rear panel651. Here, the second spacer661, the insulation panel652, and the rear panel651may completely adhere to each other. Thus, an insulation space is defined between the rear panel651and the insulation panel652. For example, the insulation space may be vacuumed or may be defined by injecting an insulating gas. In the state in which the rear panel651adheres to the second spacer66, an outer end of the rear panel651may further extend outward from the second spacer66. Also, the outer frame67may be mounted on the outer end of the rear panel651so that the rear panel651and the front panel61are fixed to each other. The outer frame67may have a rectangular frame shape. The outer frame67may connect the rear surface of the front panel61to the front surface of the rear panel651. The outer frame67may define the peripheral surface of the panel assembly60. In detail, the outer frame67may define a periphery of an outer portion of the panel assembly60and also have a connection structure that is capable of allowing the front panel61to be maintained at a certain distance. The outer frame67may include a pair of side parts671defining both left and right surfaces and upper and lower parts672and673, which connect upper and lower ends of the side parts671to each other and define top and bottom surfaces, respectively. A space between the front panel61and the rear panel651, i.e., the inner space of the outer frame67, may be completely sealed by the coupling of the outer frame67. Also, the inside of the outer frame67may be further sealed by a sealant68(seeFIG.21) applied on a circumference of the outer frame67. That is, the overall outer appearance of the panel assembly60may be defined by the front panel61, the rear panel651, and the outer frame67, and all of the remaining constituents may be provided in the outer frame67. Thus, the sealing may be performed only between the outer frame67, the front panel61, and the rear panel651to completely seal the multilayered panel structure. As a result, the panel assembly60may be disposed in the sub-door50so that the inside of the refrigerator is seen and the screen is outputted, and also, the thermal insulation structure may be achieved in the multilayered panel structure with the minimum sealing points to secure the thermal insulation performance. At least one display light641may be mounted on an inner surface of the outer frame67, preferably on the upper part672and/or the lower part673.
The one or more display lights641may be mounted on the upper part672and/or the lower part673, respectively. The light guide plate64may be disposed between the display lights641. Thus, light emitted by the one or more display lights641, preferably by an LED641aof the display light641, may be directed to an end of the light guide plate64and then travel along the light guide plate64so that the entire surface of the light guide plate64emits light. The one or more display lights641disposed on the inner upper end and/or inner lower end of the panel assembly60may be connected to a light cable642. The light cable642may have a flexible and flat shape like the touch cable621and the display cable632. The light cable642may be connected to the display light641that is mounted inside the outer frame67and extend to the outside of the panel assembly60. Also, the light cable642may extend along the circumference of the first display63so that the light cable642is not exposed through the first display63. Also, the light cable642may extend upward in a state of being closely attached to the rear surface of the rear panel651. As occasion demands, the light cable642may be bent in the state of adhering to the rear surface of the rear panel651and then may be connected to the PCB573disposed on the upper portion of the sub-door50. Also, the sealant68may allow at least one of the cables621,632, and642connected to the touch screen62, the first display63, and the display light641within the panel assembly60to be accessible therethrough. That is, the sealant68may seal the portion that is in contact with an outer surface of each of the cables621,632, and642where the cables621,632, and642extend from the inside to the outside of the panel assembly60to prevent water or moisture from being introduced into the space through which the cables621,632, and642pass. A heater675may be disposed along an outer surface of the outer frame67. The heater675may have a wire shape and be mounted on a heater mounting part672crecessed along the outer surface of the outer frame67. Heat generated by the heater675may heat the circumference of the front panel61along the outer frame67to prevent condensation from occurring. Also, a panel assembly fixing part672bmay be disposed on the outer surface of the outer frame67. A screw passing through the frame70may be coupled to the panel assembly fixing part672b. The panel assembly60may be maintained in a state of being mounted on the frame70by the coupling of the screw. Hereinafter, the structure of the frame70will be described in more detail with reference to the drawings. FIG.11is a front perspective view of a support frame that is one component of the sub-door. Also,FIG.12is a rear perspective view of the support frame that is one component of the sub-door. As illustrated in the drawings, the frame70may be injection-molded using a plastic material and may have a rectangular frame shape so that a frame opening701is defined at a center thereof. Also, the frame70may have a predetermined width and be coupled to the outer plate51, and simultaneously, the panel assembly60may be fixedly mounted on the frame70. The frame70may include an upper frame71defining an upper portion, a lower frame73defining a lower portion, and a side frame72connecting both ends of each of the upper frame71and the lower frame73to each other. In detail, the frame70may define its overall rectangular frame shape by coupling the upper frame71, the lower frame73, and the pair of side frames72to each other.
The upper frame71may support an upper portion of the outer plate51and an upper portion of the front panel61. The upper frame71may define a shape of the upper portion of the frame70and may divide the upper space of the door20, preferably the sub-door50, in a front and rear direction. That is, the upper frame71may be provided with an upper extension part711extending up to the top surface of the door20, preferably the sub-door50, and the space above the sub-door50may be divided forward and backward by the upper extension part711. Thus, the upper side of the door20, preferably the sub-door50, may be divided forward and backward by the upper frame71. A PCB accommodating space710in which the PCB573may be accommodated may be defined in the rear space. The PCB accommodating space710may communicate with the decoration opening541. The lower frame73may be coupled to a lower end of the side frame72and may be configured to support a lower portion of the outer plate51and a lower portion of the panel assembly60. The side frame72may define both left and right sides of the frame70and extend long in a vertical direction to connect the upper frame71to the lower frame73. That is, the side frame72has a structure that is capable of being coupled to both ends of the upper frame71and the lower frame73. The overall structure of the frame70may have the rectangular frame shape when the upper frame71, the lower frame73, and the side frames72are coupled to each other. In a state in which the frame70is assembled, a first mounting portion702extending backward from the frame70, in particular from the first mounting part712, may be disposed on a circumferential surface of the frame opening701defined at the center of the frame70. The first mounting portion702may extend backward to have a predetermined width and may be disposed to be in contact with the circumferential surface of the panel assembly60, that is, the outer frame67. Also, the screw that is coupled to pass through the first mounting portion702may be coupled to the outer frame67so that the panel assembly60is stably fixed and mounted on the frame70. A plate accommodating groove703recessed along a circumference of the frame70may be disposed on a front surface of the frame70. The plate accommodating groove703may be recessed at a position corresponding to the bent plate part512so that the bent plate part512of the outer plate51is inserted and may be disposed along the bent plate part512. In addition, the bent plate part512may be disposed to be in contact with the circumference of the front panel61in the state of being inserted into the plate accommodating groove703. Inner and outer surfaces of the plate accommodating groove703may define a plane having the same height, and thus, the front circumference of the frame70may stably support the rear surface of the outer plate51corresponding to the circumferential surface of the plate opening511. That is, each of the upper frame71, the lower frame73, and the pair of side frames72may support the outer plate51. In this embodiment, the frame70may have a structure in which the frame70is molded to be separated into four parts, but the frame70may be provided by coupling two or more components to each other, as necessary. The lower frame73may have a structure that supports and fixes the outer plate51and the lower portion of the panel assembly60, and also may be provided with a second mounting part731on which a second display90that allows light to be irradiated through the transmission part612is mounted.
For example, the second display90may be configured so that a plurality of LEDs are arranged in a line along a substrate at a position corresponding to the transmission part612. Thus, the second display90may be referred to as a line display or an LED bar. Also, the second display90may be referred to as an auxiliary display90so as to be distinguished from the first display63. Also, the upper frame71may define a space above the sub-door50in addition to the structure that supports and fixes the upper portion of the outer plate51and the panel assembly60. In addition, the upper frame71may be configured to guide the cable621extending from the panel assembly60. Hereinafter, the structure of the upper frame71will be described in more detail with reference to the drawings. FIG.13is a rear perspective view illustrating the upper frame of the support frame. As illustrated in the drawings, the upper frame71may include an upper extension part711disposed at an upper side, a first mounting part712disposed at a lower side, and a barrier713disposed between the upper extension part711and the first mounting part712. In detail, the upper frame71may be divided by the barrier713into an upper portion and a lower portion. The first mounting part712may have a structure coupled to the outer plate51and the upper end of the panel assembly60. The upper extension part711defines the PCB accommodating space710, in which the PCB573or other components may be disposed, in the upper end of the door20, preferably the upper end of the sub-door50. The barrier713may divide the first mounting part712and the upper extension part711. The barrier713may define a bottom surface of the PCB accommodating space710to prevent the insulator53filled into the door20, preferably the sub-door50, from being introduced into the PCB accommodating space710. Referring in detail to the structure of the first mounting part712, an upper end of the frame opening701and a portion of the first mounting portion702may be disposed on a lower end of the first mounting part712. Also, both side ends of the bottom surface of the first mounting part712may be configured to be coupled to the upper end of the side frame72. In addition, the plate accommodating groove703may be defined in the first mounting part712. The plate accommodating groove703may be disposed along a circumference of the first mounting part712. The first mounting part712may have a structure that allows the cable621to be guided from the upper end of the panel assembly60to the PCB accommodating space710. In detail, a guide wall715protruding backward to define a cable accommodating space, into which the insulator53is not introduced, may be disposed on the rear surface of the first mounting part712. The guide wall715may have a rib shape having a predetermined thickness and may extend downward from the barrier713. The guide wall715may be disposed to be coupled to the frame cover80to be described below and may protrude to a height that is capable of being inserted into the frame cover80. A lower end of the guide wall715may be spaced apart from the first mounting portion702and may provide a space in which the frame cover80is mounted in the first mounting part712. A cable inlet714may be disposed inside the guide wall715. The cable inlet714may be opened to pass through the first mounting part712inside the guide wall715. A cable guide part716extending up to an upper end of the panel assembly60may be disposed at a lower end of the cable inlet714.
The cable guide part716may be recessed from the front surface of the first mounting part712, and a recessed depth of the cable guide part716may correspond to the thickness of the cable621or be somewhat greater than the thickness of the cable621. Thus, even when the front panel61is mounted on the frame70, a passage in which the cable621is capable of being disposed may be provided between the front panel61and the frame70by the cable guide part716. Thus, in the state in which the sub-door50is assembled, the cable621extending from a top end of the panel assembly60may pass through the cable guide part716and may be guided to an inner space of the guide wall715through the cable inlet714. In addition, a microphone mounting part718, on which a microphone (not shown) that receives a user's voice signal is mounted, may be disposed at a center of an upper portion of the first mounting part712. In addition, a ground hole719through which a wire for grounding is connected may be defined in an upper portion of the first mounting part712by opening a portion of the plate accommodating groove703. The wire for the grounding may be connected to a portion of the bent plate part512protruding through the ground hole719. Wire guide parts702a,713b, and732may be disposed in the frame70. The wire941of the second display90mounted on the frame70may be guided to the PCB accommodating space710by the wire guide parts702a,713b, and732without being in direct contact with the insulator53. Here, the wire941may have a wire shape unlike the cable621. That is, the cables621,632, and642, each of which has a flat shape and which are connected to the panel assembly60, may be guided into the PCB accommodating space710through a cable accommodating space defined by coupling of the frame70and the frame cover80, and the wires941, each of which has a wire shape and which are connected to other electric components including the second display90, may be guided into the PCB accommodating space710along the wire guide parts702a,713b, and732provided in the frame70. The wire guide parts702a,713b, and732may include a barrier wire guide part713bprovided in the barrier713, a mounting part wire guide part702aprovided in the first mounting portion702, and a lower wire guide part732provided in the lower frame73. In detail, the barrier wire guide part713bmay be provided in the barrier713. The wire941connected to the second display90may pass through the barrier wire guide part713b. The barrier wire guide part713bmay be disposed at a position corresponding to a central area of the frame70in the horizontal direction. The barrier wire guide part713bmay be recessed from a protruding end of the barrier713or may pass through the barrier713. Also, the mounting part wire guide part702amay be provided below the barrier wire guide part713b. The mounting part wire guide part702amay be provided on the first mounting portion702. The mounting part wire guide part702amay be configured so that the wire941guided between the first mounting portion702and a circumferential surface of the panel assembly60faces the barrier wire guide part713band may be disposed vertically below the barrier wire guide part713b. In addition, the mounting part wire guide part702amay be recessed from a protruding end of the first mounting portion702or may pass through the first mounting portion702. The wire of the microphone and the wire for the grounding may also need to be routed into the PCB accommodating space710and may pass through the barrier wire guide part713bprovided in the barrier713.
The cable accommodating space810defined by the guide wall715may be opened upward, and the opened top surface of the cable accommodating space810may be defined by a barrier opening717. The barrier opening717may provide an inlet through which the cable621inserted into the space formed by the guide wall715is guided to the PCB accommodating space710and may be provided by cutting a portion of the barrier713. Also, the barrier opening717may be referred to as a cable outlet because the cable621is guided through it to the outside of the cable accommodating space810. The barrier713may cross the upper frame71in the horizontal direction. Also, the barrier713may protrude vertically from a rear surface of the upper frame71. The sub-door50may have a thickness that gradually increases from one end, to which a rotation axis of the sub hinge is coupled, to the other end thereof. Thus, the barrier713may have a protruding height that gradually increases as it extends from one end to the other end to correspond to the thickness of the sub-door50. The barrier713may have the form of a pair of plates spaced apart from each other in the vertical direction. Thus, a barrier coupling groove713amay be defined by the barrier713. The barrier coupling groove713amay be provided so that a liner coupling part524protruding from the front surface of the door liner52is inserted. Thus, when the door liner52is assembled, the liner coupling part524protruding in a rib shape at a position corresponding to the barrier coupling groove713amay be inserted into the barrier coupling groove713a. The inside of the sub-door50may be divided vertically with respect to the barrier713by the coupling of the door liner52, and a foam liquid filled in the sub-door50may not be introduced above the barrier713, i.e., into the PCB accommodating space710. A barrier reinforcement rib713cmay be disposed on the lower barrier713of the pair of barriers713. The barrier reinforcement rib713cmay extend from the rear surface of the frame70in the protruding direction of the barrier713. Here, the barrier reinforcement rib713cmay extend up to an end of the barrier713. Also, the barrier reinforcement rib713cmay protrude downward by a predetermined height with respect to a bottom surface of the barrier713. When a plurality of the barrier reinforcement ribs713care provided at regular intervals, and the foam liquid is injected to form the insulator53, the barrier713may be prevented from being deformed or damaged by an injection pressure of the foam liquid. The barrier opening717may be defined in the barrier713. The barrier opening717may pass through the barrier713vertically to communicate with the top surface of the cable accommodating space810. That is, the barrier opening717may be provided to be opened by cutting a portion of the barrier713. Also, the barrier713may extend at each of both left and right ends with respect to the barrier opening717. The upper extension part711may extend upward from the upper end of the barrier713up to the top surface of the sub-door50, that is, a bottom surface of the upper cap decoration54. The upper extension part711may extend upward to define the PCB accommodating space710. Also, side portions711band711cdefining both left and right surfaces of the PCB accommodating space710may be further disposed on both left and right sides of the upper extension part711. A side hole711dmay be defined in the side portion711c, which is adjacent to the rotation axis of the sub-door50, of the left and right side portions711band711c.
The side hole711dmay allow the wire cable connected to the PCB573to be guided to the outside of the sub-door50through the rotation axis of the sub hinge. The upper extension part711may be spaced apart from a front surface of the outer plate51, and a molded insulator531may be disposed in a space between the outer plate51and the upper extension part711. The molded insulator531may be made of an insulation material. For example, the molded insulator531may be provided as a vacuum insulator having excellent insulating performance or may be made of the same material as the insulator53. Also, the molded insulator531may be molded with a size and shape corresponding to the size of the space between the outer plate51and the upper extension part711. Thus, in the process of assembling the sub-door50, the molded insulator531may be inserted and mounted between the outer plate51and the front surface of the upper extension part711. Even though the PCB accommodating space710is defined in the top of the sub-door50and the insulator53is not filled in the PCB accommodating space710, dew condensation may be prevented from being generated on the front surface of the outer plate51by mounting the molded insulator531. Hereinafter, a structure in which the second display90is mounted will now be described in more detail with reference to the accompanying drawings. FIG.14is an exploded perspective view illustrating a coupling structure of the panel assembly, a second display, and a lower frame. Also,FIG.15is an enlarged view illustrating a portion A ofFIG.14. Also,FIG.16is a front view of the second display. As shown in the drawings, the lower frame73may be coupled to a lower end of the side frame72. An upper end of each of both sides of the lower frame73may protrude upward and be coupled to the lower end of the side frame72. The upper end of the lower frame73may define a lower end of the frame opening701in a state of being coupled to the side frame72. The first mounting portion702may be disposed along the frame opening701. The first mounting portion702may be disposed along the entire circumference of the frame opening701and may be in contact with the circumferential surface of the panel assembly60, i.e., the outer frame67and the sealant68. The lower frame73may generally include a front surface part730that is in contact with a rear surface of the front panel61. The front surface part730may be in contact with the rear surface of the front panel61to support the panel assembly60at a rear side. Also, an adhesive may be applied to the front surface part730, and the front panel61and the lower frame73may be firmly fixed by the adhesive. The plate accommodating groove703may be defined around the front surface part730. The plate accommodating groove703may be connected to the plate accommodating groove703of the side frame72and the upper frame71and may be configured so that the bent plate part512of the outer plate51is inserted. The outer plate51may be coupled to the lower frame73by inserting the bent plate part512into the inside of the plate accommodating groove703. Also, an outer end of the lower frame73outside the plate accommodating groove703may be in contact with a rear surface of the outer plate51to support the outer plate51. In the state in which the outer plate51is coupled to the lower frame73, the bent plate part512may be in close contact with a circumferential surface of the front panel61seated on the lower frame73.
Thus, the front surface of the sub-door50in the assembled state may have a minimized gap between the opening of the outer plate51and the front surface of the panel assembly60. The transmission part612may be disposed in a lower area of the bezel613of the front panel61. The transmission part612may be disposed at a position corresponding to the second display90and be disposed in front of the second display90. The transmission part612may be provided by cutting a portion of the bezel613to define a slit-shaped area through which light is transmitted. Thus, light irradiated from the second display90may pass through the transmission part612and be displayed to the outside. As illustrated inFIG.16, the second display90may include a substrate91and a plurality of LEDs92mounted on the substrate91. The substrate91may have a size that is capable of being accommodated in the second mounting part731and may extend in a left and right direction. Also, a plurality of the LEDs92may be continuously arranged at regular intervals along the substrate91. Particularly, the arranged position of the LEDs92may correspond to the disposed position of the transmission part612. Thus, a vertical width of the transmission part612may correspond to that of each of the LEDs92. Also, the vertical width of the transmission part612may be less than that of the substrate91. In general, the substrate91may have a length greater than a horizontal length of the transmission part612, and the LEDs92disposed at both ends among the LEDs92may be disposed at the same positions as the ends of the transmission part612or further inside the transmission part612. The LEDs92may be continuously disposed on a front surface of the substrate91, and elements for controlling the LEDs92may be mounted on a rear surface of the substrate91. Also, a substrate connector932to which the wire941for supplying power to the substrate91is connected may be provided at one side of the rear surface of the substrate91. With respect to a center of the substrate91, the substrate connector932may be disposed at a position adjacent to the end that is close to the lower wire guide part732so as to facilitate the connection with the wire941. Also, a screw hole911through which a screw95coupled to fix the substrate91passes may be defined in one end of the substrate91. When the screw95is coupled to pass through the screw hole911, the second display90may be fixed and mounted inside the second mounting part731. To mount the second display90, the second mounting part731may be provided on the lower frame73. The second mounting part731may be recessed to accommodate the second display90. The second display90may be accommodated inside the second mounting part731and may not interfere with the front panel61when the front panel61and the lower frame73are coupled to each other. Also, the lower frame73may be provided with a lower wire guide part732connecting the second mounting part731to the frame opening701. The lower wire guide part732may guide the wire941connected to the second display90from the second mounting part731to the inside of the frame opening701. The lower wire guide part732may have a size somewhat greater than a diameter of the wire941and may accommodate the wire941so that the wire941does not interfere with the front panel61in the state in which the front panel61is mounted on the lower frame73. The lower frame73may be covered by the bezel613without being exposed forward, and thus, the second display90and the wire941may not be exposed to the outside.
However, light irradiated from the second display90may be transmitted to the outside through the transmission part612. Referring toFIG.15in more detail with respect to the second mounting part731, the second mounting part731may be recessed in the front surface of the lower frame73. The second mounting part731may have a size slightly greater than that of the substrate91. Also, a display support member734supporting the second display90at a rear side may be disposed inside the second mounting part731. The display support member734may have a rib shape that protrudes forward from an inner surface of the second mounting part731. Also, the display support member734may extend from an upper end to a lower end of the second mounting part731. Thus, the second display90may be mounted inside the second mounting part731in a state of being spaced apart from the inner surface of the second mounting part731. In detail, the display support member734may include a rear support part734asupporting the substrate91on the rear surface of the substrate91and a lower support part734bsupporting the substrate91at a lower end of the substrate91. The rear support part734amay have a length corresponding to a vertical width of the substrate91to support the substrate91at a rear side. Here, the substrate91may be spaced apart from the inner surface of the second mounting part731by a protruding height of the rear support part734a. Due to this spaced state of the substrate91by the rear support part734a, the elements disposed on the rear surface of the substrate91and the substrate connector932may be prevented from interfering with the inner surface of the second mounting part731. The lower support part734bmay further protrude forward from a lower end of the rear support part734ato support the substrate91at the lower end of the substrate91. Thus, the substrate91may maintain a set height and be maintained in a stably mounted state. Particularly, although frequent impacts are applied due to the use characteristics of the sub-door50that is repeatedly opened and closed, the disposed position of the substrate91may be maintained. A plurality of display support members734may be spaced apart from each other along a longitudinal direction of the second mounting part731and may be disposed inside the second mounting part731to stably support the substrate91as a whole. A plurality of elastic fixing parts735may be disposed on the second mounting part731. Each of the elastic fixing parts735may protrude inward from an outer end of the second mounting part731and press an end of the substrate91to fix the substrate91. In detail, the elastic fixing part735may be disposed more forward than the rear support part734a. Also, the elastic fixing part735may be bent in a rounded shape to press the substrate91, thereby fixing the substrate91. The elastic fixing part735may be provided in plurality to press the substrate91so that the substrate91is maintained in a state of being seated on the display support member734as a whole. For example, the elastic fixing part735may be disposed at one of the left and right ends of the second mounting part731, at a position opposite to the end to which the screw95is coupled. Also, a plurality of elastic fixing parts735may be disposed along an upper end of the second mounting part731. A coupling boss736, to which the screw95passing through the substrate91is coupled, may be disposed on one of the left and right sides of the second mounting part731. For example, the coupling boss736may be integrated with the rear support part734a.
That is, the substrate91may be fixedly coupled in a state of being supported on the rear support part734a. Also, a connector recess733may be provided inside the second mounting part731. The connector recess733may be defined in one of the left and right sides with respect to a center of the second mounting part731. In detail, the connector recess733may be defined on the same extension line as the lower wire guide part732. The connector recess733may be further recessed than the second mounting part731and may have a size capable of accommodating the substrate connector932mounted on the substrate91. Thus, when the substrate91is fixed and mounted on the display support member734, the substrate connector932on the rear surface of the substrate91may be disposed at a position corresponding to the connector recess733so as not to interfere with the inner surface of the second mounting part731. The lower wire guide part732, which guides the arrangement of the wire941connected to the second display90, may extend upward from a position adjacent to the connector recess733. In detail, the lower wire guide part732may extend vertically upward from an upper end of the second mounting part731and may extend to communicate with a lower end of the frame opening701. The lower wire guide part732is opened forward, and thus, the wire941connected to the second display90in the state in which the second display90is mounted may be disposed on the lower wire guide part732, and then, the panel assembly60may be mounted on the frame70. Hereinafter, a state in which the second display90is mounted and an arrangement state of the wire941will be described in more detail with reference to the drawings. FIG.17is a front view illustrating a state in which the second display is mounted on the lower frame. Also,FIG.18is a cross-sectional view taken along line XVIII-XVIII′ ofFIG.17. Also,FIG.19is a cross-sectional view taken along line XIX-XIX′ ofFIG.17. Also,FIG.20is a view illustrating an arrangement of the wire between the second display and the PCB in the frame. As shown in the drawings, the wire connector94may be coupled to the substrate connector932of the second display90. The wire connector94may be connected to an end of the wire941, and thus, the second display90may be connected to the wire941by coupling of the wire connector94. Also, the second display90may be fixed and mounted on the second mounting part731in the state in which the wire941is connected. The second display90may be mounted on the display support member734so that the element931on the rear surface of the substrate91and the substrate connector932do not interfere with the inner surface of the second mounting part731. Also, the second display90may not protrude further than the front surface part730of the lower frame73so as not to interfere when the front panel61is mounted on the front surface part730. When the second display90is mounted, an upper end including one of the left and right ends of the substrate91may be constrained by the elastic fixing parts735. Also, the lower end of the substrate91may be restricted by the lower support part734bof the display support member734. In addition, the screw95may pass through the screw hole911and be coupled and fixed to the coupling boss736at the other of the left and right ends of the substrate91. Due to this fixing structure, the second display90may be firmly fixed to the second mounting part731. Also, the second display90may be maintained in its installation position in the sub-door50without being separated from the mounted position.
That is, the arranged state of the LED92and the transmission part612may be maintained to secure a constant output of the screen when the second display90operates. The wire941connected to the second display90may escape from the second mounting part731to extend upward along the lower wire guide part732. The lower wire guide part732may be disposed along the front surface part730and be opened forward. Thus, the wire941may be disposed on the lower wire guide part732before assembling the panel assembly60. In detail, when the sub-door50is assembled, the second display90may be mounted on the second mounting part731of the lower frame73, and simultaneously, the wire941connected to the second display90may also be disposed along the lower wire guide part732. Referring toFIG.20, in the arrangement of the wire941in the frame70, the wire941may be guided upward along the lower wire guide part732. Since the wire941has a structure that is guided along the lower wire guide part732, when the front panel61is seated on the front part730of the lower frame73, the wire941may naturally be placed in an independent space. Thus, the wire may be guided upward through the independent space without being in contact with the insulator53filled in the sub-door50. Also, the wire941guided to the upper end of the lower wire guide part732reaches a lower end of the frame opening701. The wire941may be guided upward along an inner wall of the frame opening701, i.e., the first mounting portion702. Here, in the state in which the panel assembly60is mounted on the frame70, the outer frame70and the sealant68may be in contact with the first mounting portion702. Also, the wire941may be guided upward along a space between the outer frame70and the first mounting portion702or between the sealant68and the first mounting portion702. Particularly, the sealant68may have elasticity, and thus, even if a separate space is not defined, the wire941may be guided upward in a state of being sandwiched between the sealant68and the first mounting portion702. The wire941may be guided upward along the inner surface of the frame opening701, i.e., the outer surface of the panel assembly60. Also, the wire941may be guided along the upper end of the frame opening701or the upper end of the outer surface of the panel assembly60, and when reaching a center of the frame opening701, the wire941may move to the rear space of the frame70through the mounting part wire guide part702a. In a section in which the wire941is guided along a circumference of the frame opening701, the wire941may not be exposed to the space in which the insulator53is disposed, and the wire941may be disposed without separate wire restraint. Also, the wire941guided upward through the mounting part wire guide part702amay be introduced into the PCB accommodating space710through the barrier wire guide part713b. The wire941introduced into the PCB accommodating space710may be connected to the PCB573. The wire941may be exposed to the space in which the insulator53is disposed in a region between the mounting part wire guide part702aand the barrier wire guide part713b. Here, a separate tape, sheet, or cover may be attached to the frame70to cover the wire941between the mounting part wire guide part702aand the barrier wire guide part713b, and thus, the entire wire941may not be exposed to the insulator53. Hereinafter, the operation of the second display90having the above structure will be described in detail with reference to the drawings.
FIG.21is a cross-sectional view illustrating the lower end of the sub-door. Also,FIG.22is an enlarged view illustrating a portion B ofFIG.21. Also,FIG.23is a view illustrating an output state of the transmission part in the sub-door. As illustrated in the drawings, in the state in which the sub-door50is assembled, the second display90may operate at a position corresponding to the transmission part612. When the LED92of the second display90is turned on, the light of the LED92may be seen from the outside through the transmission part612. A bezel layer615may be disposed on the rear surface of the front panel61to prevent the light irradiated from the LED92from being transmitted to a portion other than the transmission part612. The bezel layer615may be configured to block transmission of light and prevent the rear side of the front panel61from being visible. For example, the bezel layer615may be printed with a black color or provided by attaching a film, and an area of the bezel613may be defined by the bezel layer615. Thus, only the area of the transmission part612on which the bezel layer615is not formed may transmit the light irradiated from the LED92. Also, a diffusion sheet614may be attached to the rear surface of the front panel61corresponding to the transmission part612. The diffusion sheet614may be configured to shield the transmission part612at the rear side. Thus, the light irradiated from the LED92may be transmitted through the diffusion sheet614, and thus, the transmission part612may be illuminated in the form of an overall surface light. That is, the light may be prevented from being shined in the form of a spot due to light condensation at a position corresponding to the LED92in the transmission part612. Particularly, in a situation in which an arranged distance of the LED92is not designed to exceed a set distance for thermal insulation of the sub-door50, the transmission part612may be shined in the form of the surface light by attaching the diffusion sheet614. That is, the second display90may be disposed at a position that is the farthest distance within a range in which the insulation of the sub-door50is satisfied. For example, an arrangement distance D1between the LED92and the rear surface of the front panel61may be about 5 mm to about 6 mm. Thus, the transmission part612may be illuminated in the form of the surface light without the light condensation due to the light passing through the diffusion sheet614while satisfying the thermal insulation of the sub-door50. Also, a vertical width D2of the transmission part612may be about 2 mm, and an extension line passing through a center of the transmission part612may be disposed at the same position as the extension line passing through the center of the LED92. Thus, shadowing does not occur in the vertical direction, and the entire transmission part612may be brightly shined. Generally, therefore, the arrangement distance D1between the LED92and the rear surface of the front panel61may be larger than the vertical width D2of the transmission part612. The transmission part612may be configured to be shined by the plurality of LEDs92and may display various colors according to the operation of the LEDs92. Thus, a specific color may be expressed according to an operation state of the refrigerator1.
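The geometry above lends itself to a quick numeric check. The following minimal sketch is illustrative only: the arrangement distance D1 and the slot width D2 come from the description, while the LED beam half-angle and the LED-to-LED pitch are assumed values that the description does not specify.

```python
import math

# Rough check of the LED-to-panel geometry described above (millimeters).
# D1 and D2 are from the description; the beam half-angle and LED pitch
# are assumptions added for illustration only.
D1 = 5.5           # arrangement distance between the LED and the panel (mid-range of 5-6 mm)
D2 = 2.0           # vertical width of the transmission part
half_angle = 60.0  # assumed LED beam half-angle, degrees
pitch = 10.0       # assumed spacing between adjacent LEDs, mm

spot = 2 * D1 * math.tan(math.radians(half_angle))
print(f"illuminated width at the panel: {spot:.1f} mm")  # ~19 mm
print(f"wider than the slot (D2)? {spot > D2}")          # True
print(f"adjacent spots overlap?   {spot > pitch}")       # True
```

Under these assumed values, each LED floods a region far wider than the 2 mm slot and overlaps its neighbors, which is consistent with the diffusion sheet614turning the overlapped spots into an even surface light.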
Also, the LEDs92behind the transmission part612may be partially turned on and off so that only a portion of the entire area is illuminated, and the operation state of the refrigerator, including a temperature and a time, may be displayed as a bar graph or a bar whose length varies. Also, the transmission part612may allow the LED92to be continuously turned on and off, thereby enabling a dynamic output. For example, the shining portion may be changed, or the length or color of the shining area may be continuously changed. Also, the transmission part612may operate in conjunction with other components constituting the refrigerator1. For example, the transmission part612may be interlocked with the operation of the first display63so as to be output when the first display63outputs a specific screen, or may be output in the state in which the first display63is turned off. Also, when the microphone operates, the transmission part612may be illuminated to visualize the microphone operation. Although the structure in which the panel assembly and the second display are provided in the sub-door is described in the embodiments above, the structure may be equally applied to a refrigerating compartment door provided as a single door. Although embodiments have been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art. | 71,550 |
11859902 | Where appropriate, sectional views are included and are to be interpreted as continuations of the designs or patterns shown therein, unless specifically described otherwise. That is, pieces appearing as sectioned cylinders are to be interpreted as continuing their cylindrical shape throughout. Where there is conflict in interpretation of a sectional view and a more complete view, the more complete view should be assumed to control. Where there is a conflict in interpretation of a written description and a figure, the written description should be assumed to control. Where descriptions are of geometric or spatial terms, strict mathematical interpretation of those terms is not intended. While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. The drawings may not be to scale. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but to the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. DETAILED DESCRIPTION To mitigate the problems encountered in beverage cooling as described herein, the inventors had to both invent solutions and, in some cases just as importantly, recognize problems overlooked (or not yet foreseen) by others in the fields of supercooling liquids, beverage cooling, and consumer products. Indeed, the inventors wish to emphasize the difficulty of recognizing those problems that are nascent and will become much more apparent in the future should trends in the foregoing industries continue as the inventors expect. Further, because multiple problems are addressed, it should be understood that some embodiments are problem-specific, and not all embodiments address every problem with traditional systems described herein or provide every benefit described herein. That said, improvements that solve various permutations of these problems are described below. Some embodiments of the present disclosure include a two-part cooling system for circulating fluids cooled below beverage freezing points. The top segment of the present invention includes in some embodiments an insulated basin with immobilizing holders for spaced beverage bottles; within the basin and between the bottles is sufficient space to hold fluids that remain in liquid form at temperatures below the freezing temperature of the beverages, such as a highly salted ice-water brine with a lower freezing temperature than beer. To promote circulation of the brine and prevent crystallization of the beverages, the top segment is placed on a rotating base. The moving base can be powered and can impart reciprocal or linear motion onto the top segment to circulate the brine, to keep an even distribution of the brine between the bottles, and to ensure the beverages do not crystallize prematurely or at all. In a preferred embodiment, the top segment comprises four main parts: an outer shell100, an inner lining300, insulation, and a beverage separator500. In a preferred embodiment, the bottom segment comprises a rotating head, a rotating mount, a motor with a power source, and a base motor housing.
In this embodiment, the two segments are held together by gravity and can be separated without tools and without latches by lifting the top segment off the bottom segment. Unless the context clearly means otherwise, throughout this specification, the terms upper, top, upward, down, lower, downward, and bottom are intended to mean the corresponding direction with respect to a fully assembled device made under the teachings of this disclosure placed upright and sitting on its lower base. In some embodiments, the upper body of the cooler comprises a substantially cylindrical hollow insulated tub. The top of the upper body is open to allow for the insertion and removal of beverages. The tub is insulated. In some embodiments, the body comprises three main parts: the outer shell100, the inner lining300, and the insulation layer. One embodiment of the outer shell100is shown inFIGS.1A,1B, and1C. In some embodiments the insulating ability of the cooler derives substantially from a substantially cylindrical insulation layer insert placed between the inner lining300and the outer shell100of the insulated tub. The insulation is preferably foam but can include any similar light-weight materials suitable for reducing heat transfer, e.g., polystyrene, cork, and polyurethane. In some embodiments, the upper body also comprises a lid, which substantially covers the top of the inner lining300, reduces air flow into and out of the interior portion of the upper body of the cooler, and blocks sunlight, reducing convection and radiation heat transfer. An embodiment of the lid is shown inFIGS.6A,6B,6C, and6D. The inner lining layer300of the cooler can be generally cylindrical and sized appropriately to create an inner volume to hold the beverage containers and cooling fluid. One embodiment of the inner lining layer300is shown inFIG.3A-3E. In one embodiment, it is sized to hold eighteen beverage containers (e.g., bottles) arranged in three concentric circles with sufficient additional diameter to contain a beverage separator500, with spacing between each container for supercooling fluid to be circulated. In some embodiments, the inner lining300also comprises a top lip301such that, when the lining300is placed within the outer shell100, an enclosed space is formed between the two where the insulation layer can fit. The bottom of the inner lining layer300in this embodiment contains depressions302sized and spaced to fit the beverage containers. In this embodiment, the inner wall or side of the inner lining300contains integral slots303that are sized and shaped to align with keys501on the beverage separator500. The slots303are positioned vertically such that the bottom of each slot (e.g., at location304) serves as a support for the beverage separator ring500to keep it above the base of the inner lining layer300. The slots303are positioned along the inner circumference of the inner lining layer300such that placement of the beverage separator keys501into the lining layer slots303aligns the beverage separator500with the depressions302in the base of the lining layer300. This arrangement aligns the pieces such that when this embodiment is used, it places two areas of contact for each beverage container, vertically displaced: the depression302is sized to fit the bottom of the bottle and the corresponding beverage separator section holds the beverage container nearer its vertical midpoint. These two points of contact reduce the likelihood of tipping or bottle-to-bottle contact when the device is operating and reciprocating.
In some embodiments, the depressions302in the bottom of the inner lining layer300are tapered to allow various sizes of beverage containers to fit. In some embodiments, the inner lining layer300is formed of a suitable plastic material of approximately constant 0.125 inch thickness. It can follow a slightly tapered cylindrical path to allow for ease of removal from a mold when molded manufacture is selected and for ease of use, allowing more space for bottle removal. In one embodiment, the diameter of the inner liner at its widest point is 19.750 inches including the lip, and the liner is 12.125 inches tall. In this embodiment its inner diameter at the top, from inside surface to inside surface, is 17.286 inches. Because of the slight taper, in this embodiment its inner diameter at the bottom from inside surface to inside surface is 16.754 inches. In other embodiments, the inner lining layer300, insulation, and outer shell100are all changed to fit the expected uses, including particular beverage bottles for soft drinks, beer, wine, alcohol, water, and others. These modifications can be based on the height of the beverage containers, the number of beverage containers, the amount of fluid to be used, and otherwise. In the described embodiment, the outer shell100is sized and shaped to fit the inner lining layer300within and leave space along its circumference for insulating material. One embodiment of the outer shell100is shown inFIG.1. In this embodiment, the outer shell100will have substantially the same taper as the inner lining300such that an area of constant thickness is formed between the two, with similar benefits for ease of manufacture and removal from a mold. In this embodiment, the bottom of the outer shell100has four slots101sized to fit with corresponding tabs in the rotating head. The slots101and tabs are sized and shaped to transfer rotational force from the motor in the lower section to the upper section. The slots101can be sized and shaped in any suitable manner to transfer the rotational energy of the rotating head to the main body. Another utility of the slots101is that they allow easy and nearly immediate placement of the upper body onto the rotating head with correct alignment, and they allow the device to be disassembled and reassembled with no tools, relying only on gravity to maintain the assembly. In one embodiment, there are four slots101, approximately in the shape of arcs centered about the center-point of the outer shell100, each approximately 45°. This disclosure encompasses many embodiments of slots101, differing in size, shape or number, or other appropriate means for transferring the rotational or other directional energy from the moving (e.g., rotating) head to the body of the cooler. The numerous other ways to transfer rotational energy include fasteners, clips and clamps. In such embodiments, the other means for transferring rotational energy can be used in lieu of the slots101or in addition to the slots101. Other embodiments include a unitary construction between the motor and the cooler body, obviating any need for slots101or other means to transfer rotational energy. This embodiment includes a unitary piece that directly connects the motor to the upper section. In a preferred embodiment, the outer shell100has a height of 14.875 inches. The outer shell100can have at its widest point at top a diameter of 23.000 inches and an inner diameter of 19.750 inches. The resulting draft is quantified in the short sketch below.
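A minimal sketch, using only the inner lining dimensions stated for this embodiment:

```python
import math

# Taper of the inner lining 300, from the dimensions stated above (inches).
top_id = 17.286     # inner diameter at the top, inside surface to inside surface
bottom_id = 16.754  # inner diameter at the bottom
height = 12.125     # height of the inner lining

radial_offset = (top_id - bottom_id) / 2  # per-side narrowing over the full height
taper = math.degrees(math.atan(radial_offset / height))
print(f"per-side taper: {taper:.2f} degrees from vertical")  # ~1.26 degrees
```

A draft on the order of a degree is consistent with the stated goals of easy mold removal and extra clearance for bottle removal.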
This inner diameter of 19.750 inches matches the largest outer diameter of the inner lining300, which allows the two to be placed together and results in a tight fit. The outer shell100has a shelf in this embodiment that extends from the diameter of 18.971 inches to the inner diameter of 19.750 inches. This sizing allows for the inner lining300to be placed within the outer shell100and have approximately 0.779 inch overlap of the inner lining lip with the shelf. As with the inner lining, in this embodiment the outer shell100is constructed of 0.125 inch material. It can taper downward with a bottom diameter of 19.217 inches, which in this embodiment would be the appropriate width to match the taper of the inner lining. The slots101each have a height of 0.875 inches in this embodiment. There are a variety of suitable materials for the insulation, including but not limited to polyethylene and polystyrene. In some embodiments, the inner lining300and the outer shell100are of unitary construction and the insulation can include a vacuum. The insulation in some embodiments is of two-part construction, one part approximating the curved surface of a cylinder to fill the space circumferential to the inner lining300and one approximating the flat surface of a cylinder beneath the bottom of the inner lining, both of appropriate thickness. In some embodiments, the insulation is of single-piece construction, forming both the curved and flat surfaces of the cylinder at adequate thickness. In some embodiments, the insulation layer is added after assembly of the upper section and pumped into the space between the inner lining300and outer shell100as a hardening liquid or semi-liquid. Some embodiments include a lid600sized to cover the top of the inner lining300. One embodiment of the lid600is shown inFIG.6. The lid600serves primarily to block radiant energy from sunlight and to reduce heat transfer through convection. The lid600can be made of any suitable material, including material not ordinarily used for thermal insulation. In some embodiments, the lid600comprises additional accessories for the device, including any one or more of a measuring cup for the solute (e.g., salt mixture) necessary for a slurry or brine, a bottle opener, a hatch for insertion and extraction of material from inside the basin, and a solid surface to deliver an impact to the bottle as it is being removed. In some embodiments, in the assembled state, the upper body rests on the power section. The power section of the cooler in a particular embodiment comprises a motor housing stand, a rotating head200, a motor, a rotating mount and an electrical line or battery. In some embodiments, the motor is secured within the housing and rotationally secured to the rotating head200. The rotating head200serves as the contact point between the power section and the body of the cooler and serves to transfer rotational energy from the motor to the cooler. In some embodiments, the power section imparts linear motion instead of rotational motion. Other movements can similarly be incorporated into the teachings of the present disclosure. In the preferred embodiment, the rotating head200is a thin circular member that is operationally affixed (e.g., secured through bolts and/or other connecting parts) to the motor linkage and provides a plate-like base on which the upper body can sit when the device is in its fully assembled state. An embodiment of this rotating head200is shown inFIG.2.
The rotating head200also comprises in this embodiment a set of tabs201matching the slots101on the outer liner of the upper body. The tabs201are beneficially arranged to slide within the slots101and impart the rotational energy on the upper body. In one embodiment, there are four tabs201, each 0.750 inches tall. This corresponds to the 0.875 inch depth of the slots in the base of the outer shell of the upper body, plus 0.125 inch of space for ease of use, ease of manufacture without tight tolerances, and quick assembly and disassembly. As with the slots, the four tabs201are arranged in an arc manner, each spanning approximately 45° of a complete circle with approximately 45° spacing between each. In this manner, they are positioned to fit within the slots of the upper body outer shell and immediately place that section into a correct alignment for operation. The rotating head200is primarily a short cylinder or tapered cylinder and its outer surface is a slightly tapered cylindrical surface (or a frustoconical surface) with the upward jutting tabs201. The inside surface of the rotating head200is an oppositely disposed tapered cylinder or frustoconical surface, designed to engage with the motor housing on which it rests. It is sized and shaped such that the rotating head200can be quickly and easily placed onto the base and, because of each piece's corresponding partial conical section, the two will easily align for final assembly. In the described embodiment, the underside of the rotating head200can have protrusions or bosses disposed vertically downward, arranged to guide the arms of the rotating mount to through-holes in the rotating head200. In some embodiments, the protrusions can be fashioned and positioned with tight enough tolerances around the rotating mounts such that when these rotating mounts are secured to the rotating head200and reciprocating through motion of the motor, the protrusions transfer some or all of the rotational energy from the rotating mounts to the rotating head200without excessive stress on the fastener connecting the rotating head200to the rotating mount. In one embodiment, the protrusions or bosses are disposed substantially perpendicular to the rotating mounts in their affixed state such that rotational forces are directed along the longest axis of the protrusions. In the preferred embodiment, there are sixteen such protrusions, arranged with four on each of the four rotating mounts. In one embodiment, the largest diameter of the rotating head200is at its upper portion and is 18.265 inches, and the narrower diameter on the opposite lower side after the taper is 17.915 inches. The total thickness of the rotating head200in this embodiment is 2.063 inches, which comprises the 0.750 inch tabs201and a 1.313 inch height for the main body of the rotating head200. In the assembled state the motor housing stand is below the rotating head200. In the preferred embodiment, the motor is fixedly attached to the rotating head200or interface plate200such that the head or plate can be alternately rotated clockwise and counterclockwise in a repeating pattern about the center-point of the rotating head200. In that embodiment, the rotation is horizontal; that is, in its assembled and operable state, under the teachings of this disclosure, the largest surface of the rotating head200is substantially parallel to the ground and when powered rotates in that same plane. In one embodiment, the motor409is connected to a rotating mount411, which in turn connects to the rotating head200.
In this embodiment, the linkage arm413from the motor connects to the rotating mount by a fastener in such a position that a revolution of the motor and the consequent movement of the linkage arm does not cause full circular motion of the rotating mount411. In this manner, the linkage arm extends from the motor, located approximately centered within the motor housing400, to a point near the outer edge of the rotating head. The center-points of the motion of each side of the linkage arm (the motor side and the rotating head side) can be vertically aligned or non-vertically aligned, with the rotating head side having a larger radius than the motor side. Because of the geometries of this embodiment, as the motor turns, the linkage arm is in alternating tension and compression, as it pulls and pushes the rotating head in a reciprocating arc motion, necessarily less than 180°. This can be implemented with different linkages between the motor and the rotating head200, including double rocker mechanisms, crank-rocker mechanisms, and triple rocker mechanisms. In the preferred embodiment, the linkage arm connects to the rotating mount at the same vertical position as the pass-through hole in the rotating head200, such that a single fastener of sufficient length (e.g., a bolt) can connect each of the three pieces. The speed of rotation and possible angle of rotation of the rotating head200are determined by the speed of the motor, the length of the linkage arm, and the location of the connection between the linkage arm and the rotating mount. This specification contemplates variations in each to change the speed and angle of rotation. In the preferred embodiments, the range of motion of the rotating head200is substantially smaller than a full revolution. In particular embodiments, the range of motion of the rotating head200is between 30 and 60 degrees. In certain embodiments, considerations of the amount of jostling of the beverage containers, stress on the motor and connecting parts, and power requirements are evaluated for proper speed of rotation. With respect to each consideration, the lower amount of each is preferred. In many embodiments, the reciprocation of the rotating head200ranges between 0.5-2.0 Hz. This range has been determined through testing to be adequate to prevent premature crystallization and to limit the agitation of typically carbonated beverages, wherein excessive agitation can cause the beverage to bubble over upon opening. The purpose of the rotation of the head is to impart rotational energy on the body of the cooler to cause constant agitation of the cooling material (e.g., ice and water) therein, to facilitate even distribution of the fluid within the basin, and to keep the beverages circulating to oppose crystallization. With the reciprocatory motion imparted by the motor, the cooling fluid within the body is alternately forced in different directions and avoids settling in any position and freezing. Because of, for instance, the frictional forces between the various surfaces (e.g., the walls and the floor of the basin, the sides of the beverage containers) and the cooling fluid, the fluid is first forced in one direction and begins a circular path in that direction. Soon, however, the alternating rotating motion of the basin switches direction, and the cooling fluid is then forced in the opposite direction. In this manner, the inertia of the cooling fluid results in a lag between the motion of the beverage containers and the cooling fluid, as the sketch below illustrates.
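A minimal sketch of this reciprocation, assuming an idealized sinusoidal sweep (an actual crank-rocker produces a near-sinusoidal but not exactly sinusoidal profile), with the frequency and angular range chosen from the 0.5-2.0 Hz and 30-60 degree windows stated above:

```python
import math

freq_hz = 1.0     # reciprocation frequency, within the stated 0.5-2.0 Hz range
sweep_deg = 45.0  # total angular range of motion, within the stated 30-60 degrees

def head_angle(t_s: float) -> float:
    """Idealized angular position (degrees) of the rotating head at time t_s."""
    return (sweep_deg / 2) * math.sin(2 * math.pi * freq_hz * t_s)

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f} s  angle={head_angle(t):+5.1f} deg")
```

At 1 Hz the head sweeps from +22.5° to -22.5° and back each second; the fluid, lagging the basin by inertia, is continuously sheared past the bottles.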
There is constant relative motion between the fluid itself and the beverage containers. In some embodiments, the rotating head200and motor are attached by fasteners, e.g., bolts passing through the rotating head200, or through other combinations of pieces, including mounts, direct linkage arms, gears, or otherwise. The rotating head200has pass-through holes appropriately sized and shaped to meet the motor linkage411beneath. In the preferred embodiment each of the pass-through holes is 0.469 inches in diameter and placed approximately on the midpoint of the sides of a square centered on a center-point of the rotating head200, with approximately 7.436 inch sides; that is, each hole is approximately 3.718 inches from the center-point. It should be understood that this feature is designed for the purpose of meeting up with the motor linkage below; if the motor linkage varied in size, position, or shape, the corresponding holes would need to be moved accordingly. The motor is housed in some embodiments in the motor housing stand400. One embodiment of the motor housing stand is shown inFIG.4. In some embodiments, the majority of the motor housing stand400has a generally cylindrical or frustoconical outer surface, continuing along substantially the same rate of taper as the upper body and the rotating head200. Disposed above this primary large frustoconical surface401of this piece in this embodiment is an oppositely disposed frustoconical surface402, sized and positioned to mate with the rotating head200to be positioned on top of it and to allow rotation of the rotating head200. In a preferred embodiment and when molds are the expected manufacturing process, frustoconical surfaces are preferred over cylindrical surfaces because of their ease of removal from molds. The bottom of the motor housing stand400can be arranged with pass-throughs405for bolts to attach a rotating mount base407to the motor housing stand. In a preferred embodiment, similar upward protrusions or bosses403extend from the bottom of the motor housing stand as with the rotating head200and for the same purpose: to align the rotating mount base with the pass-through holes and to transfer any rotational energy from the motor409to the motor housing stand without relying on the bolts and pass-through holes alone. In the case of these protrusions or bosses403, when the device is fully assembled and operating, the protrusions or bosses403provide forces opposing motion and transfer the stability of the ground on which the motor housing stand rests to the rotating mount base. In many embodiments, additional features are added to the underside of the motor housing stand to have the device rest on an appropriate surface, e.g., the ground. In some embodiments, the underside is fitted with rubber stands that extend beyond the base to better grip the surface and to reduce scarring. In a preferred embodiment, the motor housing stand is shaped to allow for lifting. Because the preferred embodiment does not use fasteners between the upper and lower sections, when an embodiment is created according to this method and it is in its upright state, lifting from anything above the base would cause the embodiment to disassemble. An advantage of this is the ease of assembly and disassembly, where the device can be put together and taken apart not only without any tools but with only a vertical lift of the upper section.
As such, it is advantageous to have oppositely disposed depressions on the cylindrical face of the base, with a flat surface (i.e., parallel to the ground) on the top to serve as a finger grip for lifting. In a particular embodiment, the base is 17.848 inches in diameter at its widest, with a taper to 16.249 inches at its narrowest. It can be made in this embodiment of material 0.313 inches thick. The portion of the base that mates with the rotating head200is 0.700 inches tall, and the portion below is 6.345 inches, for a combined height of 7.045 inches in this embodiment. Each of the pass-through holes for mounting rubber stands can be 0.344 inches in diameter. A beverage separator500can be placed in some embodiments inside the upper cooler body. One embodiment of the beverage separator device500is shown inFIG.5. The beverage separator500should be advantageously shaped to keep multiple beverage containers (e.g., bottles) separated from each other to minimize the chance of impact while the cooler body is rotated or agitated. The separator500should also be designed to allow sufficient space between the containers for the cooling fluid to flow between the containers in sufficient volume to allow for supercooling of the beverages. In one embodiment, a beverage separator500contains eighteen circular spaces503sized for insertion of bottles. In this embodiment, the circular spaces503are efficiently arranged in three concentric circles to most effectively fit within the cooler body. The concentric circles have one, six, and eleven spaces503respectively. Each of the spaces503for placing a beverage container has eight downward and inward facing fingers502ringing the space503and angled inwardly to apply symmetric or approximately symmetric gripping force on each side of the beverage container. In many embodiments, these fingers502are sized, angled, and made of appropriate material to allow the space503to hold a variety of sizes of beverage containers. In many embodiments, the beverage separator500and the fingers502are made of a single construction and are of a pliable plastic material. In these embodiments, different bottle sizes can be accommodated by having the downward facing fingers502flex to different diameters. In these embodiments, the fingers502are sized and placed such that when a larger diameter bottle is placed within their periphery, they each flex away from the bottle, with the spring force caused by the flexion holding the bottle in place. In some embodiments, the circular spaces are sized to hold different common beer bottle sizes, e.g., bombers, long necks, heritage, stubby, and pony bottles. For most applications, the circular spaces could be sized to hold bottles expected to range from 2.408 to 2.578 inches in diameter (long neck and heritage bottles, respectively). To do this, each circular space in most of the various embodiments would be sized at or above the largest diameter to be used. In the embodiments mentioned, they would be sized at something larger than 2.578 inches in diameter. The downward facing fingers502should be arranged such that when the fingers502are in their relaxed, unflexed state, their innermost points form a circle with a diameter at or smaller than the smallest diameter bottle expected to be used, in this embodiment 2.408 inches in diameter. As the diameter of the ring created by the ends of the fingers502becomes smaller, a greater gripping force should be expected on the bottle. A short numeric sketch of this fit check follows.
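This is a minimal fit check using the bottle diameters stated above; the space and finger-ring values mirror the worked example in the next paragraph, and the allowable flex per finger is an assumption added for illustration:

```python
# All dimensions in inches.
space_d = 3.00   # diameter of the circular space 503
ring_d = 2.30    # circle formed by the relaxed finger 502 tips
max_flex = 0.20  # assumed allowable radial flex per finger (illustrative)

def holds(bottle_d: float) -> bool:
    """True if the fingers grip the bottle: larger than the relaxed ring,
    small enough to fit the space, and within the fingers' flex range."""
    required_flex = (bottle_d - ring_d) / 2
    return ring_d <= bottle_d <= space_d and required_flex <= max_flex

for name, d in [("long neck", 2.408), ("heritage", 2.578)]:
    print(f"{name} ({d} in): held = {holds(d)}")  # both True
```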
For example, a circular space of 3.00 inches with inwardly pointing fingers502forming a 2.30 inch circle would be able to hold both long neck and heritage bottles. Many sizes of spaces are appropriate for the various embodiments: sized for each of the known beverage container types, for other types of beverage containers (e.g., wine bottles), for other types of containers to be cooled, for sizes and shapes of containers that are not traditional, currently popular, currently available, or currently existing, or for variable holding devices that can be sized and adjusted for applicability to different containers. In some embodiments, the beverage separator500contains keys501that align with slots303in the cooler body, such as on the inner surface of the inner lining, to allow consistent placement of the beverage separator500in the cooler body. In an embodiment, the keys501are eight tabs that extend radially from the beverage separator500a short distance of approximately one half inch. These keys501align with slots303in the cooler body that allow for placement of the beverage separator500within the cooler body such that the circular spaces align over the beverage depressions302within the cooler body. In a preferred embodiment, the slots303are formed in the inner lining300of the cooler body. In some embodiments, the keys501and slots303are not symmetrically arranged around the circumference of the beverage separator500and the inner shell lining, such that the beverage separator500can only be placed in the inner shell one way. In other embodiments, one or more of the tabs and slots303are uniquely shaped or positioned to allow for only one correct way to insert the beverage separator500into the cooler body. The reader will appreciate there are a number of ways to accomplish substantially the same goal of providing the user an indication of the proper alignment of the beverage separator500, including markings, non-circular shapes, coloring, limiting removability, and so on. Each version is contemplated within this disclosure. In many embodiments, each of the separator and the upper basin, particularly the inner liner, is made of a material that does not generate high shocks when impacting the beverage containers, e.g., molded plastics. The system is able to maintain sub-freezing temperatures within the beverages because of the elimination or substantial elimination of nucleation points within the beverage containers. Shock waves can provide nucleation points. In this manner, the occasional impact between the beverage container and the surrounding materials can undo the efforts of lowering the beverage temperature to sub-freezing without crystallizing. In some embodiments, supercooled temperatures are reached in the beverages. Under the techniques taught by this disclosure, the beverages can be lowered below their typically understood freezing point without solidifying or crystallizing. Freezing points are understood in lay terms to be the point at which crystallization of the fluid occurs in the presence of a nucleation point, but are more accurately understood to be the opposite: the point at which solid water transforms into liquid water. It takes more energy transfer for water to transform from liquid to solid because of the energy necessary to manipulate the water molecules into a crystalline structure. In many beverages or in distilled water, there are no impurities to serve as points of nucleation, or if such impurities exist, they are wholly dissolved in the beverage.
In this state with no points of nucleation in an ideal scenario, the freezing point is a point much lower than the traditionally understood freezing point; instead, the liquid freezes at the point of homogeneous crystal nucleation. In many embodiments, the crystallization will begin when the beverage container is tapped or otherwise impacted on the side. This tapping tends to cause a shock wave or force the formation of bubbles from the trapped gasses within the beverage and provide one or more nucleation points for the ice crystals to begin to form. Salt water or other fluid with a freezing temperature below the freezing temperature of water (or other beverage to be cooled) can be used as a fluid to circulate between the bottles. A fully saturated salt water solution has a freezing temperature of around minus 21° C. or minus 6° F. A fluid such as a fully or semi-saturated salt water solution can be maintained in a liquid state and circulated using the reciprocal or linear motion of the motor imparted onto the upper portion of the cooler under the teachings of this disclosure. The beverages likewise are constantly moving because of the same reciprocal or linear motion; as the motor rotates the rotating head200in those embodiments, circular motion is imparted onto the upper container and on the bottles held therein. This constant-motion, supercooled, low-impact state that the bottles are in is ripe for supercooling beverages without crystallizing. Once a beverage is removed from the cooling environment in the device under this disclosure and while it remains in its supercooled state, it is possible to induce crystallization in a number of ways. In some embodiments, a shock wave is presented through a light impact on the bottle. In certain embodiments of the present disclosure, a physical shock section is included. This physical shock section can include points designed for impact of a beverage container. The physical shock section can be incorporated into the lid600or side of the device, including the exterior of the basin. The shock section can be one or more metal points or points made of other materials designed to give a sharp reaction when impacted with the beverage containers. In some embodiments, the points are arranged in a ring larger than the typical diameter of a beverage container such that a container can be placed in the middle of the ring and tapped in a circle to begin the crystallization process. In these embodiments, the shock section and resulting crystallization are intended to be used immediately before drinking the beverage. In some embodiments, a nucleation point can be added to the beverage, or the beverage could be poured into a second container, which would provide a nucleation point and result in the poured liquid becoming slush-like. As shown inFIG.6, a lid600can be made to fit in or on the upper basin, or within the inner liner300or the outer shell100. The lid600serves typically to prevent solar radiation from reaching the beverages that are being chilled and to limit airflow between the chilled air within the upper basin and the outside environment.
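Returning to the brine discussed above, its behavior can be made concrete with a short sketch. The interpolation table below uses rounded literature values for NaCl brine added here for illustration; the description itself cites only the roughly minus 21° C. (minus 6° F.) fully saturated case.

```python
# Approximate freezing points of NaCl brine: (weight percent, deg C).
# Rounded literature values, included for illustration only.
BRINE_FP = [(0.0, 0.0), (5.0, -3.0), (10.0, -6.6),
            (15.0, -10.9), (20.0, -16.5), (23.3, -21.1)]

def brine_freezing_c(wt_pct: float) -> float:
    """Linear interpolation over the table; valid to the ~23.3 wt% eutectic."""
    for (w0, t0), (w1, t1) in zip(BRINE_FP, BRINE_FP[1:]):
        if w0 <= wt_pct <= w1:
            return t0 + (t1 - t0) * (wt_pct - w0) / (w1 - w0)
    raise ValueError("valid for 0-23.3 wt% NaCl only")

def c_to_f(c: float) -> float:
    return c * 9 / 5 + 32

sat = brine_freezing_c(23.3)
print(f"saturated brine: {sat:.1f} C = {c_to_f(sat):.1f} F")  # -21.1 C ~ -6 F
print(f"10 wt% brine:    {brine_freezing_c(10.0):.1f} C")     # still well below 0 C
```

Even a half-strength brine stays liquid well below the freezing point of a typical beverage, which is why the circulating bath can carry the bottles into the supercooled regime.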
Some aspects of the present disclosure include a system for cooling beverages, comprising: an upper insulated basin having a floor and a side surface, the upper insulated basin further comprising an interior volume sized to allow placement of cooling fluid and a plurality of beverage containers; a separator device sized to restrain the plurality of beverage containers such that during agitation, each of the plurality of beverage containers does not impact any other of the plurality of beverage containers or the side surface of the upper insulated basin and allows for cooling fluid to contact an exterior surface of each of the plurality of beverage containers; and an agitation section placeable in physical communication with the upper basin that can provide, when activated, motion to the upper insulated basin. Some aspects of the present disclosure include the above system, wherein the agitation section comprises an electric motor in rotatable communication with the upper insulated basin, said rotatable communication including counterclockwise and clockwise motion. Some aspects of the present disclosure include the above system, wherein the floor of the upper insulated basin further comprises a plurality of recesses sized to fit a lower portion of each of the plurality of beverage containers; and wherein the separator device further comprises a plurality of individual sections for each of the plurality of beverage containers and is securable such that the plurality of recesses is vertically aligned with the plurality of individual sections. Some aspects of the present disclosure include the above system, wherein the agitation section and the upper basin are connected by a removable connection. Some aspects of the present disclosure include the above system, wherein the system has an upright orientation such that the upper insulated basin is vertically above the agitation section; and the removable connection comprises: one or more extrusions; and one or more recesses sized and positioned to mate with each of the one or more extrusions, such that, when the system is in the upright orientation, the recesses and extrusions limit relative horizontal motion between the agitation section and the upper basin and do not limit the upper basin from being vertically removed from the agitation section. Some aspects of the present disclosure include the above system wherein the agitation section comprises an electric motor in rotatable communication with the upper insulated basin, said rotatable communication including counterclockwise and clockwise motion. Some aspects of the present disclosure include the above system wherein the separator device comprises eighteen sections and the upper basin comprises eighteen sections sized to fit beverage containers. Some aspects of the present disclosure include the above system further comprising a physical shock section. Some aspects of the present disclosure include a portable beverage cooler device, comprising: a basin comprising waterproof walls and fillable with at least five vertical inches of cooling fluid; and a power section in moveable communication with the basin that, when activated, provides movement to the basin. Some aspects of the present disclosure include the above device wherein the movement provided by the power section to the basin is alternating clockwise and counterclockwise rotation.
Some aspects of the present disclosure include the above device wherein the power section comprises an electric motor, the electric motor is connected by an arm to a rotating head by a plate connection, the plate connection and arm sized and positioned such that circular motion in a single direction by the electric motor results in alternating circular clockwise and counterclockwise motion by the rotating head. Some aspects of the present disclosure include the above device wherein the alternating clockwise and counterclockwise rotation is between 0.5 Hz and 2.0 Hz. Some aspects of the present disclosure include the above device wherein the alternating clockwise and counterclockwise rotation is between 30 degrees and 60 degrees. Some aspects of the present disclosure include a method of cooling beverages, comprising the steps of: filling, at least in part, a basin with cooling fluid below 32° F.; placing a first beverage container within the basin such that a surface of the first beverage container is in contact with the cooling fluid; and agitating the basin with an agitation section; wherein the agitation section comprises an electric motor in rotatable communication with the basin, said rotatable communication including counterclockwise and clockwise motion. Some aspects of the present disclosure include the above method further comprising the steps of: restraining a plurality of beverage containers within the basin adjacent to the first beverage container such that none of the plurality of beverage containers nor the first beverage container is in contact with any of the other beverage containers. Some aspects of the present disclosure include the above method wherein the plurality of beverage containers are restrained by a separator comprising circles sized and spaced to allow the cooling fluid to contact each of the plurality of beverage containers. Some aspects of the present disclosure include the above method where the agitation section comprises an electric motor linked to a rotating head; and the counterclockwise and clockwise motion being between 0.5 Hz and 2.0 Hz. Some aspects of the present disclosure include the above method further comprising the steps of: lowering the temperature of a beverage within the first beverage container to a temperature less than 32° F.; and maintaining the beverage in a fluid state. Some aspects of the present disclosure include the above method further comprising the step of: circulating the cooling fluid within the basin. Some aspects of the present disclosure include the above method wherein the clockwise and counterclockwise motion is between 30° and 60°. The reader should appreciate that the present application describes several inventions. Rather than separating those inventions into multiple isolated patent applications, applicants have grouped these inventions into a single document because their related subject matter lends itself to economies in the application process. But the distinct advantages and aspects of such inventions should not be conflated. In some cases, embodiments address all of the deficiencies noted herein, but it should be understood that the inventions are independently useful, and some embodiments address only a subset of such problems or offer other, unmentioned benefits that will be apparent to those of skill in the art reviewing the present disclosure.
Due to cost constraints, some inventions disclosed herein may not be presently claimed and may be claimed in later filings, such as continuation applications or by amending the present claims. Similarly, due to space constraints, neither the Abstract nor the Summary of the Invention sections of the present document should be taken as containing a comprehensive listing of all such inventions or all aspects of such inventions. It should be understood that the description and the drawings are not intended to limit the invention to the particular form disclosed, but to the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. Further modifications and alternative embodiments of various aspects of the invention will be apparent to those skilled in the art in view of this description. Accordingly, this description and the drawings are to be construed as illustrative only and are for the purpose of teaching those skilled in the art the general manner of carrying out the invention. It is to be understood that the forms of the invention shown and described herein are to be taken as examples of embodiments. Elements and materials may be substituted for those illustrated and described herein, parts and processes may be reversed or omitted, and certain features of the invention may be utilized independently, all as would be apparent to one skilled in the art after having the benefit of this description of the invention. Changes may be made in the elements described herein without departing from the spirit and scope of the invention as described in the following claims. Headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). The words “include”, “including”, and “includes” and the like mean including, but not limited to. As used throughout this application, the singular forms “a,” “an,” and “the” include plural referents unless the content explicitly indicates otherwise. Thus, for example, reference to “an element” or “a element” includes a combination of two or more elements, notwithstanding use of other terms and phrases for one or more elements, such as “one or more.” The term “or” is, unless indicated otherwise, non-exclusive, i.e., encompassing both “and” and “or.” Terms describing conditional relationships, e.g., “in response to X, Y,” “upon X, Y,” “if X, Y,” “when X, Y,” and the like, encompass causal relationships in which the antecedent is a necessary causal condition, the antecedent is a sufficient causal condition, or the antecedent is a contributory causal condition of the consequent, e.g., “state X occurs upon condition Y obtaining” is generic to “X occurs solely upon Y” and “X occurs upon Y and Z.” Such conditional relationships are not limited to consequences that instantly follow the antecedent obtaining, as some consequences may be delayed, and in conditional statements, antecedents are connected to their consequents, e.g., the antecedent is relevant to the likelihood of the consequent occurring.
Statements in which a plurality of attributes or functions are mapped to a plurality of objects (e.g., one or more processors performing steps A, B, C, and D) encompass both all such attributes or functions being mapped to all such objects and subsets of the attributes or functions being mapped to subsets of the objects (e.g., both all processors each performing steps A-D, and a case in which processor1performs step A, processor2performs step B and part of step C, and processor3performs part of step C and step D), unless otherwise indicated. Further, unless otherwise indicated, statements that one value or action is “based on” another condition or value encompass both instances in which the condition or value is the sole factor and instances in which the condition or value is one factor among a plurality of factors. Unless otherwise indicated, statements that “each” instance of some collection have some property should not be read to exclude cases where some otherwise identical or similar members of a larger collection do not have the property, i.e., each does not necessarily mean each and every. Limitations as to sequence of recited steps should not be read into the claims unless explicitly specified, e.g., with explicit language like “after performing X, performing Y,” in contrast to statements that might be improperly argued to imply sequence limitations, like “performing X on items, performing Y on the X'ed items,” used for purposes of making claims more readable rather than specifying sequence. Statements referring to “at least Z of A, B, and C,” and the like (e.g., “at least Z of A, B, or C”), refer to at least Z of the listed categories (A, B, and C) and do not require at least Z units in each category. Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining” or the like refer to actions or processes of a specific apparatus, such as a special purpose computer or a similar special purpose electronic processing/computing device. Features described with reference to geometric constructs, like “parallel,” “perpendicular/orthogonal,” “square,” “cylindrical,” “centerpoint” and the like, or mathematical constructs like numerical designations of uncountable nouns such as “half of a liter of water” or “two inches,” should be construed as encompassing items that substantially embody the properties of the geometric construct, e.g., reference to “parallel” surfaces encompasses substantially parallel surfaces and reference to “half” a measurement of a fluid encompasses substantially half. The permitted range of deviation from Platonic ideals of these geometric and mathematic constructs is to be determined with reference to ranges in the specification, and where such ranges are not stated, with reference to industry norms in the field of use, and where such ranges are not defined, with reference to industry norms in the field of manufacturing of the designated feature, and where such ranges are not defined, features substantially embodying a geometric construct should be construed to include those features within 15% of the defining attributes of that geometric construct. | 47,888 |
11859903 | DETAILED DESCRIPTION In the following, details are set forth to provide a more thorough explanation of the exemplary embodiments. However, it will be apparent to those skilled in the art that embodiments may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form or in a schematic view rather than in detail in order to avoid obscuring the embodiments. In addition, features of the different embodiments described hereinafter may be combined with each other, unless specifically noted otherwise. Further, equivalent or like elements or elements with equivalent or like functionality are denoted in the following description with equivalent or like reference numerals. As the same or functionally equivalent elements are given the same reference numbers in the figures, a repeated description for elements provided with the same reference numbers may be omitted. Hence, descriptions provided for elements having the same or like reference numbers are mutually exchangeable. The following detailed description is not to be taken in a limiting sense. In this regard, directional terminology, such as “top”, “bottom”, “lower,” “upper,” “below”, “above”, “front”, “behind”, “back”, “leading”, “trailing”, “horizontal,” “vertical,” etc., may be used with reference to the orientation of the figures being described. Terms including “inwardly” versus “outwardly,” “longitudinal” versus “lateral” and the like are to be interpreted relative to one another or relative to an axis of elongation, or an axis or center of rotation, as appropriate. Because parts of embodiments may be positioned in a number of different orientations, the directional terminology is used for purposes of illustration and is in no way limiting. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope defined by the claims. Terms concerning attachments, coupling and the like, such as “connected” and “interconnected,” refer to a relationship wherein structures are secured or attached to one another either directly or indirectly through intervening structures, as well as both moveable or rigid attachments or relationships, unless expressly described otherwise, and includes terms such as “directly” coupled, secured, etc. The term “operatively coupled” is such an attachment, coupling, or connection that allows the pertinent structures to operate as intended by virtue of that relationship. The term “substantially” may be used herein to account for manufacturing tolerances (e.g., within 5%) that are deemed acceptable in the industry without departing from the aspects of the embodiments described herein. In the context of an orientation, the term “substantially” means within 5 degrees of that orientation. For example, “substantially vertical” means within 5 degrees in either direction of vertical. As used herein, the term “orientation”, in reference to an orientation of a structure, is intended to mean that the orientation of the structure is defined by the structure's longest dimension. The term “fluid flow communication,” as used in the specification and claims, refers to the nature of connectivity between two or more components that enables liquids, vapors, and/or two-phase mixtures to be transported between the components in a controlled fashion (i.e., without leakage) either directly or indirectly. 
Coupling two or more components such that they are in fluid flow communication with each other can involve any suitable method known in the art, such as with the use of welds, flanged conduits, gaskets, and bolts. Two or more components may also be coupled together via other components of the system that may separate them, for example, valves, gates, or other devices that may selectively restrict or direct fluid flow. The term “conduit,” as used in the specification and claims, refers to one or more structures through which fluids can be transported between two or more components of a system. For example, conduits can include pipes, ducts, passageways, and combinations thereof that transport liquids, vapors, and/or gases. The term “natural gas”, as used in the specification and claims, means a hydrocarbon gas mixture consisting primarily of methane. The term “mixed refrigerant” (abbreviated as “MR”), as used in the specification and claims, means a fluid comprising at least two hydrocarbons and for which hydrocarbons comprise at least 80% of the overall composition of the refrigerant. The terms “bundle” and “tube bundle” are used interchangeably within this application and are intended to be synonymous. The term “compression circuit” is used herein to refer to the components and conduits in fluid communication with one another and arranged in series (hereinafter “series fluid flow communication”), beginning upstream from the first compressor or compression stage and ending downstream from the last compressor or compressor stage. The term “compression sequence” is intended to refer to the steps performed by the components and conduits that comprise the associated compression circuit. As used herein, the term “vertical orientation” is intended to mean that a structure's longest dimension is oriented vertically. As used herein, the term “horizontal orientation” is intended to mean that a structure's longest dimension is oriented horizontally. As used herein, the term “rigidly attached” is intended to mean that a structure is mechanically coupled to the other structure in a way that prevents any motion between the two structures, such as bolting or welding. Unless otherwise specified, a first element is considered to be “rigidly attached” to a second element even if the attachment is indirect (i.e., additional elements are located between the first and second elements). As used herein, the term “ambient temperature” refers to the air temperature of the environment surrounding the equipment. FIGS.1A-1EandFIG.6illustrate an exemplary method of assembling a single shell heat exchange module100(FIG.1D). In this embodiment, the heat exchange module100comprises a coil wound heat exchanger (CWHE). CWHEs are often employed for natural gas liquefaction. CWHEs typically contain helically wound tube bundles housed within an aluminum or stainless steel shell that forms a pressure vessel. For liquid natural gas (LNG) service, a CWHE may include multiple tube bundles, each having several tube circuits. Cooling might be provided using any one of a variety of refrigerants, for example, a mixed refrigerant (MR) stream having a mixture of nitrogen, methane, ethane/ethylene, propane, butanes and pentanes is a commonly used refrigerant for many base-load LNG plants. 
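As an illustrative aside, the “mixed refrigerant” definition given above (a fluid comprising at least two hydrocarbons, with hydrocarbons making up at least 80% of the overall composition) reduces to a simple check. The following Python sketch is hypothetical and not part of the patent; the component list and the example composition are assumptions for illustration only.

```python
# Hydrocarbon components commonly present in an MR stream (illustrative list).
HYDROCARBONS = {"methane", "ethane", "ethylene", "propane", "butane", "pentane"}

def is_mixed_refrigerant(composition):
    """Apply the definition above: at least two hydrocarbons, and
    hydrocarbons comprising at least 80% of the overall composition.
    composition: dict of component name -> mole fraction (summing to 1.0)."""
    hydrocarbon_fractions = [x for name, x in composition.items() if name in HYDROCARBONS]
    return len(hydrocarbon_fractions) >= 2 and sum(hydrocarbon_fractions) >= 0.80

# Hypothetical example: nitrogen plus four hydrocarbons (90% hydrocarbons) qualifies.
print(is_mixed_refrigerant(
    {"nitrogen": 0.10, "methane": 0.40, "ethane": 0.25, "propane": 0.15, "butane": 0.10}))
# -> True
```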
The refrigeration cycle employed for natural gas liquefaction might be a cascade cycle, single mixed refrigerant cycle (SMR), propane-precooled mixed refrigerant cycle (C3MR), dual mixed refrigerant cycle (DMR), nitrogen or methane expander cycles, or any other appropriate refrigeration process. The composition of the MR stream is optimized for the feed gas composition and operating conditions. Located at the top of each tube bundle within the shell is a distributor assembly that distributes the refrigerant over the tube bundle in the space between the shell and the mandrel, which provides refrigeration for the fluids flowing through the tube bundles. An example of a distributor assembly is disclosed in US Publication No. 2016/0209118, which is incorporated by reference as if fully set forth. FIGS.1A-Dillustrate a first exemplary method of assembling a heat exchange module100comprising a CWHE having two coil wound mandrels114,124. In order to form each coil wound mandrel114,124, tubing112is spirally wound about a mandrel110. In most applications, multiple circuits of tubing will be wound about the mandrel110. Each coil wound mandrel114has inlets located at or proximate to a first end110aof the mandrel110and outlets located at or proximate to a second end110bof the mandrel110. As shown inFIG.1B, two saddles136a,136bare affixed to a first (lower) portion131of the pressure vessel shell (“shell”), then the first coil wound mandrel114is telescoped (i.e., inserted) into the first portion131of the shell through an open top end of the first portion131along a longitudinal axis L of the lower portion131. Similarly, as shown inFIG.1C, two saddles136cand136dare affixed to a second (upper) portion134of the shell, then the second coil wound mandrel124is telescoped into the second portion134. After both coil wound mandrels114,124have been inserted into the first and second portions131,134of the shell, respectively, the first and second portions131,134are joined to form the pressure vessel shell132(SeeFIG.1D). After the shell132is fully formed and closed, it is transported to a plant site in a horizontal orientation (the orientation shown inFIG.1D). Upon arrival at the plant site and as shown inFIG.1E, the heat exchange module100is erected into a vertical orientation and installation is completed. In this exemplary embodiment, the module frame structure that supports the heat exchange module100at the plant site is not shown. The module frame could be assembled and affixed to the first and second portions131,134of the shell130prior to telescoping of the coil wound mandrels114,124, or the module frame could be assembled and affixed to shell130after it is erected at the plant site. A key improvement of the assembly method described in connection with the heat exchange module100shown inFIGS.1A-Eis that the saddles136a-136dare attached to each portion131,134of the shell132prior to telescoping the coil wound mandrel114,124into each portion, that those saddles136a-136dare never removed from the shell132, and that the saddles136a-136dare attached to the module frame when it is installed. In other words, the saddles136a-dthat are used to support the portions131,134of the shell132during telescoping remain part of the structural support of the CWHE throughout the construction and installation process, as well as when the CWHE is operated. 
Accordingly, the saddles136a-136dare adapted to provide support for the CWHE during transport (when it is in a horizontal orientation) and after the CWHE has been erected and installed at the plant site (in which the CWHE is in a vertical orientation). This is in contrast to conventional assembly methods, in which three different sets of saddles are used in the telescoping, transportation, and final installation stages. As shown inFIGS.1B &10, the saddles136aare configured to support both horizontal and vertical loads of the CWHE shell130. To this end, each of saddles136a-136bincludes a frame portion (see frame portions137a,137b) that is framed around (i.e., fully encircles) the shell132and a base portion (see base portions138a,138b) that makes contact with a load bearing surface (e.g., a platform, ground, and/or a module frame) and supports horizontal and vertical loads when the shell132is in a horizontal orientation. Using a single set of saddles throughout the assembly, transportation, and site installation stages provides several advantages. For example, insulation can be installed on shell132prior to transportation of the CWHE to the plant site because it won't be disturbed by removal and installation of different saddles and additional connection to the module frame. FIGS.2A-2Cillustrate the exemplary assembly method on a heat exchange module200having a different configuration. This exemplary embodiment is very similar to the method described inFIGS.1A-1E, the primary difference being that, in this exemplary embodiment, the CWHE has two separate shells (pressure vessels)230,240, each containing one coil wound mandrel214. In this embodiment, the coil wound mandrels are formed as shown inFIG.1A. As shown inFIG.2A, two saddles236a,236bare affixed to the first shell230, then the first coil wound mandrel210,214is telescoped into the first shell230through an open top end/face. When telescoping is complete, the top end of the shell230is sealed, as shown inFIG.2B. The process is repeated for the second shell240. The assembled shells230,240are transported to the plant site in the same manner as the shell130and as shown inFIG.1D. Upon arrival at the plant site and as shown inFIG.2C, each of the shells230,240is erected into a vertical orientation. Two saddles236c,236dare affixed to the second shell240. In this exemplary method, the module frame structure that supports the CWHE shells230,240at the plant site is not shown. The module frame could be assembled and affixed to the shells230,240prior to telescoping of the coil wound mandrels or the module frame could be assembled and affixed to shells230,240after the heat exchange module200is erected at the plant site. Referring toFIG.2C, because the CWHE comprises two shells230,240, the second shell240is positioned atop the first shell230. Accordingly, if the module frame for each shell230,240is installed prior to transport of the shells230,240to the plant site, the module frame of the second shell240is preferably attached to the top of the module frame for the first shell230. Once the shells230,240are installed at the plant site, external piping254a-bthat interconnects the shells230,240is installed. FIGS.3A-3Dillustrate another exemplary method of assembling a heat exchange module300having a multiple shell CWHE. 
In this embodiment, the steps of the assembly process are nearly identical to those of the embodiment shown inFIGS.2A-2C, except the module frames360a-bare constructed and connected to the saddles338a-bprior to telescoping the coil wound mandrels310,320into the respective shells330,340(seeFIGS.3A-C). Constructing the module frame360aand connecting the saddles338a-bto the module frame360aprior to telescoping enables external piping354a-c, piping supports, valves, steps, ladders, standing platforms, and insulation to be installed prior to transportation of the shells330,340to the plant site because the module frame360aprotects the shell330and provides attachment points for the elements being installed. In this embodiment, the module frame360a, the fully formed shell330, and the saddles336a-bform a heat exchange module366a. A second heat exchange module366bis formed using the same steps as the heat exchange module366a. Installation at the plant site is further simplified with this method. The first heat exchange module366ais erected into a vertical position and the first module frame is affixed to a platform361at the plant site (typically a concrete pad or footer). Then the second heat exchange module366bis erected into a vertical position and the second module frame360bis mounted to the top of the first module frame360a. Once the shells330,340are installed at the plant site, external piping354d-eand electrical connections (not shown) that interconnect the shells330,340are installed. FIG.3Cillustrates another exemplary method for forming a heat exchange module300. In this embodiment, in which the multiple shell heat exchange module300includes two pressure vessels (shells)330,340, a first module frame360aand a second module frame360bare manufactured. Each module frame360includes a plurality of beams362and trusses364to increase the overall strength of the structure. The plurality of beams362define a frame volume of the module frame360. Trusses364, if included, may also define the frame volume since they do not extend beyond the frame volume defined by the beams362. Thus, the framing of each module frame360forms a rectangular frame with a cavity (i.e., frame volume) configured to receive a corresponding pressure vessel. In other words, each module frame360serves as an exoskeleton for its pressure vessel. Multiple module frames and support modules may be manufactured in parallel for each pressure vessel. As will be described below, the first and second module frames360a,360bare configured to be rigidly connected to a corresponding one of the first and second shells330,340, thereby forming a first heat exchange module. In this embodiment, the plurality of beams362are sized and arranged such that no part of the pressure vessel shell extends outwardly beyond the frame volume. In some embodiments, a pressure vessel, including external piping and wiring, is confined within the frame volume, while in other embodiments, some external piping and wiring may extend beyond the frame volume. Thus, the module frame360itself is a frame enclosure configured to enclose a pressure vessel therein, such that the module frame360defines an outermost boundary in each dimension of the corresponding pressure vessel shell. In other words, at the very least, the corresponding pressure vessel shell does not extend beyond the module frame360in any dimension. 
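The containment condition just described reduces to a simple dimensional comparison. The sketch below is hypothetical (not from the patent) and assumes a cylindrical shell aligned with the long axis of a rectangular module frame; the function name and dimensions are illustrative only.

```python
def shell_fits_frame(shell_diameter, shell_length, frame_width, frame_depth, frame_height):
    # The shell stays inside the frame volume only if its diameter clears
    # both transverse frame dimensions and its length clears the long one.
    return (shell_diameter <= frame_width
            and shell_diameter <= frame_depth
            and shell_length <= frame_height)

# Hypothetical dimensions in meters: a 5 m diameter, 40 m long shell
# inside a 6 m x 6 m x 42 m frame is fully enclosed in every dimension.
print(shell_fits_frame(5.0, 40.0, 6.0, 6.0, 42.0))  # -> True
```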
In alternative embodiments, it may be desirable to have the shell protrude from the top of the module frame in order to facilitate connections to other elements of the plant. In addition, each of the first and second shells330,340is suspended within the frame volume of its corresponding module frame, such that the pressure vessel is supported by the module frame both when in a horizontal orientation and in a vertical orientation. In addition, each saddle336is rigidly attached to its corresponding module frame360(see e.g.,FIG.3D). Also, when the wound bundle314is being telescoped into the shell330, it may be desirable to pull the wound bundle314through the shell330using cables that extend through an opening at the bottom end of the shell330. Another exemplary embodiment is shown inFIGS.4A-4H. In this embodiment, exemplary structures used to execute the assembly methods disclosed inFIGS.1A-3Dare disclosed in greater detail.FIGS.4A-Bshow a fully assembled CWHE, which consists of two heat exchange modules466a,466b. Each heat exchange module466a,466bcomprises a shell430,440, a module frame460a,460b, two saddles436a-d, and a lug441a,441b. As will be described herein, the saddles436a-d, and the lug441a,441bconnect the shells430,440to their respective module frames460a,460band are adapted to accommodate for multiple types of loads throughout the assembly process and during operation. The structure of the second heat exchange module466bwill be described in detail herein. The described structure is nearly identical in nature to that of the first heat exchange module466a, understanding that some dimensions may be different due primarily to the fact that the shells430,440have different dimensions. One of the saddles436dis shown inFIGS.4C-E. It should be understood that the other saddle436cof the upper heat exchange module466band the saddles436a-bof the lower heat exchange module466ahave the same structural elements and only differ in dimension/proportions and location. For example, the saddles436a-bwill have larger dimensions due to the larger circumference of the shell430. The saddle436dincludes a frame portion437which encircles the shell440. The saddle436dfurther includes sliding joint plates438a-bwhich engage sliding joints467a-dand connect the saddle436dwith a cross member462of the module frame460b. Optionally, a base plate438can be provided at the connection to the cross member462to provide additional structural strength. The saddle436dfurther includes a contoured plate472, which is arcuate and complementary in shape to the outer surface of the shell440along an interface. The interface preferably overlaps at least one quarter and, more preferably, at least one third of the circumference of the shell440. The saddle436dfurther includes a plurality of ribs439, which extend linearly from the base plate438, are welded to the sliding joint plates443a-b, then continue to the contour plate472in a direction that is perpendicular to the base plate438. The saddle436dis rigidly affixed to the shell440, either with welds and/or fasteners. Each of the sliding joints467a-dincludes a plurality of bolts468(in this embodiment, two bolts per sliding joint), which extend through slots469formed in the sliding joint plates445a-b. Each slot469has a length that is significantly greater than the diameter of the bolt468that engages that slot469. The length of the slot469is preferably at least 1.5 times (more preferably at least twice) the diameter of the bolt468. 
Alternatively, an elongated slot469could be formed in one of the sliding joint plates445a-band holes that are much closer to the diameter of the bolts468could be provided. The joint plates445a-b, slots469, and bolts468combine to define a shear block. The configuration of the sliding joints467a-denables the saddle436dto move relative to the module frame460bin a direction parallel to the length of the shell440, but prevents any other substantial movement of the saddle436drelative to the module frame460b. The movement allowed by the slots469is preferably sufficient to accommodate thermal contraction and expansion of the shell440that is expected to occur when the shell440is transitioned to operating temperature. FIGS.4G-Hshow the structure of the lug441bin detail. The lug441bcomprises cross-members442a-dand beams443a-dthat “box” in the shell440. The beams443a-dare each welded to two cross-members442a-dand are either welded or bolted to the shell440. The cross-members442a-dare also preferably welded or bolted to the module frame. This structure rigidly attaches the lug441bto both the shell440and the module frame460b. The lug441band the two saddles436c-dattach the shell440to the module frame460band cooperate to accommodate multiple different types of loads during assembly, transportation, and operation of the heat exchange module400. When the shell440is being assembled and transported (see shell330,FIGS.3B-C), the saddles436c-dprovide the primary support and stability for the shell440. When the shell440is installed in a vertical orientation at the plant site (seeFIG.4A), the lug441bprovides the primary vertical support. The saddles436c-dcooperate with the lug441bto provide support against wind and seismic loads. The sliding joints467a-dand the position of each saddle436c-dallow for thermal expansion of the shell440. The preferred location of the lug441band the saddles436c-dwill depend upon a number of factors, including the geometry of the shell440, its position in the module frame460b, and the location of piping protrusions on the surface of the shell440. In general, it is preferable that the lug441bbe located within 5% (more preferably within 2%) of the center of mass of the shell440. The lower saddle436cis located between the lug441band the bottom end of the shell440and is preferably within 5% (more preferably within 2%) of the midpoint between the location of the lug441band the bottom end of the shell440. The upper saddle436dis located between the lug441band the top end of the shell440and is preferably within 5% (more preferably within 2%) of the midpoint between the location of the lug441band the top end of the shell440. By way of example, if the shell440has a length of 10 meters and a center of mass at its midpoint, the lug441bwould be preferably located within 0.5 meters, and more preferably within 0.2 meters, of the midpoint. As noted in previous embodiments, each shell430,440is contained within a perimeter defined by the cross members462a-d(seeFIG.4D) of the module frames460a-b. This provides protection for the shells430,440during construction and transport. It should be understood that a shell430,440may extend beyond an end of the frame module466a-b, such as the top of shell440, which extends beyond the upper end of its frame module466b. This is most common for a shell of a single-shell heat exchanger or the uppermost shell of a multiple-shell heat exchanger. 
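Returning to the placement rules above, the preferred lug and saddle locations follow directly from the shell geometry. The Python helper below is a hypothetical illustration, not part of the patent; it reads the 5%/2% windows as fractions of the shell length, an assumption that is consistent with the 10-meter example just given.

```python
def preferred_positions(shell_length, center_of_mass, more_preferred=False):
    """Preferred attachment locations, measured from the bottom end of the shell.

    Lug: at (or within a tolerance window of) the shell's center of mass.
    Lower saddle: near the midpoint between the bottom end and the lug.
    Upper saddle: near the midpoint between the lug and the top end.
    The tolerance is read as a fraction of shell length (an assumption).
    """
    tolerance = (0.02 if more_preferred else 0.05) * shell_length
    lug = center_of_mass
    lower_saddle = lug / 2.0
    upper_saddle = (lug + shell_length) / 2.0
    return {"lug": (lug, tolerance),
            "lower_saddle": (lower_saddle, tolerance),
            "upper_saddle": (upper_saddle, tolerance)}

# The 10 m example above: lug at 5.0 m +/- 0.5 m (or +/- 0.2 m in the more
# preferred case), with saddles near 2.5 m and 7.5 m.
print(preferred_positions(10.0, 5.0))
```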
The methods described herein allow for all internal piping and almost all external piping to the shells to be completed prior to the completion of the coil wound exchanger bundle. In addition, valves and instruments can be installed and insulated before the long lead bundles are telescoped into the shells. Additionally, this method can eliminate the need for temporary shipping saddles. In addition, the use of multiple pressure vessels including any combination thereof within the module frames can be accommodated. Furthermore, once at the operation site the final piping connections are made and the exchanger modules can be made operational. As noted above, the heat exchange modules100,200,300,400disclosed herein are most commonly used as part of a natural gas liquefaction plant (system). An exemplary natural gas liquefaction system2is shown inFIG.5. Referring toFIG.5, a feed stream1, which is preferably natural gas, is cleaned and dried by known methods in a pre-treatment section7to remove water, acid gases such as CO2 and H2S, and other contaminants such as mercury, resulting in a pre-treated feed stream3. The pre-treated feed stream3, which is essentially water free, is pre-cooled in a pre-cooling system18to produce a pre-cooled natural gas stream5and further cooled, liquefied, and/or sub-cooled in a CWHE8(which could be heat exchange module100or200) to produce an LNG stream6. The LNG stream6is typically let down in pressure by passing it through a valve or a turbine (not shown) and is then sent to LNG storage tank9. Any flash vapor produced during the pressure letdown and/or boil-off in the tank is represented by stream45, which may be used as fuel in the plant, recycled to feed, or vented. The pre-treated feed stream3is pre-cooled to a temperature below 10 degrees Celsius, preferably below about 0 degrees Celsius, and more preferably about −30 degrees Celsius. The pre-cooled natural gas stream5is liquefied to a temperature between about −150 degrees Celsius and about −70 degrees Celsius, preferably between about −145 degrees Celsius and about −100 degrees Celsius, and subsequently sub-cooled to a temperature between about −170 degrees Celsius and about −120 degrees Celsius, preferably between about −170 degrees Celsius and about −140 degrees Celsius. CWHE8is a coil wound heat exchanger with three bundles. However, any number of bundles and any exchanger type may be utilized. Refrigeration duty for the CWHE8is provided by a mixed refrigerant that is cooled and compressed in a compression system31. The warm mixed refrigerant is withdrawn from the bottom of the CWHE8at stream30, cooled and compressed, then reintroduced into the tube bundles through streams41,43. The mixed refrigerant is withdrawn, expanded, and reintroduced in the shell side of the CWHE8via streams42,44. Additional details concerning the natural gas liquefaction system can be found in US Publication No. 2018/0283774, which is incorporated herein by reference as if fully set forth. The system2shown inFIG.5is identical to the system shown in FIG. 1 of US Publication No. 2018/0283774. In view of the disclosed embodiments, the integration of the pressure containing shell (i.e., pressure vessel) into the module frame inclusive of piping outside as well as internal to the CWHE reduces manufacturing time, cost, and field work through simultaneous mechanical work and winding of the bundle. 
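As a quick illustration, the staged cooling targets recited above can be tabulated and checked against a temperature profile. This is a hypothetical sketch, not part of the patent; the stage names and the example profile are assumptions, and only the broad recited ranges are encoded.

```python
# (stage, low, high) in degrees Celsius, per the broad ranges recited above.
STAGES = [
    ("pre-cool", None, 10.0),      # preferably below 0, more preferably about -30
    ("liquefy", -150.0, -70.0),    # preferably about -145 to about -100
    ("sub-cool", -170.0, -120.0),  # preferably about -170 to about -140
]

def check_profile(outlet_temps):
    """outlet_temps: temperature after each stage, in process order."""
    results = []
    for (name, low, high), t in zip(STAGES, outlet_temps):
        ok = (low is None or t >= low) and t <= high
        results.append((name, t, ok))
    return results

# Example profile: -30 C after pre-cooling, -120 C after liquefaction,
# -150 C after sub-cooling; all three fall inside the recited windows.
for name, t, ok in check_profile([-30.0, -120.0, -150.0]):
    print(name, t, ok)
```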
Once the wound bundle is completed it can be telescoped into the pressure shell that is already disposed within the module frame for final assembly. This method allows for completion of electrical and mechanical work, including both electrical systems and piping systems (both internal and external) within the module frame prior to completion of manufacturing of the mandrel with the wound bundle. It also allows for the manufacturing of the pressure shell and assembly to be completed at different sites to optimize labor availability and cost. In addition, the use of saddles that are configured to support both horizontal and vertical loads of the pressure vessels aids in: performing the electrical and mechanical work on the pressure shell within the module frame, supporting the horizontal pressure vessel during shipping of the pressure vessel within the module frame, and supporting the erected pressure vessel within the module frame at the operation site, including during operation. FIG.6provides a flow diagram of an exemplary method of assembly, transport, and installation of a heat exchange module in accordance with the exemplary embodiments described herein. The process commences with construction of the shell (step1012) and winding of tubes around the mandrel to form a wound bundle (step1014). When the shell has been formed, the module frame, including the saddles and lug, is constructed (step1016) and attached to the shell (step1018). When the wound bundle is finished, it is telescoped (inserted) into the shell (step1022) and the top end of the shell is closed (step1024). Constructing and attaching the module frame to the shell prior to telescoping the wound bundle into the shell provides a number of benefits. The structural stability of the module frame reduces stress on the shell during telescoping, during the transition to transportation, during transportation, and during erection of the shell at the plant site. In some applications, this will enable the shell to be thinner (and therefore lighter) and less costly. For example, the bracing force used to stabilize the shell during the telescoping step1022can be applied to the module frame instead of being applied directly to the shell. Similarly, when the shell is being moved (lifted) in preparation for transportation (step1028) and erected and installed at the plant site (step1032), the moving/lifting forces can be applied to the module frame instead of being applied directly to the shell. In addition, in installations where the heat exchanger consists of multiple shells (seeFIGS.2A-Cand4A-H), the upper shell (e.g., shell440ofFIG.4A) can be installed by simply bolting its module frame to the module frame of the lower shell (e.g., shell430ofFIG.4A). Constructing and attaching the module frame to the shell prior to telescoping also enables some process steps that are required to be performed in series using conventional methods to be performed in parallel. For example, piping penetrations, piping supports, electrical connections, instrumentation, and insulation can be installed on the shell (step1020) prior to or in parallel with the telescoping step1022. Under conventional methods, these elements could not be installed until after the shell is installed at the plant site. This improvement not only shortens the overall process length, it also enables additional process steps to be performed in an indoor environment instead of being performed outdoors at a plant site. 
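The parallelism argument can be made concrete by treating the FIG.6 steps as a small dependency graph. The sketch below is a hypothetical reading of the description, not an authoritative rendering of the patent; leveling the graph shows bundle winding (step1014) proceeding in parallel with the shell, frame, and outfitting work.

```python
# step -> prerequisite steps, inferred from the description of FIG. 6.
DEPENDENCIES = {
    1012: [],            # construct the shell
    1014: [],            # wind tubes around the mandrel (parallel path)
    1016: [1012],        # construct module frame, saddles, and lug
    1018: [1016],        # attach the module frame to the shell
    1020: [1018],        # piping, electrical, instrumentation, insulation
    1022: [1014, 1018],  # telescope the wound bundle into the shell
    1024: [1022],        # close the top end of the shell
}

def levels(deps):
    """Group steps into levels; steps in one level can run in parallel."""
    done, out = set(), []
    while len(done) < len(deps):
        ready = sorted(s for s in deps if s not in done
                       and all(d in done for d in deps[s]))
        out.append(ready)
        done.update(ready)
    return out

print(levels(DEPENDENCIES))
# -> [[1012, 1014], [1016], [1018], [1020, 1022], [1024]]
```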
In addition, it enables the option to pressure test the shell (step1026) under shop conditions and before transport to the plant site (step1030). Enabling a significant portion of the piping and electrical work to be done prior to transportation reduces the steps that need to be performed at the plant site. In many cases, the only piping and electrical connections that must be performed at the plant site are those that interconnect the shell with another shell or with other elements of the plant (step1034). Furthermore, the following claims are hereby incorporated into the detailed description, where each claim may stand on its own as a separate example embodiment. While each claim may stand on its own as a separate example embodiment, it is to be noted that—although a dependent claim may refer in the claims to a specific combination with one or more other claims—other example embodiments may also include a combination of the dependent claim with the subject matter of each other dependent or independent claim. Such combinations are proposed herein unless it is stated that a specific combination is not intended. Furthermore, it is intended to include also features of a claim to any other independent claim even if this claim is not directly made dependent on the independent claim. Although various exemplary embodiments have been disclosed, it will be apparent to those skilled in the art that various changes and modifications can be made which will achieve some of the advantages of the concepts disclosed herein without departing from the spirit and scope of the invention. It will be obvious to those reasonably skilled in the art that other components performing the same functions may be suitably substituted. Thus, with regard to the various functions performed by the components or structures described above (assemblies, devices, circuits, systems, etc.), the terms (including a reference to a “means”) used to describe such components are intended to correspond, unless otherwise indicated, to any component or structure that performs the specified function of the described component (i.e., that is functionally equivalent), even if not structurally equivalent to the disclosed structure that performs the function in the exemplary implementations of the invention illustrated herein. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present invention. It should be mentioned that features explained with reference to a specific figure may be combined with features of other figures, even in those not explicitly mentioned. Such modifications to the general inventive concept are intended to be covered by the appended claims and their legal equivalents. 
11859904 | DETAILED DESCRIPTION Various non-limiting embodiments of the present disclosure will now be described to provide an overall understanding of the principles of the structure, function, and use of the apparatuses, systems, methods, and processes disclosed herein. One or more examples of these non-limiting embodiments are illustrated in the accompanying drawings. Those of ordinary skill in the art will understand that systems and methods specifically described herein and illustrated in the accompanying drawings are non-limiting embodiments. The features illustrated or described in connection with one non-limiting embodiment may be combined with the features of other non-limiting embodiments. Such modifications and variations are intended to be included within the scope of the present disclosure. Like numbers refer to like elements throughout, and base100reference numerals are used to indicate similar elements in alternative embodiments. Reference throughout the specification to “various embodiments,” “some embodiments,” “one embodiment,” “some example embodiments,” “one example embodiment,” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with any embodiment is included in at least one embodiment. Thus, appearances of the phrases “in various embodiments,” “in some embodiments,” “in one embodiment,” “some example embodiments,” “one example embodiment,” or “in an embodiment” in places throughout the specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments. Referring toFIGS.1-9, in an embodiment, a liquid removal device100may be used for removing liquids from a surface10. A liquid removal device may be used to remove water, for example, but may also be used to remove other liquids, such as hazardous liquids (e.g., fuel, oil, liquid chemicals). For instance, the liquid removal device100may be used to remove water from athletic courts, such as tennis, pickleball and/or basketball courts, race tracks, construction sites, warehouses, or pool decks and the like. It will be appreciated that the liquid removal device100may be useful in other applications. The liquid removal device100shown inFIG.1illustratively includes an absorber drum102to roll over the surface10and absorb liquids from the surface10. The absorber drum102can include a circular cross-section and comprises a tubular frame104, a liquid absorbing layer106carried by an outer radial surface of the tubular frame104, and an axle108(FIG.2) extending longitudinally and carrying the tubular frame104. Suitable materials for the tubular frame104include, without limitation, a polymer plastic, metal, PVC, or a phenolic tube. Any fluid absorbing material can be used for the liquid absorbing layer appropriate for the particular liquid to be absorbed and the surface on which the liquid exists. In various embodiments, the liquid absorbing layer comprises a foam material, a synthetic fiber material, such as polyester and nylon materials, a microfiber material, a wool material, a wool-poly blend material, or a combination thereof. The absorber drum102may have a uniform outer diameter or a variable or patterned surface as appropriate for various applications. The liquid absorbing layer106may have uniform layering or may have a variable layering as appropriate for a particular application. 
The liquid removal device100shown inFIG.1includes an extractor drum110abutting the absorber drum102. In the illustrated embodiment, the extractor drum110has a circle-shaped cross-section, and is hollow. In other embodiments, the extractor drum110can have other shapes and abut the absorber drum at any appropriate radial position. The extractor drum110may have a circular sidewall112and an axle114(FIG.2). In an embodiment, the sidewall112may extend between end walls115, which may have the same or a larger cross-sectional area than the sidewall112. The end walls115could be removable to permit cleaning of the hollow interior, which can collect small debris (e.g., dirt) during use. Suitable materials for the extractor drum110may include, without limitation, a polymer plastic material, such as polyvinyl chloride, aluminum, or another material with sufficient rigidity and water, chemical, anti-static, or fuel resistance. The extractor drum110can define an interior comprising an extractor drum fluid reservoir116. The extractor drum fluid reservoir116can be liquid tight or otherwise can prevent leakage of accumulated water below a first set of apertures118and a second set of apertures120. The first set of apertures118are in alignment and communication with the fluid reservoir116of the extractor drum110such that fluid can flow through the apertures118into the fluid reservoir116for storage. In some embodiments, a second set of apertures120may be configured to release the liquid from the interior extractor drum fluid reservoir116of the extractor drum110. While the illustrated embodiment includes two sets of apertures118,120, the technology is not so limited. The shape, size, and/or number of the apertures118,120may vary. For example, the shape, size, and/or number of apertures may vary between the sets of apertures. In an embodiment, the apertures may be arranged linearly (as shown inFIG.4) or in adjacent staggered lines. For example, each set of the plurality of apertures may include a linear orientation of apertures, spaced apart apertures, offset apertures, or any other configuration. Each of the apertures may be circular, hemispherical, polygonal, or any other suitable shape. Apertures may be openings of any shape, size, or dimension within the extractor drum and can be suitably positioned in reference to the absorber drum102. In an embodiment, each of the sets of apertures118,120may be in a different radial quadrant of the extractor drum, such as in opposite radial quadrants. As shown inFIGS.1and2, the liquid removal device100illustratively comprises a chassis122retaining the axle108of the absorber drum102and the axle114of the extractor drum110. The chassis122may include a housing124, which may include for example two side supports126,128bracketing the ends of the axles. The axles may be rotationally coupled to the side supports126,128in any suitable manner. For example, the side supports126,128may include openings130,132for the axle108of the absorber drum102and the axle114of the extractor drum110. The opening132for the extractor drum axle114can allow for relative movement between the axle114and the chassis122. For example, the opening132may be oval shaped to allow displacement of the extractor drum110in the event of debris encountering the abutted drums for passing purposes to prevent absorber drum102from rotationally locking. In an embodiment, the chassis122includes a plurality of support beams134coupling the side supports126,128. 
The outer diameter of the absorber drum102can extend a distance below the chassis122such that the liquid absorbing layer106of the absorber drum102contacts and can roll along the ground or other surface. The absorber drum102can function as a cylindrical wheel allowing repositioning of the liquid removal device100on desirable surfaces. It should be appreciated that the housing124may further enclose the device components for aesthetic or protection reasons. For example, the housing124may also include a cover (not shown) that encloses the absorber drum102and extractor drums110, as well as other components, for aesthetic and protection from natural elements, such as sun exposure damage. The housing or cover can be modified to hold additional tools, such as a broom or squeegee, can include signage such as digital signage, and can be used to support solar panels for a motorized unit. The liquid removal device100illustratively comprises a handle136coupled to the chassis122for manipulation by a user. As will be appreciated, the user pushes the liquid removal device100along the surface using the handle136keeping the absorber drum102in contact with the liquid-covered surface to remove liquid from the surface. Other forms of operation, such as motorized or autonomous operation, are contemplated. An outer surface of the sidewall112may act as a wheel to rotate the extractor drum110where operationally beneficial but not for transport or repositioning. The outer surface of the sidewalls may have, for example, a urethane coating or another coating with a higher coefficient of friction than the material of the sidewalls. In some embodiments, the extractor drum110may include wheels138. The liquid removal device100illustratively comprises four wheels138coupled to a lowermost portion of the chassis122at diagonal ends thereof for permitting the liquid removal device100to be transported over surfaces not requiring drying and to overcome obstacles such as curbs or sidewalks. In an embodiment, the liquid removal device100will operate on the absorber drum102when liquid pickup is desired, where rear wheels138can be engaged to turn 180 degrees to begin the next swath of drying. Front wheels138can be provided to overcome an obstacle such as a curb when transporting the device. It will be appreciated that the wheels138or other stabilization features can contact the ground or surface while the device is being used to absorb fluid from the surface. When removing liquid from a surface, the wheels138may be held apart from the surface during the extraction phase. To engage the wheels138, the handle136may be lifted or tilted such that the wheels138contact the surface or ground. Moving the liquid removal device100while in this lifted, wheel-engaged position will rotate the wheels138and, thus, the extractor drum110from a first, onboarding position to a second, draining position. In the first onboarding position, the first set of apertures118(FIGS.4and6) are adjacent to the absorber drum102, and the second set of apertures120are opposite the first set of apertures118and parallel to the ground. In this configuration, liquid in the extractor drum fluid reservoir116will not drain out of the second set of apertures120. In the second, draining position, the second set of apertures120can be rotated such that they are facing generally downward towards the surface or ground. In this configuration, liquid may automatically drain out of the extractor drum fluid reservoir116through the second set of apertures120due to gravity. 
Additionally, the user may lift or lower the handle136to engage either the front or rear transport wheels138to transport the device over surfaces not in need of drying. It may be of use to allow “feeler” wheels to be affixed to the liquid removal device100to assist with handle136stability during operation. In some embodiments, the extractor drum110will not include wheels, and the outer surface of the sidewalls112will not extend beyond the diameter of the extractor drum110body itself. After liquid is onboarded and draining is required, the handle136can be pulled backwards toward the user to cause the absorber drum102to rotate opposite its typical onboarding rotation. By causing the absorber drum102to rotate in the opposite direction, by virtue of the coefficient of friction between the absorber drum102liquid absorbing layer106and the extractor drum110, the extractor drum110will be rotated from the onboarding position to the drain position until the extractor rotation limiter pin140(FIG.2) engages the extractor rotation limiter drain stop142(FIG.2). In an embodiment, both ends of the drum110may include a pin140and stop142. As the extractor drum110is rotated to the drain position, liquid is then allowed to escape out of the second set of apertures120which have been rotated to face downward towards the surface or ground. To ensure proper placement of the extractor drum110, the liquid removal device100illustratively comprises at least one elastic device144(e.g., a coil spring, rubber bands, a bungee cord, or any suitable tension creating implement) coupled between the extractor drum110and the chassis122. The elastic device144can be configured to urge the absorber drum102and the extractor drum110into contact with one another with enough of a coefficient of friction to pull water from the absorber drum102into the extractor drum110. Additionally, if the absorber drum102picks up debris larger than the first set of apertures118from the surface10, such as rocks, twigs, tanbark, leaves, debris and the like, the elastic device144may permit the extractor drum110to be displaced slightly such that the debris falls away from the device or for easy manual access and removal by the user. In some embodiments, a cleaning apparatus can be envisioned that would assist with automated removal and capture of debris as the device is rolled across the surface, keeping the liquid absorbing material clean. The elastic device, in one version, can be connected to a slip bushing of low coefficient of friction material which surrounds the extractor drum axle or absorber drum axle, which is also made of a low coefficient of friction material. This configuration can function as a bearing and allows high elastic tension force to be applied to the extractor drum axle or absorber drum axle, and yet still let the extractor drum rotate to and from onboarding and draining positions. In various embodiments, the extractor drum110may be movable between a first, onboarding position (FIG.5) and a second, draining position (FIG.6). In the first, onboarding position, the first set of apertures118(axis a1) is adjacent to the absorber drum102, and the second set of apertures120(axis a2) are facing away from the surface. In other words, liquid in the extractor drum fluid reservoir116will not drain out of the second set of apertures120due to gravity in the first, onboarding position (unless the level of the liquid rises above the apertures118or120). 
For example, liquid in the lower half of the reservoir116will not drain out of the reservoir through the apertures120. In the second, draining position, the second set of apertures120are lower towards the surface relative to the first position, and liquid may automatically drain out of the extractor drum fluid reservoir116through the second set of apertures120(e.g., due to gravity). In an embodiment, the liquid removal device100may be configured to move the extractor drum110to the draining position by moving the liquid removal device100backwards on the surface. The rotation of the extractor drum110to and from onboarding and draining positions occurs easily and naturally due to the rotational direction of the absorber drum102. When the liquid removal device100is pushed forward by the handle, the extractor drum110is rotated to the onboarding position by the coefficient of friction between the absorber drum liquid absorbing layer106and the extractor drum110because of the force imparted by the elastic devices pressing the extractor drum into the liquid absorbing layer106, and until the extractor rotation limiter pin reaches the extractor rotation limiter onboarding stop block. When the liquid removal device is pulled backwards by the handle, the extractor drum110rotates to the drain position due to the coefficient of friction between the absorber drum liquid absorbing layer106and the extractor drum110because of the force imparted by the elastic devices pressing the extractor drum into the liquid absorbing layer106, and until the extractor rotation limiter pin reaches the extractor rotation limiter drain stop block. For example, when moving the liquid removal device100backwards, the extractor drum110may be rotated by the friction between it and the absorber drum102such that the liquid drains out of the extractor drum fluid reservoir116. The distance the liquid removal device100travels backwards to move the extractor drum110to the draining position may vary. In various embodiments, the distance may be in a range of 0.1 to 20 inches, 1 to 10 inches, 1 to 5 inches, or 5 to 10 inches. In some embodiments, the liquid removal device100may include a selectively engageable safety mechanism to prevent unintentionally moving the extractor drum110to the draining position. For example, a trigger for the safety mechanism may be positioned on the handle. When engaged, the safety mechanism may prevent backward movement of the liquid removal device100from rotating the extractor drum110. When disengaged, the safety mechanism may allow backward movement of the liquid removal device100to rotate the extractor drum110. The user may disengage the safety mechanism when ready to drain the liquid from the extractor drum110. In use, when pushing the liquid removal device100along a surface to remove liquid, the absorber drum102rotates to pick up fluid from the surface. In one embodiment, the extractor drum fluid reservoir116remains rotationally stationary and accepts the fluid from the absorber drum102via the first set of apertures118. The extractor drum fluid reservoir116can be prevented from rotating by the extractor rotation limiter pin140engaged with the stop142. At least a portion or all of the first set of apertures118can abut or otherwise engage the rotating absorber drum102at the tangent or point of engagement between the absorber drum and the extractor drum. 
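The backward travel needed to swing the extractor drum can be estimated from rolling geometry. Because the absorber drum rolls on the surface without slipping and drives the extractor drum by friction at their contact, equal arc lengths pass at both interfaces, so the extractor rotation depends only on the travel distance and the extractor radius. The sketch below is hypothetical; the radius and swing angle are assumptions, not values from the patent.

```python
import math

def backward_travel(extractor_radius_in, swing_degrees):
    # Ground travel d turns the absorber by d / r_absorber; the no-slip
    # contact passes the same arc length d to the extractor surface, so
    # the extractor turns d / r_extractor. Solve for d at a given swing.
    return math.radians(swing_degrees) * extractor_radius_in

# Hypothetical 2-inch extractor radius and a 90-degree swing between the
# onboarding and draining positions: about 3.1 inches of backward travel,
# which sits inside the 1 to 10 inch range mentioned above.
print(round(backward_travel(2.0, 90.0), 1))  # -> 3.1
```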
As the absorber drum102rotates, the liquid absorbing layer106can be urged against the outer surface of the extractor drum110by force exerted by the elastic device144. The force exerted by the elastic device144presses or squeezes the liquid absorbing layer106coaxing the liquid out of the liquid absorbing layer106and into the properly aligned first set of apertures118such that the liquid then collects in the extractor drum fluid reservoir116. The location of the interface between the absorber drum102and the extractor drum110may vary. For example, in the illustrated embodiment, the extractor drum110abuts the absorber drum102at a front radial position. It may be appreciable that the location of the interface may be adjusted by use case where operationally beneficial. To drain liquid from the extractor drum fluid reservoir116, the user pulls the handle136backward to rotate the absorber drum102clockwise and opposite that of the typical onboarding rotation direction. The action of rotating the absorber drum102backwards can correspondingly rotate the extractor drum110in a counter-clockwise direction until it reaches a rotational stop caused by the extractor rotation limiter pin140engaging extractor rotation limiter drain stop142. The extractor drum110, rotating opposite of the absorber drum102, can also move the second set of apertures120such that they are rotated to point towards the ground. When the second set of apertures120are so situated, the liquid stored in the extractor drum fluid reservoir116is allowed to escape and to be drained by way of gravity and liquid momentum. After draining is complete, the handle136is then pushed in the forward direction away from the user, and the system will return to the water onboarding configuration as described herein. It will be appreciated that a safety mechanism, as described herein, may be associated with the extractor drum110or liquid removal device100to prevent the draining of fluid in the reverse direction until desired by the operator. Depending on the application and material used for the liquid absorbing layer, the liquid absorbing layer may stretch during use. For example, the extractor drum pressing against the absorber drum may cause the liquid absorbing layer to stretch and become loose in places. In some embodiments, the liquid removal device may be configured to maintain tension on the liquid absorbing layer during use. Referring toFIGS.3and7, the liquid absorbing layer106may be wound on the absorber drum102. The absorber drum102may include a dynamic tensioning mechanism for maintaining tension on and preventing loosening of the liquid absorbing layer106. As shown inFIG.7, the dynamic tensioning mechanism may include a spring or other tensioning device, as described further below. In the illustrated embodiment, a first end106aof the liquid absorbing layer106may be anchored to a first end102aof the absorber drum102, and a second end106bof the liquid absorbing layer106may be coupled to a second end102bof the absorber drum102under tension. The absorber drum102may be configured so that the connections of the first and second ends106a,106bof the liquid absorbing layer106are radially inward of the outer radial surface. In such a configuration, the first and second ends106a,106band the components connecting them to the absorber drum102do not contact the surface (e.g., a court) during operation of the liquid removal device100. For example, the sidewall112of the absorber drum102may include cutouts146a,146b. 
The first and second end walls115a,115bof the absorber drum102may include corresponding cutouts148a,148bthat open to the cutouts146a,146b(FIGS.3and7). The first end106aof the liquid absorbing layer106and the first end102aof the absorber drum102may include corresponding connectors. For example, the first end106aof the liquid absorbing layer106may include a grommet that may be removably coupled to a pin positioned in the cutout148a. The first end106aof the liquid absorbing layer106may extend through the cutout146aand into the cutout148ato be coupled to the pin. Referring toFIG.7, in an embodiment, the second end106bof the liquid absorbing layer106may be removably coupled to the second end102bof the absorber drum102using a spring150. The spring150dynamically tensions the liquid absorbing layer106. The spring150may be removably coupled to at least one of the second end106bof the liquid absorbing layer106and the absorber drum102. In an example, the second end106bof the liquid absorbing layer106may include a connector, such as a grommet, that may be selectively coupled to a first end150aof the spring150. The second end150bof the spring150may be coupled to the absorber drum102. For example, the end wall115bof the absorber drum102may include a connection point, such as a hook152, that may be selectively coupled to a second end150bof the spring150. The liquid absorbing layer106is wound or wrapped on the absorber drum102such that pressure applied by the extractor drum110is distributed towards the second end106bof the liquid absorbing layer106. In other words, if the material stretches, it stretches in a direction towards the spring150. The spring150, which applies tension to the second end106bof the liquid absorbing layer106, is able to compensate if the material stretches. In use, when rolling the liquid removal device100along a surface to remove liquid, the absorber drum102rotates while the extractor drum110is rotationally stationary. As the absorber drum102rotates, the liquid absorbing layer106is pressed against the extractor drum110. If the liquid absorbing layer106stretches, the rotary motion “pushes” the material in a corkscrew motion from the anchored end to the tensioned end. Because the second end of the material is under dynamic tension, the stretching of the material does not result in a loosening of the material. In some embodiments, the tension or strain of the coil spring144, or other elastic device, may be adjustable. Having an adjustable tension may allow for separating the absorber drum102and the extractor drum110without uncoupling the coil spring144. With reference toFIGS.8and9, in another embodiment, the liquid removal device100includes an adjustable bracket154for adjusting tension on the coil spring144. The bracket154is movably coupled to the chassis122and defines a handle156. The bracket154may be coupled to the coil spring144. For example, the coil spring144may be removably coupled to the bracket154using an eyelet hook158. The bracket154may have a cutout160. The cutout160may define a channel162opening to one or more indentations or notches164configured to receive a pin or fastener, such as bolt166. In an embodiment, the bolt166may couple the chassis122and the bracket154. The bracket154may have at least two locked positions relative to the chassis122. Each indentation or notch164defines a position for the bracket154. 
For example, when the bracket154is in a first locked position, the coil spring144may be tensioned such that the extractor drum110is in contact with the absorber drum102. When the bracket154is in a second locked position, the coil spring144may have a lower tension such that the extractor drum110is spaced apart from the absorber drum102. To move between the locked positions, the bracket154may be moved such that the bolt166slides out of one of the notches164, moves forward or backward in the channel162, and moves into another of the notches164. The channel162may extend beyond the notches164and may allow for the bracket154to be moved to a configuration in which the coil spring144is not under tension. There may be more than two locked positions. For example, multiple locked positions may allow for the extractor drum110to be pressed against the absorber drum102at different tensions. Adjusting the force that the extractor drum110exerts on the absorber drum102results in a different amount of force required to operate the liquid removal device100. Thus, the force required to operate the liquid removal device100may be adjusted based on the application or user preferences. In some embodiments, the user may move one or both of the extractor drum110and the absorber drum102to be in a spaced apart configuration to allow a user to remove the liquid absorbing layer106(e.g., to replace old material). For example, the user may use the adjustable bracket154to move the extractor drum110away from the absorber drum102. The liquid absorbing layer106may then be detached and unspooled from the absorber drum102. A new liquid absorbing layer106may then be installed on the absorber drum102. Advantageously, the liquid removal devices disclosed herein provide an effective and robust approach to liquid removal. It will be appreciated that the width of the liquid removal devices described herein may vary. In some embodiments, the width of the liquid removal device may be in a range from 1 ft. to 10 ft., from 2 ft. to 4 ft., from 6 in. to 12 in., or have any other suitable dimensions. It is contemplated that liquid removal devices described herein may be used to apply or deliver a fluid or material in addition to, or separate from, a fluid absorbing function. For example, devices can be modified to deliver a surface coating such as a top coat, sealer, or varnish. Liquid removal devices may be manually pushed, motorized, remote controlled, autonomous, or can be capable of operating in any of these modes. In various embodiments disclosed herein, a single component can be replaced by multiple components and multiple components can be replaced by a single component to perform a given function or functions. Except where such substitution would not be operative, such substitution is within the intended scope of the embodiments. The foregoing description of embodiments and examples has been presented for purposes of illustration and description. It is not intended to be exhaustive or limiting to the forms described. Numerous modifications are possible in light of the above teachings. Some of those modifications have been discussed, and others will be understood by those skilled in the art. The embodiments were chosen and described in order to best illustrate principles of various embodiments as are suited to particular uses contemplated. The scope is, of course, not limited to the examples set forth herein, but can be employed in any number of applications and equivalent devices by those of ordinary skill in the art. 
Rather, it is hereby intended that the scope of the invention be defined by the claims appended hereto. | 27,980 |
11859905 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS FIG.1shows a device according to the invention for the production of expanded granulated material2from sand-grain-shaped mineral material with an expanding agent. In the exemplary embodiment shown, said material is perlite sand1in which water is bound (so-called water of crystallization) and acts as an expanding agent. The device comprises a furnace3having a substantially vertical furnace shaft4having an upper end5and a lower end6. A conveying section7extends between the two ends5,6, indicated inFIG.1by a dash-dotted line (inFIG.2by a dashed line), wherein the dash-dotted line inFIG.1(inFIG.2the dashed line) also marks a radial center16of the furnace shaft4. The conveying section7leads through a plurality of heating zones8arranged separately from one another in a conveying direction12(indicated by horizontal dotted lines inFIG.1), wherein the heating zones8each have at least one heating element9which can be controlled independently of one another in order to heat the perlite sand1, in particular, to a critical temperature and to expand the perlite sand grains1. In the exemplary embodiments shown, the heating elements9are electrically operated and can be controlled by a regulation and control unit (not shown). The device further comprises feeding means which, in the exemplary embodiment ofFIG.1, include a valve10for regulating the feed of the perlite sand1as well as process air21and are adapted to feed the unexpanded perlite sand1(together with process air21) at the upper end5of the furnace shaft4in the direction of the lower end6of the furnace shaft4into the furnace shaft4in order to expand the perlite sand1, as viewed in the conveying direction12, in the last half, preferably in the last third, of the conveying section7. This means that in the exemplary embodiment ofFIG.1, the perlite sand1is conveyed primarily by means of gravity from top to bottom along the conveying section7, with the process air21that may have been blown in or sucked in with the perlite sand1supporting the falling movement of the perlite sand1. The process air21flowing through the furnace shaft4from top to bottom is thereby heated. In principle, this can lead to an increase in the flow velocity in the furnace shaft4, which can shorten the residence time of all perlite sand particles1in the furnace shaft4. To avoid this and to compensate for the increase in the flow velocity of the process air or to keep the flow velocity approximately constant, the furnace shaft4in the exemplary embodiment ofFIG.1is designed to be wider at the bottom than at the top. This means that the cross-section of the furnace shaft4normal to the conveying direction12increases from the upper end5to the lower end6. It should be emphasized, however, that even if the perlite sand1is fed at the upper end5of the furnace shaft4, furnace shafts4with a constant or approximately constant cross-section are of course also possible. The cross-section of the furnace shaft4is bounded by an inner wall14of the furnace shaft4, which in the exemplary embodiments shown is formed by at least one limiting element made of high-temperature steel. The furnace shaft4or the furnace3is thermally insulated to the outside by means of a thermal insulation24. Temperature sensors23are arranged at vertically spaced positions22, with at least one temperature sensor23being located in each heating zone8.
In the exemplary embodiment shown inFIG.1, the temperature of the perlite sand1is thus determined via the temperature prevailing in the respective heating zone8. Heating elements9and temperature sensors23are connected to the regulation and control unit (not shown), which determines the position or region25in the furnace shaft4at which or in which the expansion of the perlite sand grains1takes place, based on the temperature data. At this position or in this region25, a significant reduction in temperature, a temperature drop of, for example, over 100° C., of the expanded perlite sand1takes place. This temperature drop is a consequence of the isenthalpic expansion process of the perlite sand1, wherein the expansion process is brought about by a softening of the surface of the perlite sand grains1followed by an expansion process due to the water vapor or water vapor pressure forming in the perlite sand grains1. For example, the perlite sand1may have a temperature of about 780° C. immediately before its expansion and of only about 590° C. immediately after the isenthalpic expansion process, i.e., a temperature drop of 190° C. occurs in this example, and depending on the material, the temperature drop is typically at least 20° C., preferably at least 100° C. By means of the regulation and control unit (not shown), those heating elements9which, as viewed in the conveying direction12, are located after the position or region25of the temperature drop can be specifically or automatically regulated so that a desired energy input can take place. It should be noted that the aforementioned drop in temperature does not necessarily show up as a drop in temperature in this automatic regulation, but optionally as a range in which more energy is required to maintain the temperature, so that the use of the temperature sensors23to detect the drop in temperature can also be dispensed with. In particular, these heating elements9can be regulated in such a way that no further or repeated increase in the temperature of the expanded perlite sand or granulated material2takes place or that it is ensured that the expanded granulated material2is of closed-cell configuration. In the exemplary embodiment ofFIG.1, the expanded granulated material2is discharged at the lower end6and fed via a water-cooled chute20to an air entrainment/suction flow26operating with cool air27. The cool air27or the cool air28with expanded perlite sand2is sucked in, for example, by a vacuum pump or a fan (not shown). The device according to the invention has at least one directing element13, which is arranged at least in sections in the furnace shaft4, wherein the directing element13forms a gap15with the inner wall14of the furnace shaft4at least in the region of one of the two ends5,6of the furnace shaft4, wherein the at least one feeding means is set up for feeding the unexpanded perlite sand1into the gap15. In the exemplary embodiment ofFIG.1, the directing element13is arranged accordingly in the region of the upper end5. The valve10and the process air21are set up in such a way that the perlite sand1is fed to the gap15in the region of the upper end5. This means that the perlite sand1enters the furnace shaft4when it enters the gap15. It should be emphasized that in the exemplary embodiment ofFIG.1, the perlite sand1is introduced at the upper end5over the entire gap15; inFIG.1, for reasons of clarity, only the perlite sand1introduced into the gap15on the left side of the picture is shown.
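The zone-wise localization and regulation just described reduces to a simple comparison over the per-zone temperature readings. The following Python sketch illustrates it under stated assumptions: the function names, the list-based zone model, the critical temperature and the safety margin are illustrative only (the text gives a typical minimum drop of 20° C. and the ~780° C./~590° C. example, but discloses no implementation).

```python
# Hedged sketch of the regulation-and-control logic described above.
# All identifiers are hypothetical; thresholds follow the figures in the text.

CRITICAL_TEMP_C = 700.0    # assumed critical temperature, for illustration only
MARGIN_C = 10.0            # assumed margin to stay below the critical temperature
DROP_THRESHOLD_C = 20.0    # "typically at least 20° C." per the text

def locate_expansion_zone(zone_temps_c):
    """Return the index of the first heating zone whose reading drops by at
    least DROP_THRESHOLD_C relative to the preceding zone, else None."""
    for i in range(1, len(zone_temps_c)):
        if zone_temps_c[i - 1] - zone_temps_c[i] >= DROP_THRESHOLD_C:
            return i
    return None

def regulate_downstream_heaters(zone_temps_c, setpoints_c):
    """Cap the setpoints of zones after the detected expansion region so the
    expanded granulated material is not re-heated to the critical temperature."""
    drop_zone = locate_expansion_zone(zone_temps_c)
    if drop_zone is None:
        return setpoints_c
    return [min(sp, CRITICAL_TEMP_C - MARGIN_C) if i >= drop_zone else sp
            for i, sp in enumerate(setpoints_c)]

# The text's example: ~780° C. just before expansion, ~590° C. just after.
print(locate_expansion_zone([520.0, 650.0, 780.0, 590.0, 600.0]))  # -> 3
```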
The directing element13shields the perlite sand1from an upward flow of heated air/gases (“chimney flow”) which forms in the region of the radial center16of the furnace shaft4. This prevents the chimney flow from obstructing the fall of very fine granulated material with diameters smaller than 100 μm, in particular smaller than 75 μm, and from preventing it from expanding as desired. The latter is caused in particular by the fact that without a directing element13the perlite sand particles1—after their cooling due to the isenthalpic expansion process—are heated up again. This causes the perlite sand particles1to soften again, but the perlite sand particles1can no longer cool isenthalpically by changing their shape, thus creating an increased risk of agglomeration on the inner wall14. Said chimney flow can easily escape upwards from the furnace shaft4through a free space19. This free space19is arranged or formed, along the entire extension of the directing element13parallel to the conveying direction12, between the directing element13and the radial center16of the furnace shaft4. Furthermore, the directing element13guides the perlite sand1in a targeted manner close along the inner wall14, resulting in a uniform heating of all perlite sand grains1in terms of time and location, which in turn results in a uniform expansion result. In the exemplary embodiment ofFIG.1, the directing element13extends in the furnace shaft4from the upper end5to approximately the end of the first third of the conveying section7. However, the uniformity of the movement, in particular the direction of movement, of the perlite sand grains1in the gap15brought about by the directing element13also persists a little way beyond the end of the directing element13. In the exemplary embodiments shown, the directing element13is made of high-temperature steel and reflects the heat radiation caused by the heating elements9correspondingly well. This means that the directing element13additionally acts as a passive heater for the perlite sand1located between the inner wall14and the directing element13. In the exemplary embodiment ofFIG.1, the directing element13is arranged completely in the furnace shaft4and fastened therein accordingly, with detachable fastening means (not shown) being provided for fastening in order to be able to remove the directing element13from the furnace shaft4and insert it again as required. Apart from regions along the conveying direction12where said fastening means are provided, the gap15extends completely around the radial center16of the furnace shaft4. As can be seen from the sectional view ofFIG.1, the shape of the directing element13is adapted to the cross-section of the furnace shaft4in that the directing element13extends basically parallel to the inner wall14. Accordingly, the gap15has a gap width17which, in the illustrated exemplary embodiment, varies only slightly over the entire extension of the directing element13in the conveying direction12and is preferably approximately constant. It should be noted, however, that embodiment variants are also possible in which the gap width17varies by at least 50% in the conveying direction12in order to selectively adjust the residence time of the perlite sand grains1in different regions along the conveying section7. Furthermore, in the exemplary embodiment ofFIG.1, the gap width17also hardly varies in the circumferential direction18and is preferably approximately constant.
This applies to all positions or regions along the conveying section7over which the directing element13extends, in particular in the region of the feed of the perlite sand1, i.e. in the region of the upper end5in the exemplary embodiment ofFIG.1. It should be noted, however, that embodiment variants are also possible in which the gap width17varies significantly in the circumferential direction18, although typically clearly less than in the conveying direction12, e.g. at most 5%. The most obvious difference between the embodiment variant shown inFIG.2and that shown inFIG.1is the feeding of the perlite sand1to be expanded (not shown separately inFIG.2for reasons of clarity) from below into the furnace shaft4, with the conveying direction12facing upward from below. Accordingly, the at least one directing element13is arranged in the furnace shaft4at least in the region of the lower end6of the furnace shaft4and forms the gap15there together with the inner wall14. In this case, the at least one feeding means comprises a suction nozzle11connected upstream of the furnace shaft4and a fan34and is set up to suck the unexpanded perlite sand1together with a quantity of air at the lower end6of the furnace shaft4in the direction of the upper end5of the furnace shaft4into the furnace shaft4in such a way that the perlite sand1is fed into the gap15. The quantity of air thereby forms an air flow flowing from bottom to top, by means of which the perlite sand1is conveyed from bottom to top along the conveying section7in order to be expanded in the upper half, preferably in the uppermost third, of the conveying section7. In the exemplary embodiment ofFIG.2, the feeding means further comprise a diffuser30downstream of the suction nozzle11, which adjoins the lower end6of the furnace shaft4. The diffuser30may help to disperse the perlite sand1in the air volume prior to the expansion process, in order to achieve or support a uniform distribution of the perlite sand1in the air flow. The suction nozzle11is supplied with perlite sand1via a vibrating chute35, with the perlite sand1being fed to the vibrating chute35in metered quantities from a supply container29via a metering screw33. In addition, air is also drawn in via the suction nozzle11(by means of the fan34), forming a suction air flow31. The air flow or the suction air flow31can be adjusted by suitable selection or design of the suction nozzle11and/or by selection of a suitable suction speed (by means of the fan34). The latter can in principle also be automated by means of the regulation and control unit (not shown). In the exemplary embodiment ofFIG.2, the directing element13extends over approximately the first quarter of the conveying section7and can, however, also extend considerably further, in particular over the entire conveying section7in the furnace shaft4. The latter is indicated inFIG.2by the dashed-dotted lines. The directing element13is also basically adapted to the cross-sectional shape of the furnace shaft4in the exemplary embodiment ofFIG.2. As in the exemplary embodiment ofFIG.1, the gap width17in the exemplary embodiment ofFIG.2hardly varies in the circumferential direction18and is preferably essentially constant. This applies to all positions or regions along the conveying section7over which the directing element13extends, in particular in the region of the feed of the perlite sand1, i.e. in the case of the exemplary embodiment ofFIG.2in the region of the lower end6.
However, it should also be noted in this case that embodiment variants are also possible in which the gap width17varies significantly in the circumferential direction18, although typically clearly less than in the conveying direction12, e.g. at most by 5%. AlthoughFIG.2does not show a variation of the gap width17in the conveying direction12, the gap width17can, in the exemplary embodiment shown inFIG.2, also vary much more along the conveying section7than in the circumferential direction18—for example by at least 50%—in order to specifically adjust the residence time of the perlite sand grains1in different regions along the conveying section7. In both embodiment variants shown, however, the gap width17is at most 10 cm. In the exemplary embodiment ofFIG.2, the directing element13is fixed in the diffuser30, preferably removably. Accordingly, as viewed along the conveying direction12, there is an extension of the gap15completely around the radial center16. In the exemplary embodiment ofFIG.2, an absolute temperature measurement is basically carried out (temperature sensors are not shown, however, for reasons of clarity). In addition, the power consumption of the heating elements9is determined or it is determined how this power consumption changes along the conveying section7. Immediately after the expansion process and the associated drop in temperature, the temperature difference between the expanded granulated material2(not shown separately inFIG.2for reasons of clarity) and the heating elements9is significantly greater than between the perlite sand1and the heating elements9immediately before the expansion process. Accordingly, the heat flow also increases, provided that the measured temperature is kept constant. This means that the observed change in heat flow or power consumption of the heating elements9from one heating zone8to the next is an increase, whereas due to the successive heating of the perlite sand1before the expansion process, the change in power consumption along the conveying section7is a decrease. For regulation purposes, in particular for regulation along the conveying section7remaining after the temperature drop, the heating elements9are connected to the regulation and control unit (not shown) so that, for example, an increase in the material temperature along the remaining conveying section7to or above the critical temperature can be specifically prevented or enabled. The discharge of the expanded granulated material2from the furnace shaft4takes place (together with heated air) via a collecting section32adjoining the upper end5of the furnace shaft4. By means of an air entrainment/suction flow26, which operates with cool air27, the expanded granulated material2is conveyed further. The cool air27or the cool air28with expanded perlite sand2is thereby sucked in, as already mentioned, for example by a vacuum pump or a fan (not shown).
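The power-consumption variant described above can be sketched the same way: before the expansion region the zone-to-zone heater power falls, since the sand needs progressively less heating, while at the expansion region it rises, because the cooled granulate draws more heat at a constant measured temperature. A minimal Python illustration, with hypothetical names and made-up wattages:

```python
def locate_expansion_zone_by_power(zone_power_w):
    """Return the index of the first zone where the zone-to-zone change in
    heater power consumption switches from a decrease to an increase."""
    for i in range(1, len(zone_power_w)):
        if zone_power_w[i] > zone_power_w[i - 1]:
            return i
    return None

# Power falls along the conveying section until the expansion region is reached.
print(locate_expansion_zone_by_power([9.0e3, 7.5e3, 6.2e3, 8.8e3, 8.6e3]))  # -> 3
```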
LIST OF REFERENCE SIGNS
1 Perlite sand
2 Expanded granulated material
3 Furnace
4 Furnace shaft
5 Upper end of the furnace shaft
6 Lower end of the furnace shaft
7 Conveying section
8 Heating zone
9 Heating element
10 Valve
11 Suction nozzle
12 Conveying direction
13 Directing element
14 Inner wall of the furnace shaft
15 Gap
16 Radial center of the furnace shaft
17 Gap width
18 Circumferential direction
19 Free space
20 Water-cooled chute
21 Process air
22 Position for temperature measurement
23 Temperature sensor
24 Thermal insulation
25 Position or range of the temperature drop
26 Air entrainment/suction flow
27 Cool air of air entrainment
28 Cool air with expanded perlite sand or expanded granulated material
29 Supply container
30 Diffuser
31 Suction air flow
32 Collecting section
33 Metering screw
34 Fan
35 Vibrating chute | 17,583 |
11859906 | DETAILED DESCRIPTION OF THE INVENTION During primary and secondary operations which involve the generation of molten aluminium in a furnace, a slag or dross forms on the surface of the molten metal. The dross contains various waste components arising from the processing of the feed material. As well as waste components, the dross also includes significant aluminium content. As a result, when the aluminium stream and dross stream are separated following handling in the furnace, the dross stream is often fed to a dross press. The dross press provides a container unit for the dross and a press head which is forced into the dross. Such an arrangement is shown in GB2314090. The mechanical force applied to the dross forces the still molten aluminium from the dross and out of the container unit and hence recovers that aluminium as a further aluminium stream. Existing designs face issues with the limits on the rate at which they cool the dross. The rate of cooling is important in obtaining high dross processing throughputs and in maximising the amount of useful metal recovered. The existing designs also have limits on the effectiveness and versatility of their control systems. Cooling InFIG.1, a plan view from above of one embodiment of a press head1is provided. The press head1includes a substantially planar upper surface3and a central tube5at which location the actuator, not shown, for moving the press head up and down is provided. A pin7which passes through the tube5allows for the press head1to pivot relative to the actuator. Four braces9extend radially from the tube5to stiffen the press head1. The press head1is provided with two loops11which can be used to lift the press head1during assembly, maintenance and the like. Also provided on the upper surface3are mounts13for connection to air pipes, not shown. The air pipes are connected to the outside of the dross press. One of the mounts13provides an inlet15into the press head1. The other mount13provides an outlet17from the press head1. The inlet15and outlet17are used to flow cooling air through the inside of the press head1. Because the air is taken from the environments around the dross press, is fed through an air pipe to the press head1, is fed out of the press head1along an air pipe and is returned to the environments of the dross press, that air does not come into contact with the air and dust inside the dross press. As a result, the risk of particulate material entering the press head1and interrupting flow with time is avoided. Additionally, the air leaving the press head1is not contaminated with dust and/or off gases from the dross press and so needs no or minimal treatment before it is returned to the environments of the dross press. Any air flow from the environments into the dross press enclosure which comes into contact with such dust and off gases is handled separately through an outlet on the enclosure and appropriate dust and/or off gas treatment units. In the cross-sectional views ofFIG.2andFIG.3, the air flow within the press head1is illustrated. The press head1has a lower surface19which is dome shaped. The lower surface19has protrusions21which extend therefrom. In this embodiment these form an X-shaped pair of protrusions21in the press head1. Other protrusion configurations are possible. The press head1has a hollow23inside. The hollow23includes an upper surface25, exemplified by an upper baffle which causes the air entering through inlet15to flow downwards towards the inside of the wall27of the lower surface19.
The upper baffle25is provided by a 3 mm thick plate, and includes an inner surface26. The lower surface19is the part of the press head1which receives the most heat in use and so the cooling air is hence directed towards the hottest location so as to have maximum effect. The wall27has a series of baffles in the form of ribs31provided on its inner surface38. The ribs31cause the air to radiate outward from the inlet15and so reach all parts of the wall27in the half of the press head1. The ribs are 25 mm high off the inner surface38and have a 25 mm gap between their top surface and the bottom of the upper baffle25. A baffle33divides the inlet half from the outlet half, but has openings35at either end to allow air flow into the outlet half. Further baffles under the planar surface3are used to control the flow back to the outlet17and ensure that the air contacts all parts of the inner surface38in the outlet half too. The overall effect of the baffle structure is to direct the air flow widely and to the hottest parts so as to give best cooling of the press head1. InFIG.4, a plan view from above of another embodiment of a press head101is provided. The press head101includes a substantially planar upper surface103and two tubes105at which locations the actuator, not shown, for moving the press head up and down are provided. A pin107which passes through the tube105is provided in each case to allow for the press head101to pivot relative to the actuator. Three braces109extend radially from each tube105, together with a linking centre line rib110so as to stiffen the press head101. The press head101is provided with two pairs of loops111which can be used to lift the press head101during assembly, maintenance and the like. Also provided on the upper surface103are mounts113for connection to air pipes, not shown. The air pipes are connected to the outside of the dross press. One of the mounts113provides an inlet115into the press head101. The other mount113provides an outlet117from the press head101. The inlet115and outlet117are used to flow cooling air through the inside of the press head101. In the cross-sectional views ofFIG.5andFIG.6, the air flow within the press head101is illustrated. The press head101has a lower surface119which is dome shaped. The lower surface119has protrusions121which extend therefrom. In this embodiment the protrusions121are provided with one extending from end to end along the long axis of the press head101, with two protrusions121at 90° thereto extending across the narrower width of the press head101. Other protrusion configurations are possible. The press head101has a hollow123inside. The hollow123includes an upper baffle125which causes the air entering through inlet115to flow downwards towards the inside of the wall127of the lower surface119. The lower surface119is the part of the press head101which receives the most heat in use and so the cooling air is hence directed towards the hottest location so as to have maximum effect. The wall127has a series of baffles in the form of ribs131provided on its inner surface133. The ribs131cause the air to radiate outward from the inlet115and so reach all parts of the wall127in the half of the press head101. A baffle138divides the inlet half from the outlet half, but has openings135at either end to allow air flow into the outlet half. Further baffles under the planar surface103are used to control the flow back to the outlet117and ensure that the air contacts all parts of the inner surface133in the outlet half too.
The overall effect of the baffle structure is to direct the air flow widely and to the hottest parts so as to give best cooling of the press head101. Load, Process and Unload Control An example of a dross press700is shown inFIG.7. It includes a side wall702a, further side wall702b, base704and roof706. The rear wall and front wall708complete the structure. In the roof706is an outlet710for air passing through the inside of the dross press700. The outlet710leads to dust and/or off gas treatment units, not shown. Also in the roof are openings through which the actuators712can act upon the press head provided within the dross press700. The front wall708includes a door714. This slides up and down within the front wall708. As shown inFIG.7, the door714is in the raised position. The sequence of operations for the dross press700is as follows, according to this embodiment. Firstly, a container unit30is provided. The container unit30provides a support structure32for a container34. The container34is deepest in the middle and shallower at the periphery. The container34has an oval profile in plan, but other profiles can be used. The support structure32includes a pair of recesses38a,38bwhich are configured to receive the lifting fork of a forklift truck, not shown. Other forms of lifting vehicle and/or apparatus could be used, such as cranes, but a forklift truck is most suited to the later operations. The container unit30is brought to a loading location to receive the pile of dross. The dross may be loaded into the container34direct from a previous process, such as a furnace. The container unit30is of metal, with a steel alloy container34. The materials can withstand temperatures in excess of 1600° C. The container34is constructed to encourage heat loss to the environment of the container unit30. Once loaded with dross, the container unit30is carried by the forklift truck from the loading location to the dross press700. The door714on the dross press700is opened. The forklift truck advances the container unit30into the enclosure718. As it does so, a part of the forklift and/or container unit30breaks a light beam across the mouth of the open door714. This starts a sequence of events the controller for the dross press700expects. The forklift truck is able to deposit the container unit30on a support surface720. The forklift truck can then be detached from the container unit30and all parts thereof exit the enclosure718. As a result, the controller detects that the light beam across the mouth of the door is no longer being broken. This triggers the next step. Until the light beam is restored, a safety interlock applies which prevents the door closing and/or the press head moving. The subsequent steps may progress automatically, subject to a correct situation being observed in each check. In the next step, a further light beam is used to sense the level of the container unit30, preferably in terms of the surface722around the top of the container34. The level detected is interpreted by the controller and results in the identification of the type of container unit30provided within the enclosure718. For different types of container unit30and/or press head, the controller applies different forms and/or durations and/or sequences of subsequent steps. In particular, the level detected will be a factor in the extent of movement the actuators712go through to bring the press head towards the container unit30. FIGS.8aand8billustrate two situations where different levels apply.
In the first case, a deeper container34is provided into which the press head800requires a first extent of insertion. InFIG.8b, a shallower container34is provided and a consequentially smaller press head800is used. This requires a lower extent of insertion and hence the detection of the different levels is important. The system also provides for a check that the container unit30and hence the container34are in the correct position on the support surface720and/or relative to the press head above. As a first step in the dross pressing, the controller closes the door714. The door714slides down until the closed position is reached. The closed position, shown inFIG.8, still provides a 15 cm gap between the bottom edge of the door and the support surface720. This allows a flow path for cooling air into the enclosure718and out via the outlet710. Sensors may be provided which confirm to the controller that the closed position for the door has been reached before other steps are permitted. The controller then triggers movement of the actuators712. These have a press head mounted on them, inside the enclosure718. Further movement of the actuators712and press head downward causes the press head to push into the dross in the container34. The dross is compressed as a result. The downward motion continues until the press head reaches the lowest position allowed by the controller. This may be a position and/or when the press head or a part thereof contacts the container unit30or a part thereof. During the time in the container34, molten metal is able to drain from the container34through one or more apertures provided in it. The molten metal collects in a sow mould beneath, in the support structure32. The drainage of molten metal particularly occurs when the dross is compressed by the press head. The press head also provides for the cooling of the dross. The controller applies the press head to the container34and contents for a desired time and at a desired load or load profile. The controller may provide for rotation and/or other motion being applied to the press head. The controller then brings the actuators712and hence press head up out of the dross and out of the container34. The door714is then opened on the command of the controller. The forklift truck returns and engages with the container unit30. Once again, the breaking of the light beam causes the controller to activate the interlock preventing door714movement and/or movement of the press head. If the forklift truck withdraws without the container unit30, the light beam configuration indicates that the container unit30has not been withdrawn and so the interlock remains. Only if the container unit30is withdrawn is the controller able to recognise another sequence of the method starting. Once withdrawn from the enclosure718, the container unit30is moved to a storage location to complete its cooling. At this stage, the forklift truck brings a closure element900and places it on the container unit30. The closure element900can be seen inFIG.9. The closure element900is in the form of a cover element902which is substantially planar in terms of its upper surface904. The upper surface904has a pair of recesses904a,904bwhich are configured to receive the lifting fork of a forklift truck, not shown. The extent of the upper surface904is such as to cover the receiving location within the container34. A contact surface on the underside of the closure element900abuts a contact surface on the upper part of the container unit30.
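The load, press and unload sequence set out above is essentially a small state machine built around the light-beam interlock, the level measurement and the press stroke. The Python sketch below renders it purely for illustration: the class name, the container-type threshold and the stroke values are assumptions, and the hardware methods are left as stubs.

```python
class DrossPressController:
    """Illustrative rendering of the described load/press/unload cycle."""
    DOOR_GAP_CM = 15  # the closed door still leaves a gap for cooling air

    def __init__(self):
        self.interlock = False

    def on_light_beam(self, broken):
        # A broken beam (forklift/container in the doorway) locks out the
        # door and the press head until the beam is restored.
        self.interlock = broken

    def press_cycle(self, container_level_mm):
        if self.interlock:
            raise RuntimeError("light beam broken: door and press head locked")
        stroke_mm = self.stroke_for_level(container_level_mm)
        self.close_door_to_gap(self.DOOR_GAP_CM)
        self.lower_press_head(stroke_mm)  # compress dross; molten metal drains
        self.dwell_at_load()              # desired time and load profile
        self.raise_press_head()
        self.open_door()

    def stroke_for_level(self, level_mm):
        # The sensed rim level identifies the container type and hence the
        # extent of actuator movement; both values here are assumed.
        return 400 if level_mm > 600 else 250

    # Hardware stubs; a real controller would drive the actual actuators.
    def close_door_to_gap(self, gap_cm): ...
    def lower_press_head(self, stroke_mm): ...
    def dwell_at_load(self): ...
    def raise_press_head(self): ...
    def open_door(self): ...
```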
The upper surface904is provided with a series of protrusions906which increase the surface area thereof and hence increase heat loss to the environments of the apparatus formed by the container unit30and closure element900combination. The under surface of the closure element900is provided with a series of further protrusions which extend into contact with the dross. These serve to increase the area of the dross in contact with the closure element900and hence increase heat transfer to the closure element900. Once the dross has cooled to the required degree, the dross may be extracted and reprocessed. Each closure element900is provided with a pair of recesses904a,904bfor this purpose to allow it to be lifted off the container unit30to allow emptying. Air Flow Management As shown inFIG.10andFIG.11, the cooling air1000for the press head1002is kept completely separate from the cooling air1004for the space1006within the enclosure1008. The cooling air1004for the space1006also has a role in dust collection and control and in off gas collection and control. In more detail, the cooling air1004for the space1006within the enclosure1008is drawn through the enclosure1008as a result of an air pump or blower, not shown, generating a pressure below atmospheric in conduit1010. The conduit1010leads to an off-gas and dust treatment unit. The majority of the air flow into the enclosure1008from the environment1012in which the enclosure1008is positioned is through the upper gap1014. A lower proportion is drawn through lower gap1016. As a consequence, the air velocity through the upper part of the enclosure1008is higher to ensure all off-gas and dust is effectively swept from the enclosure1008. The ratio of the volume of air passing through the upper gap compared with the lower gap may be in excess of 1.5 to 1, preferably in excess of 1.75 to 1, more preferably in excess of 2.5 to 1 and ideally in excess of 3.5 to 1. The velocity of air in the upper gap may be at least 2 times that in the lower gap, preferably at least 3 times, more preferably at least 4 times and ideally at least 5 times. As shown inFIG.11, the cooling air1000for the press head1002is drawn through feed conduit1020to a blower1022and then to flow conduit1024. The air flow entering the conduit1020is preferably taken from outside of the plant or another dust free environment. The flow may be filtered to ensure it is dust free. The flow conduit1024enters the enclosure1008and through flexible connectors1026passes the cooling air1000into the top of the press head1002. The cooling air1000is in contact with the inside surface of the domed press head surface which contacts the hot by-product. The cooling air1000, now at a raised temperature, leaves the press head and passes through flexible connector1028, then out of the enclosure1008and through exit conduit1030. The exit conduit may feed the cooling air1000to a treatment unit, but generally this is not necessary as the cooling air1000is kept apart from the dust and off-gases all the time it is within the process plant.
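The upper-gap/lower-gap split quoted above reduces to two ratio checks. A short Python illustration follows; the function name and argument units are assumptions, and the default thresholds are the least strict tiers quoted in the text.

```python
def meets_flow_split(q_upper, q_lower, v_upper, v_lower,
                     min_volume_ratio=1.5, min_velocity_ratio=2.0):
    """Return True if the upper/lower gap air split meets the minimum
    volume and velocity ratios (defaults: the least strict quoted tiers)."""
    return (q_upper / q_lower >= min_volume_ratio
            and v_upper / v_lower >= min_velocity_ratio)

# A 3.5:1 volume split at 5x the velocity satisfies even the ideal tiers.
print(meets_flow_split(350.0, 100.0, 5.0, 1.0,
                       min_volume_ratio=3.5, min_velocity_ratio=5.0))  # True
```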
11859907 | DETAILED DESCRIPTION OF ONE FORM OF EMBODIMENT With reference toFIG.1, a vessel according to the present invention, which in the example shown here is a reactor10for the production of direct reduced iron (DRI), has a substantially axially symmetrical shape with respect to a vertical axis X. In its upper part, the reactor10comprises an upper zone, called reduction zone11, inside which reduction gases at temperatures comprised between 700° C. and 1,100° C. flow in counter-flow with respect to a bed of charge material M consisting of iron minerals in granular form that falls due to gravity from the top to the bottom. In the reduction zone11, defined by a first lateral wall13having a substantially cylindrical tubular shape, the reduction reactions that transform the charge material M into DRI take place. Under the reduction zone11, the reactor10comprises a lower zone, called discharge zone12, communicating with the reduction zone11in correspondence with the lower end14of the latter. The discharge zone12has a truncated cone shape defined by a second lateral wall16converging toward the axis X and inclined with respect thereto by an angle α which in this case is equal to about 12°. The function of the discharge zone12is to convey the particles of DRI toward a discharge aperture15, located at its lower end. From the discharge aperture15the DRI exits from the reactor10and can be conveyed directly toward an electric arc furnace to be melted, or to a briquetting machine, to be shaped into briquettes for subsequent storage or transport. In the portion of reactor10comprised between the lower end14of the reduction zone11and the lower zone17of the discharge zone12, the second lateral wall16is provided with an internal lining22. In the form of embodiment shown inFIG.2, the internal lining22comprises a first layer23, disposed toward the inside of the discharge zone12, a second layer24, under the first layer23, and a third layer25, interposed between the second layer24and the second lateral wall16. The first layer23is made of a composite ceramic material comprising a mixture of alumina, in the form of corundum (Al2O3), zircon oxide (zirconia—ZrO2) and silica (SiO2). These oxides are combined in proportions such as to confer on the first layer23a surface hardness greater than or equal to 8.5 Mohs. In a preferential form of embodiment, the first layer23contains between 48% and 53% in weight of corundum, between 30% and 33% of zircon oxide and between 13% and 17% of silica. In this way, a material is obtained with the desired hardness, density comprised between 3,000 kg/m3 and 4,000 kg/m3 and conductivity comprised between 3.5 W/mK and 5.0 W/mK. The considerable hardness and high surface finish obtainable for this material, for example using sintering production techniques, allow the first layer23to have a low friction coefficient, and also a friction angle considerably less than that of the refractories normally used for lining the second lateral wall16and than that of the steel the latter is normally made of. To give an example,FIG.3shows, for the three different materials cited above, a graph of the development of the wall friction angle (WFA) as the temperature (T) increases, where SR indicates standard refractory, CS indicates carbon structural steel, and CM indicates the ceramic material the first layer23of the internal lining22is made of.
It should be noted that, along nearly the whole range of temperatures considered, the value of the friction angle of the ceramic material CM remains between about 50% and about 60% of the values relating to the angles of friction of carbon steel CS and standard refractory SR, which between them differ only by 2-3°. The difference between the value of the friction angle of the ceramic material CM and those of the carbon steel CS and standard refractory SR is highest between 600° C. and 700° C., a range in which the first goes below 50% of the second and below 45% of the third. Since the friction angle of the second lateral wall16is inversely proportional to the maximum inclination that it can have in the discharge zone12, the first layer23of the internal lining22allows considerably higher inclinations to be achieved. This has a positive effect due to the fact that the greater inclination of the second lateral wall16implies a proportionate reduction in the height of the reactor10. In particular, the maximum angle of inclination of the second lateral wall16usable with carbon steel is 12-13°, while with a standard refractory it goes down to 9°. Subject to solving other problems that come into play at angles of more than 13°, the first layer23of the internal lining22would allow an inclination of the second lateral wall16even much higher than 13° to be achieved, while still keeping the speed of descent of the DRI substantially unchanged, and therefore not affecting the efficiency of the process. The first layer23(FIG.2) is advantageously made with modular elements, for example tiles or blocks26, substantially parallelepiped and smooth on the surface. The blocks26are laid adjacent and have a minimum thickness of 40 mm, advantageously comprised between 45 mm and 50 mm. To allow the first layer23to deform under the thermal loads without causing cracks, a thin layer of deformable material27, resistant to the high temperatures of the process, for example a high density refractory or an insulator, can be used to surround the blocks26and separate them from each other. The second layer24, under the first layer23, has a minimum thickness of 45 mm, advantageously comprised between 50 mm and 80 mm. In this case, the second layer is made of a silico-aluminous insulating material, with a density comprised between about 2,000 kg/m3 and about 3,000 kg/m3 and a conductivity comprised between about 1.4 W/mK and about 1.7 W/mK. The main function of the second layer24is to act as a binder between the first layer23and the third layer25but, where necessary, it can also be used as a filling to contribute to the heat insulation. The third layer25, interposed between the second layer24and the second lateral wall16, has the main function of contributing to the heat insulation of the discharge zone12with respect to the outside. The third layer25, in the example given here, is made of a silica-based insulating material, but it can also be made of other insulating materials and have other thicknesses, in proportion to the degree of insulation desired. In particular, also in order to reach a suitable compromise between thickness and insulating capacity of the third layer25, the value of conductivity of the latter is advantageously comprised between 0.01 W/mK and 0.1 W/mK.
The speed at which the DRI passes through the discharge zone12and the insulation achieved by the internal lining22allow the DRI to keep a good part of its heat energy, thus maintaining, in correspondence with the exit aperture15, a temperature of more than 700° C. It is clear that modifications and/or additions of parts may be made to the vessel as described heretofore, which has been identified by way of example as a reactor10, without departing from the field and scope of the present invention. Indeed, this form of embodiment has been described merely by way of a non-restrictive example, and the considerations made in the description above are to be understood as valid also for other types of vessels suitable for containing hot DRI. Alternative types of vessels may be for example storage bins or hoppers, or other containers used for moving the DRI from the reactor to user devices such as melting furnaces or briquetting machines, in order to feed them. In fact, in all these vessels, as in the reactor10, it is advantageous that the temperature and the slidability of the DRI are high. It is also clear that, although the present invention has been described with reference to some specific examples, a person of skill in the art shall certainly be able to achieve many other equivalent forms of apparatus, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby. | 8,147 |
11859908 | DETAILED DESCRIPTION OF THE INVENTION In one aspect, the invention features an effluent removal system10as illustrated inFIG.1. Input gas12is combined with recycled cleaned or scrubbed process gas14into process gas input stream16which is introduced into a process chamber18of a furnace20. In other non-limiting embodiments, the process gas stream can include only non-recycled input gas or only recycled process gas. In non-limiting embodiments the furnace20is an oven, a reflow furnace or oven, or other type of furnace or oven. A product conveyor22is used to convey or to pass one or more product(s)24into the process chamber18from the entrance26to the exit28of the furnace20as shown by directional arrow30. Each of the one or more product(s)24can include one or more component(s)32. In one non-limiting embodiment, the product conveyor22is a conveyor belt. In other non-limiting embodiment(s), the one or more product(s)24being conveyed or passed through the furnace20include one or more circuit boards each having one or more components. The furnace20includes the process chamber18having one or more heating zone(s) or region(s)34and a cooling section38having one or more cooling zone(s) or region(s)39. Each heating region34is equipped with one or more heating element(s)40, as shown by heating elements40a-fin the non-limiting embodiment ofFIG.1. AlthoughFIG.1is shown with one heating zone34and one cooling zone39, the furnace20can include the process chamber18having more than one heating zone34and the cooling section38having more than one cooling zone or region39. The heating zones34and the cooling zones39are linearly disposed in the furnace20such that the product conveyor22passes from furnace entrance26into the process chamber18, consecutively through the one or more heating zones34, and subsequently consecutively through the cooling section38having one or more cooling zones or regions39before exiting the furnace20at the furnace exit28. In other non-limiting embodiments, the cooling section38is disposed outside the walls, frame, or housing of the furnace20. In a non-limiting embodiment,FIG.1shows heating elements40a-cpositioned vertically above the product conveyor22and heating elements40d-fpositioned vertically below the product conveyor22. In other non-limiting embodiments, the one or more heating element(s)40can be positioned at various angles with respect to product conveyor22, and the one or more product(s)24, including the one or more components32. During processing, the heating element(s)40(a-f) provide heat to the products24including components32. The heating of the one or more products24, and the one or more components32of each of the products24, causes the emission, evolution, or vaporization of an effluent42into the process gas16of the furnace20. In non-limiting embodiments, the effluent42can include several constituents such as, for non-limiting examples, flux, solvent, resins such as sticky resins, and/or other effluents which are vaporized from the product(s)24during heating and introduced into the atmosphere of the process chamber18. Different effluent constituents can be evolved or vaporized at different heating conditions including different temperatures. FIGS.1,2,3and/or4show the invention featuring one or more effluent extraction tube(s)44each having one or more slots46or openings which are positioned above the product conveyor22. Each tube44preferably has a length49which spans the width50of the process chamber.
Preferably each tube44is disposed above and perpendicular to the travelling path52of the product conveyor22on which the product(s)24are conveyed through the furnace20. In other embodiments, the length49of each tube44is disposed in a direction other than perpendicular to the travelling path52of the product conveyor22through the furnace20. Each tube44is preferably disposed at a selected height or height range54above the product(s)24including component(s)32being passed through the furnace20in an area corresponding to a heavy release of effluent42, at a specific temperature on a thermal profile. The slots46of the tubes44can be preferably disposed to generally correspond to the product(s)24or product component(s)32passing through the furnace beneath the tubes44. Thus, each tube44and more particularly the slots46of each tube44can be selectively disposed to be as proximate as possible to the point of generation of effluent42. The slots46of each tube44can be oriented towards the generation of effluent. Each tube44is fluidically connected to one or more trunklines. Preferably each tube44is fluidically connected to a common trunkline or manifold58via a tube port48and corresponding trunkline extraction port56disposed on the common trunkline or manifold58, as shown inFIG.2and/orFIGS.5-6. The trunkline58can be located inside or outside the process chamber18of the furnace20. Preferably, at least a first portion or segment57of the trunkline58is disposed within the process chamber18. Thus, the temperature of the process chamber18heating zones34can maintain the process gas16laden with effluent42in a gaseous state and can minimize condensation of effluent prior to subsequent delivery to an effluent management system.FIG.2shows a first portion57of the trunkline58disposed in the heating region34of the process chamber18until the first portion of the trunkline exits the furnace20at a trunkline exit port shown as exit port72inFIG.1on a case of the furnace. This configuration minimizes or eliminates condensation and deposition of the effluent42within the tubes44and the trunkline58. The one or more tube(s)44are fluidically connected to one or more trunklines58via a connection device, shown in a non-limiting embodiment as connection device66inFIG.6. In a non-limiting embodiment shown inFIGS.5and6, the connection device includes an internally threaded female connection coupling60on a tube44which can mate with a corresponding externally threaded connection coupling64on the trunkline58. The connection device66enables the tube44to have selective and reversible fluidic connection to or detachment from the corresponding extraction tube port56on the trunkline58. The trunkline also includes a threaded weld nut65, as shown inFIGS.5and6, on each end of the trunkline. A threaded rod is screwed into the corresponding weld nut65on each end of the trunkline58. Each threaded rod is adjusted for contact with the bottom of the process chamber of the oven for leveling and stabilization of the trunkline. In addition, or in the alternative, either the tube44or the trunkline58includes a mechanism, such as, for a non-limiting example, a valve, which can be used to reversibly seal or release fluidic flow from the tube or into the trunkline without having to remove the tube44from the trunkline58. In preferred embodiments, various features of the tubes44and/or the trunkline58can be used to influence the general flow of process gas16within the furnace20.
The general flow of process gas in the furnace can be influenced by biasing, adjusting, or altering longitudinal flow extracted through the trunkline extraction ports56and the volumetric flow passing through the trunkline58, and/or by biasing, adjusting, or altering the lateral flow along the length of one or more individual extraction tube(s)44. Gas flow in specific regions of the furnace can thus be increased or decreased to achieve a selected or desired balanced flow condition. The improvement of the gas flow balance within a convection furnace provides better thermal uniformity. In addition, gas usage can be reduced thereby reducing operational costs particularly in systems which employ an inert process or cover gas. Typical process gases for reflow furnaces include nitrogen or air as non-limiting examples. For example, regarding longitudinal flow biasing, in a non-limiting embodiment of the invention, one or more valve(s)68can be provided in the trunkline58, as shown by valves68a,68b, and68cin the non-limiting embodiment ofFIG.7A. Each valve68can be disposed downstream of a respective extraction port56of the trunkline58, as shown by valves68a,68b, and68cdownstream of respective extraction ports56a,56b, and56cinFIG.7A. The respective valve68can be used to bias, adjust, or alter the volume of effluent laden process gas70extracted through the extraction port56and passing through the trunkline58prior to the effluent laden process gas70passing through the trunkline exit port72and along the effluent management feed line74to the effluent management system76, as shown inFIG.1. In non-limiting embodiments, the longitudinal flow can be reversibly or non-reversibly biased. For example, regarding reversible biasing of longitudinal flow, one or more valve(s)68including a manual or an automated adjustable valve, such as, for a non-limiting example, a throttle valve, can be used to bias, adjust, or alter reversibly the volume of effluent laden process gas70extracted through each extraction port56and thus passing through the trunkline58. Regarding non-reversible biasing of longitudinal flow, in lieu of a manual or an automated adjustable valve, one or more valve(s)68can include a fixed valve, such as, for a non-limiting example, an orifice plate, which can be used to bias, alter, or adjust non-reversibly the volume of effluent laden process gas70withdrawn through each extraction port56and thus passing through the trunkline58. In addition to or in lieu of any such one or more fixed valve(s)68, the inner diameter of one or more segment(s) or portion(s) of the trunkline58can be varied for selected non-reversible biasing of longitudinal flow of effluent laden process gas70, as shown by the non-limiting embodiment ofFIG.7B. InFIG.7B, a segment or portion78of the trunkline58has an inner diameter80which is less than the inner diameter84of the segment or portion82of trunkline58. Thus, the selected difference or variation in internal cross section area of the trunkline58can be used to selectively bias the flow of effluent laden process gas70withdrawn into the extraction ports56and through the trunkline58. In addition to or in lieu of such one or more fixed valve(s)68, and the selected variation of the inner diameter of one or more segment(s) or portion(s) of the trunkline58, selected variation of the cross-sectional area of one or more extraction port(s)56can also provide non-reversible biasing of the effluent laden process gas70extracted through an extraction port56and thus passing through trunkline58.
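A simplified way to reason about the longitudinal biasing just described is to assume that each extraction port draws flow roughly in proportion to its effective open area, whether that area is set by an orifice plate, a throttle valve position, or the port cross-section itself. The Python sketch below estimates the resulting split of the total trunkline flow under that stated proportional-area assumption; it is an illustration, not the patent's control law, and the identifiers are hypothetical.

```python
def port_flow_split(total_flow_m3h, effective_areas_cm2):
    """Apportion the total trunkline flow across extraction ports in
    proportion to each port's effective open area (simplified model)."""
    total_area = sum(effective_areas_cm2)
    return [total_flow_m3h * a / total_area for a in effective_areas_cm2]

# Example: throttling the middle port to half the area of its neighbours
# biases the extraction toward the other two zones.
print(port_flow_split(600.0, [20.0, 10.0, 20.0]))  # [240.0, 120.0, 240.0]
```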
Lateral flow through the interior passageway41along the length49of one or more individual tube(s)44can also be reversibly or non-reversibly biased, altered or adjusted, as shown inFIGS.8A,8B and/or8C. For example, regarding reversible biasing of lateral flow, one or more individual tube(s)44can be provided with one or more lateral flow adjusting devices86. The one or more lateral flow adjusting device(s)86can include, for non-limiting examples, one or more adjustable damping devices or dampers disposed in the body45of one or more individual tube(s)44, as shown by adjustable damper86ain the non-limiting embodiment ofFIG.8A. Alternatively, or in addition to such one or more adjustable dampers, the lateral flow adjusting device86can include, for non-limiting examples, one or more adjustable shutter(s) where each adjustable shutter corresponds to and is adapted for a corresponding slot46on an individual tube44, as shown by the adjustable shutter86bin the non-limiting embodiment ofFIG.8A. The adjustable shutters can be selectively adjusted thereby altering the cross-sectional area of the related slot46and thus the flow of gas therethrough and through the interior passageway41of the tube44. Regarding non-reversible biasing of lateral flow, each of one or more slot(s)46can be selectively sized for a cross-sectional flow through area for achieving a desired flow bias through the interior passageway41along the length49of the corresponding tube44. In the non-limiting embodiments ofFIGS.8B and8C, each of four slots46a,46b,46cand46dcorrespond to a respective cross-sectional flow-through area47a,47b,47c,47d, where each area and/or the combination of areas is selected for a desired lateral flow bias through the interior passageway41along a length49of the respective individual tube44. In addition, or in lieu of the selectively sized cross-sectional flow through areas47of slots46described above, the location for the tube port48through which gas passes from the tube44through the corresponding trunkline extraction port56into the trunkline58can be selected based on a desired biasing of lateral flow through the interior passageway41along the length49of the individual tube44. For example, in the non-limiting embodiment ofFIG.8B, the tube port48is disposed at an end of the body45of the tube44in comparison with the tube port48disposed in a center of the body45of the tube44in the non-limiting embodiment ofFIG.8C. The progression and/or inter-relationship of cross-sectional flow through areas of the openings46can also be selected for flow biasing. In the non-limiting embodiment shown inFIG.8B, cross-sectional flow through areas of47d,47c,47b, and47aprogressively decrease with the smallest cross-sectional flow through area47adisposed proximate to the tube port disposed at one end of the tube. In the non-limiting embodiment ofFIG.8C, cross-sectional flow through areas47aand47ddecrease in comparison with respective adjacent cross-sectional flow through areas47band47cwhich are disposed proximate to the tube port disposed in the middle of the tube. In addition, the extraction tubes44or branches can be selectively positioned along the length of the trunkline58to adapt to specific unique and/or desired thermal profiles and materials sets including variables such as, for non-limiting examples, solder paste, circuit board type, size and component load.
In one non-limiting embodiment, the trunkline can be equipped with a linear array of connection couplings, such as, for a non-limiting example, the connection coupling64shown inFIG.5. Extraction tubes44or branches can be selectively attached to a corresponding coupling using, for a non-limiting example, the connection coupling60shown inFIGS.5and6, as needed for a particular thermal profile. An unused coupling, that is, a coupling not attached to an extraction tube44, can be sealed with a cap. The cap can include a reversible or non-reversible sealing mechanism. Preferred embodiments include reversible sealing caps. The linear array of trunkline connection couplings can be disposed within or between every heating zone for maximum flexibility, or within or between every other heating zone for reasonable flexibility. In a preferred embodiment, each trunkline connection coupling and corresponding extraction tube44or branch is disposed in a space between respective heating zones. Such a configuration enables placement of the extraction tube44flush with the heater diffuser plate. In addition, placement of the extraction tube44between respective heating zones provides the least impact on zone heater convection flow. The material; placement, configuration, and/or disposition; and dimensions of the extraction tubes44are selected based upon the type of processing, the related heating conditions and/or for optimization of flow efficiencies. The extraction tubes44include a material which can withstand the high temperatures of the heating zones of the furnace. The material of the tubes is selected from the group consisting of aluminum, steel, stainless steel, Inconel®, austenitic nickel-chromium-based superalloy, high-temperature rated plastic, and a combination of two or more of the aforementioned. For purposes of this application, a high-temperature rated plastic includes a plastic which can withstand temperatures in a range of 20° C. to 400° C., and preferably in a range of 100° C. to 380° C., more preferably in a range of 200° C. to 375° C., and most preferably in a range of 300° C. to 350° C. The extraction tubes44can be disposed from the face of the extraction port56on the extraction trunkline58above the pass line of the product conveyor22at a height in a range of 0.5 inches to 5.5 inches, preferably in a range of 1.0 inches to 4.0 inches, more preferably in a range of 1.25 inches to 3.0 inches, and most preferably in a range of 1.5 inches to 2.0 inches above the product conveyor22being passed through the furnace20. Each of the extraction tubes44spans at least in part or wholly the width of the product conveyor. Each of the extraction tubes44has a length corresponding to a percentage of the width of the process chamber including the heating zones in a range of 75% to 100%, preferably in a range of 80% to 100%, more preferably in a range of 90% to 100%, and most preferably in a range of 98% to 100%. The inner diameter of the extraction tubes44is in a range of 0.5 inch to 3 inches, preferably in a range of 0.6 inches to 2.5 inches, more preferably in a range of 0.7 inches to 2 inches, and most preferably in a range of 0.8 inches to 1.5 inches. The openings or slots of each tube have a cross sectional area in a range of 0.2 in2to 7.1 in2, preferably in a range of 0.28 in2to 4.9 in2, more preferably in a range of 0.38 in2to 3.1 in2, and most preferably in a range of 0.5 in2to 1.8 in2.
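The preferred extraction-tube geometry just listed is easy to encode as range checks. A small Python sketch follows, using the most preferred tiers from the text; the dictionary layout and function name are assumptions made for illustration.

```python
# Most preferred ranges for an extraction tube, taken from the text.
MOST_PREFERRED = {
    "height_above_conveyor_in": (1.5, 2.0),
    "inner_diameter_in":        (0.8, 1.5),
    "slot_area_in2":            (0.5, 1.8),
    "span_fraction_of_width":   (0.98, 1.00),
}

def outside_most_preferred(tube):
    """Return the parameters of a tube design that fall outside the most
    preferred ranges (an empty list means fully compliant)."""
    return [k for k, (lo, hi) in MOST_PREFERRED.items()
            if not (lo <= tube[k] <= hi)]

print(outside_most_preferred({
    "height_above_conveyor_in": 1.75,
    "inner_diameter_in": 1.0,
    "slot_area_in2": 1.2,
    "span_fraction_of_width": 0.99,
}))  # -> []
```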
The outer diameter of the extraction tubes44is in a range of 0.75 inches to 3.25 inches, preferably in a range of 0.9 inches to 2.75 inches, more preferably in a range of 1.0 inches to 2.25 inches, and most preferably in a range of 0.75 inches to 1.75 inches. Similarly, the material; placement, configuration, and/or disposition; and dimensions of the trunklines58are selected based upon the type of processing, the related heating conditions and/or for optimization of flow efficiencies. The material of the extraction trunkline58is selected from the group consisting of aluminum, steel, stainless steel, Inconel®, austenitic nickel-chromium-based superalloy, high-temperature rated plastic, and a combination of two or more of the aforementioned. The extraction trunkline58can have at least a first portion for disposition within the process chamber and having a length corresponding to a percentage of the length of the process chamber including the heating zones in a range of 20% to 100%, preferably in a range of 50% to 100%, more preferably in a range of 75% to 100%, and most preferably in a range of 80% to 100%. The inner diameter of the extraction trunkline58is in a range of 1.5 inches to 3 inches, preferably in a range of 1.75 inches to 2.75 inches, more preferably in a range of 1.85 inches to 2.5 inches, and most preferably in a range of 2.0 inches to 2.25 inches. The trunkline58has a cross sectional area in a range of 1.77 in² to 7.0 in², preferably in a range of 2.4 in² to 5.9 in², more preferably in a range of 2.69 in² to 4.9 in², and most preferably in a range of 3.1 in² to 3.9 in². The outer diameter of the extraction trunkline58is in a range of 1.75 inches to 3.25 inches, preferably in a range of 2 inches to 3 inches, more preferably in a range of 2 inches to 2.75 inches, and most preferably in a range of 2.25 inches to 2.5 inches. In another aspect, the invention features a method for removal of an effluent, such as a flux, from a gas stream. Steps of the method90are shown in the flow chart illustrated inFIG.9Aaccording to one non-limiting embodiment. The method includes providing in one or more heating zones of a process chamber of a furnace one or more extraction tubes each having a plurality of openings or slots in fluidic communication with an interior passageway, as illustrated by step92; conveying a product through the one or more heating zones of the process chamber of the furnace, as illustrated by step94; heating the product in the one or more heating zones thereby vaporizing one or more effluents into a process gas in the furnace, as illustrated by step96; withdrawing the effluent laden process gas through the plurality of openings or slots into the interior passageway(s) of the one or more tubes, as illustrated by step98; and withdrawing the effluent laden process gas from the one or more tubes into one or more fluidically connected trunklines, as illustrated by step100. In other non-limiting embodiments, the method includes delivering the effluent laden process gas from the one or more trunklines to an effluent management system for scrubbing or cleaning effluent from the gas, as illustrated by step102; cooling the scrubbed or cleaned gas, as illustrated by step104; and recycling the scrubbed or cleaned gas back into the process chamber of the furnace, as illustrated by step105. In non-limiting embodiments, the cleaned process gas is cooled either in a cooling section of the furnace or in a cooler exterior to the furnace prior to recycling the process gas back into the process chamber.
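Returning to the trunkline dimensions recited above, the recited cross-sectional-area tiers follow directly from the recited inner-diameter tiers via A = πd²/4. A short worked check (illustrative only; not part of the disclosure):

```python
import math

# Worked check that the trunkline cross-sectional-area ranges recited above
# follow from the inner-diameter ranges via A = pi * d**2 / 4.

def circle_area_in2(diameter_in):
    return math.pi * diameter_in ** 2 / 4.0

if __name__ == "__main__":
    tiers = {
        "recited":        (1.5, 3.0),
        "preferred":      (1.75, 2.75),
        "more preferred": (1.85, 2.5),
        "most preferred": (2.0, 2.25),
    }
    for label, (d_lo, d_hi) in tiers.items():
        print(f"{label:>14}: {circle_area_in2(d_lo):.2f} to "
              f"{circle_area_in2(d_hi):.2f} in^2")
    # recited tier: 1.77 to 7.07 in^2, matching the 1.77 in^2 to 7.0 in^2
    # range given in the text (and similarly for the narrower tiers).
```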
In non-limiting embodiments, after heating the product in the one or more heating zones of the process chamber, the method includes conveying the heated product into a cooling section of the furnace and cooling the product, as illustrated by step106; and conveying the product through an exit of the furnace, as illustrated by step108. In other non-limiting embodiments of the invention, the method of the invention can include conveying the product consecutively through the one or more heating zones where the product is heated at selected increasing temperatures in each consecutive heating zone for vaporization of targeted constituents of the effluent. Preferably, the method provides a common trunkline fluidically connected to the one or more extraction tubes. For a non-limiting example, in one non-limiting embodiment as shown inFIG.9B, the method of the invention includes providing in a first heating zone of a process chamber of a furnace one or more first extraction tubes each having a plurality of openings or slots in fluidic communication with an interior passageway, as illustrated by step112; introducing or conveying a product into the first heating zone of the process chamber, as illustrated by step114; heating the product in the first heating zone to a first temperature thereby vaporizing a first effluent into the process gas of the furnace, as illustrated by step116; withdrawing the process gas laden with first effluent through the plurality of openings or slots into the interior fluidic passageway(s) of the one or more first extraction tubes, as illustrated by step118; and withdrawing the gas laden with first effluent from the one or more first extraction tubes into a fluidically connected trunkline, as illustrated by step120. In a non-limiting example, the first heating zone can be operated in a temperature range of 70° C. to 100° C. In such a temperature range, light solvents can vaporize and can be withdrawn through the first extraction tube(s) and the fluidically connected trunkline. The method also includes providing in a second heating zone of a process chamber of a furnace one or more second extraction tubes each having a plurality of openings or slots in fluidic communication with an interior passageway, as illustrated by step122; introducing or conveying the product into the second heating zone of the process chamber, as illustrated by step124; heating the product in the second heating zone to a second temperature thereby vaporizing a second effluent into the process gas of the furnace, as illustrated by step126; withdrawing the process gas laden with second effluent through the plurality of openings or slots into the interior passageway(s) of the one or more second extraction tubes, as illustrated by step128; and withdrawing the gas laden with the second effluent from the second extraction tubes into the fluidically connected trunkline, as illustrated by step130. In a non-limiting example, the second heating zone can be operated in a temperature range of 100° C. to 200° C. In such a temperature range, second effluent including, for non-limiting examples, heavy solvent and flux resin constituents can vaporize and can be withdrawn through the second extraction tube(s) and the fluidically connected trunkline.
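For a non-limiting illustration of this staged scheme, the sketch below maps an operating temperature to the effluent class targeted in the corresponding zone. The first two bands are those recited above; the 200° C. to 300° C. band anticipates the third heating zone described next. The helper name and exact band boundaries are illustrative only.

```python
# Sketch of the consecutive-zone scheme of FIG. 9B: each zone heats the
# product into a higher temperature band, vaporizing a different effluent
# class that is withdrawn into the common trunkline. Bands follow the
# non-limiting examples in the text; the helper name is illustrative.

ZONE_BANDS = [
    (1, (70, 100),  "light solvents"),
    (2, (100, 200), "heavy solvent and flux resin constituents"),
    (3, (200, 300), "volatilized and combusted flux"),  # zone described below
]

def targeted_effluent(temp_c):
    """Return (zone, effluent class) targeted at a given temperature."""
    for zone, (lo, hi), effluent in ZONE_BANDS:
        if lo <= temp_c < hi:
            return zone, effluent
    raise ValueError(f"{temp_c} C is outside the example zone bands")

if __name__ == "__main__":
    for t in (85, 150, 250):
        zone, effluent = targeted_effluent(t)
        print(f"{t} C -> zone {zone}: withdraw {effluent}")
```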
The method also includes providing in a third heating zone of a process chamber of a furnace one or more third extraction tubes each having a plurality of openings or slots fluidically connected to an interior passageway of the tube, as illustrated by step132; introducing or conveying the product into the third heating zone of the process chamber, as illustrated by step134; heating the product in the third heating zone to a third temperature thereby vaporizing a third effluent into the process gas of the furnace, as illustrated by step136; withdrawing the process gas laden with third effluent through the plurality of openings or slots into the interior passageway(s) of the one or more third extraction tubes, as illustrated by step138; and withdrawing the gas laden with the third effluent from the one or more third extraction tubes into the fluidically connected trunkline, as illustrated by step140. In a non-limiting example, the third heating zone can be operated in a temperature range of 200° C. to 300° C. In such a temperature range, third effluent including, for non-limiting examples, volatilized and combusted flux can vaporize and can be withdrawn through the one or more third extraction tubes and the fluidically connected trunkline. In other non-limiting embodiments, the method includes passing the effluent laden gas from the trunkline to an effluent management system for scrubbing or cleaning effluent from the gas, as illustrated by step142; cooling the cleaned gas, as illustrated by step144; and recycling the scrubbed or cleaned gas back into the process chamber of the furnace, as illustrated by step145. In other non-limiting embodiments, after heating the product in the three heating zones of the process chamber, the method includes conveying the heated product into a cooling section of the furnace for cooling the product, as illustrated by step146; and conveying the product through an exit of the furnace, as illustrated by step148. In the systems, devices and method of the invention, the extraction trunkline can be disposed exterior to the process chamber, but preferably at least a first part of the extraction trunkline is included within the process chamber, as discussed above. In different embodiments, the cleaned process gas is cooled either in the cooling section of the furnace or in a cooler separate from the furnace prior to recycling the process gas back into the furnace process chamber, although in other embodiments the cleaned process gas is not recycled back into the furnace. In different embodiments, the product can be cooled in a cooling section which is incorporated in the furnace, as shown in the flow charts ofFIG.9, or alternatively, after heating, the product can be conveyed through the exit of the furnace and cooled in a cooler disposed outside of the furnace.

EXAMPLE 1

Extraction tests were conducted with different extraction configurations using a Pyramax 150 oven or furnace. The Pyramax 150 oven included a process chamber having a length of 156 inches and a width of 32 inches. The process chamber of the Pyramax 150 oven included 12 separate heating zones disposed linearly in a consecutive sequence along the travelling path of a product conveyor belt. The Pyramax 150 oven also included a cooling section. The product conveyor belt passed through the entrance of the oven into the process chamber including the 12 consecutive heating zones and subsequently through a cooling section including a cooler1before exiting the oven.
Process gas including air which was extracted from the oven was passed through an effluent management system where effluent was scrubbed or cleaned from the gas. The cleaned process gas was then passed through the cooling section of the oven including a cooler1and recycled back into the oven at the bottom of heating zones1,7and10. The same return configuration for the recycle of cooled effluent process gas back into the oven was used for each extraction configuration described below. Deposition targets including five-inch diameter polished silicon wafers or coupons were attached at three different locations along the Pyramax 150 oven for deposition or collection of condensed effluent including flux condensed from process gas during operation of the oven. Before testing, each of the coupons was weighed to establish a tare weight. After each test, each coupon was re-weighed and the tare weight subtracted to determine the weight of effluent deposited on the respective coupon. Each coupon was then cleaned and re-weighed to determine a new tare weight prior to the next test. A first deposition target corresponding to Coupon #1 was attached on the near side of the oven inside wall near heating zone1, as shown inFIG.10A. A second deposition target corresponding to Coupon #2 was attached on the far side of the oven inside wall near heating zone12, as shown inFIG.10B. A third deposition target corresponding to Coupon #3 was attached on the outside wall of the cooler1top area of the cooling section, as shown inFIG.10C. Square aluminum plates having dimensions of 12-inch by 12-inch were used to simulate product passing through the oven. Prior to each test, a volumetric amount corresponding to 50 grams of Indium Corporation Floxot-84999Y flux was deposited on each aluminum plate. Each aluminum plate including the deposited flux was then passed through the oven having a particular extraction configuration including a particular set of heating conditions and residence times. The plates were not weighed before or after the test. After the test, each aluminum plate was cleaned, 50 grams of Indium Corporation Floxot-84999Y flux was reapplied volumetrically to each plate, and each plate was then passed through the oven according to the next extraction configuration. Between 300 and 400 aluminum plates were used in testing. Oven residence times for the aluminum plates varied between 5 and 6 hours depending upon the extraction test. The density of white smoke exiting the oven entrance during loading of the test plates onto the product conveyor was recorded based on a visual observation range of 1-10, with ratings of 10 and 1 corresponding to the highest and lowest observed levels of white smoke density, respectively. The product conveyor included a chain driven edge conveyor that supported two opposing edges of the product using two separate rail/chain assemblies. These rails were moved in or out against each other to accommodate different product sizes. Two-inch outside diameter (O.D.), 0.049-inch wall thickness 304 stainless steel tubing was used for the extraction trunklines. One-inch O.D., 0.035-inch wall thickness 304 stainless steel (S.S.) tubing was used for the extraction tubes, also called branches.

Baseline Test

The extraction configuration for the baseline test included an extraction port at the bottom of each of heating zones3and12. Each extraction port included a 2-inch manual ball valve and KF50 fitting connected to a KF50 flex line.
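As a non-limiting illustration, the coupon weighing procedure described above reduces to a simple tare subtraction. The weights in the sketch below are assumed purely for illustration.

```python
# Minimal sketch of the coupon procedure described above: a coupon is weighed
# before a test (tare) and after it, and the deposited effluent mass is the
# difference. The weights below are assumed for illustration.

def deposited_mass_mg(tare_mg, post_test_mg):
    """Effluent deposited on a coupon during one test, in milligrams."""
    return post_test_mg - tare_mg

if __name__ == "__main__":
    tare = 12_345.0        # assumed pre-test coupon weight, mg
    post = 12_357.4        # assumed post-test coupon weight, mg
    print(f"deposited effluent: {deposited_mass_mg(tare, post):.1f} mg")
    # The coupon is then cleaned and re-weighed to set a new tare for the
    # next test, as described above.
```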
Extraction Test #1

The extraction configuration for Extraction Test #1 is shown inFIGS.11A,11B and11C. The extraction configuration for Extraction Test #1 included extraction ports disposed at the top of heating zones5,6,7,10,11and12. Each extraction port included a 2-inch manual ball valve and KF50 fitting connected to a KF50 flex line.

Extraction Test #2

The extraction configuration for Extraction Test #2 is shown inFIGS.12A,12B and12C. The extraction configuration for Extraction Test #2 included a single extraction trunkline having a length of 192 inches disposed at the far side inside the process chamber next to the product conveyor belt and running along the length of the process chamber including heating zones1-12, as shown inFIG.12A. The extraction trunkline was equipped with a K50 flange at a first end of the extraction trunkline and a blank cap at a second end of the extraction trunkline. The K50 flange fluidically connected the extraction trunkline to an outlet connection protruding at the entrance of the process chamber of the oven, as shown inFIG.12B. The extraction trunkline included ten (10) horizontal extraction ports disposed linearly along the trunkline adjacent to heating zones3to12. Each of the horizontal extraction ports measured 0.125 inches×6 inches. Effluent laden process gas from the oven was extracted through the horizontal extraction ports into the single extraction trunkline. The effluent laden gas passed through and exited the extraction trunkline through the K50 flange and passed through and exited the oven through the outlet connection disposed at the entrance to the oven.

Extraction Test #3

The extraction configuration for Extraction Test #3 is shown inFIGS.13A and13B. The extraction configuration for Extraction Test #3 included a single extraction trunkline having a length of 180 inches disposed at the far side inside the process chamber next to the product conveyor belt and running along the length of the process chamber including heating zones1-12, as shown inFIGS.13A and13B. The extraction trunkline was equipped with a K50 flange at a first end of the extraction trunkline and a blank cap at a second end of the extraction trunkline. The K50 flange fluidically connected the extraction trunkline to an outlet connection protruding at the entrance of the process chamber of the oven, similar to the extraction configuration of Extraction Test #2. Four individual extraction tubes or branches were fluidically connected to the extraction trunkline. Each of the four extraction tubes or branches was fluidically connected to the extraction trunkline at a first end and included a blank cap at a second end. The first extraction tube or branch was located between heating zones2and3. The second extraction tube or branch was located between heating zones5and6. The third extraction tube or branch was located between heating zones9and10. The fourth extraction tube or branch was located between heating zones10and11. Each of the four extraction tubes had a length of 29.5 inches as measured from the center radius of the coupling opening to the end of the tube at the cap. Thus, each extraction tube length corresponded to a percentage of the 32-inch width of the process chamber including the heating zones in a range of 80% to 100%. Each extraction tube included four slots spaced one inch apart. Each slot was 5 inches in length and 0.25 inch in width.
Effluent laden process gas from the oven was extracted from the process chamber through the horizontal slots of each of the four extraction tubes or branches. The effluent laden gas passed through the extraction tubes into the extraction trunkline. The effluent laden process gas passed along and exited the extraction trunkline through the K50 flange into the outlet connection. The effluent laden gas passed through and exited the oven through the outlet connection for subsequent processing in the effluent management system.

Extraction Test #4

The extraction configuration for Extraction Test #4 is shown inFIGS.14A and14B. The extraction configuration for Extraction Test #4 included a single extraction trunkline having a length of 180 inches disposed at the far side inside the process chamber next to the product conveyor belt and running along the length of the process chamber including heating zones1-12, as shown inFIGS.14A and14B. The extraction trunkline was equipped with a K50 flange at a first end of the extraction trunkline and a blank cap at a second end of the extraction trunkline. The K50 flange fluidically connected the extraction trunkline to an outlet connection protruding at the entrance of the process chamber of the oven, similar to the extraction configurations of Extraction Tests #2 and #3. Three individual extraction tubes or branches were fluidically connected to the extraction trunkline. Each of the three extraction tubes or branches was fluidically connected to the extraction trunkline at a first end and included a blank cap at a second end. The first extraction tube or branch was disposed between heating zones2and3. The second extraction tube or branch was disposed between heating zones5and6. The third extraction branch or tube was disposed between heating zones9and10. Each of the three extraction tubes had a length of 29.5 inches as measured from the center radius of the coupling opening to the end of the tube at the cap. Thus, each extraction tube length corresponded to a percentage of the 32-inch width of the process chamber including the heating zones in a range of 80% to 100%. Each extraction tube included four slots spaced one inch apart. Each slot was 5 inches in length and 0.25 inch in width. Effluent laden process gas from the oven was extracted from the process chamber through the horizontal slots of each of the three extraction tubes or branches. The effluent laden gas passed through the extraction tubes into the extraction trunkline. The effluent laden process gas passed along and exited the extraction trunkline through the K50 flange into the outlet connection. The effluent laden gas passed through and exited the oven through the outlet connection for subsequent processing in the effluent management system.

Extraction Test #5

The extraction configuration for Extraction Test #5 is shown inFIGS.15A,15B, and15C. The extraction configuration for Extraction Test #5 included a single extraction trunkline having a length of 156 inches disposed at the far side inside the process chamber next to the product conveyor belt and running along the length of the process chamber including heating zones1-12, as shown inFIGS.15A and15B. The extraction trunkline was equipped with a K50 flange at a first end of the extraction trunkline and a blank cap at a second end of the extraction trunkline.
The K50 flange fluidically connected the extraction trunkline to an outlet connection protruding at the entrance of the process chamber of the oven, as shown inFIG.15Cand similar to the extraction configurations of Extraction Tests #2, #3 and #4. The extraction trunkline corresponded to the extraction trunkline of Extraction Test #4 but without the extraction tubes or branches. In lieu of extraction tubes, the extraction trunkline included three extraction ports including one-inch diameter union connections. The first extraction port was located between heating zones2and3. The second extraction port was located between heating zones5and6. The third extraction port was located between heating zones9and10. Effluent laden process gas from the oven was extracted from the process chamber through the three extraction ports into the extraction trunkline. The effluent laden process gas passed along and exited the extraction trunkline through the K50 flange into the outlet connection. The effluent laden gas passed through and exited the oven through the outlet connection for subsequent processing in the effluent management system.

Extraction Test #6

The extraction configuration for Extraction Test #6 is shown inFIGS.16A,16B and16C. The extraction configuration for Extraction Test #6 included a single extraction trunkline having a length of 92 inches disposed at the far side inside the process chamber next to the product conveyor belt and running along the length of the process chamber including heating zones1-12, as shown inFIGS.16A and16B. The extraction trunkline was equipped with a blank cap disposed at a first end of the extraction trunkline and a blank cap disposed at a second opposing end of the extraction trunkline. The extraction trunkline was fluidically connected to an outlet connection protruding from the back of the process chamber in the area of heating zone7, as shown inFIG.16C. Three individual extraction tubes or branches were fluidically connected to the extraction trunkline. Each of the three extraction tubes or branches was fluidically connected to the extraction trunkline at a first end and included a blank cap at a second end. The first extraction tube or branch was disposed between heating zones2and3. The second extraction tube or branch was disposed between heating zones5and6. The third extraction tube or branch was disposed between heating zones9and10. Each of the three extraction tubes had a length of 29.5 inches as measured from the center radius of the coupling opening to the end of the tube at the cap. Thus, each extraction tube length corresponded to a percentage of the 32-inch width of the process chamber including the heating zones in a range of 80% to 100%. Each extraction tube included four slots spaced one inch apart. Each slot was 5 inches in length and 0.25 inch in width. Effluent laden process gas from the oven was extracted from the process chamber through the horizontal slots of each of the three extraction tubes or branches. The effluent laden gas passed through the extraction tubes into the extraction trunkline. The effluent laden process gas passed along and exited the extraction trunkline through the outlet connection disposed in the back side of the process chamber in heating zone7. The effluent laden gas passed through and exited the oven through the outlet connection, as shown inFIG.16DandFIG.16E, and was passed through flex lines as shown inFIG.16Ffor subsequent processing in the effluent management system.
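As a non-limiting arithmetic check, the slot geometry shared by Extraction Tests #3, #4 and #6 above can be compared against the slot cross-sectional area ranges recited earlier in the specification. Whether that recited range applies per slot or per tube is not stated, so both readings are shown:

```python
# Worked check of the slot geometry used in Extraction Tests #3, #4 and #6:
# four slots per extraction tube, each 5 inches long and 0.25 inch wide.

SLOTS_PER_TUBE = 4
SLOT_LENGTH_IN = 5.0
SLOT_WIDTH_IN = 0.25

per_slot_area = SLOT_LENGTH_IN * SLOT_WIDTH_IN   # 1.25 in^2 per slot, within
                                                 # the most preferred
                                                 # 0.5 to 1.8 in^2 range
total_area = SLOTS_PER_TUBE * per_slot_area      # 5.0 in^2 per tube, within
                                                 # the recited 0.2-7.1 in^2
print(f"per-slot area: {per_slot_area} in^2; per-tube total: {total_area} in^2")
```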
The results of Extraction Tests #1-6 are shown in Table I below in comparison with the results of the Baseline Extraction Test including extraction ports at the bottom of heating zones3and12in the Pyramax 150 oven.

TABLE I

Extraction configurations:

Baseline: bottom extraction ports at Zones 3 & 12
Test #1: top extraction ports at Zones 5, 6, 7, 10, 11 & 12
Test #2: extraction trunkline to Zone 12 with 10 extraction ports and no branches
Test #3: extraction trunkline with 4 branches at Zones 2/3, 5/6, 9/10 & 10/11
Test #4: extraction trunkline with 3 branches at Zones 2/3, 5/6 & 9/10
Test #5: extraction trunkline with 3 extraction ports at Zones 2/3, 5/6 & 9/10 on moveable rail (no branches)
Test #6: extraction trunkline with 3 branches at Zones 2/3, 5/6 & 9/10 with outlet port at Zone 7

                                    Baseline   #1      #2     #3     #4     #5     #6
No. of Plates                       300        300     300    300    300    300    400
Residence Time (hours)              5          5       5      5      5      5      6
Coupon #1 Post-Test Weight (mg)     12.4       2.1     12.9   41.2   15.9   13.2   14.8
Coupon #2 Post-Test Weight (mg)     0.0        20.1    18.8   0.0    0.0    0.0    0.0
Coupon #3 Post-Test Weight (mg)     83.2       416.4   99.8   80.0   70.7   154.9  52.0
White Smoke Density                 8          10      4      4      1      3      1
(1-10; 10 = maximum, 1 = minimum)

A comparison of the Extraction Tests shows that, even taking into consideration the greater number of 400 aluminum plates and the greater oven residence time of 6 hours used in Extraction Test #6, Extraction Test #6 resulted in relatively less effluent deposition on the three deposition coupons as compared to Extraction Tests #1-#5. Extraction Test #6 also showed relatively less white smoke emission as compared to Extraction Tests #2, #3 and #5. Only Extraction Test #4 showed a white smoke density rating of 1 similar to Extraction Test #6. In the present specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The contents of all references, pending patent applications and published patents, cited throughout this application are hereby expressly incorporated by reference as if set forth herein in their entirety, except where terminology is not consistent with the definitions herein. Although specific terms are employed, they are used as in the art unless otherwise indicated.
11859909

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

FIG.1shows a first embodiment of a device according to the invention for producing an expanded granulated material2from mineral material in the form of grains of sand with an expanding agent. In the exemplary embodiments shown, the mineral material from which the expanded granulated material2is produced is perlite sand1containing bound water as an expanding agent. The device comprises a furnace3having a substantially vertically disposed furnace shaft4, which has an upper end5and a lower end6, wherein between the two ends5,6a conveying section7extends, which leads through several heating zones8arranged separately from one another in a conveying direction10. The conveying direction10is substantially parallel to the direction of gravity and can in principle face in the direction of gravity or against the direction of gravity. In the exemplary embodiments shown, the conveying direction10faces against the direction of gravity, i.e. from the lower end6to the upper end5. The heating zones8each have at least one heating element9that can be controlled independently of one another in order to heat the perlite sand1to at least a critical temperature and to expand the perlite sand grains1. In particular, the heating elements9may be electrical heating elements9. Furthermore, at least one feeding means is provided, which is adapted to feed at least the unexpanded perlite sand1at one of the two ends5,6of the furnace shaft4in the direction of the other of the two ends6,5of the furnace shaft4into the furnace shaft4in order to expand the perlite sand1, as seen in the conveying direction10, in the last half, preferably in the last third, of the conveying section7. In the exemplary embodiments shown, the feeding of the unexpanded perlite sand1takes place at the lower end6in the direction of the upper end5, with the expanded granulated material2exiting at the upper end5. A suction nozzle31cooperating with a fan30can be provided, for example, as the feeding means for this, which nozzle31is connected upstream of the furnace shaft4and is set up to suck the unexpanded perlite sand1together with a quantity of air at the lower end6of the furnace shaft4in the direction of the upper end5into the furnace shaft4. The quantity of air thereby forms an air flow flowing from bottom to top, by means of which the perlite sand1is conveyed as a particle flow25from bottom to top along the conveying section7in order to be expanded in the upper half, preferably in the uppermost third, of the conveying section7. In an operating state of the device, caking15or agglomeration of perlite sand1, some of which may already be expanded, occurs on an inner wall13of the furnace shaft4. In the illustrated exemplary embodiments of the device according to the invention, a rotatable shaft insert11is provided in each case, which is arranged in the furnace shaft4, wherein a drive shaft28of the shaft insert11projects from the upper end5of the furnace shaft4. The shaft insert11has at least one scraper blade12, which forms at least one gap14with the inner wall13of the furnace shaft4, having a gap width18, and is set up to remove the caking15arranged in the gap14on the inner wall13in sections when the shaft insert11is rotated in the operating state of the device, if a thickness16of the caking15, cf.FIG.2, is greater than the respective gap width18. The gap width18is typically in the range of 2 mm to 5 mm.
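For a rough, non-limiting illustration of how this scraping bounds the caking thickness, the sketch below simulates caking growth that is trimmed back to the gap width on each blade pass. The growth rate and blade-pass period are assumed purely for illustration and are not taken from the disclosure.

```python
# Minimal simulation sketch: caking grows on the inner wall 13 and each pass
# of the scraper blade 12 removes only the caking proud of the gap width 18,
# so the thickness 16 stays bounded near the gap width. Growth rate and
# blade-pass period are assumed for illustration only.

GAP_WIDTH_MM = 3.0          # within the 2 mm to 5 mm range given above
GROWTH_MM_PER_S = 0.01      # assumed deposition rate
BLADE_PASS_PERIOD_S = 30    # assumed interval between blade passes

def simulate_thickness(duration_s, dt_s=1.0):
    thickness = 0.0
    steps_per_pass = int(BLADE_PASS_PERIOD_S / dt_s)
    for step in range(1, int(duration_s / dt_s) + 1):
        thickness += GROWTH_MM_PER_S * dt_s
        if step % steps_per_pass == 0:
            thickness = min(thickness, GAP_WIDTH_MM)  # blade trims the excess
    return thickness

if __name__ == "__main__":
    print(f"caking thickness after 1 h: {simulate_thickness(3600):.2f} mm")
```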
This means that in the operating state of the device, when the perlite sand1is conveyed through the furnace shaft4and expanded therein, the gap14is covered with caking15within a short time. This caking15is then continuously removed by means of the at least one scraper blade12as the shaft insert11rotates. As a result of the removal, the thickness16of the caking15is limited and remains approximately constant, more precisely in a certain range around the gap width18. This approximately constant, approximately uniform thickness16of the caking15guarantees an approximately constant radiation intensity which can be introduced into the furnace shaft4—through the caking15—by means of the heating elements9. The resulting approximately or substantially constant energy input into the furnace shaft4in turn provides for a uniform expansion and ensures (substantially throughout the operation of the device) a substantially constant expansion result. The fact that the perlite sand grains1in the furnace shaft4move as a particle flow25along the conveying section7largely in well-defined movement ranges29between the at least one scraper blade12, the remaining shaft insert11and the caking15also contributes to the uniform expanding or constant and defined expansion result. Accordingly, the residence time of the perlite sand grains1in the furnace shaft4—and thus the expansion process or the expansion result—can be determined or controlled quite precisely. In the illustrated exemplary embodiments, the shaft insert11is rotatable in a direction of rotation26— and, optionally, also against the direction of rotation26—about an axis of rotation20that extends parallel to and coincides with a longitudinal axis21of the furnace shaft4and with a radial center17of the furnace shaft4. On the one hand, the drive shaft28serves to rotatably support the shaft insert11in the region of the upper end5of the furnace shaft4. In the region of the lower end6of the furnace shaft4, the shaft insert11is floatingly mounted, for example by means of a centering pin (not shown), which extends along the axis of rotation20and is movably supported parallel thereto. On the other hand, drive means (not shown) may engage the drive shaft28to rotate the shaft insert11. In the illustrated exemplary embodiments, the drive means are arranged to rotate the shaft insert11at a variable rotational speed, with the rotational speed preferably being in the range of 0.125 rpm to 3 rpm, particularly preferably in the range of 0.5 rpm to 2 rpm. In the exemplary embodiments shown, the shaft insert11has a substantially rotationally cylindrical base body22from which the at least one scraper blade12projects with a directional portion parallel to a radial direction24, wherein the radial direction24lies in a plane normal to the axis of rotation20of the shaft insert11and, starting from the axis of rotation20, faces away therefrom. As viewed in the conveying direction10, a respective tapering shaft insert section23is arranged upstream and downstream of the base body22and is flush with the base body22. The taper is in each case along the axis of rotation20and faces away from the base body22. The drive shaft28is connected to the rear shaft insert section23as seen in the conveying direction10. The base body22as well as the shaft insert sections23are substantially closed in shape, so that the perlite sand grains1cannot enter the shaft insert11. 
Accordingly, the particle flow25can move practically exclusively in the movement ranges29along the conveying section7, wherein the particle flow25is guided into and out of the movement ranges29in a flow-promoting manner by the tapering shape of the shaft insert sections23. Preferably, the shaft insert11is hollow with an interior, wherein smaller pressure relief openings can be provided on the base body22and/or on the shaft insert sections23to allow air or gas, which is located in the interior of the shaft insert11and expands (or contracts) due to temperature, to pass out of (or into) the interior of the shaft insert11and thus effect pressure equalization. In the exemplary embodiments shown, the shaft insert11is made of the same material as a limiting element27forming the inner wall13, namely high-temperature steel. This ensures that the shaft insert11, just like the inner wall13, can easily withstand the temperatures that can occur in the operating state of the device in the furnace shaft4during expansion. Furthermore, the same choice of material also results in the same coefficients of thermal expansion, thus avoiding distortion due to different thermal expansion and ensuring a consistent shape or size of the gap14. By forming the inner wall13by the limiting element27, the geometry of the inner wall13or the (clear) cross-section of the furnace shaft4normal to the longitudinal axis21can be shaped in a well-defined manner, wherein said cross-section is substantially circular in the illustrated exemplary embodiments. In the first exemplary embodiment shown inFIG.1, four scraper blades12are provided which project uniformly from the base body22in a radial direction24and are arranged one behind the other as viewed in a circumferential direction19around the radial center17, with an angular spacing between the scraper blades12being substantially constant. The scraper blades12thereby extend substantially rectilinearly and parallel to the conveying direction10. Accordingly, there are four movement ranges29arranged symmetrically around the radial center17and extending rectilinearly parallel to the conveying direction10. The gap width18is correspondingly essentially constant as viewed in the conveying direction10. The rotation of the shaft insert11creates an annular gap between the scraper blades12and the inner wall13bounding the circular clear cross-section of the furnace shaft4, with the gap width18being essentially constant when viewed in the circumferential direction19. Accordingly, from a purely geometric point of view, without taking into account any turbulence that may occur in practice, perlite sand grains1in the particle flow25can move along straight lines parallel to the conveying direction10through the movement ranges29. InFIG.2, said symmetrical arrangement of the movement ranges29can be seen, wherein for reasons of clarity, the particle flow25is only indicated in two movement ranges29. The second exemplary embodiment, which is shown inFIG.3, differs from the first exemplary embodiment only in the design of the scraper blades12. Unless explicitly stated otherwise, what has been said above about the first exemplary embodiment therefore also applies analogously to the second exemplary embodiment and will therefore not be repeated here. As can be seen fromFIG.3, in the second exemplary embodiment, two scraper blades12are provided which project uniformly from the base body22in the radial direction24and each extend spirally about the axis of rotation20of the shaft insert11.
The two spiral or helical shapes of the scraper blades12are thereby nested within one another. Accordingly, the resulting two movement ranges29also extend in a spiral or helical shape around the axis of rotation20. Consequently, when the perlite sand grains1move through the movement ranges29in the particle flow25, they must also follow the respective spiral or helical shape, which—from a purely geometrical point of view—results in a significantly longer path for the perlite sand grains1through the furnace shaft4compared to the first exemplary embodiment. The thermal treatment of the perlite sand grains1in the furnace shaft4can therefore be comparatively longer and thus even more precise in order to further optimize the expansion result. It should also be noted that in the second exemplary embodiment shown, the gap width18is also essentially constant when viewed in the conveying direction10. Likewise, the rotation of the shaft insert11creates an annular gap between the scraper blades12and the inner wall13delimiting the circular clear cross-section of the furnace shaft4, with the gap width18being substantially constant when viewed in the circumferential direction19. Finally, it should be noted that in the second exemplary embodiment, the choice of the direction of rotation26offers a further possibility to influence the residence time of the perlite sand grains1in the furnace shaft4and thus the expansion result. If, in contrast to what is shown inFIG.3, the direction of rotation26in interaction with the specific screw shape of the scraper blades12is such that the direction of movement of a corresponding screw along the axis of rotation20is opposite to the conveying direction10, this can effectively reduce the path of the perlite sand grains1again. In fact, in purely theoretical terms, it would then be conceivable that with the “correct” rotational speed and the correct flow velocity, a linear movement of the perlite sand grains1in the particle flow25parallel to the conveying direction10could practically result. Conversely, in the second exemplary embodiment, the direction of rotation26shown inFIG.3leads to an extension of the residence time, since a spiral or helical particle flow25is forced. A further aspect concerning the direction of rotation26is that, given the specific helical geometry of the scraper blades12, this determines whether the caking15is primarily scraped off on a top side of the blade or on a bottom side of the blade, wherein inFIG.3the bottom side of the blade is in front of the top side of the blade as viewed in the conveying direction10. Preferably, as shown inFIG.3, the direction of rotation26is selected in such a way that the underside of the blade assumes the stripping function, since the caking15then does not remain on the respective stripping blade12due to the force of gravity, but is transferred to the air flow and is discharged together with the expanded granulated material2.
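For a non-limiting geometric illustration of this residence-path lengthening, a particle forced along a helical movement range travels the hypotenuse of the unrolled helix rather than the straight shaft height. All dimensions in the sketch below are assumed purely for illustration.

```python
import math

# Worked geometric sketch of the helical path lengthening described above:
# a particle guided through a helical movement range of radius r completing
# n turns while rising a height h travels sqrt((2*pi*r*n)**2 + h**2)
# instead of h. All dimensions are assumed for illustration.

def helical_path_length_m(radius_m, height_m, turns):
    return math.hypot(2.0 * math.pi * radius_m * turns, height_m)

if __name__ == "__main__":
    r, h, n = 0.15, 2.0, 3    # assumed shaft radius, height and blade turns
    path = helical_path_length_m(r, h, n)
    print(f"straight path: {h:.2f} m, helical path: {path:.2f} m "
          f"({path / h:.1f}x longer residence path)")
```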
LIST OF REFERENCE SIGNS

1 Perlite sand
2 Expanded granulated material
3 Furnace
4 Furnace shaft
5 Upper end of the furnace shaft
6 Lower end of the furnace shaft
7 Conveying section
8 Heating zone
9 Heating element
10 Conveying direction
11 Shaft insert
12 Scraper blade
13 Inner wall of the furnace shaft
14 Gap
15 Caking
16 Caking thickness
17 Radial center of the furnace shaft
18 Gap width
19 Circumferential direction
20 Axis of rotation
21 Longitudinal axis of the furnace shaft
22 Base body
23 Tapering shaft insert section
24 Radial direction
25 Particle flow
26 Direction of rotation
27 Limiting element
28 Drive shaft
29 Movement range
30 Fan
31 Suction nozzle
11859910

Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

FIG.1shows a heat exchanger20providing heat exchange between a first flowpath900(FIG.1A) and a second flowpath902and thus between their respective first and second fluid flows910and912. In the example embodiment, the flowpaths900,902are gas flowpaths passing respective gas flows910,912. In the illustrated example, the first flow910enters and exits the heat exchanger20as a single piped flow and the flow912is an axial annular flow surrounding a central longitudinal axis500(FIG.1) of the heat exchanger.FIG.1also shows an axial direction502as a generally downstream direction along the second flowpath902. In a coaxial duct within a gas turbine engine, the axis500may be coincident with a centerline of the engine and an axis of rotation of its spools, the direction502is an aftward/rearward direction, and a radial direction is shown as504. The heat exchanger20has a first flow inlet22(FIG.1A) and a first flow outlet24along the first flowpath900. The example inlet and outlet are, respectively, ports of an inlet manifold26and an outlet manifold28. Example manifolds are metallic (e.g., nickel-based superalloy). The inlet manifold and outlet manifold may each have a respective fitting30A,30B providing the associated port22,24. Each manifold26,28further has a body piece32A,32B extending circumferentially about the axis500from the associated fitting30A,30B and port22,24. The example manifolds have continuously curving arcuate form. The example heat exchanger20is a continuous full annulus heat exchanger. An alternative but otherwise similar heat exchanger may be circumferentially segmented into a plurality of segments (e.g., four segments or three to eight segments). Each segment may, itself, be identified as a heat exchanger. Depending upon the situation, the segments may be plumbed to have respective first flow910segments in parallel, in series, or two totally different first flows. A plurality of heat transfer tubes40each extend between a first end42(FIG.2C) and a second end44at ends of respective legs46and48. Portions of the tubes at the ends42and44are received in the manifold stack36. The tubes are bent to be generally U-shaped in planform having respective distal turns49(FIG.2B). The example turns are about 180°. Interiors of the tubes fall along the associated branch of the first flowpath900to pass an associated portion of the first flow910. Central exposed exterior surfaces of the tubes are along the second flowpath902in heat exchange relation with the second flow912. In the example implementation, the manifold assembly34is a combined inlet, outlet, and transfer manifold including the overall heat exchanger inlet and outlet manifolds26,28. The transfer manifold function involves transferring from one stage of tubes to the next. In theFIG.1Aexample, there are three stages of tubes. The transfer manifolds (and their associated plenums) thus lack immediate external communication.FIG.1Ashows the inlet manifold26having an inlet plenum46A feeding the first legs46of the first stage40A of tubes. A transfer plenum46B transfers output of the first stage40A second legs48to a transfer plenum46C which feeds the first legs of the tubes of the second tube stage40B.
A transfer plenum46D receives the output of the second stage40B second legs and passes it to a transfer plenum46E which feeds the third stage40C first legs. A discharge plenum46F of the discharge manifold28receives the output of the third stage40C second legs48for discharge from the outlet24. As is discussed further below, the stack36of plates (FIG.1A) extends between a first axial end50and a second axial end52. Each of the plates has a pair of opposite faces (axially-facing or radially/circumferentially extending), an inner diameter (ID) surface, and an outer diameter (OD) surface. In the example embodiment, the plates are stacked with the aft (downstream along the example second flowpath902) face of one plate contacting and secured to the forward/upstream face of the next. From upstream-to-downstream along the second flowpath902or fore-to-aft in the axial direction502, the end sections or portions of groups of the tube legs are mounted in pockets58(FIG.1A) formed between the mating plates. In the example, the tubes of each stage are circumferentially in phase with the tubes of the other stages. However, other configurations may have the tubes of each stage staggered relative to the adjacent stage(s) to provide out-of-phase registry with the tubes of the adjacent stage fore or aft (e.g., each tube of a given stage is circumferentially directly between two adjacent tubes of each of the two adjacent stages). In the example, the bent tubes have a first face54(FIG.3) and a second face56(e.g., facing toward and away from a viewer when viewing the tube as a “U”) with a centerplane510parallel to and between the faces. The example plane is at an angle θ relative to a transverse (circumferential in the annular embodiment) direction506. Example θ (when not 0° or 180°) is about 45°, more broadly 30° to 60°.FIG.1Ashows pockets58at the plate junctions accommodating the tube leg end sections. With example circular-section tubing, the pockets58are essentially right circular cylindrical pockets split evenly between the two plates and provided by respective semi-cylindrical grooves90(FIG.4) in the two faces. The grooves (or pocket segments/sections)90have surfaces91and extend between the associated OD surface100of a plate on the one hand and a plenum46A-F discussed above on the other hand. With the example arcuate manifold configuration, the tubes in each group circumferentially diverge from each other in the radial direction from the manifold assembly34. Despite this radial fanning arrangement, each group may be identified as a “row” as is common practice with tube-bank heat exchangers. Depending upon implementation, the two legs of each tube may be parallel to each other with the divergence occurring only between adjacent legs of different tubes or the legs of a given tube may slightly diverge. The former (legs of a given tube parallel to each other) may make assembly easier. The plates of the stack36(FIG.1A) include a first end plate60, a second end plate62, and one or more intermediate plates. Depending on implementation, the intermediate plates may be the same as each other or different from each other. In the illustrated example, the intermediate plates are: first and second penultimate plates64and66respectively adjacent the first and second end plates60and62; and alternating first intermediate plate(s)68and second intermediate plate(s)70. For full annular heat exchangers there may be a thousand or more tubes per row.
Even for a smaller segment of a circumferentially segmented heat exchanger, there may be hundreds of tubes per row or more in the segment. There may be at least an example twenty in a segment (whether stand-alone or assembled with other segments such as the sectors discussed above), or a range of twenty to one thousand or twenty to two hundred. An example number of rows is at least two or at least three. Upper limits may be influenced by diminishing return on heat transfer and by increasing fluidic losses along both flowpaths. Thus, an example upper limit on rows is ten with a likely sweet spot of three to six rows. The manifold34has an outer diameter (OD) surface from which the tubes protrude. This outer diameter surface is formed by the combined outer diameter (OD) surfaces100(FIG.4) of the plates (inner diameter (ID) surfaces shown as102). In the example, the end plates60and62have flat first end faces forming the axial ends50and52and second (axially inboard/interior) faces including the groove90surfaces91.FIG.4shows such a second face of the end plates60,62having the grooves90in an outer diameter section110(between intact portions111of a planar face) and having a relieved inner diameter section112. The inner diameter section cooperates (with a similar section of the axially outboard face of the mating penultimate plate64,66) to form an annular portion of the associated plenum46A,46F in combination with an adjacent annular outwardly-open channel in the inlet or outlet manifold body piece32A,32B. The adjacent face of the respective penultimate plate64,66is configured similarly to, and represented by, theFIG.4illustration. The opposite face (axially inboard) of each penultimate plate64,66has a similar outboard portion114(FIG.5) but an intact inboard portion116leaving an intermediate portion forming an annular channel118radially therebetween for forming one half of the associated transfer plenum46B,46C,46D,46E.FIG.6shows the intact nature of the section116having a face117coplanar with intact portions of the outboard portion between the grooves90. The first intermediate plates68have two faces (FIG.7) of similar geometry to the penultimate plate64,66inboard faces ofFIG.5but having an intermediate channel-forming section130differing from section118by having a plurality of through-holes132. Through-holes132provide transfer from one stage to the next. The second intermediate plate(s)70have both faces similar to theFIG.5face and lacking such through-holes132. Aerodynamic forces from the second flow912as well as other vibrations may cause deleterious resonant behavior in the tubes. Accordingly, it is desirable to support the tubes at one or more radial locations outboard of the manifold34.FIG.1Ashows an outer diameter support assembly200and an intermediate support assembly202. The number of support assemblies may depend upon numerous factors including the radial span of the tubes. Each support assembly200,202includes one or more fiber members engaging the tubes. The fiber members help maintain spacings of the tubes, preventing/damping potentially damaging vibrations while accommodating differential expansion of the tubes (e.g., the tubes can radially slide relative to the fiber members). In the illustrated example, each support comprises pairs of fiber members interwoven with the tube legs. Example fiber members are in strip form such as woven straps or tapes. Example fiber material is glass fiber. Alternative forms include woven twisted threads and non-woven batts (batting).
In this example, the OD support200has pairs of fiber straps engaging groups of tube legs out of phase with each other. For example, with the outer support, there are pairs of associated proximal (closer to the manifold) fiber straps204(FIG.3) and distal fiber straps206.FIG.2Bshows the fiber straps as having proximal edges208and distal edges210(FIG.2B). In the example, the proximal fiber strap204distal edge abuts the distal fiber strap206proximal edge.FIG.3also shows a woven fiber strap212at one end of the tube array. This may have a height equivalent to the combined heights of the other straps on a given stage. An example of a woven sheet material that may be cut into strips/straps204,206,212is “Fiberglass Fabric Gasket Sheet” of USA Sealing, Buffalo, NY. Thread instead of such straps may be particularly relevant to finer tubes with smaller spacings. An example twisted fiberglass thread is “Chemical-Resistant High-Temperature Thread” of McMaster-Carr, Aurora, OH. Such thread is typically sold PTFE coated; the PTFE may ease interweaving of the thread with the tubes but may be baked off pre-use. If such thread is directly used, it is particularly likely that there are many more than two spanwise stacked threads. Alternative fiber material includes carbon fiber and ceramic fiber. Glass has advantages of durability over ceramic and temperature capability over carbon fiber. In the example, the intermediate support202(FIG.2A) has similar proximal straps230and distal straps232. The OD support further includes metallic end plates224,226and the intermediate support includes similar end plates244,246. Each support may be axially held together by fasteners250such as bolt252and nut254combinations extending through axial holes in the stack. The fiber members are shown particularly schematically because actual configurations may have the fiber members locally compressed to conform where contacting tubes, plates, and other fiber members. Additionally, axial spacers (not shown) such as axial metallic struts may, under compression, bridge the end plates to maintain their axial/longitudinal spacing and prevent overcompression by the fasteners. Or the fasteners themselves may maintain the spacing such as via intermediate nuts. Or struts alone may be fastened to the end plates by fasteners, welding, or the like. Additional overwrapping or other means for further radially containing/constraining the fiber members may be provided.FIG.3further shows radial straps or struts260,262(e.g., metallic) securing supports224,226,244,246to each other and to the manifold34to maintain spacing. The example radial straps260,262may be secured via the fasteners250to the associated supports and may be similarly secured to the manifold or may be brazed, welded, or diffusion bonded to the manifold. In alternative embodiments, the straps only connect the supports to each other and not to the manifold and/or may connect to radially outboard environmental structure (e.g., an outer duct wall). Depending upon implementation, the fiber straps204,206,212may be retained against becoming dislodged such as via additional through-fasteners (not shown) similar to the fasteners250. For example, at a given transverse position (circumferential in the annular embodiment) there may be two radially-spaced through-fasteners associated with the respective two radial positions of fiber straps204,206. In variations, there may be different interweavings of the fiber straps relative to the tubes. This may depend upon tube orientation.
In further variations, there simply could be fiber layers extending transversely (circumferentially for the annular heat exchanger) between stages and/or between legs of a given stage. For example, with theFIG.3angled configuration, the alternative could include alternating sets of layers between tube stages and layers between the legs of a given tube stage. Similarly, other variations could involve axial interweaving and/or diagonal interweaving. Although separate weaves and layers are shown, there may be a continuous weave progressing from one stage to another or otherwise from one group of tubes or tube legs to another. For example, a weave might proceed axially through an axial group of tube legs from one axial end of the heat exchanger to the other, then turn and come back along the next axial group, etc. In use, differential thermal expansion may cause relative sliding of the tubes and fiber members. The particular direction of motion may depend on several factors including the temperatures of the fluid flows. An example temperature domain may involve peak temperature in the range of 500° C. to 600° C. for placement in a combustion gas flow. When used as an intercooler, temperatures may be about 150° C. Example tube outer diameters are 1.0 mm to 3.0 mm, more broadly 1.0 mm to 10.0 mm. Example tube radial protrusions (radial span between manifold OD and turn OD) are at least 10 cm, more particularly 10 cm to 50 cm. Component materials, manufacture techniques, and assembly techniques may otherwise be conventional. The tubes may be formed by extrusion or sheet metal rolling techniques, cut to length, and bent to shape. The manifold components may be machined from ingot stock or may be forged and machined or may be cast and machined. Fiber straps may be woven via conventional techniques. Metal straps and supports may be cut and bent from sheet or strip stock. Depending upon implementation, assembly of the fiber members to the tubes may be performed before, during, or after assembly of the tubes to the manifold or manifold components. In an example of pre-assembly, groups of tubes may be held in a fixture and the fiber strap(s) may be pre-formed into a wave and slid past the ends of the tube legs. Materials may be compatible with operational conditions. Example tube, manifold, support224,226, and strap260,262materials are stainless steels and nickel-based superalloys. Elements shown as individual pieces may be formed as multiple pieces and then integrated (e.g., via casting annular manifold segments and integrating into a full annulus such as by welding, diffusion bonding, and the like). The heat exchanger may be assembled in layers starting with a plate at one end of the manifold stack and the associated fiber member(s) at the end(s) of their stack(s). The tubes may be put in place and subsequent layers built up. Depending upon implementation, the tubes may be placed flat atop exposed faces of the plates and fiber member(s) or may need to be inserted radially inward. On stack completion, the bolts252or other fasteners (if present) may be inserted through pre-drilled holes (or the fasteners may have been preinstalled and used to help align subsequent blocks during stacking). Nuts254may be attached and tightened. The plates of the manifold may be secured to each other such as via brazing, diffusion bonding, or welding, or may be secured such as by using fasteners as with the bolts. In some embodiments/uses, the first flow910may be a pumped liquid and may remain a pumped liquid.
In alternative embodiments/uses, the first flow may be a gas or may start out as a liquid and may be fully or partially vaporized. An example specific use situation is in a recuperator or waste heat recovery wherein the first flow910is of the recuperator working fluid (e.g., carbon dioxide). The heat exchanger20may be used as a heat absorption heat exchanger in the hot section of the engine (e.g., absorbing heat from combustion gases (as the second flow912) in an exhaust duct downstream of the turbine). Alternatively, the heat exchanger may be used as a heat rejection heat exchanger (e.g., rejecting heat to air (as the second flow912) in a fan duct or other bypass). FIG.8schematically illustrates a gas turbine engine800, including the heat exchanger20in a waste heat recovery system (recuperator)801. The example engine is an aircraft propulsion engine, namely a turbofan. The engine has a fan section805, one or more compressor sections810, a combustor section820and one or more turbine sections830, sequentially along a primary fluid flowpath (core flowpath). The fan also drives air along an outboard bypass flowpath. The example engine is a two-spool engine with the low spool directly or indirectly (e.g., via reduction gearbox) driving the fan. Example combustors are annular combustors and can-type combustor arrays. A downstream section of the core flowpath provides the second flowpath902. Downstream of the turbine section830is an exhaust casing840which exhausts combustion gas (as the fluid flow912) into an ambient atmosphere downstream of the turbine. In order to recapture the waste heat from the combustion gas flow912and convert the waste heat to work, the heat exchanger20is positioned within the exhaust casing840. The first flowpath900is a leg of a supercritical CO2(sCO2) bottoming Brayton cycle (referred to herein as the waste heat recovery system801). The heat exchanger20is connected to transfer heat from the turbine exhaust to the waste heat recovery system801, and the waste heat recovery system801converts the heat into rotational work (which may be used for various purposes such as driving an electrical generator (not shown) to power aircraft systems). The waste heat recovery system801may additionally recuperate waste heat within the recovery system801and is referred to as a recuperating bottoming cycle. The waste heat recovery system801has a turbine870with an inlet872connected to an output of the heat exchanger20. The turbine870expands the heated working fluid (CO2or other cryogenic fluid910) and expels the heated working fluid through a turbine outlet874. The expelled working fluid is passed through a relatively hot passage of a recuperating heat exchanger880, and is passed to a relatively hot passage of a heat rejection heat exchanger882. The heat exchanger882may be positioned to reject thermal energy from the working fluid to ambient air (e.g., fan bypass air). After passing through the heat rejection heat exchanger882, the working fluid is passed to an inlet892of a compressor890. The compressor890(driven by the turbine870(e.g., co-spooled)) compresses the working fluid, and passes the compressed working fluid from a compressor outlet894to a cold passage of the recuperating heat exchanger880. 
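The component chain just described forms a closed loop whose energy flows can be checked with simple first-law bookkeeping. The following sketch (Python) uses a constant specific heat, unit mass flow, and assumed state temperatures; these values are illustrative only, and a real sCO2 cycle analysis would require real-gas properties.

    # Rough sketch: first-law bookkeeping for the recuperated sCO2
    # bottoming cycle described above. All temperatures (K) and the
    # constant specific heat are assumed for illustration.
    cp = 1.2  # kJ/(kg*K), assumed constant specific heat

    T_comp_in, T_comp_out = 320.0, 400.0   # around compressor 890
    T_recup_cold_out = 550.0               # after recuperator cold passage
    T_turbine_in = 800.0                   # after exhaust heat exchanger 20
    T_turbine_out = 650.0                  # into recuperator hot passage
    T_reject_in = 500.0                    # into heat rejection exchanger 882

    w_turbine = cp * (T_turbine_in - T_turbine_out)     # specific turbine work
    w_compressor = cp * (T_comp_out - T_comp_in)        # specific compressor work
    q_exhaust = cp * (T_turbine_in - T_recup_cold_out)  # heat from turbine exhaust
    q_recup = cp * (T_turbine_out - T_reject_in)        # heat moved internally
    q_reject = cp * (T_reject_in - T_comp_in)           # heat rejected (e.g., fan air)

    w_net = w_turbine - w_compressor  # available at output shaft / generator
    print(f"net specific work ~ {w_net:.0f} kJ/kg")                          # 84
    print(f"overall balance: {q_exhaust - q_reject - w_net:.1f} kJ/kg")      # 0.0
    print(f"recuperator balance: {cp*(T_recup_cold_out - T_comp_out) - q_recup:.1f} kJ/kg")  # 0.0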
During operation of the waste heat recovery system801, the compressor890compresses the working fluid, and passes the compressed working fluid through the recuperating heat exchanger880and the heat exchanger20, causing the compressed working fluid to be heated in each of the heat exchangers20,880. The heated working fluid is provided to the inlet872of the turbine870and expanded through the turbine870, driving the turbine870to rotate. The rotation of the turbine870drives rotation of the compressor890and of an output shaft802. The output shaft802may be mechanically connected to one or more additional turbine engine systems and provides work to those systems using any conventional means for transmitting rotational work. Additionally or alternatively, the rotational work can be converted into electricity and used to power one or more engine or aircraft systems using a conventional electrical generator system coupled to the output shaft. Numerous variations may be implemented. For example, whereas theFIGS.1-3heat exchanger is full annulus, the heat exchanger and/or various of its components may be circumferentially segmented. At a minimum, the fiber member(s) may be formed as segments of an annulus with each fiber member(s) stage being assembled as a circumferential array of segments. The segments of each sequential stage may be out of phase with each other to improve structural rigidity. The circumferentially segmented fiber member(s) stage may be held in annular form by the front and rear plates and/or by mounting to environmental structure. Although shown with transversely-extending fiber member(s) (circumferential in the annular example), other fiber member(s) orientations may be provided, including axially-extending. Although eachFIG.1-3tube is oriented diagonally (e.g., the legs are axially and circumferentially offset from each other), other configurations may involve tubes wherein the legs are not axially offset from each other or not circumferentially offset from each other. The latter example may be particularly amenable to the aforementioned axially-extending alternative block configurations. Although the example fiber member(s) capture portions of the legs leaving the turn protruding out from the associated fiber member(s), alternative examples may involve embedding the turn in the associated fiber member(s). Although discussed in the context of an annular heat exchanger, other configurations are possible. For example, in a rectangular duct a bank of tubes may extend parallel from a straight/flat manifold. Depending upon implementation, there may be two opposite banks extending in opposite directions such as from opposite faces of a single central manifold. As an example of several such variations,FIGS.9and10show a heat exchanger600that is not full annulus but extends between ends602,604transverse to the flowpath902. Additionally, rather than being an arcuate segment (e.g., in a situation where multiple segments are assembled end-to-end to form an annulus), the heat exchanger is straight (e.g., rectangular in footprint looking up or down the flowpath902). Additionally, the legs of a given tube are oriented similarly transverse to the second flowpath902rather than at an angle (θ is 0° vs the ~45° ofFIG.3).FIGS.9and10show a downstream direction512of the flowpath902, a direction514outward from the manifold, and a transverse direction516normal thereto. The tube support610(FIG.9) comprises fiber members passing between legs of the tube sections.
For each stage of tubes, the fiber members include a proximal fiber member620(closer to the manifold640) and a distal fiber member622. In the example, for each stage of tubes, the fiber members620and622are exactly out-of-phase with each other. Thus, when one fiber member620,622passes in front of a tube leg, the other passes behind that tube leg. From stage-to-stage, the fiber members620and622may be in-phase with the corresponding members of the adjacent stages.FIG.10shows how this results in nesting of the fiber members of adjacent rows. To better illustrate the interweaving of the members and tube legs,FIG.9thus is not a true sectional view where the fiber members of one or two adjacent stages would be cut. Rather, it is a cutaway formed by entirely removing the fiber members of the tube stages that are above the view plane. Example fiber members620,622are woven fiber straps. Alternative fiber members may be as discussed above. Optionally, at ends of the stack, fiber members such as straps or batts (batting)630(FIG.10) may intervene between the end plate(s)612,614and the adjacent members620,622. The end plates may sandwich the tubes and fiber members and be secured via fasteners as discussed above. In the example, the manifold640(FIG.9) has, at each fore-to-aft stage location, a pair of plenums642,644separated from each other by a wall646. Tube end portions may be of different lengths so that one end portion of each tube is in communication with the first plenum642and another in communication with the second plenum644. As a further variation, intermediate portions of the tube legs are shown flattened transversely to the flowpath902to improve rigidity and aerodynamic stability and increase thermal exposure while limiting restriction of the flow912. The flattening elongates such intermediate sections of the tubes in the direction of the flowpath902. If an intermediate support is present, an adjacent portion of the tube may be undeformed and of circular cross-section. Or, a different weave may be used to accommodate the flattened cross-section. FIGS.9and10show supports, straps, and fasteners similar to those inFIGS.1-3. Not shown are the overall inlet and outlet ports and plenum-to-plenum transfer apertures similar to those of theFIGS.1-3embodiment. Also, regarding use variations, some variations may have a fuel as the first fluid flow910. Although the heat exchanger may transfer heat to a conventional liquid fuel (e.g., kerosene-type jet fuels such as Jet A, Jet A-1, JP-5, and JP-8), the heat exchanger may be used for future fuels such as liquid hydrogen, potentially vaporizing that fuel. The use of “first”, “second”, and the like in the following claims is for differentiation within the claim only and does not necessarily indicate relative or absolute importance or temporal order. Similarly, the identification in a claim of one element as “first” (or the like) does not preclude such “first” element from identifying an element that is referred to as “second” (or the like) in another claim or in the description. Where a measure is given in English units followed by a parenthetical containing SI or other units, the parenthetical's units are a conversion and should not imply a degree of precision not found in the English units. One or more embodiments have been described. Nevertheless, it will be understood that various modifications may be made. For example, when applied to an existing baseline configuration, details of such baseline may influence details of particular implementations.
Accordingly, other embodiments are within the scope of the following claims.
11859911 | To illustrate different views of the embodiments, three orthogonal directions Sx, Sy, and Sz are indicated in the figures. In use, the direction Sz is substantially vertical and upwards. In this way, the direction Sz is substantially reverse to gravity. DETAILED DESCRIPTION OF VARIOUS EMBODIMENTS FIG.1a shows a circulating fluidized bed boiler1in a side view. The circulating fluidized bed boiler1comprises a furnace50, a cyclone40, which is a means40for separating bed material from flue gas, and a loopseal5. The loopseal5is configured to receive bed material from the cyclone40. InFIG.1a, a flue gas channel is indicated by the reference number20. Flue gas is expelled from the furnace50via the flue gas channel20.FIG.1bshows a bubbling fluidized bed boiler1in a side view. The bubbling fluidized bed boiler1comprises a furnace50and a flue gas channel20. Typically, the fluidized bed boiler1(bubbling or circulating) comprises flue gas heat exchangers26,28within the flue gas channel20. The flue gas heat exchangers26,28are configured to recover heat from flue gases. Some of the flue gas heat exchangers may be superheaters26configured to superheat steam by recovering heat from flue gas. Some of the heat exchangers may be economizers28configured to heat and/or boil water by recovering heat from flue gas. In a circulating fluidized bed boiler (FIG.1a), bed material is conveyed from an upper part of the furnace50to the cyclone40in order to separate the bed material from gases. From the cyclone40, the bed material falls through a channel60to a loopseal5. In the loopseal5, a layer of bed material is formed. The bed material is returned from the loopseal5to the furnace50via a pipeline15configured to convey bed material from the loopseal5to the furnace50. In the loopseal5, the walls51of the loopseal5limit a volume V into which a fluidized bed of the circulating bed material is arranged. In a bubbling fluidized bed boiler (FIG.1b), the bed material is fluidized in the furnace50. Thus, the walls51of the furnace50limit a volume V into which a fluidized bed of the bed material is arranged. In general, a fluidized bed boiler1comprises piping for heat transfer medium. In use, the heat transfer medium circulates in the piping and becomes heated by heat exchangers, in particular the flue gas heat exchangers26,28and the fluidized bed heat exchanger10. The piping forms a circulation for heat transfer medium. In the circulation, the same heat transfer medium may flow in between the flue gas heat exchangers26,28and the fluidized bed heat exchanger10. Typically the circulation is formed such that the heat transfer medium is first heated in the economizers28and thereafter in the superheaters26. Moreover, after the superheaters26, the heat transfer medium is heated in the fluidized bed heat exchanger10. Thereafter, the heat transfer medium, e.g. superheated steam, is typically conveyed to a steam turbine. The present invention relates in particular to a structure of a coaxial heat transfer tube and a method for manufacturing such a coaxial tube. In use, the coaxial heat transfer tube may be arranged as a part of a heat exchanger. In a preferable use, the heat exchanger is arranged in a fluidized bed, such as in the loopseal5of a circulating fluidized bed boiler or in the furnace of a bubbling fluidized bed boiler. In general, a heat exchanger comprises a number of tubes, in which a first heat transfer medium, such as water and/or steam, is configured to flow.
Outside the tubes, second heat transfer medium, such as bed material, is configured to flow, whereby heat is transferred from the second heat transfer medium to the first heat transfer medium through a wall of the tube. The heat exchanger10, which, when installed in a fluidized bed, forms a fluidized bed heat exchanger10, can be manufactured as a part of a boiler or as a spare part for the boiler. In addition, the coaxial heat transfer tube100can be manufactured as a spare part for the heat exchanger10or as a part of a heat exchanger10. A fluidized bed heat exchanger10can be manufactured by assembling several coaxial heat transfer tubes100together. Thus, an embodiment concerns a coaxial heat transfer tube100. In addition, an embodiment concerns a heat exchanger10. In addition, an embodiment concerns a fluidized bed boiler1. In this description, the following terms are used: A heat transfer tube refers to a tube. The heat transfer tube may be made from only one substantially homogeneous material, e.g. metal, such as steel. When considered feasible, a heat transfer tube may be referred to as a “plain heat transfer tube” to distinguish it from a “coaxial heat transfer tube”. A plain heat transfer tube may consist of some metal, since metals in general conduct heat well. A coaxial heat transfer tube refers to an arrangement of tubes, in which an outer heat transfer tube radially surrounds an inner heat transfer tube. A coaxial heat transfer tube is an arrangement of tubes (typically only two tubes), which are mutually coaxial (seeFIGS.2a,3a, and3b). A straight part refers to such a part of a heat transfer tube (plain tube or coaxial tube) that has been obtained from a tube manufacturer and has not been bent. Commonly, tube manufacturers supply straight rigid tubes. In terms of a radius of curvature, a radius of curvature rs(seeFIG.2b) of a central line of the straight part is at least 1 meter (1 m). A radius of curvature rsof a straight part may be infinite or substantially infinite. A curved part refers to such a part of a heat transfer tube (plain tube or coaxial tube) that has been bent. In terms of a radius of curvature, a radius of curvature rc(seeFIG.2bor2c) of a central line of the curved part is less than 1 meter (1 m). Preferably, a radius of curvature rcof a curved part is at least three times a diameter of the heat transfer tube (plain or coaxial). In particular, radii of curvature of the plain tubes of a bent coaxial tube are the same, since the central line of the curved part defines the radius of curvature. FIG.2ashows a coaxial heat transfer tube100, i.e. a first coaxial heat transfer tube100, according to an embodiment of the invention in a side view. As indicated inFIG.2a, the first coaxial heat transfer tube100comprises a first primary straight part101, a first primary curved part102, a first secondary straight part103, a first secondary curved part104, a first tertiary straight part105, and also a further (i.e. tertiary) curved part106and a further (i.e. quaternary) straight part107. Such a tube100may form a part of a heat exchanger10. InFIG.2a, the direction of flow of fluid transfer medium within the tube in the first primary straight part101is reverse to the direction of flow of fluid transfer medium within the tube in the first secondary straight part103. This is also reverse to the direction of flow of fluid transfer medium within the tube in the first tertiary straight part105.
Referring toFIGS.2band3a, the first coaxial heat transfer tube100comprises a first inner heat transfer tube110. The first inner heat transfer tube110comprises a first primary straight part111, a first primary curved part112, a first secondary straight part113, a first secondary curved part114, a first tertiary straight part115, and also a further curved part116and a further straight part117. Referring toFIGS.2cand3a, the first coaxial heat transfer tube100comprises a first outer heat transfer tube120. The first outer heat transfer tube120comprises a first primary straight part121, a first primary curved part122, a first secondary straight part123, a first secondary curved part124, a first tertiary straight part125, and also a further curved part126and a further straight part127. The first outer heat transfer tube120radially surrounds at least a part of the first inner heat transfer tube110. The first outer heat transfer tube120radially surrounds at least a part of the first inner heat transfer tube110in all radial directions. Thus, at least a part of the first inner heat transfer tube110is protected in all radial directions. Moreover, as detailed in connection with a method for manufacturing such a tube100, the first outer heat transfer tube120is formed from one or several tubular pieces of a tube (e.g. plain tube). Therefore, the first outer heat transfer tube120does not comprise a longitudinal seam. A longitudinal seam refers to a seam that extends in the longitudinal direction de120(i.e. the direction of extension,FIG.2c) of the first outer heat transfer tube120. Correspondingly, a longitudinal seam does not extend fully around the first outer heat transfer tube120in a tangential direction of the tube120, the tangential direction being perpendicular to the longitudinal direction. Thus, at least the first primary curved part122of the first outer heat transfer tube120is free from (i.e. does not comprise) a longitudinal seam. If the outer tube120is made by welding tubular parts together in the longitudinal direction, the outer tube120may comprise a transversal seam or several transversal seams. However, this is not preferable from a manufacturing point of view. Thus, in an embodiment, the first outer heat transfer tube120does not comprise a seam; neither longitudinal nor transversal. In an embodiment, the first inner heat transfer tube110does not comprise a seam, i.e. the first inner heat transfer tube110is seamless. When in use, the first outer heat transfer tube120radially surrounds the first inner heat transfer tube110, at least within the space V. However, as indicated inFIG.2c, outside the space V, the inner heat transfer tube110need not be protected by the outer heat transfer tube120. FIG.3ashows the sectional view IIIa-IIIa ofFIG.2a. As indicated therein, the first outer heat transfer tube120, in particular the first primary straight part121of the first outer heat transfer tube120, radially surrounds at least a part of the first primary straight part111of the first inner heat transfer tube110.FIG.3bshows the sectional view IIIb-IIIb ofFIG.2a. As indicated therein, the first outer heat transfer tube120, in particular the first primary curved part122of the first outer heat transfer tube120, radially surrounds the first primary curved part112of the first inner heat transfer tube110. Such an arrangement has the technical effect that the curved parts112,212,114,214,116,216need not be further protected from the fluidized bed material.
Therefore, heat can be recovered from the fluidized bed material also in the curved parts. Moreover, as indicated in connection with a method for manufacturing, such a coaxial tube is relatively easy to manufacture. As known from prior art, the temperature of the outer surface of the outer heat transfer tube120should remain reasonably high in order to avoid excessive corrosion. Therefore, the thermal resistance between an inner surface of the inner heat transfer tube110and an outer surface of the outer heat transfer tube120should be reasonable and preferably substantially constant throughout the coaxial heat transfer tube100. In other words, a distance d (seeFIG.3c) between an inner surface of the outer heat transfer tube120and an outer surface of the inner heat transfer tube110should be substantially constant, at least outside spacer arrangements (131,132). Moreover, some thermally insulating material530is arranged in between the tubes110,120to control heat transfer. However, in practice the bending of the coaxial tube to form a bent part has the effect that the spacing is typically not exactly constant throughout the coaxial tube. The spacing may be somewhat decreased on an outer side of a curved part. Another function of thermally insulating material530in between the tubes110,120is to help the bending of the tubes110,120such that the distance d does not change much even in the curved parts during bending. In an embodiment, the distance d is constant within the straight parts (101,103,105) and outside spacer arrangements (131,132). Thus, in an embodiment, the distance d is the same regardless of the point of observation within a straight part, except for a spacer arrangement or spacer arrangements (131,132), if present. The distance d may be e.g. from 0.3 mm to 5 mm, such as from 1 mm to 4 mm, preferably from 1 mm to 2 mm. The distance d at a straight part, except for a spacer arrangement or spacer arrangements, may be constant and e.g. from 0.5 mm to 5 mm, such as from 1 mm to 4 mm, preferably from 1 mm to 2 mm. Since the tubes110,120of the coaxial tube100are coaxial, the distance d is constant within a cross section of the coaxial tube. In addition, also within a curved part102, a distance d between the tubes may be e.g. from 0.3 mm to 5 mm, such as from 1 mm to 4 mm, preferably from 1 mm to 2 mm. The first inner heat transfer tube110should withstand a reasonably high temperature and a high pressure. To quantify this, the first inner heat transfer tube110should withstand a pressure difference of 100 bar between the inner and outer surface of the tube110at a temperature of 500° C. More preferably, the first inner heat transfer tube110should withstand a pressure difference of 150 bar between the inner and outer surface of the tube110at a temperature of 600° C. For these reasons, in an embodiment, the first inner heat transfer tube110comprises steel, preferably ferritic steel or austenitic steel. In an embodiment, a thickness of a wall of the first inner heat transfer tube110is at least 3 mm, such as from 3 mm to 10 mm. The first outer heat transfer tube120need not withstand such a high pressure. Therefore, in an embodiment, a thickness of a wall of the first inner heat transfer tube110is greater than a thickness of a wall of the first outer heat transfer tube120. However, the first outer heat transfer tube120needs to withstand a higher temperature than the first inner heat transfer tube110, because of the thermal insulator530in between the tubes.
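The temperature offset produced by the insulating gap can be put in rough numbers with a one-dimensional conduction estimate (Python). The heat flux is an assumed figure for illustration; d and k follow the ranges given above.

    # Rough sketch: planar Fourier conduction across the insulating gap,
    # illustrating why the outer tube runs hotter than the inner tube.
    q = 50_000.0   # W/m^2, assumed heat flux from bed to working fluid
    d = 1.5e-3     # m, gap between the tubes (within the 1 mm to 2 mm range)
    k = 2.0        # W/(m*K), insulating material (within 1 to 10 W/(m*K))

    delta_T = q * d / k   # temperature drop across the gap
    print(f"temperature drop across the gap ~ {delta_T:.0f} K")   # ~38 K
    # A thinner gap or higher k reduces the offset; a thicker gap or
    # lower k increases it, keeping the outer surface hotter.

This offset is why the outer tube material must tolerate higher temperatures than the inner tube.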
Therefore, in an embodiment, the first outer heat transfer tube120comprises steel, preferably austenitic steel. Moreover, because of thermal expansion of the tubes110,120in use, preferably, the first outer heat transfer tube120and the first inner heat transfer tube110are made of same material, e.g. austenitic steel. Referring toFIGS.3aand3b, solid, e.g. hardened, thermally insulating material530has been arranged in between the first inner heat transfer tube110and the first outer heat transfer tube120. The material530has been arranged [i] in between the first primary straight part111of the first inner tube110and the first primary straight part121of the first outer tube120and [ii] in between the first primary curved part112of the first inner tube110and the first primary curved part122of the first outer tube120. Regarding both straight and curved parts, the thermal conductivity of the thermally insulating material530should be sufficiently high to recover heat by the coaxial tube100. Moreover, the thermal conductivity should be sufficiently low to keep the temperature of the outer surface of the coaxial tube sufficiently high in use. Preferably, the thermal conductivity of the thermally insulating material530is from 1 W/mK to 10 W/mK at 20° C. Moreover, the thermally insulating material530should be resistant to heat in order to withstand high operating temperatures. Therefore, in an embodiment, the thermally insulating material530is heat resistant at least up to 1000° C. Regarding the curved part(s), a function of the thermally insulating material530is to act as a mechanical support during bending of the coaxial tube100. Thus, when hardened, the thermally insulating material530should have a reasonably high Young's modulus in order not to be compressed. In an embodiment, a Young's modulus of the thermally insulating material530of the coaxial tube100is at least 1 GPa at a temperature of 20° C. In an embodiment, a Young's modulus of the thermally insulating material530of the coaxial tube100is at least 5 GPa at a temperature of 20° C. A coaxial heat transfer tube100can be manufactured by inserting at least a part of an inner tube110into an outer tube120such that the outer tube120radially surrounds the part of the inner tube110. Moreover, the inner tube110extends at least through the outer tube120from one end120aof the outer tube120to another end120bof the outer tube120(seeFIGS.2ato2c). In this way, a straight coaxial heat transfer tube is formed. Thereafter, thermally insulating material530is arranged in between the tubes110,120. In an embodiment, the thermally insulating material530is hardenable and also injectable before hardening. Thus, in an embodiment, the thermally insulating material530is injected in between the tubes110,120. Thereafter, the thermally insulating material530may be hardened (e.g. dried) to make the material sufficiently hard for it to act as a mechanical support, as indicated above. Hardening may be done at a temperature of from 100° C. to 400° C. Last, the straight coaxial heat transfer tube is bent at suitable locations to a suitable radius of curvature. Preferably, the straight coaxial tube is bent at such a temperature that the temperature of the tubes110,120is below 300° C., such as below 200° C. The temperature at which the straight tube is bent may be at least −50° C., such as, for example, from 0° C. to 50° C., e.g. substantially room temperature. Because of said bending, the inner and outer tubes110,120(i.e.
the plain tubes) should be made of bendable material, i.e. ductile material. Suitable materials include many metals, in particular steel, such as ferritic steel or austenitic steel. It has been found that a ratio rc/d120of the radius of curvature rcof a bent (i.e. curved) part of the coaxial heat transfer tube100to an outer diameter d120of the first coaxial heat transfer tube100is preferably at least 3. This has the effect that the distance between the tubes110,120remains substantially constant during bending. More preferably, the ratio rc/d120is at least 3.3, such as at least 3.5. It has also been found that a ratio rc/d110of the radius of curvature rcof a bent (i.e. curved) part of the coaxial heat transfer tube100to an outer diameter d110of the first inner heat transfer tube110is preferably at least 3. This has the effect that the capability of the inner heat transfer tube110to withstand high pressures remains substantially unchanged during bending the coaxial tube100. More preferably, the ratio rc/d110is at least 3.3, such as at least 3.5. Bending the inner heat transfer tube110to a small (or smaller) radius of curvature would necessitate post bend heat treatment (PBHT) of the coaxial heat transfer tube100and in this way complicate the manufacturing process. Thus, having a reasonably large radius of curvature results in a simpler manufacturing process. The thermally insulating material530may be fed in between the tubes110,120from an end (120a,120b) of the outer heat transfer tube120. In addition or alternatively, an orifice135(seeFIG.3a) may be formed in the outer heat transfer tube120, and the thermally insulating material530may be injected in between the tubes110,120via the orifice. The orifice may be later closed. Thus, in an embodiment, the first outer heat transfer tube120comprises an orifice135, which may be closed, through which the thermally insulating material530has been fed (e.g. injected) in between the first inner heat transfer tube110and the first outer heat transfer tube120. Injecting through an orifice135has the beneficial effect that a higher injection pressure can be used, whereby the injection can be made faster. It has been found that centering of the tubes110,120may be hard, since neither of them is naturally infinitely straight. It has been found that the tubes110,120can be centered relative to each other by using at least one spacer arrangement131. The spacer arrangement131is made from solid material different from the thermally insulating material530. Thus, it has been found that the distance d in between the tubes before bending can be made substantially constant by the first spacer arrangement131comprising at least one spacer element (119,129). The first spacer arrangement131is configured to define a constant distance d between an inner surface of the first outer heat transfer tube120and an outer surface of the first inner heat transfer tube110at a location of a straight part (101,111,121,103,113,123) of the tube100. In effect, the first spacer arrangement131is configured to align the inner tube110and the outer tube120in such a way that they are parallel and coaxial. Referring toFIG.3c, a protrusion129on the inner surface of the first outer heat transfer tube120may serve as one of the spacer elements of the spacer arrangement. In an embodiment, protrusions129on the inner surface of the first outer heat transfer tube120serve as (at least some of) the spacer elements of the spacer arrangement.
In addition or alternatively, a protrusion119(seeFIG.3c) on the outer surface of the first inner heat transfer tube110may serve as one of the spacer elements of the spacer arrangement. Such a protrusion may be e.g. welded on the outer surface of the first inner heat transfer tube110. In an embodiment, protrusions119on the outer surface of the first inner heat transfer tube110serve as (at least some of) the spacer elements of the spacer arrangement. However, since the inner heat transfer tube110needs to withstand a high pressure at a high temperature, preferably the outer surface of the first inner heat transfer tube110is free from protrusions119. Thus, preferably a spacer arrangement131consists of protrusions129on the inner surface of the first outer heat transfer tube120. Referring toFIG.3c, the first spacer arrangement131comprises at least three protrusions129. Preferably, the first spacer arrangement131comprises at most ten protrusions129. More preferably, the first spacer arrangement131comprises from four to eight protrusions129. The protrusions129need not be at a same longitudinal position of the tube100. Protrusions may be arranged within a length of the first spacer arrangement131, wherein the length of the first spacer arrangement131may be e.g. at most 100 mm. In an embodiment, the first coaxial heat transfer tube100comprises a second spacer arrangement132. Also the second spacer arrangement132, in combination with the first spacer arrangement131, is configured to align the inner tube110and the outer tube120in such a way that they are parallel and coaxial. Also the second spacer arrangement is made from solid material other than the thermally insulating material530. Referring toFIG.2c, preferably a distance d131in between the first spacer arrangement131and the second spacer arrangement132is from 100 mm to 2000 mm along the first coaxial heat transfer tube100. The distance d131should not be too large in order to center the tubes110,120relative to each other. Moreover, the distance d131should not be too small, in order to maintain a suitable heat insulating property in between the tubes110,120. Preferably, the distance d131is from 300 mm to 1000 mm. A protrusion129may be made on an inner surface of the outer heat transfer tube120e.g. by punching a blind hole128onto the outer surface of the outer heat transfer tube120, whereby the outer heat transfer tube120will be locally bent to form the protrusion129on the inner surface of the outer tube120, as indicated inFIG.3d. Such bending occurs at least when the outer heat transfer tube120comprises metal, e.g. ductile metal, e.g. steel, such as austenitic steel. In general, a metal suitable for the purpose has a melting point of at least 1000° C. Thus, in an embodiment, the outer heat transfer tube120comprises one of these materials. In an embodiment, the first outer heat transfer tube120(and optionally also a second outer heat transfer tube220) comprises blind holes128on its outer surface. In particular, referring toFIGS.3dand2c, such a blind hole128and corresponding projections129(i.e. a first spacer arrangement131) are arranged in between a first end120aof the first outer heat transfer tube120and a second end120bof the first outer heat transfer tube120in a direction de120of extension of the first outer heat transfer tube120. The first spacer arrangement131(such as the blind hole128corresponding to the projections129) may be arranged e.g. at least 50 mm or at least 150 mm apart from both the ends120a,120bof the first outer heat transfer tube120.
Therefore, an embodiment of the method comprises punching blind holes128onto the outer surface of the outer heat transfer tube120such that protrusions129are formed on the inner surface of the outer tube120. Such protrusions129form a first spacer arrangement131. Preferably, the blind holes128are punched [i] after arranging at least a part of the inner tube110into the outer tube120and [ii] before arranging (e.g. injecting) the thermally insulating material530in between the tubes110,120. It may be possible to provide the projections129of the outer tube120and/or the projections119of the inner tube110before arranging the inner tube110into the outer tube120. It has been noticed that a spacer arrangement131,132, if used at such a part of the tube100that is bent, will easily deteriorate the mechanical properties of the inner tube110during bending. Therefore, preferably no spacer arrangement131,132is arranged in a curved part of the coaxial tube100. Consequently, in an embodiment, such a curved part of the coaxial heat transfer tube100, the central axis of which has a radius of curvature of less than 1 m, does not comprise a spacer arrangement (131,132). However, as indicated above, the thermally insulating material530defines a space in between the tubes110,120; and the material530is used also within the curved parts. Thus, preferably, a curved part of the coaxial heat transfer tube100does not comprise a spacer arrangement other than the thermally insulating material530, which spacer arrangement (131,132) is configured to define a distance d between an inner surface of the first outer heat transfer tube120and an outer surface of the first inner heat transfer tube110at least at a straight part (101,103) of the tube100. Thus, the blind holes128may be punched only to such parts of the tube100that will remain straight after said bending. Correspondingly, an embodiment comprises bending only such part/parts of the tube100that is/are free from a spacer arrangement. Moreover, in an embodiment, the first coaxial heat transfer tube100comprises blind holes128, but comprises blind holes128only at a straight part or straight parts of the coaxial tube100. A heat exchanger10, e.g. a fluidized bed heat exchanger10, comprises the first coaxial heat transfer tube100. Moreover, at least a part of the fluidized bed heat exchanger10is arranged, in a preferable use, in the space V, in which a fluidized bed is configured to form in use. In an embodiment, both [i] the first primary curved part122of the first outer heat transfer tube120and [ii] the first primary straight part121of the first outer heat transfer tube120are configured to contact bed material of the fluidized bed in use. Having a coaxial heat transfer tube100with an inner tube110and an outer tube120, which is coaxial with the inner tube110, has several beneficial effects as indicated e.g. in U.S. Pat. No. 9,371,987. Not having a separate protective vessel for the curved parts of the coaxial heat transfer tubes improves heat transfer into the inner heat transfer tube110. Referring toFIG.2b, and in accordance with the definition given above, in an embodiment, a radius of curvature rsof a central line of the first primary straight part111of the first inner heat transfer tube110is at least 1 m. In addition or alternatively, a radius of curvature rcof a central line of the first primary curved part112of the first inner heat transfer tube110is less than 1 m.
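These definitions, together with the bend-radius preferences discussed earlier, can be collected into a small sketch (Python). The 1 m straight/curved boundary and the minimum ratio of 3 come from the text; the example dimensions are assumed.

    # Minimal sketch: classify a part by the radius of curvature of its
    # central line, and check the preferred bend rules for a curved part.
    def part_kind(radius_of_curvature_m: float) -> str:
        return "straight" if radius_of_curvature_m >= 1.0 else "curved"

    def bend_ok(rc_m: float, d120_m: float, d110_m: float,
                min_ratio: float = 3.0) -> bool:
        """rc/d120 and rc/d110 should each be at least min_ratio."""
        return rc_m / d120_m >= min_ratio and rc_m / d110_m >= min_ratio

    rc = 0.15       # m, assumed bend radius of a curved part
    d120 = 0.040    # m, assumed outer diameter of the coaxial tube
    d110 = 0.030    # m, assumed outer diameter of the inner tube

    print(part_kind(rc))             # "curved" (rc < 1 m)
    print(bend_ok(rc, d120, d110))   # True: 0.15/0.040 = 3.75 and 0.15/0.030 = 5.0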
Referring toFIG.2a, the definition given above may apply also to the first coaxial heat transfer tube100comprising the inner tube110and the outer tube120. Referring toFIG.2b, in an embodiment the tube100is bent in such a manner that at least the straight parts of the first coaxial heat transfer tube100are arranged to extend within a plane P. In an embodiment, the first primary straight part101and the first secondary straight part103extend within the plane P. Moreover, in an embodiment wherein the first coaxial heat transfer tube100comprises a first tertiary straight part105, also the first tertiary straight part105extends within the plane P. This applies also to both the inner tube110and the outer tube120and their parts111,113,115and121,123,125, as indicated inFIGS.2ato2c. Referring toFIG.2a, in an embodiment, a heat exchanger10(and/or the fluidized bed boiler1) comprises the first coaxial heat transfer tube100, a distributor header510configured to feed heat transfer medium to the first coaxial heat transfer tube100, in particular the inner heat transfer tube110thereof; and a collector header520configured to collect heat transfer medium from the first coaxial heat transfer tube100, in particular the inner heat transfer tube110thereof. As indicated above and inFIG.2a, the first coaxial heat transfer tube100(in particular the inner heat transfer tube110thereof) extends from the distributor header510to the collector header520. The outer tube120need not extend from the distributor header510to the collector header520, as indicated inFIG.2c. If only one coaxial heat transfer tube100with straight and curved parts is used, in order to have a reasonably large heat transfer surface within a reasonably small volume, the tube100should be bent to a reasonably small radius of curvature at several locations. Having a small radius of curvature would bring the straight parts101,103close to each other (seeFIG.2a). However, as indicated above, for manufacturing reasons, preferably a radius of curvature of a curved part is not too small. Therefore, a distance between two straight parts of a coaxial tube, such as the parts101,103ofFIG.2a, is in practice reasonably large. In order to diminish the distance between the heat transfer tubes, and in this way increase the heat transfer surface within a volume, it has been found that at least two coaxial tubes can be used side by side within such a plane that a part of a second coaxial tube is left in between two parts of a first coaxial tube. In particular, referring toFIG.4a, the distance between the straight parts201and101can be made as small as necessary, even if the distance between the straight parts103and101needs to be reasonably large because of the proper radius of curvature. In this way, the heat transfer surface per volume obtainable by the coaxial tubes ofFIG.4ais much larger than with the coaxial tube ofFIG.2a. It has also been found that the optimal number of coaxial heat transfer tubes arranged side by side is at least two, such as two (FIG.4a), three (FIG.5a), four (FIG.5b), five (not shown), or six (not shown), even if the number may be only one (FIG.2a). As indicated inFIGS.2aand4a, the first primary curved part (102,112, and122, of the first coaxial tube100and its plain tubes110,120) connects the first primary straight part (101,111, and121, respectively) to the first secondary straight part (103,113, and123, respectively).
Thus, the first primary curved part (102,112, and122, respectively) is arranged along the first (coaxial or plain) heat transfer tube (100,110, and120, respectively) in between the first primary straight part (101,111, and121, respectively) and the first secondary straight part (103,113, and123, respectively). Herein the term “along” refers to the direction in which the heat transfer medium is configured to flow in the first (coaxial or plain) heat transfer tube (100,110, and120). Referring toFIGS.4a,6a, and6b, in an embodiment, a heat exchanger10further comprises a second coaxial heat transfer tube200having a second inner heat transfer tube210and a second outer heat transfer tube220. The second coaxial heat transfer tube200extends from the distributor header510to the collector header520. Moreover, with reference toFIGS.6aand6b, the second coaxial heat transfer tube200comprises a second primary straight part201having a second primary straight part211of a second inner tube210and a second primary straight part221of a second outer tube220, a second secondary straight part203having a second secondary straight part213of the second inner tube210and a second secondary straight part223of the second outer tube220, and a second primary curved part202having a second primary curved part212of the second inner tube210and a second primary curved part222of the second outer tube220. As indicated inFIG.4a, the second primary curved part (202,212, and222, of the second coaxial tube200and its plain tubes210,220) connects the second primary straight part (201,211, and221, respectively) to the second secondary straight part (203,213, and223, respectively). Thus, the second primary curved part (202,212, and222, respectively) is arranged along the second (coaxial or plain) heat transfer tube (200,210, and220) in between the second primary straight part (201,211, and221, respectively) and the second secondary straight part (203,213, and223, respectively). Herein the term “along” refers to the direction in which the heat transfer medium is configured to flow in the second (coaxial or plain) heat transfer tube (200,210, and220). A limiting radius of curvature for a curved part and a straight part has been defined above. This applies both to the first coaxial tube100and the second coaxial tube200. Referring toFIG.4a, in an embodiment, the first coaxial heat transfer tube100and the second coaxial heat transfer tube200are arranged relative to each other such that the first primary straight part101is arranged in between the second primary straight part201and the second secondary straight part203. This applies also to the straight parts111,211, and213of the inner tubes (110,210) as well as to the straight parts121,221, and223of the outer tubes (120,220), as indicated inFIGS.4ato4c. As indicated above, this has the technical effect that a reasonably high heat transfer surface of the coaxial heat transfer tubes can be provided in a reasonably small space V without using excessively bent tubes (i.e. curved parts with small radii of curvature). This simplifies manufacturing of a coaxial tube (100,200). As defined above, the term “coaxial heat transfer tube” refers to an arrangement of tubes that are arranged coaxially. Therefore, in this context, the different coaxial heat transfer tubes (100,200) are not mutually co-axial. Thus, in an embodiment, no parts of the first coaxial heat transfer tube100are coaxial with a part of the second coaxial heat transfer tube200.
It is noted that a similar arrangement of two heat transfer tubes can be used in other applications wherein a high area in a small volume is needed, regardless of the heat transfer tubes of that application being coaxial or plain. The distributor header510is configured to feed the heat transfer medium to the first coaxial heat transfer tube100, in particular the first inner tube110, and the second coaxial heat transfer tube200, in particular the second inner tube210. In a similar manner, the collector header520is configured to collect the heat transfer medium from the first coaxial heat transfer tube100, in particular the inner tube110, and the second coaxial heat transfer tube200, in particular the second inner tube210. As is evident, the heat transfer medium becomes heated as it flows through the coaxial heat transfer tubes100,200from the distributor header510to the collector header520. Referring toFIG.2a, in an embodiment a number Ntubeof coaxial heat transfer tubes extending within a same plane P from the distributor header510to the collector header520may be only one, since only the first coaxial tube100is present. However, referring toFIG.4a, in an embodiment the number Ntubeof coaxial heat transfer tubes extending within a same plane P from the distributor header510to the collector header520may be two, since also a second coaxial tube200is present. Referring toFIG.5a, the heat exchanger10may comprise a third coaxial heat transfer tube300extending within the same plane P from the distributor header510to the collector header520. Thus, the number Ntubemay be three. Referring toFIG.5b, the heat exchanger10may comprise a fourth coaxial heat transfer tube400extending within the same plane P from the distributor header510to the collector header520. Thus, the number Ntubemay be four. Even if not shown, the number Ntubemay be five, six, or more than six. Referring toFIG.5a, if needed, the coaxial tubes100,200,300may be bound together with a binder540. The binder540improves mechanical stability of a heat exchanger10. Preferably, the number Ntubeof coaxial heat transfer tubes extending within a same plane P is at least two or at least three. In an embodiment, the fluidized bed boiler1or the fluidized bed heat exchanger10thereof comprises a number Ntubeof such coaxial heat transfer tubes (100,200,300,400) that [i] extend from the distributor header510to the collector header520and [ii] comprise at least a primary straight part (101,201,301), a secondary straight part (103,203,303), and a primary curved part (102,202,302), which connects the primary straight part of the tube in question to the secondary straight part of the tube in question. Referring toFIG.5a, in an embodiment the number Ntubeis three. Such an embodiment comprises the first and second coaxial heat transfer tubes100,200, as discussed above. Furthermore, in that embodiment, the fluidized bed boiler1or the heat exchanger10suitable for a fluidized bed of the boiler1comprises a third coaxial heat transfer tube300extending from the distributor header510to the collector header520. The third coaxial heat transfer tube300comprises the parts as discussed above for the first and second tubes100,200. In an embodiment, the third coaxial heat transfer tube300comprises an inner heat transfer tube and an outer heat transfer tube as discussed above in connection with the first coaxial heat transfer tube100and/or the second coaxial heat transfer tube200.
Referring toFIG.5a, the third coaxial heat transfer tube300comprises a third primary straight part301and a third secondary straight part303. The second coaxial heat transfer tube200and the third coaxial heat transfer tube300are arranged relative to each other such that the second primary straight part201is arranged in between the third primary straight part301and the third secondary straight part303. What has been said above about the relative arrangement of the first and second coaxial heat transfer tubes (100,200) applies. As discussed above, in an embodiment, the first primary straight part101, the second primary straight part201, and the second secondary straight part203extend within the plane P. When the third coaxial heat transfer tube is present, in an embodiment, also the third primary straight part301and the third secondary straight part303extend within the plane P. Referring toFIG.5b, in an embodiment the number Ntubeis four. Such an embodiment comprises the first, second, and third coaxial heat transfer tubes100,200,300as discussed above. Furthermore, in that embodiment, the fluidized bed boiler1or the heat exchanger10suitable for a fluidized bed of the boiler1comprises a fourth coaxial heat transfer tube400that extends from the distributor header510to the collector header520. Referring toFIG.4a, in an embodiment, the coaxial heat transfer tubes100,200have more curved parts. This has the effect that a distance between the distributor header510and the collector header520can be made reasonably large while still having the straight parts of the tubes100,200close to each other. Having the straight parts of the tubes100,200close to each other increases the heat transfer area, which improves the heat recovery. Preferably the straight parts of the tubes are arranged in a same plane P. As shown in the figures, the curved parts are preferably such curved parts that the direction of propagation of the heat transfer medium within the tube changes by from 30 to 180 degrees within a curved part. For example,FIGS.2aand4ashow curved parts102,202that change the direction of flow by 180 degrees. For example,FIG.5dshows a curved part302that changes the direction of flow by 90 degrees and another curved part306that changes the direction of flow by another 90 degrees. In a similar manner a turn of 180 degrees could be made by using more curved parts separated from each other by straight parts. Referring toFIG.5c, the radius of curvature of a curved part need not be constant. Preferably the fluidized bed heat exchanger10as disclosed above is used in a loopseal5of a circulating fluidized bed boiler. Thus, in an embodiment, the fluidized bed boiler1comprises means40for separating bed material from flue gas. Referring toFIG.1a, in an embodiment, the fluidized bed boiler1comprises a cyclone40for separating bed material from flue gas. The fluidized bed boiler comprises a loopseal5configured to receive bed material from the means40for separating bed material from flue gas (e.g. from the cyclone). Moreover, at least a part of the fluidized bed heat exchanger10is arranged in the loopseal5. Referring toFIGS.2band2c, for example, the distributor header510and the collector header520may be arranged outside the loopseal. However, at least most of the coaxial heat transfer tubes (100,200) are arranged in the loopseal as indicated above.
For example, in an embodiment, at least 90% of the coaxial heat transfer tubes (100,200) of the fluidized bed heat exchanger10, as measured lengthwise, are arranged in the loopseal5as indicated above.
11859912 | DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS According to the invention, so-called “microchannel” heat exchange devices, sometimes also known as “printed circuit” heat exchangers, are used as the heat absorber and/or the heat sink for heat pipes. The inventor has found that heat pipes incorporating such devices afford exceptionally high heat transfer rates between the heat source or sink and working fluid. Without wishing to be bound by any particular theory or explanation, the inventor speculates that the very high efficiency of the inventive heat pipes may result from overcoming a limitation of typical conventional heat pipes, namely that the heat transfer capacity of the central tubular section of the pipe is significantly higher than is realized, due to limitations in the rates at which the heat absorber and/or heat sink transfer heat to and from the central section. In typical conventional configurations, the conductive material in contact with the working fluid and the heat source or cooling medium is relatively thick, typically on the order of 1.2-15 mm in the thinnest dimension. This may limit the rate of heat transfer, due to thermal resistance of the heat exchange material. It is also speculated that heat transfer is further impeded by the fluid film resistance at the boundary layer of the boiling or condensing working fluid adjacent to the heat exchanger material. The fluid velocity (hydrodynamic) boundary layer thickness is a function of the Reynolds number (Re), and the thermal boundary layer thickness is a function of the hydrodynamic boundary layer thickness divided by the cube-root of the Prandtl number (Pr). The particular functions and equations depend on the system geometry (e.g. flat plates vs. tubes), although phase change can complicate matters.

Re = velocity * characteristic length * density / viscosity

Pr = heat capacity * viscosity / thermal conductivity

The characteristic length is the diameter for tubes, and the hydraulic diameter for non-circular channels. The ratio of the convective to conductive heat transfer across (normal to) the boundary is given by the Nusselt number (Nu).

Nu = heat transfer coefficient * characteristic length / thermal conductivity

In laminar flow (as is the case in microchannels), the Nusselt number is a constant (at least for a given phase), so it can be seen that the heat transfer coefficient improves with the inverse of the channel diameter or thickness. This is why heat transfer improves dramatically as the channels get smaller. (The trade-off is the increasing pressure drop/flow reduction as the channels get smaller). For internal flows (e.g. closed channels and tubes), the flow is laminar when Re<2200. So, one skilled in the art of fluid mechanics can calculate the hydrodynamic and thermal boundary layer thicknesses for known fluid properties, flow conditions, and channel geometry. The maximum microchannel diameter/thickness should be twice the lesser of either the thermal boundary layer or hydrodynamic boundary layer thickness (factor of two because the boundary layer can extend no farther than the mid-point of the channel). From the fluid boundary layer equation pertinent to the geometry of interest, which is a function of Re, and the velocity, density, and viscosity (used to calculate Re), one can solve for the limiting dimension or thickness such that the fluid boundary layer thickness is equal to the distance from the wall to the centerline, when Re=2200.
From the thermal boundary layer equation pertinent to the geometry of interest, which is a function of fluid boundary layer thickness divided by the cube-root of Pr, and the heat capacity, thermal conductivity and viscosity (used to calculate Pr), one can calculate the thermal boundary layer thickness. In contrast to conventional heat pipes, the heat absorber and/or heat sink sections of the inventive heat pipes have sub-millimeter channels and wall thicknesses whose characteristic length is smaller than the thermal boundary layer thickness, substantially reducing both the conductive resistance and the convective/thermal resistance values. While microchannel heat exchangers have been used in ordinary heat transfer services, they have not hitherto been used in conjunction with heat pipes, to transfer heat at high rates between physically separated heating and cooling sources. The heat pipes of the present invention provide significant enhancement of heat transfer by maximizing heat exchange at the heat absorber and/or heat sink through the use of microchannel heat exchange devices, coupled with the high heat transfer rates over distances associated with the phase changes and movements of the working fluid. In some embodiments of the invention, the heat absorber and/or the heat sink are passive, by which it is meant that no pumps, fans, valves, or other energy-consuming devices are employed in their operation. An entirely passive heat pipe results if both the heat absorber and the heat sink are passive. The heat pipes of the present invention may be used for any purpose, and are particularly advantageous for use in the dissipation of heat generated by electronic devices. Suitable arrangement of the heat pipe allows the heat generated by the electronic devices to be rejected externally to the enclosures and rooms housing the electronic devices, reducing or obviating the need to air-condition the rooms in which the electronic devices are housed. The invention will next be illustrated with reference to the Figures, wherein similar numbers indicate similar elements in all Figures. The Figures are intended to be illustrative rather than limiting and are included to facilitate explanation of the invention. The Figures are not to scale, and are not intended to be engineering drawings. Also, it will be appreciated that the devices of the invention may be used for a wide variety of applications, and accordingly the dimensions and materials useful for making them also cover a wide range, and are sometimes interdependent. Therefore, the invention should not be construed as limited by the materials and dimensions explicitly noted in the Figures and associated text. Heat Pipes Employing Microchannel Heat Absorbers Prior art microchannel heat exchangers are used as the heat absorber and optionally as the heat sink for heat pipes according to the invention. The cores of the microchannel heat exchangers comprise one or more layers of parallel microchannels, wherein the largest cross-sectional dimension of the microchannels is less than 1000 microns, and preferably less than 250 microns, and the materials of construction of the heat transfer surfaces are materials with thermal conductivities in excess of 5 watts/m-° C., and preferably in excess of 17 watts/m-° C., and most preferably in excess of 170 watts/m-° C. If more than one layer of microchannels is used, the number of layers may be any number from 2 to 10, or in some cases an even larger number, e.g., as high as 20.
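The sizing procedure described above can be followed numerically. The sketch below (Python) assumes rough water-like properties and an assumed channel velocity, with the tube diameter as the characteristic length; the values are illustrative only.

    # Sketch of the sizing procedure: find the dimension at which the
    # flow is just laminar (Re = 2200), take the hydrodynamic boundary
    # layer as half that dimension, derive the thermal boundary layer
    # via Pr, and size the microchannel as twice the lesser of the two.
    rho = 998.0     # kg/m^3, density (water-like, assumed)
    mu = 1.0e-3     # Pa*s, viscosity
    cp_f = 4180.0   # J/(kg*K), heat capacity
    k_f = 0.6       # W/(m*K), thermal conductivity
    v = 1.5         # m/s, assumed channel velocity

    Pr = cp_f * mu / k_f                    # Prandtl number (~7 for water)

    # Re = rho * v * D / mu = 2200  =>  limiting dimension D:
    D_limit = 2200.0 * mu / (rho * v)       # m

    delta_hydro = D_limit / 2.0             # boundary layer reaches centerline
    delta_thermal = delta_hydro / Pr ** (1.0 / 3.0)

    D_max = 2.0 * min(delta_hydro, delta_thermal)
    print(f"Pr = {Pr:.1f}")
    print(f"limiting laminar dimension ~ {D_limit*1e6:.0f} microns")
    print(f"max microchannel dimension ~ {D_max*1e6:.0f} microns")  # ~770

Under these assumptions the result lands below 1000 microns, of the same order as the sub-millimeter channel limit stated above.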
Referring now toFIG.1, the working fluid microchannels16of a parallel flow microchannel core14for a heat exchanger may optionally be arranged in multiple layers12, whereby heat transfer to outer layers is achieved by thermal conduction through the material walls connecting the layers of the microchannels. This increases the total effective heat transfer area (internal to the microchannel device) available for evaporation or condensation of the working fluid, without requiring an increase in the surface area in contact with the heat source or sink. When multiple layers are used, each layer is typically fabricated from a thin sheet with etched open channels or grooves, and the layers are bonded or fused to each other, sealing the open tops of the channels or grooves and forming closed microchannels. This arrangement results in a monolithic heat exchanger, with only one thin conducting surface interspersed between adjacent stacks of fluid channels. It also eliminates the need for a conductive spacer and its associated resistance to heat transfer. Such devices are available commercially, with one example being “Ardex” liquid coolers, manufactured by Atotech Deutschland GmbH, headquartered in Berlin, Germany. By using such a configuration for the heat absorber, the heat pipes of the present invention enjoy inherently high rates of conductive heat transfer. FIGS.2aand2bdepict another prior art heat exchanger core, shown generally at15, suitable for use in heat pipes according to the invention. Core15, referred to herein as a cross-flow microchannel core, has two or more alternating layers12of microchannels, i.e., working fluid microchannels16as described above alternating with intermediate fluid microchannels38. The orientation of the layers is such that alternating layers meet at common inlet and outlet regions, allowing the intermediate fluid to flow through the unit without co-mingling with the working fluid. The intermediate fluid may be any liquid or gas suitable for transferring heat away from cross-flow microchannel core15(in the case where the core is used in a heat sink) or to core15(if the core is used in a heat absorber). It is preferable to arrange the channel and layer orientation so that the two fluids flow through their respective channels in directions substantially perpendicular to each other.FIG.2ashows the heat exchanger from the side showing the working fluid microchannels16, andFIG.2bshows it from a side perpendicular to the first, i.e., rotated 90° about a vertical axis, showing the intermediate fluid microchannels38. Such devices are available commercially, with one example being a Printed Circuit Heat Exchanger (PCHE), manufactured by Heatric, headquartered in Dorset, England. Referring now toFIG.3, there is shown an exploded view of a microchannel heat absorber101for use in a heat pipe according to the invention. This type of heat pipe is a 2-pipe configuration, also known as a thermosyphon. Heat is conducted from the heat source, i.e., the object or fluid that is to be cooled (not shown), through the surface of the bottom-most layer of core14. The heat is further conducted into the working fluid microchannels16of the parallel flow microchannel core14, constructed for example as shown inFIG.1. Where multiple layers of microchannels are used in core14, some of the heat is conducted to the succeeding layers through the sidewalls of the layers.
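The area-multiplication benefit of stacking layers can be illustrated with a rough count; every dimension below is an assumption for illustration, not a dimension from this disclosure:

```python
# Rough sketch of why stacking microchannel layers multiplies the
# internal evaporation/condensation area without enlarging the footprint.

layers = 5                        # assumed number of stacked layers
channels_per_layer = 50
width, height = 200e-6, 200e-6    # assumed channel cross-section, m
length = 0.03                     # assumed channel length, m

perimeter = 2.0 * (width + height)
internal_area = layers * channels_per_layer * perimeter * length
footprint = channels_per_layer * (width * 2) * length  # crude footprint estimate

print(f"internal heat transfer area = {internal_area*1e4:.1f} cm^2")
print(f"area multiplication vs. footprint ~ {internal_area / footprint:.1f}x")
```

Under these assumptions the internal wetted area is roughly an order of magnitude larger than the footprint in contact with the heat source.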
The heat absorber is connected to an elevated heat sink shown schematically at13by means of two pipes or tubes of ordinary dimensions, typically having an inside diameter from about 50 mils to about one inch. However, there is no fundamental limit to the diameter; the larger the diameter, the higher the axial power rating, i.e. the amount of heat that can be transferred between the heat source and the heat sink. Thus, the diameter may be 2 or 3 inches or even greater. Vaporized working fluid exits parallel flow microchannel core14into the warm side manifold20and flows from the heat absorber to the heat sink by means of warm side pipe26(preferably of larger diameter than cool side pipe30). At heat sink13, the working fluid gives up its heat to a cooling medium, causing it to condense back to liquid. The condensed liquid working fluid returns from the heat sink by gravity via cool side pipe30to cool side manifold18and then into parallel flow microchannel core14, completing the cycle. While heat sink13is preferably a microchannel heat exchanger, it may alternatively be of any convenient design to facilitate condensation of the working fluid, e.g., a conventional heat exchanger, air-cooled finned tubes or hollow plates, thermoelectric cooler, etc. FIG.4shows an embodiment of the invention in which the heat absorber102is similar to that described inFIG.3, but is connected to heat sink13by means of common connecting pipe32, through which vaporized working fluid24and liquid working fluid28move co-axially and counter-currently. The heat pipe functions in a manner similar to that ofFIG.3, except that vaporized working fluid24moves through the central portion of common connecting pipe32, and liquid working fluid28travels along the walls of the pipe, e.g., as a moving annular film. In another embodiment (not shown), common connecting pipe32has an annular or co-axial wick for co-axial counter-flow of the liquid and vaporized working fluid. For example, the walls of the connecting pipe may be lined with an annular band of, or packed co-axially with, a porous wicking material. The liquid travels by capillary action through the porous wicking material. This allows the heat pipe to be oriented other than substantially vertically, e.g., with the heat sink level with or even below the heat absorber. FIG.5shows another embodiment of the invention, employing a heat absorber103that includes a cross-flow microchannel core15such as shown inFIG.2. Heat is transferred from the heat source to heat absorber103by means of an intermediate fluid, e.g., liquid, gas, or condensable vapor. The relatively hot/warm intermediate fluid enters through inlet pipe36into inlet manifold37, flows through intermediate fluid microchannels38, exits cross-flow microchannel core15into outlet manifold39at a lower temperature, and exits the heat absorber via outlet pipe42. While in cross-flow microchannel core15, the intermediate fluid is cooled by the working fluid through heat conduction into the (boiling) working fluid in the intervening layers, via the walls of the working fluid microchannels16and the intermediate fluid microchannels38. FIG.6shows another embodiment of the invention, in which the heat absorber104is connected to a heat sink shown schematically at13by means of common connecting pipe32, through which vaporized working fluid24and liquid working fluid28move co-axially and counter-currently.
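The axial power rating mentioned above can be sketched by noting that the transportable heat equals the vapor mass flow times the latent heat. The following is a minimal illustration; the vapor density, velocity, and latent heat are rough assumptions for a generic HFC-like working fluid, not data from this disclosure:

```python
# Axial power rating vs. connecting-pipe diameter: Q = m_dot * h_fg,
# with m_dot set by the vapor density, velocity, and pipe cross-section.
import math

def axial_power(diameter, vapor_density, vapor_velocity, latent_heat):
    area = math.pi * (diameter / 2.0) ** 2
    mass_flow = vapor_density * vapor_velocity * area   # kg/s
    return mass_flow * latent_heat                      # W

rho_v = 8.0        # kg/m^3, assumed saturated vapor density
v = 5.0            # m/s, assumed vapor velocity in the pipe
h_fg = 190e3       # J/kg, assumed latent heat of vaporization

for d_inch in (0.375, 1.0, 2.0):      # 3/8", 1", 2" inside diameters
    d = d_inch * 0.0254
    print(f'{d_inch}" pipe: ~{axial_power(d, rho_v, v, h_fg):.0f} W')
```

The quadratic dependence on diameter is why a 2-inch pipe carries over an order of magnitude more heat than a 3/8-inch pipe at the same vapor velocity.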
The vapor moves through the central portion of the connecting pipe, and the liquid travels along the walls of the pipe, e.g., as a moving annular film. Heat is transferred from the heat source to heat absorber104by means of an intermediate fluid, e.g., liquid, gas, or condensable vapor. The intermediate fluid enters through inlet pipe36into inlet manifold37, flows through intermediate fluid microchannels38, exits cross-flow microchannel core15into outlet manifold39, and exits heat absorber104via outlet pipe42. While in cross-flow microchannel core15, the intermediate fluid is cooled by the (boiling) working fluid through heat conduction into the working fluid in the intervening layers, via the walls of the working fluid microchannels16and the intermediate fluid microchannels38. In another embodiment (not shown), common connecting pipe32has an annular or co-axial wick for co-axial counter-flow of the liquid and vaporized working fluid. For example, the walls of the connecting pipe may be lined with an annular band of, or packed co-axially with, a porous wicking material. The liquid travels by capillary action through the porous wicking material. This allows the heat pipe to be oriented other than substantially vertically, e.g., with the heat sink level with or even below the heat absorber. Heat Pipes Employing Microchannel Heat Sinks Referring now toFIG.7, there is shown an embodiment of the invention in which the heat sink105is a microchannel heat exchanger with extended surfaces cooled by natural or forced convection with air or other fluid coolants, and the heat pipe has separate connecting pipes for the liquid and vaporized working fluid. The structure is similar to that of the heat absorber shown inFIG.3, with the addition of cooling surfaces44, and the spatial orientation is typically as shown inFIG.7, i.e., rotated 90° about a horizontal axis extending into the page, relative to the way it would be oriented when used as a heat absorber such as inFIG.3. The cooling surfaces44are provided on the outside of one or both sides of a single-layer unit, or the outsides of one or both of the outermost layers in a multi-layer unit. They may comprise thin extensions of thermally conductive material, to provide additional heat transfer surface area exposed to the air or other final cooling medium. The extended surfaces may be of any convenient geometry or orientation, e.g., pins, parallel perpendicular fins, spaced fibers, ribs, and the like. Heat sink105is connected to a microchannel heat absorber shown schematically at17located at a lower elevation by means of two pipes or tubes of ordinary dimensions, typically having an inside diameter from about 50 mils to about one inch. However, there is no fundamental limit to the diameter; the larger the diameter, the higher the axial power rating. Vaporized working fluid flows from the heat absorber to the heat sink by means of warm side pipe26and enters parallel flow microchannel core14at warm side manifold20. Heat is conducted out of the heat sink via cooling surfaces44into a surrounding fluid, which may be a gas such as air or a liquid, resulting in condensation of the working fluid in working fluid microchannels16. The condensed liquid working fluid exits parallel flow microchannel core14at cool side manifold18and returns by gravity via cool side pipe30to the heat absorber. Warm side pipe26is preferably connected at a high point above parallel flow microchannel core14.
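The contribution of the extended cooling surfaces44can be sketched with a simple convection estimate; the heat transfer coefficient and areas below are illustrative assumptions, and fin efficiency is neglected for simplicity:

```python
# Rough sketch of the benefit of extended surfaces: convective heat
# rejection scales with exposed area, so fins multiply the rejectable
# heat for a given temperature difference. (Fin efficiency neglected.)

h_air = 10.0            # W/m^2-K, assumed natural-convection coefficient
base_area = 0.0025      # m^2, assumed 5 cm x 5 cm sink face
fin_area_each = 0.002   # m^2, assumed area per fin (both sides)
n_fins = 20
dT = 25.0               # K, sink surface minus ambient air (assumed)

q_bare = h_air * base_area * dT
q_finned = h_air * (base_area + n_fins * fin_area_each) * dT
print(f"bare face: {q_bare:.1f} W, with fins: {q_finned:.1f} W")
```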
FIG.8shows an embodiment of the invention in which the heat sink, shown generally at106, is a microchannel heat exchanger similar to the heat absorber shown inFIG.4, with the addition of cooling surfaces44as described above. As shown inFIG.8, its typical orientation will be inverted relative to the orientation when used as a heat absorber. Vaporized working fluid24flows from the heat absorber shown schematically at17to the heat sink by means of common connecting pipe32and enters parallel flow microchannel core14at warm side manifold20. Heat is conducted out of the heat sink via cooling surfaces44into a surrounding fluid, which may be a gas such as air or a liquid, resulting in condensation of the working fluid in working fluid microchannels16. Condensed liquid working fluid28travels along the walls of common connecting pipe32, e.g., as a moving annular film. In another embodiment (not shown), common connecting pipe32has an annular or co-axial wick for co-axial counter-flow of the liquid and vaporized working fluid. For example, the walls of the connecting pipe may be lined with an annular band of, or packed co-axially with, a porous wicking material. The liquid travels by capillary action through the porous wicking material. This allows the heat pipe to be oriented other than substantially vertically, e.g., with the heat sink level with or even below the heat absorber. In another embodiment of the invention, the heat sink is constructed in substantially the same manner as the heat absorber shown inFIG.5, but with an inverted orientation. Heat is transferred out of the heat sink by means of the intermediate fluid (liquid or gas), which is at a relatively low temperature when it enters cross-flow microchannel core15via inlet pipe36and inlet manifold37, and which exits cross-flow microchannel core15at a higher temperature via outlet manifold39and outlet pipe42. Condensation of vaporized working fluid occurs in a manner substantially the same as described above with respect toFIG.7, except that heat exits the heat sink via the intermediate fluid. In another embodiment of the invention, the heat sink is constructed in substantially the same manner as the heat absorber shown inFIG.6, but with an inverted orientation. Entry and condensation of vaporized working fluid24, and return of liquid working fluid28, occur substantially the same way as described with respect toFIG.8, and heat is transferred out of the heat sink by the intermediate fluid in substantially the same way as in the immediately preceding embodiment. In another embodiment common connecting pipe32has an annular or co-axial wick for co-axial counter-flow of the liquid and vaporized working fluid, as described previously. In another embodiment of the invention, the heat sink is constructed in substantially the same manner as the heat absorber shown inFIG.3, but with an inverted orientation. Heat is removed from the heat sink by thermal conduction through the outer surfaces into a cooling medium. The cooling medium may be a fluid (e.g., the heat sink is immersed), or a cool solid which is kept cool by external means, e.g., by refrigeration, thermo-electric cooling, evaporation of an external fluid, sensible heating of a flowing external fluid, etc. Condensation of vaporized working fluid occurs in a manner substantially the same as described above with respect toFIG.7. In another embodiment of the invention, the heat sink is constructed in substantially the same manner as the heat absorber shown inFIG.4, but with an inverted orientation.
Entry and condensation of vaporized working fluid24, and return of liquid working fluid28, occur substantially the same way as described with respect toFIG.8, and heat is removed from the heat sink by thermal conduction through the outer surfaces into a cooling medium as described in the immediately preceding embodiment. In another embodiment, common connecting pipe32has an annular or co-axial wick for co-axial counter-flow of the liquid and vaporized working fluid, as described previously. According to the invention, any microchannel heat absorber may be combined with any heat sink. Microchannel heat sinks will be used in many situations. For example, the heat sink ofFIG.8may be combined with the heat absorber ofFIG.4. Or, the heat sink ofFIG.7may be combined with the heat absorber ofFIG.3. Other combinations will be apparent to those of skill in the art, and all of these are contemplated by the invention. Working Fluids Many fluids may be used as the working fluid in heat pipes according to the invention. The fluid must have sufficient vapor pressure under the temperature and pressure conditions of use to allow significant vaporization and condensation, as described earlier herein. Since temperature and pressure conditions vary substantially from one application to the next, a wide variety of fluids may be used. Common examples include water, alcohols and hydrocarbons. The inventor has found that heat pipes according to the invention are particularly useful when the working fluid is a fluorocarbon (FC) or hydrofluorocarbon (HFC) or a chlorofluoroalkene (CFA) or a chlorinated hydrofluoroalkene (CHFA), or a mixture of these. In the event of a loss of containment, these materials are unlikely to ignite, have minimal adverse environmental or health consequences, cause no damage to electronic components, create no risk of electric shock, and are readily dissipated. They are low in toxicity, electrically non-conductive, non-corrosive to most materials, and have little or no flammability. Suitable FC, HFC, CFA or CHFA working fluids will typically be chosen to match their thermodynamic properties to the particular working temperatures and pressures of the heat pipe systems in which they are used. Exemplary fluids include any of the various commercially available pentafluoropropanes, hexafluoropropanes, pentafluorobutanes, and monochloro fluoropropenes. For heat pipes operating in the range of ambient (about 20° C.) to about 100° C., exemplary suitable working fluids include those having normal boiling points (i.e., boiling points at atmospheric pressure) in the range of 10° C. to 80° C., and more typically in a range from 10° C. to 45° C. Suitable classes of HFC's include pentafluoropropanes, hexafluoropropanes, and pentafluorobutanes. Specific examples of suitable HFC's include HFC-245fa, HFC-245ca, HFC-236ca, HFC-365mfc, and mixtures thereof. Specific examples of suitable CHFA's include HCFC 1233zd, and HCFC 1233cf. Heat pipe systems including these working fluids typically operate at pressures mildly elevated with respect to atmospheric pressure. In some embodiments, heat pipes according to the invention may have heat sinks operating at a condensation temperature of about 30° C. to about 50° C., and HFC-245fa, HFC-245ca, HFC-236ca, HFC-365mfc, HCFC 1233zd, and HCFC 1233cf may be particularly well suited for use in such systems. 
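The boiling-point selection rule described above lends itself to a simple filter. The normal boiling points below are approximate handbook values included only for illustration; they should be verified against authoritative property data before being relied upon:

```python
# Shortlisting working fluids by normal boiling point for a heat pipe
# operating between ambient (~20 C) and ~100 C, per the 10-45 C window
# described above. Boiling points are approximate, for illustration only.

normal_boiling_points_c = {
    "water": 100.0,
    "methanol": 64.7,
    "ethanol": 78.4,
    "acetone": 56.1,
    "HFC-245fa": 15.1,
    "HFC-365mfc": 40.2,
    "HCFC-1233zd": 18.3,
}

def candidates(lo_c=10.0, hi_c=45.0):
    """Fluids whose normal boiling point lies in [lo_c, hi_c] deg C."""
    return sorted(f for f, bp in normal_boiling_points_c.items()
                  if lo_c <= bp <= hi_c)

print(candidates())   # e.g. ['HCFC-1233zd', 'HFC-245fa', 'HFC-365mfc']
```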
In one embodiment, the invention provides a method of cooling an article, liquid, or gas with a heat pipe system using as its working fluid HFC-245fa, HFC-245ca, HFC-236ca, HFC-365mfc, HCFC 1233zd, HCFC 1233cf, or a mixture of these. In this embodiment the structure of the heat pipe may be any described herein, but the inventor contemplates the use of these fluids in a heat pipe of any structure as well. Thus heat pipe systems of any structure containing these fluids, and methods of cooling by the use of such systems, are also claimed. In some embodiments of the invention, heat pipes such as disclosed herein may be installed in electronic equipment to expel heat from a microelectronic device to a location external to the electronic device enclosure. For microelectronic devices/enclosures that are housed in air-conditioned rooms, e.g., computer data centers, industrial control rooms, and the like, it is preferred in some embodiments to place the heat sink of the heat pipe in an air duct or water pipe, so that a flow of ambient (externally supplied) air or cooling water, rather than air-conditioned air, is used to carry away the rejected heat to a location external to the room or building housing the microelectronic devices. Such an arrangement allows the cooling and heat removal to be accomplished with little or no air conditioning dedicated to the electronic devices, thereby reducing energy consumption. The interconnecting pipe(s) need not be integral with the heat absorber or heat sink sections. The connecting pipes may be assembled separately from and joined to the heat absorber and heat sink sections. As a consequence, the interconnecting pipes can be of any convenient length, provided that the pressure drop is less than the driving force (gravity and/or capillary pressure) for returning the condensed liquid to the heat absorber. The use of relatively long interconnecting pipes allows the heat sink and its associated cooling medium to be located remotely from the heat source. In the case of an enclosed heat source, e.g., microelectronic device, combustion chamber, radioactive area, etc., this allows the heat to be removed without transferring heat back to other objects in the immediate vicinity of the heat source. It also allows the use of cooling media other than air at the heat sink, e.g., water, refrigerated fluids, thermoelectric cooling devices, etc. In some embodiments, the length of the pipes may be in a range from 5 to 10 inches, for example when used to reject heat from inside an electronic device (e.g., a personal computer) to the surrounding air. In other embodiments, the length may be from 5 to 10 feet or even from 5 to 30 feet, such as when heat is to be rejected to an air vent or outside of the room containing the heat source. However, the length may be even greater if the connecting pipe diameter is sufficiently large to keep the pressure drop low enough for good flow. Warm side pipe26, cool side pipe30, and common connecting pipe32will typically have smaller cross-sections than the heat absorber or heat sink sections, to facilitate the collection and flow of the liquid and vaporized working fluid. The pipes may be of any arbitrary shape and, if suitably thin-walled, may be readily flexed or bent to accommodate off-set placement of the heat sink relative to the heat absorber, and/or routing of the connecting pipes around other objects.
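The length criterion just stated (frictional pressure drop below the available driving force) can be sketched for a gravity-returned liquid line. Laminar Hagen-Poiseuille flow is assumed, and all numbers are illustrative assumptions rather than values from this disclosure:

```python
# Checking candidate return-line lengths: the laminar pressure drop
# dP = 128 * mu * L * Q / (pi * d^4) must stay below the gravity head
# rho * g * h available to drive the condensate back to the absorber.
import math

rho = 1300.0      # kg/m^3, assumed liquid density (HFC-like)
mu = 4.0e-4       # Pa*s, assumed liquid viscosity
g = 9.81
q = 2.0e-6        # m^3/s, assumed condensate volumetric flow
d = 6.35e-3       # m, 1/4" return line inside diameter
elevation = 2.0   # m, heat sink height above the absorber (assumed)

def hagen_poiseuille_dp(length):
    """Laminar pressure drop: dP = 128 * mu * L * Q / (pi * d^4)."""
    return 128.0 * mu * length * q / (math.pi * d ** 4)

# The assumed flow is laminar here (Re ~ 1300 for these values).
gravity_head = rho * g * elevation
for length in (1.0, 5.0, 30.0):   # candidate pipe lengths, m
    dp = hagen_poiseuille_dp(length)
    ok = "OK" if dp < gravity_head else "too long"
    print(f"L = {length:4.1f} m: dP = {dp:7.1f} Pa vs head {gravity_head:.0f} Pa -> {ok}")
```

For these assumptions even a 30-foot-class run leaves ample margin, consistent with the statement that much longer pipes are workable when the diameter keeps the pressure drop low.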
EXAMPLES Example 1: Air-Cooled Single-Tube Non-Wick Heat Pipe System A heat pipe system is constructed, consisting of a microchannel block-type heat absorber, a finned microchannel heat sink, a connecting pipe, and a working fluid. The heat absorber is an Atotech “Ardex MC-1” microchannel CPU cooler, manufactured by Atotech Deutschland GmbH of Berlin, Germany. One of the two threaded ports is provided with a male adapter ⅜″ tube fitting. The other threaded port is closed off with a pipe plug. The heat sink is an Atotech “Ardex MC-1” microchannel CPU cooler, modified by the addition of thin sheet metal copper cooling fins soldered to the flat side of the MC-1 device. One of the two threaded ports is provided with a male adapter ⅜″ tube fitting. The other threaded port is closed off with a pipe plug. The connecting pipe is a ⅜″ diameter semi-flexible copper or perfluoroalkoxy (PFA) plastic tube, connected to the absorber and heat sink by means of the tube fittings. The connecting pipe is preferably insulated, to minimize heat transfer between the connecting tube and the air space surrounding it. This is useful if the heat pipe connecting tube is within an enclosure (and the heat sink outside the enclosure), to minimize the temperature rise in the enclosure and ensure maximum rejection of heat from the heat source while minimizing heat-up of the enclosure. The connecting pipe is optionally bent, to allow the heat sink to be offset from the heat absorber. The heat pipe assembly and a container of working fluid (HFC-245fa) are chilled in a domestic refrigerator, to approximately 4.4° C. (40° F.). The chilled liquid working fluid is charged to the heat pipe assembly by removing the pipe plug from the heat absorber and pouring the fluid in until the liquid level is approximately at the same level as the top of the microchannel plate stack. After charging with the working fluid, the pipe plug is replaced, sealing the system. The heat pipe assembly is oriented vertically, with the heat absorber block at the bottom, and the finned heat sink section at the top. The heat absorber block is placed in direct contact with the hot object to be cooled, e.g., a central processing unit (CPU) of a computer, which generates heat during operation. The finned heat sink section is exposed to ambient temperature air, which may optionally be circulated around the fins by means of an external fan, to improve the rate of heat removal. Conduction of heat from the hot object via the heat transfer block causes the working fluid to boil. The vapors travel via the central portion of the connecting pipe, and are cooled and condensed by conduction with the finned heat sink section, and the heat is rejected by convection to the ambient air. The condensed fluid returns by gravity along the walls of the connecting pipe to the heat transfer block, allowing the cycle to repeat. During operation, the temperature of the working fluid rises to a value intermediate between that of the heat source and that of the ambient air external to the heat sink.
At steady state conditions (e.g., assuming heat generation at a constant rate or wattage) the temperature of the working fluid is determined by the heat absorption being in balance with the heat rejection, according to the following relationships:
Qabsorbed = Uabsorber × Aabsorber × (Thot − Tfluid)
Qrejected = Usink × Asink × (Tfluid − Tair)
Tfluid = [(Uabsorber × Aabsorber × Thot) + (Usink × Asink × Tair)] / [(Uabsorber × Aabsorber) + (Usink × Asink)]
Where
Q = heat transfer rate
U = heat transfer coefficient
A = heat transfer area
Thot = temperature of heat source
Tfluid = temperature of the working fluid
Tair = temperature of the ambient air
Example 2: Air-Cooled Two-Tube Non-Wick Heat Pipe System A heat pipe system was constructed, consisting of an Atotech Ardex P microchannel block-type heat absorber, a finned microchannel heat sink, two connecting pipes, and a working fluid. The microchannel heat sink consisted of an Atotech Ardex P microchannel block soldered to a CompUSA Pentium 4 Socket 478 CPU cooler fin-fan assembly. The heat pipe assembly consisted of substantially the same equipment and construction as used in Example 1, with the following differences. The second port of the heat absorber was provided with a ¼″ tube fitting male run tee, in lieu of the pipe plug. The second port of the heat sink was provided with a male adapter ¼″ tube fitting, in lieu of the pipe plug. Two connecting pipes were used. The vapor pipe was a ⅜″ diameter PFA tube, and the liquid pipe was a ¼″ PFA tube. The connecting tubes were connected to the absorber and the heat sink by means of the tube fittings. The working fluid was charged by means of the unused port on the tee connected to the heat absorber. After charging, the port was capped with a tube-fitting plug. The heat pipe assembly was oriented vertically, with the heat absorber block at the bottom, and the finned heat sink section at the top. The heat absorber block was placed in direct contact with the hot object to be cooled. A 2¼ inch square×½ inch thick aluminum block, provided with an electrical cartridge heater embedded in the middle of the block and connected to a Variac™ power source, was used to simulate the central processing unit (CPU) of a computer, which generates heat during operation. The heated block was provided with a thermocouple embedded in the block, adjacent to the cartridge heater. The finned heat sink section was exposed to ambient-temperature air. (Note, although not done in this example, air may optionally be circulated around the fins by means of an external fan, to improve the rate of heat removal.) Conduction of heat from the hot object via the heat transfer block caused the working fluid to boil. The vapors traveled via the larger diameter vapor pipe, and were cooled and condensed by conduction with the finned heat sink section, and the heat was rejected by convection to the ambient air. The condensed fluid returned by gravity to the heat transfer block via the smaller diameter liquid return pipe, allowing the cycle to repeat. The fluid flow was visible in the semi-transparent PFA tubing. The temperature and pressure of the working fluid reached steady state, substantially as described in Example 1. A plot of the block temperature as a function of cartridge heater power (wattage) is shown inFIG.9, in comparison with the temperatures obtained using an un-cooled block, a block cooled by a conventional “pin-fin” CPU cooler, and an empty Ardex P cooling block.
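To make the steady-state balance above concrete, the following minimal sketch solves the relationship for Tfluid; the UA values and temperatures are illustrative assumptions, not measurements from these Examples:

```python
# Steady-state working-fluid temperature from the balance
# Q_absorbed = Q_rejected, as given in the relationships above.

def fluid_temperature(ua_absorber, ua_sink, t_hot, t_air):
    """T_fluid = (UA_abs * T_hot + UA_sink * T_air) / (UA_abs + UA_sink)."""
    return (ua_absorber * t_hot + ua_sink * t_air) / (ua_absorber + ua_sink)

ua_abs = 5.0    # W/K, heat absorber U*A (assumed)
ua_sink = 2.0   # W/K, heat sink U*A (assumed)
t_hot, t_air = 80.0, 25.0   # deg C, assumed source and ambient temperatures

t_fluid = fluid_temperature(ua_abs, ua_sink, t_hot, t_air)
q = ua_abs * (t_hot - t_fluid)   # equals ua_sink * (t_fluid - t_air)
print(f"T_fluid = {t_fluid:.1f} C, Q = {q:.1f} W")
```

As the formula shows, the fluid settles at a UA-weighted average of the source and ambient temperatures, intermediate between the two as stated in Example 1.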
As can be seen from the data inFIG.9, the bare block without cooling became extremely hot at the higher power input levels, and a prior art pin-fin CPU cooler provided some degree of cooling. However, the two heat pipes using microchannel heat absorbers according to the invention provided substantially more cooling than the pin-fin cooler. In fact, the microchannel systems provided better cooling (lower block temperature) at 100 watts power input than the pin-fin cooler did at only 80 watts. For comparison, a run is also shown using a microchannel heat pipe without any working fluid (labeled “Block w. Empty Ardex P”), and this provided minimal cooling as expected. Example 3: Liquid-Cooled Single-Tube Heat Pipe System A heat pipe system is constructed, consisting of a microchannel block-type heat absorber, a water-cooled microchannel heat exchanger heat sink, a connecting pipe, and a working fluid. The heat pipe assembly consists of substantially the same equipment as described in Example 2, with the following differences. The heat sink is a cross-flow 2-fluid microchannel heat exchanger. The working fluid is the first fluid, and flowing cooling water is the second fluid, so that heat is removed from the system by heat transfer from the condensing working fluid vapors, through the walls of the microchannel heat sink, into the cooling water. Example 4: Air-Cooled Single-Tube Heat Pipe System with Wick A heat pipe system similar to that of Example 1 is constructed, except that an annular band of porous wicking material is inserted along the inside wall of the connecting pipe. In this example, the wicking material is an annular roll of sintered −35+65 mesh spherical fine-mesh stainless steel powder. The wicking material causes the condensed working fluid to return to the heat absorber block by capillary action. This allows the heat pipe to be oriented horizontally or even with the heat sink section below the heat absorber block, provided that the capillary force is greater than the gravitational force acting on the returning fluid. Example 5: Air-Cooled Dual-Tube Heat Pipe with Liquid Return Line Wick A heat pipe system similar to that of Example 2 is constructed, except that the liquid return pipe is packed with porous wicking material. In this example, the wicking material is a braid of fiberglass. The wicking material causes the condensed working fluid to return to the heat absorber block by capillary action. This allows the heat pipe to be oriented horizontally or even with the heat sink section below the heat absorber block, provided that the capillary force is greater than the gravitational force acting on the returning fluid. Example 6: Cooling of a Microelectronic Device by Means of a Heat Pipe, Rejecting the Heat Externally to the Microelectronic Device Enclosure and Room A heat pipe system similar to that of Example 4 is constructed with the heat absorber in contact with the surface of a computer microprocessor (i.e. central processing unit or CPU) to provide a means of cooling to remove the heat generated by the CPU, to prevent overheating. The wick-bearing connecting tube is routed externally to the housing of the computer, and the finned heat sink is placed in an air duct, wherein the air duct is supplied with non-air-conditioned fresh air from outside the room or building housing the computer. Using this configuration, the air warmed by the rejected heat is routed outside the room or building housing the computer. 
This arrangement may be repeated for multiple microelectronic devices, e.g., other heat generating processors (such as graphic processing units or GPUs), controller “chips”, power supplies, and the like that are housed in a common enclosure, and/or multiple separately enclosed microelectronic devices, with some or all of the heat pipes rejecting their heat to a common externally-supplied and vented system of air ducts. This arrangement is in contrast to conventional practices, wherein heat removed from the computer components is rejected within the enclosure (e.g., by fin/fan combinations mounted on the CPU, GPU, and controller chips), and fans are used to blow air through the enclosure, moving the heat out into the room housing the computer, heating up the air in the room. This arrangement often requires that the room housing the computer(s) be air conditioned, to prevent the air temperature from rising beyond acceptable limits. It has been calculated that the power requirements associated with the air-conditioning of the rooms housing the computers, e.g., for data centers, are comparable to the power consumed by the computers. Thus, by transferring the heat to externally-supplied and vented non-air-conditioned air, the overall power requirements for the computer system and its ancillary systems may be reduced by nearly half. In another embodiment, the external cooling may be provided by an inexpensive liquid coolant, e.g., cooling water, in lieu of air. In either case, the air or the liquid coolant may flow to a location outside of the room, thereby reducing the amount of heat added to the room environment. Although the invention is illustrated and described herein with reference to specific embodiments, it is not intended that the subjoined claims be limited to the details shown. Rather, it is expected that various modifications may be made in these details by those skilled in the art, which modifications may still be within the spirit and scope of the claimed subject matter and it is intended that these claims be construed accordingly. | 39,664 |
11859913 | DESCRIPTION OF EMBODIMENTS Embodiments of the present invention are described below with reference to the accompanying drawings. Note that in the drawings attached hereto, for ease of illustration and understanding, the scale, the aspect ratio, and the like are changed from the actual ones and are exaggerated as appropriate. As used herein, the geometrical conditions, physical properties, the terms identifying the degrees of the geometrical conditions or physical properties, and numerical values indicating the geometrical conditions or physical properties are not defined strictly. Accordingly, these geometric conditions, physical properties, terms, and numerical values shall be interpreted to include the extent to which similar functions can be expected. Examples of a term that identifies geometric conditions include “length”, “angle”, “shape”, and “arrangement”. Examples of a term that identifies the degree of geometric conditions include “parallel”, “orthogonal”, and “identical”. In addition, for clarity of the drawings, the shapes of a plurality of portions that could be expected to function in a similar manner are illustrated uniformly. However, the shapes need not be defined strictly, and the shapes of the portions may differ from one another as long as the portions function as expected. Furthermore, in the drawings, a boundary line indicating the joint surfaces of members and the like is denoted by a straight line for simplicity, but it is not limited to a strict straight line, and any shape of the boundary line may be employed as long as the joint surfaces provide the expected joint performance. First Embodiment A wick sheet for a vapor chamber, a vapor chamber, and an electronic apparatus according to the first embodiment of the present invention are described below with reference toFIGS.1to21. A vapor chamber1in the present embodiment is housed in a housing H of an electronic apparatus E together with an electronic device D that generates heat. The vapor chamber1is a device for cooling the electronic device D. An example of the electronic apparatus E is a mobile terminal, such as a portable terminal or a tablet. Examples of the electronic device D include a central processing unit (CPU), a light emitting diode (LED), and a power semiconductor. The electronic device D is also referred to as a “device to be cooled”. The electronic apparatus E including the vapor chamber1according to the present embodiment is described first with reference to a tablet as an example. As illustrated inFIG.1, the electronic apparatus E includes the housing H as well as the electronic device D and a vapor chamber1housed in the housing H. The electronic apparatus E illustrated inFIG.1is provided with a touch panel display TD on the front surface of the housing H. The vapor chamber1is housed in the housing H and is disposed so as to be in thermal contact with the electronic device D. This allows the vapor chamber1to receive the heat generated by the electronic device D when the electronic apparatus E is in use. The heat received by the vapor chamber1is dissipated to the outside of the vapor chamber1via working fluids2aand2b(described below). In this manner, the electronic device D is effectively cooled. If the electronic apparatus E is a tablet, the electronic device D corresponds to the central processing unit or the like. The vapor chamber1according to the present embodiment is described below.
As illustrated inFIGS.2and3, the vapor chamber1has a sealed space3in which the working fluids2aand2bare enclosed, and the phase changes of the working fluids2aand2bin the sealed space3are repeated. Thus, the electronic device D described above is cooled. Examples of the working fluids2aand2binclude pure water, ethanol, methanol, acetone, and any mixture thereof. Note that the working fluids2aand2bmay have a freezing-expansion property. That is, the working fluids2aand2bmay be fluids that exhibit expansion upon freezing. Examples of the working fluids2aand2bthat exhibit expansion upon freezing include pure water and an aqueous solution of pure water and an additive, such as alcohol. As illustrated inFIGS.2and3, the vapor chamber1has a lower sheet10, an upper sheet20, and a wick sheet30for a vapor chamber. The lower sheet10is an example of a first sheet. The upper sheet20is an example of a second sheet. The wick sheet30for a vapor chamber is sandwiched between the lower sheet10and the upper sheet20. Hereinafter, the wick sheet for a vapor chamber is simply referred to as a wick sheet30. In the present embodiment, the lower sheet10, the wick sheet30, and the upper sheet20are stacked in this order. The vapor chamber1is formed in the shape of a substantially thin, flat plate. The vapor chamber1may have any planar shape, and the planar shape of the vapor chamber1may be rectangular as illustrated inFIG.2. For example, the planar shape of the vapor chamber1may be a rectangle with one side of length 1 cm and an adjacent side of length 3 cm, or a square with a side length of 15 cm. The vapor chamber1may have any planar dimensions. The present embodiment is described below with reference to the vapor chamber1having a rectangular planar shape with the longitudinal direction being an X direction (described below). In this case, as illustrated inFIGS.4to7, the lower sheet10, upper sheet20, and wick sheet30may have the same planar shape as the vapor chamber1. Note that the planar shape of the vapor chamber1is not limited to a rectangle but instead may be any shape, such as a circle, an ellipse, L shape, or T shape. As illustrated inFIG.2, the vapor chamber1has an evaporation region SR in which the working fluids2aand2bevaporate and a condensation region CR in which the working fluids2aand2bcondense. The evaporation region SR overlaps the electronic device D in plan view and is the region where the electronic device D is attached. The evaporation region SR can be disposed anywhere in the vapor chamber1. In the present embodiment, the evaporation region SR is formed on one side of the vapor chamber1in the X direction (the left side inFIG.2). Heat is transferred from the electronic device D to the evaporation region SR, and the heat causes the liquid of the working fluid to evaporate in the evaporation region SR. The heat from the electronic device D may be transferred not only to the region that overlaps the electronic device D in plan view, but also to a region around the region where the electronic device D overlaps. Accordingly, the evaporation region SR includes the region that overlaps the electronic device D and a region that surrounds the region in plan view. As used herein, the term “plan view” refers to the view in the direction orthogonal to a surface of the vapor chamber1that receives heat from the electronic device D and a surface that dissipates the received heat. The surface that receives heat corresponds to a second upper sheet surface20bof the upper sheet20(described below).
The surface that dissipates heat corresponds to a first lower sheet surface10aof the lower sheet10(described below). For example, as illustrated inFIG.2, the view of the vapor chamber1viewed from above or from below corresponds to the plan view. Note that the vapor of the working fluid is referred to as “working vapor2a”, and the liquid of the working fluid is referred to as “working liquid2b”. The condensation region CR is a region that does not overlap the electronic device D in plan view and is a region in which mainly the working vapor2adissipates heat and condenses. The condensation region CR can be referred to as a region surrounding the evaporation region SR. In the condensation region CR, the heat from the working vapor2ais dissipated to the lower sheet10, and the working vapor2ais cooled and condenses in the condensation region CR. When the vapor chamber1is mounted inside a tablet, the top-bottom relationship may be changed depending on the posture of the tablet. However, in the present embodiment, for convenience, the sheet that receives heat from the electronic device D is referred to as the upper sheet20described above, and the sheet that dissipates the received heat is referred to as the lower sheet10described above. Thus, the configuration of the vapor chamber1is described with reference to the lower sheet10disposed on the lower side and the upper sheet20disposed on the upper side. As illustrated inFIG.3, the lower sheet10has the first lower sheet surface10aprovided on the opposite side from the wick sheet30and a second lower sheet surface10bprovided on the opposite side from the first lower sheet surface10a. The second lower sheet surface10bis closer to the wick sheet30. The entire lower sheet10may be formed as a flat sheet. The entire lower sheet10may have a constant thickness. A housing member Ha that constitutes part of the housing H described above is attached to the first lower sheet surface10a. The entire first lower sheet surface10amay be covered with the housing member Ha. As illustrated inFIG.4, an alignment hole12may be provided in each of the four corners of the lower sheet10. As illustrated inFIG.3, the upper sheet20has a first upper sheet surface20athat is closer to the wick sheet30, and a second upper sheet surface20bprovided on the opposite side from the first upper sheet surface20a. The first upper sheet surface20ais closer to the wick sheet30. The entire upper sheet20may be formed as a flat sheet. The entire upper sheet20may have a constant thickness. The electronic device D described above is attached to the second upper sheet surface20b. As illustrated inFIG.5, an alignment hole22may be provided in each of the four corners of the upper sheet20. As illustrated inFIG.3, the wick sheet30has a sheet body31and a vapor flow channel portion50, a liquid flow channel portion60, and a liquid storage portion70provided in the sheet body31. The sheet body31has a first body surface31aand a second body surface31bprovided on the opposite side from the first body surface31a. The first body surface31ais closer to the lower sheet10. The second body surface31bis closer to the upper sheet20. The vapor flow channel portion50, the liquid flow channel portion60, and the liquid storage portion70constitute the sealed space3described above. The second lower sheet surface10bof the lower sheet10and the first body surface31aof the sheet body31may be diffusion bonded to each other. The second lower sheet surface10band the first body surface31amay be permanently bonded to each other. 
Similarly, the first upper sheet surface20aof the upper sheet20and the second body surface31bof the sheet body31may be diffusion bonded to each other. The first upper sheet surface20aand the second body surface31bmay be permanently bonded to each other. Note that instead of using a diffusion bonding technique, the lower sheet10, upper sheet20, and wick sheet30may be bonded using another technique, such as brazing, if the sheets can be permanently bonded together. As used herein, the term “permanently bonded” is not defined strictly. The term is used to mean that the sheets are bonded to such a degree that the sealing of the sealed space3is maintained during the operation of the vapor chamber1. It is only required that the lower sheet10and the wick sheet30are permanently bonded to maintain bonding between the lower sheet10and the wick sheet30during operation of the vapor chamber1. It is only required that the upper sheet20and the wick sheet30are permanently bonded to maintain bonding between the upper sheet20and the wick sheet30during operation of the vapor chamber1. The sheet body31of the wick sheet30according to the present embodiment includes a frame body portion32and a plurality of land portions33. As illustrated inFIGS.3,6, and7, the frame body portion32is formed in a rectangular frame shape in plan view. The land portions33are provided inside the frame body portion32. The frame body portion32and the land portions33are portions where the material of the wick sheet30remains without being etched in the etching process (described below). The vapor flow channel portion50is defined inside the frame body portion32. That is, the working vapor2aflows inside the frame body portion32and around the land portions33. In the present embodiment, the land portions33may extend in an elongated shape so that the longitudinal direction thereof is the X direction in plan view. The planar shape of the land portion33may be an elongated rectangular shape. The X direction is an example of a first direction. The X direction corresponds to the right-left direction inFIG.6. In addition, the land portions33are equally spaced apart from each other in a Y direction. The Y direction is an example of a second direction. The Y direction corresponds to the top-bottom direction inFIG.6. The land portions33may be arranged parallel to one another. The working vapor2aflows around each of the land portions33and is delivered toward the condensation region CR. This inhibits the flow of the working vapor2afrom being obstructed. A width w1of the land portion33(refer toFIG.8A) may be, for example, 100 μm to 1500 μm. Note that the width w1of the land portion33is the dimension of the land portion33in the Y direction. The width w1refers to the dimension at the position at which a penetration portion34(described below) exists in the thickness direction of the wick sheet30. The frame body portion32and the land portions33are diffusion bonded to the lower sheet10and to the upper sheet20. Thus, the mechanical strength of the vapor chamber1can be increased. A wall surface53aof a lower vapor flow channel recess53(described below) and a wall surface54aof the upper vapor flow channel recess54(described below) constitute a side wall of the land portion33. The first body surface31aand the second body surface31bof the sheet body31may be formed flat over the frame body portion32and the land portions33. The vapor flow channel portion50is an example of a penetration space that penetrates the sheet body31. 
The vapor flow channel portion50is a channel through which mainly the working vapor2apasses. The vapor flow channel portion50penetrates from the first body surface31ato the second body surface31b. As illustrated inFIGS.6and7, the vapor flow channel portion50in the present embodiment has a first vapor passage51and a plurality of second vapor passages52. The first vapor passage51is formed between the frame body portion32and the land portions33. The first vapor passage51is formed in a continuous manner inside the frame body portion32and outside the land portions33. The planar shape of the first vapor passage51is a rectangular frame shape. Each of the second vapor passages52is formed between two neighboring land portions33. The planar shape of the second vapor passage52is an elongated rectangle. The vapor flow channel portion50is partitioned into the first vapor passage51and the plurality of second vapor passages52by the plurality of land portions33. As illustrated inFIG.3, the first vapor passage51and the second vapor passages52extend from the first body surface31ato the second body surface31bof the sheet body31. Each of the first vapor passage51and the second vapor passages52is composed of the lower vapor flow channel recess53on the first body surface31aand the upper vapor flow channel recess54on the second body surface31b. The lower vapor flow channel recess53communicates with the upper vapor flow channel recess54and, thus, the first vapor passage51and the second vapor passages52of the vapor flow channel portion50extend from the first body surface31ato the second body surface31b. The lower vapor flow channel recess53is formed by the fact that the first body surface31aof the wick sheet30is etched through an etching process (described below). The lower vapor flow channel recess53having a concave shape is formed on the first body surface31a. As a result, as illustrated inFIG.8A, the lower vapor flow channel recess53has the wall surface53athat is curved. The wall surface53adefines the lower vapor flow channel recess53and is curved so as to expand towards the second body surface31b. The lower vapor flow channel recesses53formed in this manner constitute part (the lower half) of the first vapor passage51and part (the lower half) of the second vapor passage52. The upper vapor flow channel recess54is formed by the fact that the second body surface31bof the wick sheet30is etched through an etching process (described below). The upper vapor flow channel recess54having a concave shape is formed on the second body surface31b. As a result, as illustrated inFIG.8A, the upper vapor flow channel recess54has the wall surface54athat is curved. The wall surface54adefines the upper vapor flow channel recess54and is curved so as to expand towards the first body surface31a. The upper vapor flow channel recesses54formed in this manner constitute part (the upper half) of the first vapor passage51and part (the upper half) of the second vapor passage52. As illustrated inFIG.8A, the wall surface53aof the lower vapor flow channel recess53and the wall surface54aof the upper vapor flow channel recess54are connected to form the penetration portion34. The wall surface53aand the wall surface54aare each curved toward the penetration portion34. In this manner, the lower vapor flow channel recess53communicates with the upper vapor flow channel recess54.
In the present embodiment, like the planar shape of the first vapor passage51, the planar shape of the penetration portion34in the first vapor passage51is a rectangular frame shape. Like the planar shape of the second vapor passage52, the planar shape of the penetration portion34in the second vapor passage52is an elongated rectangle. The wall surface53aof the lower vapor flow channel recess53may merge with the wall surface54aof the upper vapor flow channel recess54, and the ridge line may define the penetration portion34. As illustrated inFIG.8A, the ridge line may be formed so as to protrude inwardly of the vapor passages51and52. The plane area of the first vapor passage51is minimized at the penetration portion34, and the plane area of the second vapor passage52is minimized at the penetration portion34. A width w2(refer toFIG.8A) of the penetration portion34may be, for example, 400 μm to 1600 μm. Note that the width w2of the penetration portion34corresponds to a gap between two neighboring land portions33in the Y direction. The position of the penetration portion34in a Z direction may be the middle position between the first body surface31aand the second body surface31b. Alternatively, the position of the penetration portion34may be a position shifted downward or upward from the middle position. The position of the penetration portion34in the Z direction is any position as long as the lower vapor flow channel recess53communicates with the upper vapor flow channel recess54. In the present embodiment, the sectional shape of each of the first vapor passage51and the second vapor passage52is formed to include the penetration portion34defined by the ridge line formed to protrude inwardly. However, the shape is not limited thereto. For example, the sectional shape of the first vapor passage51and the sectional shape of the second vapor passage52may be trapezoidal, rectangular, or barrel-shaped. The vapor flow channel portion50including the first vapor passage51and the second vapor passage52formed in this manner constitutes part of the above-described sealed space3. As illustrated inFIG.3, the vapor flow channel portion50according to the present embodiment is defined mainly by the lower sheet10, the upper sheet20, and the frame body portion32and the land portions33of the sheet body31described above. Each of the vapor passages51and52has a relatively large flow channel cross-sectional area so that the working vapor2apasses through the vapor passages51and52. Note that for clarity of the drawing,FIG.3is an enlarged view of the first vapor passage51, the second vapor passage52, and the like. The numbers and arrangement of the vapor passages51and52differ from those inFIGS.2,6and7. Note that although not illustrated, a plurality of support portions may be provided in the vapor flow channel portion50to support the land portions33with respect to the frame body portion32. In addition, support portions may be provided to support two neighboring land portions33. These support portions may be provided on either side of the land portion33in the X direction or on either side of the land portion33in the Y direction. It is desirable that the support portions are formed so as not to obstruct the flow of working vapor2adispersing through the vapor flow channel portion50.
For example, the support portion may be disposed at a position closer to one of the first and second body surfaces31aand31bof the sheet body31of the wick sheet30, and a space functioning as a vapor flow channel recess may be formed at a position closer to the other of the first and second body surfaces31aand31b. This allows the thickness of the support portion to be less than that of the sheet body31, and the first vapor passage51and the second vapor passages52can be prevented from being separated from each other in the X and Y directions. As illustrated inFIGS.6and7, an alignment hole35may be provided in each of the four corners of the sheet body31of the wick sheet30. As illustrated inFIG.2, the vapor chamber1may include an injection portion4at one edge thereof in the X direction, through which the working liquid2bis injected into the sealed space3. In the configuration illustrated inFIG.2, the injection portion4is disposed at a position closer to the evaporation region SR. The injection portion4protrudes outward from the edge closer to the evaporation region SR. More specifically, the injection portion4may include a lower injection protrusion11, an upper injection protrusion21, and a wick sheet injection protrusion36. As illustrated inFIG.4, the lower injection protrusion11is part of the lower sheet10. As illustrated inFIG.5, the upper injection protrusion21is part of the upper sheet20. As illustrated inFIGS.6and7, the wick sheet injection protrusion36is part of the sheet body31. The wick sheet injection protrusion36has an injection flow channel37formed therein. The injection flow channel37extends from the first body surface31ato the second body surface31bof the sheet body31and penetrates the sheet body31(more specifically, the wick sheet injection protrusion36) in the Z direction. In addition, the injection flow channel37communicates with the vapor flow channel portion50. The working liquid2bis injected into the sealed space3through the injection flow channel37. Note that the injection flow channel37may communicate with the liquid flow channel portions60, depending on the arrangement of the liquid flow channel portions60. The upper and lower surfaces of the wick sheet injection protrusion36are formed in a flat shape. In addition, the upper surface of the lower injection protrusion11and the lower surface of the upper injection protrusion21are formed in a flat shape. The injection protrusions11,21, and36may have the same flat shape. In the present embodiment, an example is illustrated in which the injection portion4is provided at one of two edges in the X direction of the vapor chamber1. However, the position of the injection portion4is not limited thereto, and the injection portion4can be provided at any position. In addition, the injection flow channel37provided in the wick sheet injection protrusion36does not necessarily have to penetrate the sheet body31as long as the working liquid2bcan be injected. In this case, the injection flow channel37that communicates with the vapor flow channel portion50can be formed by the fact that only one of the first body surface31aand the second body surface31bof the sheet body31is etched through an etching process. As illustrated inFIGS.3,6and8A, the liquid flow channel portion60is provided on the second body surface31bof the sheet body31of the wick sheet30. The liquid flow channel portion60may be a channel through which mainly the working liquid2bpasses. The liquid flow channel portion60constitutes part of the sealed space3described above.
The liquid flow channel portion 60 communicates with the vapor flow channel portion 50. The liquid flow channel portion 60 is configured so as to have a capillary structure for delivering the working liquid 2b to the evaporation region SR. The liquid flow channel portion 60 is also referred to as a wick. In the present embodiment, the liquid flow channel portion 60 is provided on the second body surface 31b of each of the land portions 33 of the wick sheet 30. The liquid flow channel portion 60 may be formed over the entire second body surface 31b of each land portion 33. The liquid flow channel portion 60 need not be provided on the first body surface 31a of each land portion 33. As illustrated in FIG. 9, the liquid flow channel portion 60 is an example of a first groove assembly. More specifically, the liquid flow channel portion 60 includes a plurality of liquid flow channel mainstream grooves 61 and a plurality of liquid flow channel communication grooves 65. The liquid flow channel mainstream groove 61 is an example of a first mainstream groove. The liquid flow channel communication groove 65 is an example of a first communication groove. The liquid flow channel mainstream groove 61 and the liquid flow channel communication groove 65 are grooves through which the working liquid 2b passes. The liquid flow channel communication groove 65 communicates with the liquid flow channel mainstream groove 61. As illustrated in FIG. 9, each of the liquid flow channel mainstream grooves 61 extends in the X direction. The liquid flow channel mainstream groove 61 has such a flow channel cross-sectional area that mainly the working liquid 2b flows by capillary action. The flow channel cross-sectional area of the liquid flow channel mainstream groove 61 is less than that of the vapor passages 51 and 52. In this manner, the liquid flow channel mainstream groove 61 is configured to deliver, to the evaporation region SR, the working liquid 2b condensed from the working vapor 2a. The liquid flow channel mainstream grooves 61 may be spaced equally apart in the Y direction that is orthogonal to the X direction. The liquid flow channel mainstream grooves 61 are formed by etching the second body surface 31b of the sheet body 31 of the wick sheet 30 in an etching process (described below). As a result, each of the liquid flow channel mainstream grooves 61 has a curved wall surface 62, as illustrated in FIG. 8A. The wall surface 62 defines the liquid flow channel mainstream groove 61 and is curved so as to expand toward the first body surface 31a. As illustrated in FIGS. 8A and 9, a width w3 of the liquid flow channel mainstream groove 61 may be, for example, 5 μm to 150 μm. The width w3 of the liquid flow channel mainstream groove 61 refers to the dimension of the liquid flow channel mainstream groove 61 at the second body surface 31b. The width w3 corresponds to the dimension in the Y direction. As illustrated in FIG. 8A, a depth h1 of the liquid flow channel mainstream groove 61 may be, for example, 3 μm to 150 μm. The depth h1 corresponds to the dimension in the Z direction. As illustrated in FIG. 9, each of the liquid flow channel communication grooves 65 extends in a direction that differs from the X direction. In the present embodiment, each of the liquid flow channel communication grooves 65 extends in the Y direction. The liquid flow channel communication groove 65 is formed orthogonal to the liquid flow channel mainstream groove 61. Some of the liquid flow channel communication grooves 65 interconnect two neighboring liquid flow channel mainstream grooves 61.
The other liquid flow channel communication grooves 65 connect the first vapor passage 51 or the second vapor passage 52 to the liquid flow channel mainstream groove 61. That is, these liquid flow channel communication grooves 65 extend from the edge in the Y direction of the land portion 33 to the liquid flow channel mainstream groove 61 adjacent to the edge. In this way, the first vapor passage 51 communicates with the liquid flow channel mainstream groove 61, and the second vapor passage 52 communicates with the liquid flow channel mainstream groove 61. The liquid flow channel communication groove 65 has such a flow channel cross-sectional area that mainly the working liquid 2b flows by capillary action. The flow channel cross-sectional area of the liquid flow channel communication groove 65 is less than that of the vapor passages 51 and 52. The liquid flow channel communication grooves 65 may be disposed so as to be spaced equally apart in the X direction. Like the liquid flow channel mainstream grooves 61, the liquid flow channel communication grooves 65 are formed through an etching process. Each of the liquid flow channel communication grooves 65 has a curved wall surface (not illustrated) similar to that of the liquid flow channel mainstream grooves 61. As illustrated in FIG. 9, a width w4 of the liquid flow channel communication groove 65 may be the same as the width w3 of the liquid flow channel mainstream groove 61. However, the width w4 may be greater or less than the width w3. The width w4 corresponds to the dimension in the X direction. The depth of the liquid flow channel communication groove 65 may be the same as the depth h1 of the liquid flow channel mainstream groove 61. However, the depth of the liquid flow channel communication groove 65 may be greater or less than the depth h1. As illustrated in FIG. 9, a convex-portion row 63 is provided between two neighboring liquid flow channel mainstream grooves 61. Each of the convex-portion rows 63 includes a plurality of convex portions 64 arranged in the X direction. The convex portion 64 is an example of a liquid flow channel protrusion portion. The convex portions 64 are provided in the liquid flow channel portion 60. The convex portions 64 protrude from the sheet body 31 and are in contact with the upper sheet 20. Each of the convex portions 64 is formed in a rectangular shape in plan view such that the X direction is the longitudinal direction. Each of the liquid flow channel mainstream grooves 61 is disposed between two neighboring convex portions 64 in the Y direction. Each of the liquid flow channel communication grooves 65 is disposed between two neighboring convex portions 64 in the X direction. The liquid flow channel communication groove 65 extends in the Y direction and interconnects two neighboring liquid flow channel mainstream grooves 61 in the Y direction. This allows the working liquid 2b to flow back and forth between the liquid flow channel mainstream grooves 61. The convex portion 64 is a portion where the material of the wick sheet 30 remains without being etched in the etching process (described below). In the present embodiment, the planar shape of the convex portion 64 is rectangular, as illustrated in FIG. 9. The planar shape of the convex portion 64 corresponds to the planar shape of the sheet body 31 at the location of the second body surface 31b. In the present embodiment, the convex portions 64 are arranged in a staggered pattern. More specifically, the convex portions 64 of two neighboring convex-portion rows 63 in the Y direction are displaced from each other in the X direction.
The amount of displacement may be half the arrangement pitch of the convex portions 64 in the X direction. A width w5 of the convex portion 64 may be, for example, 5 μm to 500 μm. The width w5 of the convex portion 64 refers to the dimension at the second body surface 31b. The width w5 corresponds to the dimension in the Y direction. Note that the arrangement of the convex portions 64 is not limited to a staggered pattern; the convex portions 64 may instead be arranged in parallel. In this case, the convex portions 64 of the neighboring convex-portion rows 63 in the Y direction are also aligned in the X direction (refer to FIG. 19). The liquid flow channel mainstream groove 61 includes a liquid flow channel intersection portion 66. The liquid flow channel intersection portion 66 is an example of a first intersection portion. The liquid flow channel intersection portion 66 is a portion of the liquid flow channel mainstream groove 61 in which the liquid flow channel mainstream groove 61 communicates with the liquid flow channel communication groove 65. In the liquid flow channel intersection portion 66, the liquid flow channel mainstream groove 61 and the liquid flow channel communication groove 65 communicate with each other in a T-shape. That is, at a liquid flow channel intersection portion 66 at which one liquid flow channel mainstream groove 61 communicates with the liquid flow channel communication groove 65 located on one side, the liquid flow channel communication groove 65 located on the other side is prevented from communicating with the liquid flow channel mainstream groove 61. In this manner, in the liquid flow channel intersection portion 66, the wall surface 62 of the liquid flow channel mainstream groove 61 is prevented from being cut out on both sides and, thus, the wall surface 62 on one side is made to remain. For example, at any one of the liquid flow channel intersection portions 66, the upper liquid flow channel communication groove 65 and the lower liquid flow channel communication groove 65 in FIG. 9 are prevented from both communicating with the liquid flow channel mainstream groove 61. In this case, both the upper wall surface 62 and the lower wall surface 62 in FIG. 9 can be prevented from being cut out at the liquid flow channel intersection portion 66. Thus, even in the liquid flow channel intersection portion 66, the capillary action of the working fluid in the liquid flow channel mainstream grooves 61 can be produced. As a result, a decrease in the propulsive force of the working liquid 2b toward the evaporation region SR can be reduced in the liquid flow channel intersection portion 66. As illustrated in FIGS. 3, 7 and 8A, the liquid storage portions 70 are provided on the first body surface 31a of the sheet body 31 of the wick sheet 30. Each of the liquid storage portions 70 may be a portion that mainly stores the working liquid 2b. The liquid storage portion 70 constitutes part of the sealed space 3 described above. The liquid storage portion 70 communicates with the vapor flow channel portion 50 and further communicates with the liquid flow channel portion 60 via the vapor flow channel portion 50. In the present embodiment, the liquid storage portions 70 are provided on the first body surface 31a of each of the land portions 33 of the wick sheet 30. As illustrated in FIGS. 7 and 11, the liquid storage portion 70 according to the present embodiment may be disposed on one side of the land portion 33 in the X direction. The liquid storage portion 70 may be formed on one side of the center of the land portion 33 in the X direction.
The liquid storage portion 70 may be disposed on the side of the evaporation region SR and may be disposed on the left side of the land portion 33 as illustrated in FIG. 7. More specifically, the liquid storage portion 70 is formed so as to continuously extend from one edge (in the X direction) of the land portion 33 closer to the evaporation region SR toward the other edge up to a predetermined position. In FIG. 7, the liquid storage portion 70 is formed from the left edge toward the right edge up to a predetermined position. According to the present embodiment, the liquid storage portion 70 may be disposed in the evaporation region SR. However, the location of the liquid storage portion 70 is not limited thereto. The liquid storage portion 70 may partially extend to the outside of the evaporation region SR. If at least part of the liquid storage portion 70 is disposed in the evaporation region SR, the working liquid 2b stored in the liquid storage portion 70 easily evaporates upon receiving heat from the electronic device D. The liquid storage portion 70 may be disposed in a region that overlaps the electronic device D. As illustrated in FIG. 10, the liquid storage portion 70 is an example of a second groove assembly. More specifically, the liquid storage portion 70 includes a plurality of liquid storage mainstream grooves 71 and a plurality of liquid storage communication grooves 75. The liquid storage mainstream groove 71 is an example of a second mainstream groove. The liquid storage communication groove 75 is an example of a second communication groove. The liquid storage mainstream groove 71 and the liquid storage communication groove 75 are grooves through which the working liquid 2b passes. The liquid storage communication groove 75 communicates with the liquid storage mainstream groove 71. As illustrated in FIG. 10, each of the liquid storage mainstream grooves 71 extends in the X direction. As illustrated in FIGS. 7 and 11, each of the liquid storage mainstream grooves 71 is formed so as to continuously extend from one edge (in the X direction) of the land portion 33 closer to the evaporation region SR toward the other edge up to a predetermined position. The liquid storage mainstream groove 71 defines the range of the liquid storage portion 70 in the X direction. The liquid storage mainstream groove 71 has such a flow channel cross-sectional area that mainly the working liquid 2b flows by capillary action. The flow channel cross-sectional area of the liquid storage mainstream groove 71 is less than that of the vapor passages 51 and 52. However, the flow channel cross-sectional area of the liquid storage mainstream groove 71 may be greater than that of the liquid flow channel mainstream groove 61 described above. The capillary force that acts on the working liquid 2b in the liquid storage mainstream groove 71 may be smaller than the capillary force that acts on the working liquid 2b in the liquid flow channel mainstream grooves 61. In this way, the liquid storage mainstream groove 71 can draw the working liquid 2b into the liquid storage portion 70 and ensure the stored volume of the working liquid 2b. The liquid storage mainstream grooves 71 may be disposed so as to be spaced equally apart in the Y direction that is orthogonal to the X direction. The liquid storage mainstream groove 71 is formed by etching the first body surface 31a of the sheet body 31 of the wick sheet 30 in an etching process (described below). As a result, as illustrated in FIG. 8A, the liquid storage mainstream groove 71 has a curved wall surface 72.
The wall surface 72 defines the liquid storage mainstream groove 71 and is curved so as to expand toward the second body surface 31b. As illustrated in FIGS. 8A and 10, a width w6 of the liquid storage mainstream groove 71 may be greater than the width w3 of the liquid flow channel mainstream groove 61 described above. The width w6 may be, for example, 10 μm to 250 μm. Note that the width w6 of the liquid storage mainstream groove 71 refers to the dimension at the first body surface 31a. The width w6 corresponds to the dimension in the Y direction. In addition, as illustrated in FIG. 8A, a depth h2 of the liquid storage mainstream groove 71 may be greater than the depth h1 of the liquid flow channel mainstream groove 61 described above. The depth h2 may be, for example, 5 μm to 200 μm. The depth h2 corresponds to the dimension in the Z direction. As illustrated in FIG. 10, the liquid storage communication grooves 75 extend in a direction that differs from the X direction. In the present embodiment, the liquid storage communication grooves 75 extend in the Y direction. The liquid storage communication grooves 75 are formed orthogonal to the liquid storage mainstream grooves 71. Some of the liquid storage communication grooves 75 interconnect two neighboring liquid storage mainstream grooves 71. The other liquid storage communication grooves 75 connect the first vapor passage 51 or the second vapor passage 52 to the liquid storage mainstream groove 71. That is, these liquid storage communication grooves 75 extend from the edge in the Y direction of the land portion 33 to the liquid storage mainstream groove 71 adjacent to the edge. In this way, the first vapor passage 51 communicates with the liquid storage mainstream groove 71, and the second vapor passage 52 communicates with the liquid storage mainstream groove 71. The liquid storage communication groove 75 has such a flow channel cross-sectional area that mainly the working liquid 2b flows by capillary action. The flow channel cross-sectional area of the liquid storage communication groove 75 is less than that of the vapor passages 51 and 52. However, the flow channel cross-sectional area of the liquid storage communication groove 75 may be greater than that of the liquid flow channel communication groove 65 described above. The capillary force that acts on the working liquid 2b in the liquid storage communication groove 75 may be smaller than the capillary force that acts on the working liquid 2b in the liquid flow channel communication groove 65. In this way, the liquid storage communication groove 75 can draw the working liquid 2b into the liquid storage portion 70 and ensure the stored volume of the working liquid 2b. The liquid storage communication grooves 75 may be disposed so as to be spaced equally apart in the X direction. Like the liquid storage mainstream grooves 71, the liquid storage communication grooves 75 are formed through an etching process. Like the liquid storage mainstream grooves 71, each of the liquid storage communication grooves 75 has a curved wall surface (not illustrated). As illustrated in FIG. 10, a width w7 of the liquid storage communication groove 75 may be the same as the width w6 of the liquid storage mainstream groove 71. However, the width w7 may be greater or less than the width w6. The width w7 corresponds to the dimension in the X direction. The depth of the liquid storage communication groove 75 may be the same as the depth h2 of the liquid storage mainstream groove 71. However, the depth of the liquid storage communication groove 75 may be greater or less than the depth h2.
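The relationship described above, in which the wider and deeper liquid storage grooves exert a weaker capillary force than the liquid flow channel grooves, can be checked with a standard Young-Laplace estimate. The estimate below is not recited in the embodiment; σ denotes the surface tension of the working liquid 2b, θ the contact angle against the groove wall, and the dimensions are hypothetical values chosen within the ranges stated above.

\[ \Delta P \approx 2\sigma\cos\theta\left(\frac{1}{w}+\frac{1}{h}\right) \]

For a liquid flow channel mainstream groove 61 with w3 = 50 μm and h1 = 30 μm, and a liquid storage mainstream groove 71 with w6 = 150 μm and h2 = 120 μm,

\[ \frac{\Delta P_{61}}{\Delta P_{71}} \approx \frac{1/50 + 1/30}{1/150 + 1/120} \approx 3.6, \]

so on this estimate the fine grooves pull the working liquid 2b roughly 3.6 times harder than the coarse grooves. This is the ordering the embodiment relies on: the liquid flow channel portion 60 drives delivery toward the evaporation region SR, while the liquid storage portion 70 merely draws in and holds a reserve.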
As illustrated in FIG. 10, a convex-portion row 73 is provided between two neighboring liquid storage mainstream grooves 71. Each of the convex-portion rows 73 includes a plurality of convex portions 74 arranged in the X direction. The convex portion 74 is an example of a liquid storage protrusion portion. The convex portions 74 are provided in the liquid storage portion 70. The convex portions 74 protrude from the sheet body 31 and are in contact with the lower sheet 10. Each of the convex portions 74 is formed in a rectangular shape in plan view such that the X direction is the longitudinal direction. Each of the liquid storage mainstream grooves 71 is disposed between two neighboring convex portions 74 in the Y direction. Each of the liquid storage communication grooves 75 is disposed between two neighboring convex portions 74 in the X direction. The liquid storage communication groove 75 extends in the Y direction and enables two neighboring liquid storage mainstream grooves 71 in the Y direction to communicate with each other. This allows the working liquid 2b to flow back and forth between the liquid storage mainstream grooves 71. The convex portion 74 is a portion where the material of the wick sheet 30 remains without being etched in the etching process (described below). In the present embodiment, the planar shape of the convex portion 74 is rectangular, as illustrated in FIG. 10. The planar shape of the convex portion 74 corresponds to the planar shape of the sheet body 31 at the location of the first body surface 31a. In the present embodiment, the convex portions 74 are arranged in a staggered pattern. More specifically, the convex portions 74 of two neighboring convex-portion rows 73 are displaced from each other in the X direction. The amount of displacement may be half the arrangement pitch of the convex portions 74 in the X direction. A width w8 of the convex portion 74 may be, for example, 10 μm to 100 μm. The width w8 of the convex portion 74 refers to the dimension at the first body surface 31a. The width w8 corresponds to the dimension in the Y direction. Note that the arrangement of the convex portions 74 is not limited to a staggered pattern; the convex portions 74 may instead be arranged in parallel. In this case, the convex portions 74 of the neighboring convex-portion rows 73 in the Y direction are also aligned in the X direction (refer to FIG. 19). In this way, the width w6 of the liquid storage mainstream groove 71 may be greater than the width w3 of the liquid flow channel mainstream groove 61. The width w6 corresponds to a gap between a pair of the convex portions 74 neighboring each other in the Y direction. The width w6 of the liquid storage mainstream groove 71 may be less than the width w2 of the penetration portion 34. The width w2 corresponds to a gap between a pair of the land portions 33 neighboring each other in the Y direction. In the present embodiment, as described above, the flow channel cross-sectional area of the liquid storage mainstream groove 71 of the liquid storage portion 70 is greater than that of the liquid flow channel mainstream groove 61 of the liquid flow channel portion 60. To satisfy the flow channel cross-sectional area relationship, in the example illustrated in FIG. 8A, the width w6 of the liquid storage mainstream groove 71 is greater than the width w3 of the liquid flow channel mainstream groove 61, and the depth h2 of the liquid storage mainstream groove 71 is greater than the depth h1 of the liquid flow channel mainstream groove 61.
However, the relationship is not limited thereto, and any relationship between the width and depth can be employed as long as the flow channel cross-sectional area of the liquid storage mainstream groove 71 is greater than that of the liquid flow channel mainstream groove 61. For example, as illustrated in FIG. 8B, if the width w6 is greater than the width w3, the depth h2 may be the same as the depth h1. Even in this case, the flow channel cross-sectional area of the liquid storage mainstream groove 71 can be greater than that of the liquid flow channel mainstream groove 61. In addition, as illustrated in FIG. 8C, when the depth h2 is greater than the depth h1, the width w6 may be the same as the width w3. Even in this case, the flow channel cross-sectional area of the liquid storage mainstream groove 71 can be greater than that of the liquid flow channel mainstream groove 61. As used herein, the flow channel cross-sectional area of a groove corresponds to the area occupied by the groove in the cross-section in a direction orthogonal to the direction in which the groove extends. For example, the flow channel cross-sectional area of the liquid flow channel mainstream groove 61 corresponds to the area occupied by the groove 61 (or the space defined by the wall surface 62 of the groove 61) in the cross-section in the Y direction of the liquid flow channel mainstream groove 61. The number of liquid storage mainstream grooves 71 provided in the land portion 33 may be less than the number of liquid flow channel mainstream grooves 61 provided in the land portion 33. In the present embodiment, the land portion 33 extends in the X direction and has an elongated rectangular shape. In addition, the width of the land portion 33 at the first body surface 31a is the same as the width of the land portion 33 at the second body surface 31b. In this case, the flow channel cross-sectional area of the liquid storage mainstream groove 71 can be greater than that of the liquid flow channel mainstream groove 61. The liquid storage mainstream groove 71 includes a liquid storage intersection portion 76. The liquid storage intersection portion 76 is an example of a second intersection portion. The liquid storage intersection portion 76 is a portion of the liquid storage mainstream groove 71 where the liquid storage mainstream groove 71 communicates with the liquid storage communication groove 75. At the liquid storage intersection portion 76, the liquid storage mainstream groove 71 and the liquid storage communication groove 75 communicate with each other in a T-shape. That is, at a liquid storage intersection portion 76 at which one liquid storage mainstream groove 71 communicates with the liquid storage communication groove 75 located on one side, the liquid storage communication groove 75 located on the other side is prevented from communicating with the liquid storage mainstream groove 71. In this manner, in the liquid storage intersection portion 76, the wall surface 72 of the liquid storage mainstream groove 71 is prevented from being cut out on both sides and, thus, the wall surface 72 on one side is made to remain. For example, at any one liquid storage intersection portion 76, the upper liquid storage communication groove 75 and the lower liquid storage communication groove 75 in FIG. 10 are prevented from both communicating with the liquid storage mainstream groove 71. In this case, both the upper wall surface 72 and the lower wall surface 72 in FIG. 10 can be prevented from being cut out at the liquid storage intersection portion 76.
Thus, even in the liquid storage intersection portion 76, capillary action of the working fluid in the liquid storage mainstream groove 71 can be produced. Note that the materials of the lower sheet 10, the upper sheet 20, and the wick sheet 30 are not limited to particular materials as long as they have sufficient thermal conductivity. The lower sheet 10, the upper sheet 20, and the wick sheet 30 may contain copper or a copper alloy, for example. In this case, the thermal conductivity of each of the sheets 10, 20, and 30 can be increased, and the heat dissipation efficiency of the vapor chamber 1 can be increased. In addition, if pure water is used as the working fluids 2a and 2b, the occurrence of corrosion can be prevented. Note that other metal materials, such as aluminum and titanium, or other metal alloy materials, such as stainless steel, can be used for the sheets 10, 20, and 30, as long as a desired heat dissipation efficiency can be obtained and the occurrence of corrosion can be prevented. A thickness t1 of the vapor chamber 1 illustrated in FIG. 3 may be, for example, 100 μm to 1000 μm. By setting the thickness t1 of the vapor chamber 1 to 100 μm or greater, the vapor flow channel portion 50 can appropriately be provided. Thus, the function of the vapor chamber 1 can appropriately be performed. In contrast, by setting the thickness t1 to 1000 μm or less, the thickness t1 of the vapor chamber 1 can be inhibited from increasing. A thickness t2 of the lower sheet 10 may be, for example, 6 μm to 100 μm. By setting the thickness t2 of the lower sheet 10 to 6 μm or greater, the mechanical strength of the lower sheet 10 can be ensured. In contrast, by setting the thickness t2 of the lower sheet 10 to 100 μm or less, the thickness t1 of the vapor chamber 1 can be inhibited from increasing. Similarly, a thickness t3 of the upper sheet 20 may be set in the same way as the thickness t2 of the lower sheet 10. The thickness t3 of the upper sheet 20 may differ from the thickness t2 of the lower sheet 10. A thickness t4 of the wick sheet 30 may be, for example, 50 μm to 400 μm. By setting the thickness t4 of the wick sheet 30 to 50 μm or greater, the vapor flow channel portion 50 can appropriately be provided. Therefore, the function of the vapor chamber 1 can appropriately be performed. In contrast, by setting the thickness t4 to 400 μm or less, the thickness t1 of the vapor chamber 1 can be inhibited from increasing. The method for manufacturing the vapor chamber 1 having the above-described configuration according to the present embodiment is described below with reference to FIGS. 12 to 14. Note that FIGS. 12 to 14 illustrate sectional views similar to that in FIG. 3. The process of producing the wick sheet 30 is described first. As illustrated in FIG. 12, a flat metal sheet M is prepared first in a preparatory process. The metal sheet M has a first material surface Ma and a second material surface Mb. The metal sheet M may be made of a rolled material having a desired thickness. After the preparatory process, an etching process is performed. In the etching process, the metal sheet M is etched from each of the first material surface Ma and the second material surface Mb, as illustrated in FIG. 13. In this manner, the vapor flow channel portion 50, the liquid flow channel portion 60, and the liquid storage portion 70 are formed in the metal sheet M. More specifically, a patterned resist film (not illustrated) is formed on each of the first material surface Ma and the second material surface Mb of the metal sheet M by a photolithography technique.
Thereafter, the first material surface Ma and the second material surface Mb of the metal sheet M are etched through the openings of the patterned resist film. As a result, the first material surface Ma and the second material surface Mb of the metal sheet M are etched according to the pattern to form the vapor flow channel portion 50, the liquid flow channel portion 60, and the liquid storage portion 70 illustrated in FIG. 13. For example, a ferric chloride etchant, such as an aqueous ferric chloride solution, or a copper chloride etchant, such as an aqueous copper chloride solution, can be used as the etchant. Etching may be carried out on the first material surface Ma and the second material surface Mb of the metal sheet M at the same time. However, the etching process is not limited thereto. Etching of the first material surface Ma and the second material surface Mb may be carried out as separate processes. The vapor flow channel portion 50, the liquid flow channel portion 60, and the liquid storage portion 70 may be formed by etching at the same time or in separate processes. In the etching process, the first material surface Ma and the second material surface Mb of the metal sheet M are etched to obtain the predetermined outer contour shape, as illustrated in FIG. 6 and FIG. 7. That is, the edges of the wick sheet 30 are formed. In this way, the wick sheet 30 according to the present embodiment is obtained. In a joining process carried out after the process of producing the wick sheet 30, the lower sheet 10, the upper sheet 20, and the wick sheet 30 are joined together, as illustrated in FIG. 14. The lower sheet 10 and the upper sheet 20 may be made of a rolled material having a desired planar shape and a desired thickness. More specifically, first, the lower sheet 10, the wick sheet 30, and the upper sheet 20 are stacked in this order. In this case, the first body surface 31a of the wick sheet 30 is stacked on the second lower sheet surface 10b of the lower sheet 10, and the first upper sheet surface 20a of the upper sheet 20 is stacked on the second body surface 31b of the wick sheet 30. At this time, the alignment holes 12 of the lower sheet 10, the alignment holes 35 of the wick sheet 30, and the alignment holes 22 of the upper sheet 20 are used to align the sheets 10, 20, and 30. Subsequently, the lower sheet 10, the wick sheet 30, and the upper sheet 20 are temporarily bonded together. For example, spot resistance welding may be used to temporarily bond the sheets 10, 20, and 30 together. Alternatively, laser welding may be used to temporarily bond the sheets 10, 20, and 30 together. Subsequently, the lower sheet 10, the wick sheet 30, and the upper sheet 20 are permanently bonded by diffusion bonding. Diffusion bonding is a technique of bonding by pressurizing and heating the lower sheet 10, the wick sheet 30, and the upper sheet 20 in the stacking direction in a controlled atmosphere, such as a vacuum or an inert gas, and using the diffusion of atoms that occurs at the bonding surfaces. During pressurization, the lower sheet 10 is brought into close contact with the wick sheet 30, and the wick sheet 30 is brought into close contact with the upper sheet 20. In diffusion bonding, the materials of the sheets 10, 20, and 30 are heated to a temperature close to, but lower than, the melting point, thus avoiding melting and deformation of the sheets 10, 20, and 30. More specifically, the first body surface 31a at the frame body portion 32 and each of the land portions 33 of the wick sheet 30 are diffusion bonded to the second lower sheet surface 10b of the lower sheet 10.
In addition, the second body surface 31b at the frame body portion 32 and each of the land portions 33 of the wick sheet 30 are diffusion bonded to the first upper sheet surface 20a of the upper sheet 20. In this way, the sheets 10, 20, and 30 are diffusion bonded together to form the sealed space 3 having the vapor flow channel portion 50, the liquid flow channel portion 60, and the liquid storage portion 70 between the lower sheet 10 and the upper sheet 20. At this stage, the above-described injection flow channel 37 is not sealed in the sealed space 3. In the above-described injection portion 4, the lower injection protrusion 11 of the lower sheet 10 and the wick sheet injection protrusion 36 of the wick sheet 30 are diffusion bonded. In addition, the wick sheet injection protrusion 36 is diffusion bonded to the upper injection protrusion 21 of the upper sheet 20. After the joining process, the working liquid 2b is injected into the sealed space 3 through the injection portion 4. At this time, the injected volume of the working liquid 2b may be greater than the total volume of the space formed by the liquid flow channel mainstream grooves 61 and the liquid flow channel communication grooves 65 of the liquid flow channel portion 60. Subsequently, the injection flow channel 37 described above is sealed. For example, a laser beam may be emitted to the injection portion 4 to partially melt the injection portion 4 and seal the injection flow channel 37. In this manner, communication of the sealed space 3 with the outside is blocked, achieving a sealed space 3 enclosing the working liquid 2b. Thus, leakage of the working liquid 2b in the sealed space 3 to the outside is prevented. Note that to seal the injection flow channel 37, the injection portion 4 may be caulked (or pressed and plastically deformed) or brazed. As described above, the vapor chamber 1 according to the present embodiment is obtained. A method for operating the vapor chamber 1, that is, a method for cooling the electronic device D, is described below. The vapor chamber 1 obtained as described above is mounted in the housing H of a mobile terminal or the like. An electronic device D (e.g., a CPU), which is a device to be cooled, is mounted on the second upper sheet surface 20b of the upper sheet 20. The working liquid 2b in the sealed space 3 adheres to the wall surface of the sealed space 3 due to its surface tension. More specifically, the working liquid 2b adheres to the wall surface 53a of the lower vapor flow channel recess 53, the wall surface 54a of the upper vapor flow channel recess 54, and the wall surface 62 of the liquid flow channel mainstream grooves 61 and a wall surface of the liquid flow channel communication grooves 65 of the liquid flow channel portion 60. In addition, the working liquid 2b can adhere to portions of the second lower sheet surface 10b of the lower sheet 10 that are exposed to the lower vapor flow channel recess 53. Furthermore, the working liquid 2b can adhere to portions of the first upper sheet surface 20a of the upper sheet 20 that are exposed to the upper vapor flow channel recess 54, the liquid flow channel mainstream grooves 61, and the liquid flow channel communication grooves 65. At this time, if the electronic device D generates heat, the working liquid 2b located in the evaporation region SR (refer to FIGS. 6 and 7) receives the heat from the electronic device D. The received heat is absorbed in the form of latent heat, and the working liquid 2b evaporates and produces the working vapor 2a.
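The cooling capacity implied by this evaporation step can be roughed out from the latent heat alone. The figures below are not from the embodiment; they assume pure water as the working fluid (mentioned above as one option), a latent heat of vaporization h_fg of approximately 2.4 × 10^6 J/kg near room temperature, and a hypothetical evaporation rate:

\[ Q = \dot{m}\,h_{fg}, \qquad \dot{m} = 2\ \mathrm{mg/s} \;\Rightarrow\; Q \approx 2\times 10^{-6}\ \mathrm{kg/s} \times 2.4\times 10^{6}\ \mathrm{J/kg} \approx 4.8\ \mathrm{W}. \]

That is, a charge refluxing at only a few milligrams per second can carry several watts away from the electronic device D, which is consistent with cooling a CPU-class heat source through a sheet only a few hundred micrometers thick.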
Most of the generated working vapor 2a diffuses in the lower vapor flow channel recesses 53 and the upper vapor flow channel recesses 54, which constitute the sealed space 3 (refer to the solid arrows in FIG. 6). The working vapor 2a in each of the vapor flow channel recesses 53 and 54 leaves the evaporation region SR. Most of the working vapor 2a is delivered to the condensation region CR, where the temperature is relatively low. In FIGS. 6 and 7, most of the working vapor 2a is delivered to the right portion of the vapor flow channel portion 50. In the condensation region CR, the working vapor 2a dissipates heat mainly to the lower sheet 10 and is cooled. The heat received by the lower sheet 10 from the working vapor 2a is transferred to the outside air via the housing member Ha (refer to FIG. 3). The working vapor 2a dissipates heat to the lower sheet 10 in the condensation region CR. As a result, the working vapor 2a loses the latent heat absorbed in the evaporation region SR and condenses and, thus, the working liquid 2b is generated. The generated working liquid 2b adheres to the wall surface 53a of the lower vapor flow channel recess 53, the wall surface 54a of the upper vapor flow channel recess 54, the second lower sheet surface 10b of the lower sheet 10, and the first upper sheet surface 20a of the upper sheet 20. At this time, the working liquid 2b continues to evaporate in the evaporation region SR. Accordingly, the working liquid 2b in a region (i.e., the condensation region CR) of the liquid flow channel portion 60 other than the evaporation region SR is delivered toward the evaporation region SR by the capillary action in each of the liquid flow channel mainstream grooves 61 (refer to the dashed arrows in FIG. 6). As a result, the working liquid 2b that adhered to each of the wall surfaces 53a and 54a, the second lower sheet surface 10b, and the first upper sheet surface 20a moves to the liquid flow channel portion 60. At this time, the working liquid 2b passes through the liquid flow channel communication grooves 65 and enters the liquid flow channel mainstream grooves 61. In this way, the liquid flow channel mainstream grooves 61 and the liquid flow channel communication grooves 65 are filled with the working liquid 2b. Accordingly, the working liquid 2b loaded in the grooves is propelled toward the evaporation region SR by the capillary action of each of the liquid flow channel mainstream grooves 61. In this way, the working liquid 2b is smoothly delivered toward the evaporation region SR. In the liquid flow channel portion 60, each of the liquid flow channel mainstream grooves 61 communicates with another neighboring liquid flow channel mainstream groove 61 via the corresponding liquid flow channel communication groove 65. Because the working liquid 2b can flow back and forth between two neighboring liquid flow channel mainstream grooves 61, dry-out is inhibited from occurring in the liquid flow channel mainstream grooves 61. As a result, capillary action of the working liquid 2b occurs in each of the liquid flow channel mainstream grooves 61 and, thus, the working liquid 2b is smoothly delivered toward the evaporation region SR. Note that part of the working liquid 2b condensed in the condensation region CR is delivered to the liquid storage portion 70 provided on the first body surface 31a of the wick sheet 30, instead of the liquid flow channel portion 60. More specifically, part of the working liquid 2b that adhered to each of the wall surfaces 53a and 54a, the second lower sheet surface 10b, and the first upper sheet surface 20a passes through the liquid storage communication groove 75 and enters the liquid storage mainstream groove 71.
In this way, the liquid storage mainstream grooves 71 and the liquid storage communication grooves 75 are filled with the working liquid 2b. As a result, the working liquid 2b is propelled by the capillary action in each of the liquid storage mainstream grooves 71 and each of the liquid storage communication grooves 75 and, thus, moves smoothly toward the inside of the liquid storage portion 70. The working liquid 2b that reaches the evaporation region SR via the liquid flow channel portion 60 receives heat from the electronic device D again and evaporates. The working vapor 2a that evaporated from the working liquid 2b moves through the liquid flow channel communication grooves 65 in the evaporation region SR to the lower vapor flow channel recess 53 and the upper vapor flow channel recess 54, each having a large flow channel cross-sectional area. Thereafter, the working vapor 2a disperses in each of the vapor flow channel recesses 53 and 54. In addition, the liquid storage portion 70 is disposed in the evaporation region SR. This causes the working liquid 2b in the liquid storage portion 70 to evaporate in the same way and disperse in each of the vapor flow channel recesses 53 and 54. In this way, the working fluids 2a and 2b reflux in the sealed space 3 while repeating the phase change, that is, evaporation and condensation. Thus, the heat of the electronic device D is delivered and dissipated. As a result, the electronic device D is cooled. While the electronic device D stops generating heat, the working liquid 2b in the evaporation region SR does not evaporate. The liquid flow channel mainstream grooves 61 and the liquid flow channel communication grooves 65 of the liquid flow channel portion 60 are filled with the working liquid 2b, and the working liquid 2b remains there. Therefore, the working liquid 2b in the condensation region CR remains without being delivered toward the evaporation region SR. Part of the working liquid 2b in the liquid flow channel portion 60 flows on the wall surface 53a of the lower vapor flow channel recess 53 or the wall surface 54a of the upper vapor flow channel recess 54 and moves to the liquid storage mainstream grooves 71 and the liquid storage communication grooves 75 in the liquid storage portion 70. As a result, the grooves 71 and 75 are filled with the working liquid 2b, which remains there. If the volume of the working liquid 2b enclosed in the sealed space 3 is greater than the total volume of the space formed by the liquid flow channel mainstream grooves 61 and the liquid flow channel communication grooves 65, the liquid storage mainstream grooves 71 and the liquid storage communication grooves 75 tend to be filled with part of the working liquid 2b. Accordingly, the working liquid 2b can be dispersed and remain in the liquid storage portion 70 in addition to the liquid flow channel portion 60. At this time, even when the electronic apparatus E having the vapor chamber 1 mounted therein is placed in a temperature environment lower than the freezing point of the working fluids 2a and 2b and, thus, the working liquid 2b in the liquid flow channel portion 60 freezes and expands, the force of expansion of the working fluids 2a and 2b is decreased. This inhibits the upper sheet 20 from being deformed by the force of expansion. As a result, a decrease in the flatness of the second upper sheet surface 20b of the upper sheet 20 having the electronic device D mounted thereon can be reduced, and formation of a gap between the second upper sheet surface 20b and the electronic device D can be inhibited.
In this case, blockage of heat transfer from the electronic device D can be inhibited, and a decrease in the performance of the vapor chamber 1 can be reduced. Similarly, even when the working liquid 2b in the liquid storage portion 70 freezes and expands, the force of expansion is decreased. This inhibits the lower sheet 10 from being deformed by the force of expansion. As a result, a decrease in the flatness of the first lower sheet surface 10a of the lower sheet 10 can be reduced. As described above, according to the present embodiment, the liquid flow channel portion 60 is provided on the second body surface 31b of the sheet body 31 of the wick sheet 30, and the liquid storage portion 70 is provided on the first body surface 31a positioned on the opposite side from the second body surface 31b. The flow channel cross-sectional area of the liquid storage mainstream groove 71 of the liquid storage portion 70 is greater than that of the liquid flow channel mainstream groove 61 of the liquid flow channel portion 60. This allows the working liquid 2b to be distributed and stored in the liquid storage portion 70 in addition to the liquid flow channel portion 60 while the electronic device D stops generating heat. Therefore, even when the working liquid 2b in the liquid flow channel portion 60 freezes and expands in a temperature environment lower than the freezing point of the working liquid 2b, the force of expansion exerted on the upper sheet 20 can be reduced. In this case, deformation of the upper sheet 20 can be inhibited. In addition, even when the working liquid 2b in the liquid storage portion 70 freezes and expands, the force of expansion exerted on the lower sheet 10 can be reduced. In this case, deformation of the lower sheet 10 can be inhibited. As a result, deformation of the vapor chamber 1 can be inhibited, and a decrease in the performance of the vapor chamber 1 can be reduced. In addition, while the electronic device D is generating heat, the working liquid 2b in the liquid storage portion 70 can evaporate due to the heat received from the electronic device D. Accordingly, the heat generated by the electronic device D can be dispersed more, and the efficiency for cooling the electronic device D can be increased. In addition, according to the present embodiment, the liquid flow channel portion 60 is provided on the second body surface 31b of the sheet body 31 of the wick sheet 30, and the liquid storage portion 70 is provided on the first body surface 31a positioned on the opposite side from the second body surface 31b. The flow channel cross-sectional area of the liquid storage mainstream groove 71 of the liquid storage portion 70 is greater than that of the liquid flow channel mainstream grooves 61 of the liquid flow channel portion 60. This allows the capillary force that acts on the working liquid 2b in the liquid storage mainstream groove 71 to be smaller than the capillary force that acts on the working liquid 2b in the liquid flow channel mainstream grooves 61. While the electronic device D is generating heat, the amount of movement of the working liquid 2b delivered to the liquid storage portion 70 can be reduced. Therefore, a decrease in the function of delivering the working liquid 2b to the evaporation region SR can be reduced, and a decrease in heat transport efficiency can be reduced.
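The freeze-tolerance argument above turns on how much the working liquid 2b expands on freezing. As a point of reference not stated in the description, if the working liquid 2b is pure water (one of the options mentioned above), the volume change on freezing follows from the densities of water and ice:

\[ \frac{\Delta V}{V} = \frac{\rho_{\mathrm{water}}}{\rho_{\mathrm{ice}}} - 1 \approx \frac{1000\ \mathrm{kg/m^3}}{917\ \mathrm{kg/m^3}} - 1 \approx 0.09. \]

A roughly 9% volume increase must be absorbed somewhere; distributing the charge between the liquid flow channel portion 60 facing the upper sheet 20 and the liquid storage portion 70 facing the lower sheet 10 splits that expansion across both faces instead of concentrating it under the electronic device D.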
In addition, as described above, by making the flow channel cross-sectional area of the liquid storage mainstream groove 71 greater than that of the liquid flow channel mainstream groove 61, the total volume of the spaces formed by the liquid storage mainstream grooves 71 can be increased. As a result, the stored volume of the working liquid 2b in the liquid storage portion 70 can be increased while the electronic device D stops generating heat. According to the present embodiment, the width of the liquid storage mainstream groove 71 is greater than the width of the liquid flow channel mainstream groove 61. This allows the flow channel cross-sectional area of the liquid storage mainstream groove 71 to be greater than that of the liquid flow channel mainstream groove 61. As a result, a decrease in the heat transport efficiency can be reduced, and the stored volume of the working liquid 2b can be increased. In addition, according to the present embodiment, the depth of the liquid storage mainstream groove 71 is greater than the depth of the liquid flow channel mainstream groove 61. This allows the flow channel cross-sectional area of the liquid storage mainstream groove 71 to be greater than that of the liquid flow channel mainstream groove 61. As a result, a decrease in the heat transport efficiency can be reduced, and the stored volume of the working liquid 2b can be increased. In addition, according to the present embodiment, the liquid flow channel portion 60 and the liquid storage portion 70 are provided in each of the land portions 33, and the number of liquid storage mainstream grooves 71 provided in the land portion 33 is less than the number of liquid flow channel mainstream grooves 61 provided in the land portion 33. This allows the flow channel cross-sectional area of the liquid storage mainstream groove 71 to be greater than that of the liquid flow channel mainstream groove 61. As a result, a decrease in the heat transport efficiency can be reduced, and the stored volume of the working liquid 2b can be increased. In addition, according to the present embodiment, the liquid flow channel portion 60 through which the working liquid 2b passes is provided on the second body surface 31b of the sheet body 31 of the wick sheet 30, and the liquid storage portion 70 is provided on the first body surface 31a positioned on the opposite side from the second body surface 31b. The liquid storage portion 70 is disposed in the evaporation region SR in plan view. This allows the working liquid 2b to be distributed and stored in the liquid storage portion 70 in addition to the liquid flow channel portion 60 while the electronic device D stops generating heat. Thus, even when the working liquid 2b in the liquid flow channel portion 60 freezes and expands in a temperature environment lower than the freezing point of the working liquid 2b, the force of expansion exerted on the upper sheet 20 can be decreased. In this case, deformation of the upper sheet 20 can be inhibited. In addition, even when the working liquid 2b in the liquid storage portion 70 freezes and expands, the force of expansion exerted on the lower sheet 10 can be decreased. In this case, deformation of the lower sheet 10 can be inhibited. As a result, deformation of the vapor chamber 1 can be inhibited, and a decrease in the performance of the vapor chamber 1 can be reduced. In addition, while the electronic device D is generating heat, the working liquid 2b in the liquid storage portion 70 can be evaporated by the heat received from the electronic device D.
As a result, the heat of the electronic device D can be dissipated more, and the efficiency for cooling the electronic device D can be increased. In addition, according to the present embodiment, a plurality of convex portions 74 protruding from the sheet body 31 of the wick sheet 30 and being in contact with the lower sheet 10 are provided in the liquid storage portion 70. The gap between a pair of the convex portions 74 neighboring each other (corresponding to the width w6 of the liquid storage mainstream groove 71) is greater than the width of the liquid flow channel mainstream groove 61 of the liquid flow channel portion 60. This allows the capillary force that acts on the working liquid 2b in the liquid storage portion 70 to be smaller than the capillary force that acts on the working liquid 2b in the liquid flow channel portion 60 (in the liquid flow channel mainstream groove 61). While the electronic device D is generating heat, the amount of movement of the working liquid 2b to the liquid storage portion 70 can be reduced. Therefore, a decrease in the function of delivering the working liquid 2b to the evaporation region SR can be reduced, and the decrease in heat transport efficiency can be reduced. In addition, as described above, by making the gap between the convex portions 74 greater than the width of the liquid flow channel mainstream groove 61, the total volume of the spaces formed by the liquid storage mainstream grooves 71 and the liquid storage communication grooves 75 of the liquid storage portion 70 can be increased. As a result, the stored volume of the working liquid 2b in the liquid storage portion 70 can be increased while the electronic device D stops generating heat. In addition, according to the present embodiment, the liquid storage portion 70 has the liquid storage mainstream grooves 71, each provided between two neighboring convex portions 74 in the Y direction that is orthogonal to the X direction in which the liquid flow channel mainstream grooves 61 of the liquid flow channel portion 60 extend. The liquid storage mainstream grooves 71 extend in the X direction. After the electronic device D stops generating heat, the working liquid 2b flows substantially in the X direction from the condensation region CR to the evaporation region SR, and the working liquid 2b that reaches the liquid storage portion 70 can easily enter the liquid storage mainstream grooves 71. Thereafter, the working liquid 2b can flow smoothly in the liquid storage mainstream grooves 71 in the X direction and can easily reach the edge of the liquid storage portion 70 closer to the evaporation region SR. As a result, the working liquid 2b can be rapidly drawn into the liquid storage portion 70, and the stored volume of the working liquid 2b can be rapidly increased. If the ambient temperature of the vapor chamber 1 rapidly drops, the working liquid 2b can be rapidly drawn into the liquid storage portion 70. Therefore, the force of expansion exerted on the upper sheet 20 and the lower sheet 10 can be effectively decreased when the working liquid 2b freezes. In this manner, deformation of the vapor chamber 1 can be effectively inhibited. In addition, according to the present embodiment, the gap between a pair of the convex portions 74 neighboring each other is smaller than the gap between a pair of the land portions 33 neighboring each other (corresponding to the width w2 of the penetration portion 34). This allows the capillary force to act on the working liquid 2b in the liquid storage portion 70.
Therefore, the working liquid 2b can be drawn into the liquid storage portion 70, and the working liquid 2b can be stored in the liquid storage portion 70 while the electronic device D stops generating heat. In addition, according to the present embodiment, the liquid storage portion 70 is provided on the first body surface 31a of each of the land portions 33. This allows the working liquid 2b to be distributed and stored in the liquid storage portions 70. As a result, even when the working liquid 2b in the liquid flow channel portion 60 freezes and expands in a temperature environment lower than the freezing point of the working liquid 2b, deformation of the upper sheet 20 can be inhibited more. In addition, even when the working liquid 2b in the liquid storage portion 70 freezes and expands, deformation of the lower sheet 10 can be inhibited more. In addition, according to the present embodiment, the liquid storage portion 70 is disposed on one side of the land portion 33 in the X direction. This allows the liquid storage portion 70 to be disposed in the evaporation region SR when the evaporation region SR is formed on one side of the vapor chamber 1 in the X direction. Therefore, the working liquid 2b in the liquid storage portion 70 can evaporate while the electronic device D is generating heat, and the heat from the electronic device D can be dissipated more. As a result, the efficiency for cooling the electronic device D can be increased.

(First Modification)

The above embodiment has been described with reference to an example in which part of the working liquid 2b in the liquid flow channel portion 60 is moved to and stored in the liquid storage portion 70 while the electronic device D stops generating heat. In the example, the working liquid 2b flows on the wall surface 53a of the lower vapor flow channel recess 53 or the wall surface 54a of the upper vapor flow channel recess 54. However, the configuration is not limited thereto. The sheet body 31 may be provided with a plurality of communication portions 80, each enabling the liquid flow channel portion 60 to communicate with the liquid storage portion 70. The communication portions 80 may be located inside the evaporation region SR. The communication portions 80 may be located in a region that overlaps the electronic device D in plan view. For example, as in the first modification illustrated in FIGS. 15 and 16, the communication portion 80 may include communication recesses 81 provided in the wall of the vapor flow channel portion 50. The communication recesses 81 may extend from the liquid flow channel portion 60 to the liquid storage portion 70. According to the first modification illustrated in FIGS. 15 and 16, the communication recesses 81 are provided so as to extend in the Z direction along the wall surfaces 53a of the lower vapor flow channel recesses 53 and the wall surfaces 54a of the upper vapor flow channel recesses 54. The communication recess 81 may extend to at least one of the liquid flow channel communication groove 65 of the liquid flow channel portion 60 and the liquid storage communication groove 75 of the liquid storage portion 70. According to the first modification illustrated in FIGS. 15 and 16, one end of the communication recess 81 extends to the liquid flow channel communication groove 65, and the other end of the communication recess 81 extends to the liquid storage communication groove 75. Note that the communication recess 81 does not necessarily have to communicate with the liquid flow channel communication groove 65 or the liquid storage communication groove 75.
Furthermore, the communication recess 81 does not necessarily have to communicate with both the liquid flow channel communication groove 65 and the liquid storage communication groove 75. The flow passage cross-sectional shape of the communication recess 81 may be rectangular, as illustrated in FIGS. 15 and 16. Alternatively, the flow passage cross-sectional shape may be formed into a curved shape, such as a semicircle or a semi-ellipse. The flow passage cross-sectional shape of the communication recess 81 corresponds to the shape in plan view. As illustrated in FIG. 15, a width w9 of the communication recess 81 may be greater than the width w4 of the liquid flow channel communication groove 65 (refer to FIG. 9). The width w9 corresponds to the dimension in the X direction. This allows the capillary force that acts on the working liquid 2b in the communication recess 81 to be smaller than the capillary force that acts on the working liquid 2b in the liquid flow channel communication groove 65. In this case, the working liquid 2b can be inhibited from remaining in the communication recess 81. In addition, in this case, the communication recess 81 is formed so as to notch the convex portion 64. In addition, the width w9 of the communication recess 81 may be less than the width w7 of the liquid storage communication groove 75 (refer to FIG. 10). This allows capillary force to act on the working liquid 2b in the communication recess 81 and, thus, allows the working liquid 2b to move to the liquid storage portion 70. The width w9 of the communication recess 81 may be, for example, 20 μm to 300 μm. Note that the width w9 of the communication recess 81 refers to the dimension at the second body surface 31b of the wick sheet 30. As described above, according to the first modification, while the electronic device D stops generating heat, part of the working liquid 2b in the liquid flow channel portion 60 can move to the liquid storage portion 70 through the communication portion 80. This increases the amount of movement of the working liquid 2b delivered from the liquid flow channel portion 60 to the liquid storage portion 70, resulting in an increase in the stored volume of the working liquid 2b in the liquid storage portion 70. In addition, according to the first modification, the communication portion 80 has the communication recess 81 provided on the wall of the vapor flow channel portion 50, and the communication recess 81 extends from the liquid flow channel portion 60 to the liquid storage portion 70. This can reduce the flow resistance of the working liquid 2b flowing from the liquid flow channel portion 60 to the liquid storage portion 70. As a result, the amount of the working liquid 2b remaining in the liquid flow channel portion 60 can be reduced. Even when the working liquid 2b in the liquid flow channel portion 60 freezes and expands, the force of expansion can be decreased. In addition, the force of expansion can be decreased even when the working liquid 2b in the liquid storage portion 70 freezes and expands. As a result, the upper sheet 20 and the lower sheet 10 can be inhibited from being deformed by the force of expansion. Furthermore, according to the first modification, the flow resistance of the working liquid 2b flowing from the liquid flow channel portion 60 to the liquid storage portion 70 can be reduced more, since the communication recess 81 extends to the liquid flow channel communication groove 65 and the liquid storage communication groove 75.
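The width relationships of the first modification can be collected into a single chain. The specific values below are hypothetical, chosen within the stated ranges, and the proportionality is the usual small-channel estimate in which the capillary pressure scales with the inverse of the gap:

\[ w_4 < w_9 < w_7, \qquad \Delta P \propto \frac{1}{w} \;\Rightarrow\; \Delta P_{65} > \Delta P_{81} > \Delta P_{75}, \]

for example, w4 = 50 μm, w9 = 100 μm, and w7 = 150 μm. On this estimate, the communication recess 81 pulls on the working liquid 2b more weakly than the liquid flow channel communication groove 65, so the liquid is not stranded in the recess, while the recess is still narrow enough for a capillary force to act on the liquid and move it toward the liquid storage portion 70.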
(Second Modification)

Unlike the first modification illustrated in FIGS. 15 and 16, as in the second modification illustrated in FIGS. 17 and 18, the communication portion 80 may include a through-hole 82 that penetrates the sheet body 31 and extends from the liquid flow channel portion 60 to the liquid storage portion 70. In the second modification illustrated in FIGS. 17 and 18, the through-hole 82 is located inside the land portion 33 in plan view, not on the wall surface 53a of the lower vapor flow channel recess 53 or the wall surface 54a of the upper vapor flow channel recess 54. The through-hole 82 is formed at such a position that it is not cut out by the wall surface 53a of the lower vapor flow channel recess 53 or the wall surface 54a of the upper vapor flow channel recess 54. That is, the through-hole 82 has a closed contour shape in plan view. FIGS. 17 and 18 illustrate an example in which the through-hole 82 is formed in a rectangular shape. However, the planar shape of the through-hole 82 may be any shape, such as a circular shape.

The through-hole 82 may extend to at least one of the liquid flow channel intersection portion 66 of the liquid flow channel portion 60 and the liquid storage intersection portion 76 of the liquid storage portion 70. In the second modification illustrated in FIGS. 17 and 18, one end of the through-hole 82 extends to and is located at the liquid flow channel intersection portion 66 described above. The other end of the through-hole 82 extends to the liquid storage intersection portion 76. Note that the through-hole 82 does not necessarily have to communicate with the liquid flow channel intersection portion 66 as long as it communicates with the liquid flow channel mainstream groove 61 or the liquid flow channel communication groove 65. Likewise, the through-hole 82 does not have to communicate with the liquid storage intersection portion 76 as long as it communicates with the liquid storage mainstream groove 71 or the liquid storage communication groove 75.

The flow passage cross-sectional shape of the through-hole 82 may be rectangular, as illustrated in FIGS. 17 and 18, or may be a curved shape, such as a circular or elliptical shape. The flow passage cross-sectional shape of the through-hole 82 corresponds to the shape in plan view. As illustrated in FIG. 17, a width w10 of the through-hole 82 may be greater than the width w4 of the liquid flow channel communication groove 65 (refer to FIG. 9). The width w10 corresponds to the dimension in the X direction. This allows the capillary force that acts on the working liquid 2b in the through-hole 82 to be smaller than the capillary force that acts on the working liquid 2b in the liquid flow channel communication groove 65. In this case, the working liquid 2b can be inhibited from remaining in the through-hole 82. In this case, the through-hole 82 is formed so as to notch the convex portion 64. In addition, the width w10 of the through-hole 82 may be less than the width w7 of the liquid storage communication groove 75 (refer to FIG. 10). This allows capillary force to act on the working liquid 2b in the through-hole 82 and, thus, allows the working liquid 2b to move to the liquid storage portion 70. The width w10 of the through-hole 82 may be, for example, 10 μm to 100 μm.
Note that the width w10 of the through-hole 82 refers to the dimension at the second body surface 31b of the wick sheet 30. FIG. 18 illustrates an example in which the through-hole 82 protrudes from the liquid storage intersection portion 76 due to the relationship between the alignment pitch in the Y direction of the liquid flow channel mainstream grooves 61 and the alignment pitch in the Y direction of the liquid storage mainstream grooves 71. However, the configuration is not limited thereto. The through-hole 82 may not protrude from the liquid storage intersection portion 76, depending on the alignment pitches of the grooves 61 and 71.

As described above, according to the second modification, while the electronic device D stops generating heat, part of the working liquid 2b in the liquid flow channel portion 60 can move to the liquid storage portion 70 through the through-hole 82. This increases the amount of the working liquid 2b delivered from the liquid flow channel portion 60 to the liquid storage portion 70, resulting in an increase in the stored volume of the working liquid 2b in the liquid storage portion 70. In particular, since the through-hole 82 is located inside the land portion 33 in plan view, the flow resistance of the working liquid 2b from the liquid flow channel portion 60 to the liquid storage portion 70 can be decreased. As a result, the amount of the working liquid 2b remaining in the liquid flow channel portion 60 can be reduced. Even when the working liquid 2b in the liquid flow channel portion 60 freezes and expands, the force of expansion can be decreased. As a result, the upper sheet 20 can be inhibited from being deformed by the force of expansion. In addition, even when the working liquid 2b in the liquid storage portion 70 freezes and expands, the force of expansion can be decreased. As a result, the lower sheet 10 can be inhibited from being deformed by the force of expansion.

According to the second modification, the communication portion 80 includes the through-hole 82 that penetrates the sheet body 31 and extends from the liquid flow channel portion 60 to the liquid storage portion 70. This can further reduce the flow resistance of the working liquid 2b from the liquid flow channel portion 60 to the liquid storage portion 70. As a result, the amount of the working liquid 2b remaining in the liquid flow channel portion 60 can be reduced further. Even when the working liquid 2b in the liquid flow channel portion 60 freezes and expands, the force of expansion can be decreased further. Furthermore, according to the second modification, the through-hole 82 extends to the liquid flow channel intersection portion 66 and the liquid storage intersection portion 76, resulting in a further decrease in the flow resistance of the working liquid 2b from the liquid flow channel portion 60 to the liquid storage portion 70.

(Third Modification)

The present embodiment has been described above with reference to an example in which the convex portion 74 provided in the liquid storage portion 70 is formed in a rectangular shape in plan view such that the X direction is the longitudinal direction. However, the configuration is not limited thereto, and the convex portion 74 may have any planar shape. For example, as illustrated in FIG. 19, the convex portion 74 may be formed in a circular shape in plan view, or may be formed in an elliptical shape (not illustrated). In addition, FIG. 19 illustrates an example in which the convex portions 74 are arranged in parallel.
More specifically, the convex portions 74 in two neighboring convex-portion rows 73 in the Y direction are also aligned in the X direction. In addition, for example, as illustrated in FIG. 20, each of the convex portions 74 may be formed in a square shape in plan view. In the example illustrated in FIG. 20, the convex portions 74 are arranged in a staggered manner. However, the convex portions 74 may be arranged in parallel. In addition, for example, as illustrated in FIG. 21, the convex portion 74 may be formed in the shape of a cross in plan view. In the example illustrated in FIG. 21, the planar shape of the convex portion 74 is formed in the shape of a rounded cross. In addition, in the example illustrated in FIG. 21, the convex portions 74 are arranged in a staggered manner. However, the convex portions 74 may be arranged in parallel. Alternatively, the convex portion 74 may be formed in the shape of a star polygon in plan view.

(Fourth Modification)

The present embodiment has been described above with reference to an example in which the liquid storage portion 70 is provided on the first body surface 31a of each of the land portions 33 of the wick sheet 30. However, the configuration is not limited thereto. The liquid storage portions 70 need not be provided in all of the land portions 33. For example, the liquid storage portion 70 may be provided in any one of the land portions 33 only, or in some land portions 33. For example, if the planar shape of the electronic device D is small, the liquid storage portion 70 may be selectively provided in the land portions 33 in accordance with the region to be covered by the electronic device D. The same applies to the case where the vapor chamber 1 does not have a simple rectangular shape.

(Fifth Modification)

As illustrated in FIG. 22, the liquid storage portion 70 may be disposed in a region of the vapor chamber 1 that overlaps the electronic device D in plan view. In the example illustrated in FIG. 22, one or more of the plurality of land portions 33 are provided with the liquid storage portions 70. The electronic device D overlaps a plurality of the land portions 33. The electronic device D is disposed across the plurality of land portions 33. In FIG. 22, seven land portions 33 are illustrated, and the electronic device D overlaps three of the land portions 33. The electronic device D does not overlap the remaining four land portions 33. The three land portions 33 that overlap the electronic device D are referred to as overlapped land portions 91 and 92. Among the four land portions 33 that do not overlap the electronic device D, the land portions 33 that neighbor the overlapped land portions 91 and 92 are referred to as first non-overlapped land portions 93. Among the four land portions 33 that do not overlap the electronic device D, the land portions 33 that do not neighbor the overlapped land portions 91 and 92 are referred to as second non-overlapped land portions 94. The first non-overlapped land portions 93 are disposed on either side of the set of three overlapped land portions 91 and 92 in the Y direction. Each of the second non-overlapped land portions 94 is disposed opposite the overlapped land portions 91 and 92 with respect to the first non-overlapped land portion 93. In FIG. 22, the second non-overlapped land portions 94 are disposed at the uppermost and lowermost positions, and the two first non-overlapped land portions 93 are disposed between these two second non-overlapped land portions 94. In addition, the three overlapped land portions 91 and 92 are disposed between the two first non-overlapped land portions 93.
In FIG. 22, among the three overlapped land portions 91 and 92, the lowermost overlapped land portion (the second overlapped land portion 92 described below) neighbors the lower first non-overlapped land portion 93 in the Y direction. Similarly, in FIG. 22, among the three overlapped land portions 91 and 92, the uppermost overlapped land portion (the second overlapped land portion 92 described below) neighbors the upper first non-overlapped land portion 93 in the Y direction.

A liquid storage portion 70 is provided in each of the overlapped land portions 91 and 92. The liquid storage portions 70 provided in the overlapped land portions 91 and 92 are located in the region that overlaps the electronic device D in plan view. These liquid storage portions 70 may extend to the outside of the electronic device D in the X direction. The liquid storage portions 70 provided in the overlapped land portions 91 and 92 extend to the outside of both sides of the electronic device D in the X direction. In the example illustrated in FIG. 22, the liquid storage portions 70 provided in the overlapped land portions 91 and 92 extend beyond the left side and the right side of the electronic device D.

The three overlapped land portions 91 and 92 include one first overlapped land portion 91 and two second overlapped land portions 92. The second overlapped land portions 92 are disposed on either side of the first overlapped land portion 91 in the Y direction. Each of the second overlapped land portions 92 neighbors the first overlapped land portion 91 in the Y direction. The liquid storage portion 70 provided in the first overlapped land portion 91 neighbors the liquid storage portions 70 provided in the second overlapped land portions 92 in the Y direction. In plan view, the liquid storage portion 70 provided in the first overlapped land portion 91 is located closer to the center of the electronic device D in the Y direction than the liquid storage portions 70 provided in the second overlapped land portions 92. That is, the liquid storage portions 70 provided in the second overlapped land portions 92 are farther from the center of the electronic device D than the liquid storage portion 70 provided in the first overlapped land portion 91. In FIG. 22, the liquid storage portion 70 provided in the first overlapped land portion 91 overlaps the center of the electronic device D.

A length L1 in the X direction of the liquid storage portion 70 provided in the first overlapped land portion 91 is greater than a length L2 in the X direction of the liquid storage portions 70 provided in the second overlapped land portions 92. The liquid storage portion 70 provided in the first overlapped land portion 91 extends further to the outside of the electronic device D than the liquid storage portions 70 provided in the second overlapped land portions 92. The lengths L1 and L2 may be the lengths in the X direction of the liquid storage mainstream groove 71 of the liquid storage portion 70. If the liquid storage portion 70 includes a plurality of liquid storage mainstream grooves 71, the length may be the longest length of the liquid storage mainstream grooves 71.

A liquid storage portion 70 is also provided in each of the first non-overlapped land portions 93. The liquid storage portions 70 provided in the first non-overlapped land portions 93 are disposed in a region different from the region that overlaps the electronic device D in plan view. That is, these liquid storage portions 70 do not overlap the electronic device D.
The liquid storage portion 70 provided in the second overlapped land portion 92 neighbors the liquid storage portion 70 provided in the first non-overlapped land portion 93 in the Y direction. The length L2 in the X direction of the liquid storage portion 70 provided in the second overlapped land portion 92 is greater than a length L3 in the X direction of the liquid storage portion 70 provided in the first non-overlapped land portion 93. Note that FIG. 22 illustrates an example in which the length L3 is the same as the length of the electronic device D. However, the length is not limited thereto, and the liquid storage portion 70 may extend to the outside of the electronic device D in the X direction. Alternatively, the length of the liquid storage portion 70 in the X direction may be less than the length of the electronic device D in the X direction. Like the lengths L1 and L2, the length L3 may be the length in the X direction of the liquid storage mainstream groove 71 of the liquid storage portion 70. As illustrated in FIG. 22, the liquid storage portion 70 need not be provided in the second non-overlapped land portion 94. However, the configuration is not limited thereto, and the liquid storage portion 70 may be provided in the second non-overlapped land portion 94.

As described above, according to the fifth modification, the liquid storage portion 70 is disposed in a region of the vapor chamber 1 that overlaps the electronic device D in plan view. This allows the liquid storage portion 70 to be disposed in a region where heat is easily received from the electronic device D. Accordingly, while the electronic device D is generating heat, the working liquid 2b in the liquid storage portion 70 can evaporate upon receiving the heat from the electronic device D. As a result, the heat of the electronic device D can be dispersed more and, thus, the efficiency for cooling the electronic device D can be increased.

In addition, according to the fifth modification, the liquid storage portion 70 extends to the outside of the electronic device D in the X direction. This allows the working liquid 2b in the liquid storage portion 70 to evaporate using the heat transferred from the electronic device D in the vicinity of the region that overlaps the electronic device D. More specifically, in the region that neighbors the region that overlaps the electronic device D in the X direction, the working liquid 2b in the liquid storage portion 70 can evaporate using the heat of the electronic device D. As a result, the amount of evaporation of the working liquid 2b can be increased. As a result, the heat of the electronic device D can be dispersed more and, thus, the efficiency for cooling the electronic device D can be increased more.

In addition, according to the fifth modification, the liquid storage portion 70 provided in the first overlapped land portion 91 is located closer to the center of the electronic device D in the Y direction in plan view than the liquid storage portions 70 provided in the second overlapped land portions 92. The length L1 in the X direction of the liquid storage portion 70 provided in the first overlapped land portion 91 is greater than the length L2 in the X direction of the liquid storage portion 70 provided in the second overlapped land portion 92. Accordingly, the length in the X direction of the liquid storage portion 70 located closer to the center of the electronic device D can be increased.
As a result, the volume of the working liquid 2b loaded into the liquid storage mainstream groove 71 that overlaps the vicinity of the center of the electronic device D can be increased. The amount of evaporation of the working liquid 2b in the vicinity of the center of the electronic device D can therefore be increased, and the region in the vicinity of the center of the electronic device D can be cooled efficiently.

In addition, according to the fifth modification, a liquid storage portion 70 is provided in each of a neighboring pair consisting of a second overlapped land portion 92 and a first non-overlapped land portion 93. The liquid storage portion 70 provided in the second overlapped land portion 92 is disposed in a region that overlaps the electronic device D, and the liquid storage portion 70 provided in the first non-overlapped land portion 93 is disposed in a region different from the region that overlaps the electronic device D. This allows the working liquid 2b in the liquid storage portion 70 to evaporate using the heat transferred from the electronic device D in the vicinity of the region that overlaps the electronic device D. More specifically, in the region that neighbors the region that overlaps the electronic device D in the Y direction, the working liquid 2b in the liquid storage portion 70 can evaporate using the heat of the electronic device D. Accordingly, the amount of evaporation of the working liquid 2b can be increased. As a result, the heat of the electronic device D can be dispersed more, and the efficiency for cooling the electronic device D can be increased more.

In addition, according to the fifth modification, the length L2 in the X direction of the liquid storage portion 70 provided in the second overlapped land portion 92 is greater than the length L3 in the X direction of the liquid storage portion 70 provided in the first non-overlapped land portion 93. This configuration can increase the length in the X direction of the liquid storage portion 70 that overlaps the electronic device D. As a result, the volume of the working liquid 2b loaded into the liquid storage mainstream groove 71 that overlaps the electronic device D can be increased. The amount of evaporation of the working liquid 2b in the region that overlaps the electronic device D can therefore be increased, and the electronic device D can be cooled efficiently.

The fifth modification has been described above with reference to an example in which the entire liquid storage portion 70 provided in the second overlapped land portion 92 overlaps the electronic device D in the Y direction. However, the configuration is not limited thereto. The liquid storage portion 70 provided in the second overlapped land portion 92 may overlap the electronic device D over only part of its range in the Y direction (refer to FIG. 23). In this case, the liquid storage portion 70 does not overlap the electronic device D in the remaining part of the range in the Y direction. FIG. 22 illustrates an example in which the electronic device D extends to the outside of the liquid storage portion 70 provided in the second overlapped land portion 92 in the Y direction. However, the configuration is not limited thereto, and the electronic device D may be aligned with the edge of the second overlapped land portion 92. In this case, the edge of the electronic device D overlaps the edge of the wall surface 54a of the upper vapor flow channel recess 54 on the second body surface 31b.
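As a rough illustrative check on the stored-volume argument above (an estimate under assumed idealized geometry, not language from the embodiment), the volume held by one liquid storage portion with N_g parallel mainstream grooves of width w6, depth d, and length L scales linearly with L:

```latex
V_{\mathrm{store}} \approx N_g \, w_6 \, d \, L,
\qquad
\frac{V_{\mathrm{store}}(L_1)}{V_{\mathrm{store}}(L_2)} = \frac{L_1}{L_2} > 1
\quad \text{for } L_1 > L_2 .
```

Lengthening the grooves that overlap the device center (L1 > L2 > L3) therefore concentrates stored, evaporable liquid where the heat flux is highest.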
In addition, the fifth modification has been described above with reference to an example in which the electronic device D overlaps the liquid storage portions 70 provided in the three overlapped land portions 91 and 92. However, the configuration is not limited thereto, and the number of overlapped land portions 91 and 92 provided with liquid storage portions 70 that the electronic device D overlaps may be any number. In addition, the example has been described in which one of each of the non-overlapped land portions 93 and 94 is provided on both sides in the Y direction of the set of three overlapped land portions 91 and 92. However, the configuration is not limited thereto, and the number of each of the non-overlapped land portions 93 and 94 provided on both sides in the Y direction of the set of three overlapped land portions 91 and 92 may be one, or three or more.

(Sixth Modification)

In addition, as illustrated in FIG. 23, the vapor chamber 1 may be in thermal contact with a plurality of electronic devices D. More specifically, as illustrated in FIG. 23, a plurality of electronic devices D are attached to the second body surface 31b. FIG. 23 illustrates an example in which two electronic devices D1 and D2 are attached to the second body surface 31b. However, the number of electronic devices D may be three or more. The two electronic devices D are disposed in different regions in the X direction. The electronic device D on the left in FIG. 23 is referred to as the first electronic device D1, and the electronic device D on the right is referred to as the second electronic device D2.

A plurality of liquid storage portions 70, each corresponding to one of the electronic devices D1 and D2, may be provided on the first body surface 31a. In this case, the liquid storage portions 70 may be disposed in regions that overlap the corresponding electronic devices D1 and D2 in plan view. One or more of the land portions 33 are provided with a liquid storage portion 70. As in the example illustrated in FIG. 22, each of the electronic devices D1 and D2 overlaps the plurality of land portions 33. As in the example illustrated in FIG. 22, the plurality of land portions 33 include three overlapped land portions 91 and 92, two first non-overlapped land portions 93, and two second non-overlapped land portions 94. Each of the three overlapped land portions 91 and 92 is provided with the liquid storage portion 70 that overlaps the first electronic device D1 and the liquid storage portion 70 that overlaps the second electronic device D2. As in the example illustrated in FIG. 22, the first non-overlapped land portion 93 is provided with the liquid storage portion 70 that overlaps the first electronic device D1. However, the first non-overlapped land portion 93 is not provided with a liquid storage portion 70 that overlaps the second electronic device D2. The second non-overlapped land portions 94 are not provided with a liquid storage portion 70.

The dimensions in the X direction of the two electronic devices D1 and D2 may differ from each other. In this case, the lengths in the X direction of the corresponding liquid storage portions 70 may differ from each other. Note that FIG. 23 illustrates an example in which the lengths in the X direction of the liquid storage portions 70 are the same for each of the electronic devices D. The liquid storage portions 70 that overlap the first electronic device D1 are described in more detail below. The lengths L1 and L2 in the X direction of the liquid storage portions 70 provided in the overlapped land portions 91 and 92 are the same.
In addition, the lengths L1 and L2 are the same as the length L3 in the X direction of the liquid storage portion 70 provided in the first non-overlapped land portion 93. However, the lengths are not limited thereto. The lengths L1, L2, and L3 in the X direction of the liquid storage portions 70 may differ among the land portions, as illustrated in FIG. 22. The same applies to the liquid storage portions 70 that overlap the second electronic device D2.

As described above, according to the sixth modification, the vapor chamber 1 is in thermal contact with a plurality of electronic devices D1 and D2, and a plurality of liquid storage portions 70, each corresponding to one of the electronic devices D1 and D2, are provided on the first body surface 31a. Each of the liquid storage portions 70 is disposed in a region of the vapor chamber 1 that overlaps the corresponding one of the electronic devices D1 and D2 in plan view. Thus, each of the liquid storage portions 70 can be disposed in a region where heat is easily received from the corresponding one of the electronic devices D1 and D2. Accordingly, while each of the electronic devices D1 and D2 is generating heat, the working liquid 2b in each of the liquid storage portions 70 can evaporate by receiving heat from the corresponding one of the electronic devices D1 and D2. As a result, the heat of each of the electronic devices D1 and D2 can be dissipated more, and the efficiency for cooling each of the electronic devices D1 and D2 can be increased more.

In addition, according to the sixth modification, at least one of the plurality of land portions 33 is provided with a plurality of liquid storage portions 70, each overlapping the corresponding one of the electronic devices D1 and D2. In this manner, a liquid storage portion 70 can be provided in the land portion 33 in each region that overlaps the corresponding one of the electronic devices D1 and D2. As a result, each of the liquid storage portions 70 can be disposed in a region where heat is easily received from the corresponding one of the electronic devices D1 and D2.

Note that in the sixth modification, the two electronic devices D1 and D2 do not have to generate heat at the same time. For example, when the first electronic device D1 is generating heat and the second electronic device D2 has stopped generating heat, the working liquid 2b in the liquid storage portion 70 that overlaps the first electronic device D1 can evaporate due to the heat received from the first electronic device D1, while the working liquid 2b in the liquid storage portion 70 that overlaps the second electronic device D2 can remain stored.

The sixth modification has been described above with reference to an example in which two liquid storage portions 70 are provided in each of the three overlapped land portions 91 and 92. However, the configuration is not limited thereto, and the number of overlapped land portions 91 and 92 each having two liquid storage portions 70 provided therein is not limited to three and can be any number. For example, the number of such overlapped land portions 91 and 92 may be one. For example, two liquid storage portions 70 may be provided in the first overlapped land portion 91, and one liquid storage portion 70 may be provided in each second overlapped land portion 92. In this case, a liquid storage portion 70 that overlaps the first electronic device D1 and a liquid storage portion 70 that overlaps the second electronic device D2 may be provided in the first overlapped land portion 91.
One second overlapped land portion 92 may be provided with a liquid storage portion 70 that overlaps the first electronic device D1 without being provided with a liquid storage portion 70 that overlaps the second electronic device D2. The other second overlapped land portion 92 may be provided with a liquid storage portion 70 that overlaps the second electronic device D2 without being provided with a liquid storage portion 70 that overlaps the first electronic device D1. That is, at least one of a liquid storage portion 70 that overlaps the first electronic device D1 and a liquid storage portion 70 that overlaps the second electronic device D2 may be provided in each of the overlapped land portions 91 and 92.

In addition, the sixth modification has been described above with reference to an example in which a plurality of liquid storage portions 70 are provided on the first body surface 31a so as to overlap a corresponding one of the electronic devices D1 and D2 in plan view. However, the configuration is not limited thereto. For example, if the first body surface 31a is provided with a liquid storage portion 70 that overlaps one of the two electronic devices D1 and D2, a liquid storage portion 70 that overlaps the other need not be provided. The same applies to the case in which the number of electronic devices D is three or greater. That is, a plurality of liquid storage portions 70 may be provided on the first body surface 31a so as to overlap all of the electronic devices D. Alternatively, the first body surface 31a may be provided with liquid storage portions 70 that overlap one or more of the electronic devices D without being provided with liquid storage portions 70 that overlap the other electronic devices D.

Second Embodiment

A wick sheet for a vapor chamber, a vapor chamber, and an electronic apparatus according to the second embodiment of the present invention are described below with reference to FIGS. 24 to 27. In the second embodiment illustrated in FIGS. 24 to 27, the main difference is that a liquid storage portion is disposed in a region different from an evaporation region in plan view; the other configurations are substantially the same as those of the first embodiment illustrated in FIGS. 1 to 23. Note that in FIGS. 24 to 27, parts similar to those of the first embodiment illustrated in FIGS. 1 to 23 are identified by the same reference numerals, and detailed description of those parts is not repeated.

In the present embodiment, as illustrated in FIG. 24, the liquid storage portion 70 may be disposed on one side of the land portion 33 in the X direction. The liquid storage portion 70 may be formed on one side of the center of the land portion 33 in the X direction. The liquid storage portion 70 may be disposed on the opposite side from the evaporation region SR. As illustrated in FIG. 24, the liquid storage portion 70 may be disposed on the right side of the land portion 33. The liquid storage portion 70 is disposed in a region different from the evaporation region SR in plan view. The liquid storage portion 70 is disposed in a condensation region CR. In this case, the liquid storage portion 70 is disposed in a region different from the region that overlaps the electronic device D. More specifically, as illustrated in FIGS. 24 and 25, the liquid storage portion 70 is disposed in a portion of the land portion 33 opposite to the evaporation region SR in the X direction.
The liquid storage mainstream grooves 71 of the liquid storage portion 70 are formed so as to extend continuously in the X direction from the edge of the land portion 33 opposite to the evaporation region SR toward the edge closer to the evaporation region SR, up to a predetermined position. In FIG. 24, the liquid storage portion 70 is formed from the right edge toward the left edge, up to a predetermined position. In this way, the range of the liquid storage portion 70 in the X direction is defined. Since the other configurations of the liquid storage portion 70 are similar to those of the liquid storage portion 70 according to the first embodiment, a detailed description of the configurations is not repeated.

As described above, in a typical vapor chamber 1, the working fluids 2a and 2b reflux in the sealed space 3 while repeating the phase change, that is, evaporation and condensation. Thus, the working fluids 2a and 2b transfer and dissipate the heat from the electronic device D. The reflux of the working fluids 2a and 2b can be formed over the entire range of the vapor chamber 1. This allows the working vapor 2a to dissipate the heat over the entire range of the vapor chamber 1. That is, the region that dissipates the heat can be increased. As a result, the heat dissipation efficiency of the vapor chamber 1 can be increased, and the electronic device D can be cooled efficiently. In this case, the temperature difference in the vapor chamber 1 can be reduced, and the temperature can be equalized.

However, if the amount of heat generated by the electronic device D is large, the working liquid 2b condensed in the condensation region CR is not easily delivered to the center of the evaporation region SR, as illustrated in FIG. 26. That is, due to the large amount of heat generated by the electronic device D, the working liquid 2b tends to evaporate before it reaches the center of the evaporation region SR. Therefore, the reflux of the working fluids 2a and 2b is formed in a region other than a region around the center of the evaporation region SR and, thus, the temperature at the center of the evaporation region SR may rise. Accordingly, the efficiency for cooling the electronic device D may be decreased. As a result, a region TH having a high temperature and a region TL having a low temperature appear in the vapor chamber 1, and the temperature difference may increase.

In contrast, if the amount of heat generated by the electronic device D is small, part of the working liquid 2b condensed in the condensation region CR tends to remain in the liquid flow channel portion 60 in the evaporation region SR, as illustrated in FIG. 27. That is, since the amount of heat generated by the electronic device D is small, the amount of evaporation of the working liquid 2b in the evaporation region SR decreases. Accordingly, the amount of the working liquid 2b delivered to the evaporation region SR decreases, and the working liquid 2b tends to remain in the liquid flow channel portion 60 in the condensation region CR. As a result, the reflux of the working fluids 2a and 2b is formed within the range excluding the vicinity of the edge closer to the evaporation region SR (the vicinity of the right edge in FIG. 27), and the working liquid 2b in the vicinity of that edge may remain in the liquid flow channel portion 60. As a result, the region where the working vapor 2a dissipates heat decreases, and the heat dissipation efficiency of the vapor chamber 1 may be decreased.
As a result, the region TH having a high temperature and the region TL having a low temperature appear in the vapor chamber 1 and, thus, the temperature difference may increase.

However, in the vapor chamber 1 according to the present embodiment, while the electronic device D is generating heat, part of the working liquid 2b condensed in the condensation region CR is delivered not to the evaporation region SR, but to the liquid storage portion 70 provided on the first body surface 31a of the wick sheet 30. Thereafter, the working liquid 2b is stored in the liquid storage portion 70. Since the liquid storage portion 70 according to the present embodiment is disposed in the condensation region CR, the working liquid 2b in the liquid storage portion 70 does not evaporate easily and is stored in the liquid storage portion 70.

When the amount of heat generated by the electronic device D is large, the working liquid 2b condensed in the condensation region CR can be delivered to the center of the evaporation region SR. That is, even when the amount of heat generated by the electronic device D is large, not only the working liquid 2b in the liquid flow channel portion 60 but also the working liquid 2b stored in the liquid storage portion 70 can be delivered toward the center of the evaporation region SR, thus increasing the amount of the working liquid 2b to be delivered to the evaporation region SR. This enables the working liquid 2b to reach even the center of the evaporation region SR, and the reflux of the working fluids 2a and 2b can be formed over the entire range of the vapor chamber 1. Thus, the temperature at the center of the evaporation region SR can be decreased, and the efficiency for cooling the electronic device D can be increased. As a result, the temperature difference in the vapor chamber 1 can be decreased, and the temperature can be equalized.

In contrast, when the amount of heat generated by the electronic device D is small, part of the working liquid 2b condensed in the condensation region CR can be stored in the liquid storage portion 70, and the working liquid 2b can be inhibited from remaining in the liquid flow channel portion 60 in the evaporation region SR. This allows the reflux of the working fluids 2a and 2b to be formed over the entire range of the vapor chamber 1. Accordingly, the region where the working vapor 2a dissipates heat can be increased, thus increasing the heat dissipation efficiency of the vapor chamber 1. As a result, the temperature difference in the vapor chamber 1 can be decreased.

As described above, according to the present embodiment, the liquid flow channel portion 60 through which the working liquid 2b flows is provided on the second body surface 31b of the sheet body 31 of the wick sheet 30, and the liquid storage portion 70 is provided on the first body surface 31a positioned on the opposite side from the second body surface 31b. The liquid storage portion 70 is disposed in a region different from the evaporation region SR in plan view. This allows the working liquid 2b to be distributed and stored in the liquid storage portion 70 in addition to the liquid flow channel portion 60. When the amount of heat generated by the electronic device D is large, the working liquid 2b stored in the liquid storage portion 70 can be delivered to the evaporation region SR, thus increasing the range in which the reflux of the working fluids 2a and 2b is formed. In this manner, the efficiency for cooling the electronic device D can be increased.
When the amount of heat generated by the electronic device D is small, the working liquid 2b can be inhibited from remaining in the liquid flow channel portion 60 in the evaporation region SR, and the range in which the reflux of the working fluids 2a and 2b is formed can be increased. Thus, the region where the working vapor 2a dissipates heat can be increased, and the heat dissipation efficiency of the vapor chamber 1 can be increased. As a result, a decrease in the performance of the vapor chamber 1 can be reduced regardless of the amount of heat generated by the electronic device D.

In addition, according to the present embodiment, the working liquid 2b can be stored in the liquid storage portion 70, as described above. This allows the working liquid 2b to be distributed and stored in not only the liquid flow channel portion 60 but also the liquid storage portion 70 while the electronic device D stops generating heat. Therefore, even if the working liquid 2b in the liquid flow channel portion 60 freezes and expands in a temperature environment lower than the freezing point of the working liquid 2b, the force of expansion exerted on the upper sheet 20 can be decreased, and deformation of the upper sheet 20 can be inhibited. In addition, even if the working liquid 2b in the liquid storage portion 70 freezes and expands, the force of expansion exerted on the lower sheet 10 can be decreased and, thus, deformation of the lower sheet 10 can be inhibited. As a result, deformation of the vapor chamber 1 can be inhibited.

In addition, according to the present embodiment, the plurality of convex portions 74 protruding from the sheet body 31 of the wick sheet 30 and being in contact with the lower sheet 10 are provided in the liquid storage portion 70. The gap between a pair of neighboring convex portions 74 (corresponding to the width w6 of the liquid storage mainstream groove 71) is greater than the width w3 of the liquid flow channel mainstream groove 61 of the liquid flow channel portion 60. This allows the capillary force that acts on the working liquid 2b in the liquid storage portion 70 to be smaller than the capillary force that acts on the working liquid 2b in the liquid flow channel portion 60 (in the liquid flow channel mainstream groove 61). While the electronic device D is generating heat, the amount of the working liquid 2b delivered to the liquid storage portion 70 can therefore be kept small. Accordingly, a decrease in the function of delivering the working liquid 2b to the evaporation region SR can be reduced, and a decrease in heat transport efficiency can be reduced.

In addition, as described above, by making the gap between the convex portions 74 greater than the width w3 of the liquid flow channel mainstream groove 61, the total volume of the space formed by the liquid storage mainstream grooves 71 and the space formed by the liquid storage communication grooves 75 of the liquid storage portion 70 can be increased. As a result, the volume of the working liquid 2b stored by the liquid storage portion 70 can be increased. In addition, when the amount of heat generated by the electronic device D is small, the working liquid 2b can be further inhibited from remaining in the liquid flow channel portion 60 in the condensation region CR.

In addition, according to the present embodiment, the liquid storage portion 70 has the liquid storage mainstream grooves 71, each provided between two convex portions 74 neighboring in the Y direction, which is orthogonal to the X direction in which the liquid flow channel mainstream grooves 61 of the liquid flow channel portion 60 extend.
The liquid storage mainstream grooves 71 extend in the X direction. This enables the working liquid 2b in the liquid storage portion 70 to flow in the X direction, and the working liquid 2b flowing out of the liquid storage portion 70 can have a propulsive force in the X direction. Thus, the working liquid 2b flowing out of the liquid storage portion 70 can be smoothly delivered to the evaporation region SR.

In addition, according to the present embodiment, the gap between a pair of neighboring convex portions 74 is less than the gap between a pair of neighboring land portions 33 (corresponding to the width w2 of the penetration portion 34). This enables the capillary force to act on the working liquid 2b in the liquid storage portion 70. As a result, the working liquid 2b can be drawn into the liquid storage portion 70, and the working liquid 2b can be stored in the liquid storage portion 70.

In addition, according to the present embodiment, the liquid storage portions 70 are provided on the first body surface 31a of each of the land portions 33. This enables the working liquid 2b to be distributed and stored in the liquid storage portions 70. Accordingly, if the amount of heat generated by the electronic device D is large, the amount of the working liquid 2b delivered to the evaporation region SR can be increased, and the efficiency for cooling the electronic device D can be increased further. If the amount of heat generated by the electronic device D is small, the working liquid 2b can be further inhibited from remaining in the liquid flow channel portion 60, and the heat dissipation efficiency of the vapor chamber 1 can be increased further.

In addition, according to the present embodiment, the liquid storage portion 70 is disposed on one side of the land portion 33 in the X direction. This enables the liquid storage portion 70 to be disposed in a region different from the evaporation region SR when the evaporation region SR is formed on one side of the vapor chamber 1 in the X direction. Accordingly, if the amount of heat generated by the electronic device D is large, the amount of the working liquid 2b delivered to the evaporation region SR can be increased, and the efficiency for cooling the electronic device D can be increased further. If the amount of heat generated by the electronic device D is small, the working liquid 2b can be further inhibited from remaining in the liquid flow channel portion 60, and the heat dissipation efficiency of the vapor chamber 1 can be increased further.

Note that the first, second, third, and fourth modifications, which are described as modifications of the first embodiment, are applicable to the present embodiment in the same way as in the first embodiment. For example, according to the second embodiment, by providing a communication portion 80 as in the first modification, the working liquid 2b can move smoothly between the liquid flow channel portion 60 and the liquid storage portion 70. This configuration can increase the amount of the working liquid 2b delivered from the liquid flow channel portion 60 to the liquid storage portion 70, resulting in an increase in the amount of the working liquid 2b stored in the liquid storage portion 70. In addition, if the amount of heat generated by the electronic device D is large, the working liquid 2b stored in the liquid storage portion 70 can be smoothly delivered to the evaporation region SR, and the range in which the reflux of the working fluids 2a and 2b is formed can be effectively increased.
As a result, the efficiency for cooling the electronic device D can be increased further. If the amount of heat generated by the electronic device D is small, the working liquid 2b can be further inhibited from remaining in the liquid flow channel portion 60 in the evaporation region SR, and the range in which the reflux of the working fluids 2a and 2b is formed can be increased. As a result, the heat dissipation efficiency of the vapor chamber 1 can be increased further.

In addition, as in the first modification, the flow resistance of the working liquid 2b between the liquid flow channel portion 60 and the liquid storage portion 70 can be reduced by the communication portion 80 including a communication recess 81. This configuration can further increase the efficiency for cooling the electronic device D if the amount of heat generated by the electronic device D is large. If the amount of heat generated by the electronic device D is small, the heat dissipation efficiency of the vapor chamber 1 can be increased further. Furthermore, according to the first modification, the communication recess 81 extends to the liquid flow channel communication groove 65 and the liquid storage communication groove 75 and, thus, the flow resistance of the working liquid 2b between the liquid flow channel portion 60 and the liquid storage portion 70 can be reduced further.

For example, according to the second embodiment, the flow resistance of the working liquid 2b between the liquid flow channel portion 60 and the liquid storage portion 70 can be reduced by the communication portion 80 including a through-hole 82, as in the second modification. This configuration can further increase the efficiency for cooling the electronic device D if the amount of heat generated by the electronic device D is large, and can further increase the heat dissipation efficiency of the vapor chamber 1 if the amount of heat generated by the electronic device D is small. Furthermore, according to the second modification, the through-hole 82 extends to the liquid flow channel intersection portion 66 and the liquid storage intersection portion 76, resulting in a further decrease in the flow resistance of the working liquid 2b between the liquid flow channel portion 60 and the liquid storage portion 70.

The liquid storage portion 70 according to the present embodiment described above may be combined with the liquid storage portion 70 according to the first embodiment. In this case, two liquid storage portions 70 are provided in each of the land portions 33 of the wick sheet 30. One liquid storage portion 70 is disposed in the evaporation region SR in plan view, and the other is disposed in the condensation region CR in plan view. The liquid storage portion 70 in the evaporation region SR and the liquid storage portion 70 in the condensation region CR may be spaced apart from each other in the X direction. In this case, both the effect obtained by the liquid storage portion 70 of the first embodiment and the effect obtained by the liquid storage portion 70 of the second embodiment can be obtained.

The present invention is not limited to the above-described embodiments and modifications as they are, but can be embodied by changing the shapes of the components in the implementation stage without departing from the spirit and scope of the invention. In addition, various inventions can be formed by appropriately combining the multiple components described in the above embodiments and modifications.
Some components may be removed from all of the components described in the embodiments and the modifications.
It should be understood that the drawings are not necessarily to scale.

DETAILED DESCRIPTION OF INVENTION

In the following description of the preferred embodiment, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration a specific embodiment in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.

In some embodiments, the thermal ground planes disclosed here could be used to provide efficient space utilization for cooling semiconductor devices in a large range of applications, including but not limited to aircraft, satellites, laptop computers, desktop computers, mobile devices, automobiles, motor vehicles, heating, air conditioning, and ventilation systems, and data centers.

Microfabricated substrates can be used to make more robust, shock-resistant two-phase cooling devices, which may be in the form of Thermal Ground Planes (TGPs). Although a variety of materials for these substrates may be employed, as described in the incorporated references, metal substrates, such as but not limited to titanium, aluminum, copper, or stainless steel, have been found suitable for TGPs. The choice of metal can depend upon the various applications and cost considerations. There are advantages to various metals. For example, copper offers the highest thermal conductivity of all the metals. Aluminum can be advantageous for applications where high thermal conductivity is important and weight might be important. Stainless steel could have advantages in certain harsh environments.

Titanium has many advantages. For example, titanium has a high fracture toughness, can be microfabricated and micromachined, can resist high temperatures, can resist harsh environments, and can be bio-compatible. In addition, titanium-based thermal ground planes can be made light weight, relatively thin, and have high heat transfer performance. Titanium can be pulse laser welded. Since titanium has a high fracture toughness, it can be formed into thin substrates that resist crack and defect propagation. Titanium has a relatively low coefficient of thermal expansion of approximately 8.6×10⁻⁶/K. The low coefficient of thermal expansion, coupled with thin substrates, can help to substantially reduce stresses due to thermal mismatch. Titanium can be oxidized to form Nano Structured Titania (NST), which forms stable and super-hydrophilic surfaces.

In some embodiments, titanium (Ti) substrates with integrated Nano Structured Titania (NST) have been found suitable for TGPs. Metals, such as but not limited to titanium, aluminum, copper, or stainless steel, can be microfabricated with controlled characteristic dimensions (depth, width, and spacing) ranging from 1-1000 micrometers, to engineer the wicking structure and intermediate substrate for optimal performance, customized for specific applications. In some embodiments, the controlled characteristic dimensions (depth, width, and spacing) could range from 10-500 micrometers. In some embodiments, titanium can be oxidized to form nanostructured titania (NST), which could provide super-hydrophilic surfaces and thereby increase capillary forces and enhance heat transfer.
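As background for why nanoscale roughness yields super-hydrophilicity (a textbook relation, not part of this disclosure), the Wenzel model relates the apparent contact angle θ* on a rough surface to the intrinsic angle θ through the roughness ratio r, the true wetted area divided by the projected area:

```latex
% Wenzel relation for wetting on a rough surface, r = A_true / A_projected >= 1
\cos\theta^{*} = r\,\cos\theta .
```

For a material that is already hydrophilic (θ < 90°, so cos θ > 0), increasing r pushes cos θ* toward 1 and θ* toward 0°, which is consistent with the super-hydrophilic behavior attributed to NST here.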
In some embodiments, the NST can be comprised of hair-like patterns with a nominal roughness of 200 nanometers (nm). In some embodiments, NST can have a nominal roughness of 1-1000 nm. In some embodiments, aluminum can be oxidized to form hydrophilic nanostructures, to provide super-hydrophilic coatings. In some embodiments, sintered nanoparticles and/or microparticles could be used to provide super-hydrophilic surfaces and thereby increase capillary forces and enhance heat transfer. Such a wicking structure 22 is shown in FIG. 1. In some embodiments, titanium can be coated on another type of substrate, forming a titanium film. The titanium film can be oxidized to form nano-structured titania (NST), and thereby provide super-hydrophilic surfaces.

Titanium is a material that can be microfabricated using cleanroom processing techniques, macro-machined in a machine shop, and hermetically packaged using a pulsed laser micro-welding technique. When the thermal ground plane is comprised of only titanium or titania as the structural material, the various components can be laser welded in place without introducing contaminants, which could otherwise produce non-condensable gases, contribute to poor performance, and possibly lead to failure. In addition, titanium and titania have been shown to be compatible with water, which can contribute to long lifetimes and minimal non-condensable gas generation. Accordingly, the titanium substrate may be connected to the titanium backplane by a laser weld, to form a hermetically-sealed vapor chamber.

Metals can be bonded to form hermetic seals. In some embodiments, titanium substrates can be pulsed laser micro-welded together to form a hermetic seal 270. In other embodiments, copper, aluminum, and stainless-steel substrates could be welded using a variety of techniques, such as but not limited to soldering, brazing, vacuum brazing, TIG, MIG, and many other well-known welding techniques.

The present application describes the fabrication of metal-based Thermal Ground Planes (TGPs). Without loss of generality, the present application discloses thermal ground plane embodiments that could be comprised of three or more metal substrates. An embodiment can comprise three substrates (of which one or more can be constructed using a metal, such as, but not limited to, titanium, aluminum, copper, or stainless steel) to form a thermal ground plane. In some embodiments, titanium substrates could be used to form a thermal ground plane. In some embodiments, one substrate supports an integrated super-hydrophilic wicking structure 220, a second substrate contains a deep-etched (or macro-machined) vapor chamber, and a third, intermediate substrate 110 may include microstructures 112 that are in communication with the wicking structure 220 and the vapor chamber 300. The substrates could be laser micro-welded together to form the thermal ground plane.

The working fluid can be chosen based upon desired performance characteristics, operating temperature, material compatibility, or other desirable features. In some embodiments, and without loss of generality, water could be used as the working fluid. In other embodiments, and without loss of generality, helium, nitrogen, ammonia, high-temperature organics, mercury, acetone, methanol, Flutec PP2, ethanol, heptane, Flutec PP9, pentane, caesium, potassium, sodium, lithium, or other materials could be used as the working fluid.

The current TGP can provide significant improvement over earlier titanium-based thermal ground planes.
For example, the present invention could provide significantly higher heat transfer, thinner thermal ground planes, thermal ground planes that are less susceptible to the effects of gravity, and many other advantages.

The following co-pending and commonly-assigned U.S. patent applications are related to the instant application, and are incorporated by reference in their entirety:

U.S. Pat. No. 7,718,552 B2, issued May 18, 2010, by Samah, et al., entitled "NANOSTRUCTURED TITANIA," which application is incorporated by reference herein.

U.S. Patent Application Ser. No. 61/082,437, filed on Jul. 21, 2008, by Noel C. MacDonald et al., entitled "TITANIUM-BASED THERMAL GROUND PLANE," which application is incorporated by reference herein.

U.S. patent application Ser. No. 13/685,579, filed on Nov. 26, 2012, by Payam Bozorgi et al., entitled "TITANIUM-BASED THERMAL GROUND PLANE," which application is incorporated by reference herein.

PCT Application No. PCT/US2012/023303, filed on Jan. 31, 2012, by Payam Bozorgi and Noel C. MacDonald, entitled "USING MILLISECOND PULSED LASER WELDING IN MEMS PACKAGING," which application is incorporated by reference herein.

U.S. Patent Provisional Application Ser. No. 62/017,455, filed on Jun. 26, 2014, by Payam Bozorgi and Carl Meinhart, entitled "TWO-PHASE COOLING DEVICES WITH LOW-PROFILE CHARGING PORTS," which application is incorporated by reference herein.

FIG. 1 illustrates a thermal ground plane, which in some embodiments may be a titanium-based thermal ground plane, comprising a titanium substrate with a wicking structure, a backplane, and a vapor chamber described in the incorporated references. The device may be pulsed micro-welded to form a hermetic seal. The thermal ground plane can be charged with a working fluid, such as water in a thermodynamically saturated state, where the liquid phase resides predominantly in the wicking structure, and the vapor phase resides predominantly in the vapor chamber.

FIG. 3 illustrates an embodiment of a novel metal-based thermal ground plane with an intermediate substrate 110 in communication with a wicking structure 220 and a vapor chamber 300. The intermediate layer could comprise microstructures 112. FIG. 3(A) shows a profile view depicting components of an embodiment, while FIG. 3(B) shows an exploded view of structural components of an embodiment. The metal substrate 210 could be bonded to a metal backplane 120 to form a hermetically-sealed vapor chamber 300. The vapor chamber 300 may therefore be enclosed by the metal substrate 210 and the metal backplane 120. For example, in an embodiment, a titanium substrate could be pulsed laser micro-welded to a titanium backplane 120 to form a hermetically sealed vapor chamber. In some embodiments, a plurality of intermediate substrates 110 could be used, where at least one different intermediate substrate 110 could be used for each different region of the thermal ground plane.
The plurality of intermediate substrates110could be positioned in close proximity to each other to collectively provide overall benefit to the functionality of the thermal ground plane. In some embodiments, the intermediate substrate110could contain regions that are comprised of a plurality of microstructures112, with characteristic dimensions (depth, width, and spacing) ranging from 1-1000 micrometers. Some areas of the ground plane may include pillars24to enhance wicking within the wicking structure22. In some embodiments, the intermediate substrate110could contain regions that are comprised of a plurality of microstructures112, with dimensions (depth, width, and spacing) ranging from 10-500 micrometers. The at least one intermediate substrate110may contain regions that are comprised of a plurality of microstructures112, regions that are comprised of solid substrates, and regions that are comprised of at least one opening in the at least one intermediate substrate110(that is, large compared to the microstructures112; for example, openings could range in dimension from 1 millimeter to 100 millimeters, or from 1 millimeter to 1000 millimeters). In some embodiments, the opening in the intermediate substrate110for chosen regions of the thermal ground plane could be achieved by simply not providing an intermediate substrate110in those regions. Thermal energy can be supplied by a heat source250and removed by a heat sink260. Thermal energy can be transferred from one region (evaporator region) of the metal substrate210to another region (condenser region) of the metal substrate210. In the evaporator region, the local temperature is higher than the saturation temperature of the liquid/vapor mixture, causing the liquid140to evaporate into vapor, thereby absorbing thermal energy due to the latent heat of vaporization. The vapor residing in the vapor chamber300can flow from the evaporator region through the adiabatic region to the condenser region. The heat sink260could absorb heat from the condenser region causing the local temperature to be lower than the saturation temperature of the liquid/vapor mixture, causing the vapor to condense into the liquid phase, and thereby releasing thermal energy due to the latent heat of vaporization. The condensed liquid140could predominantly reside in the wicking structure220and could flow from the condenser region through the adiabatic region to the evaporator region as a result of capillary forces. As a result, it could be advantageous for high-performance heat pipes to: (1) exhibit minimal viscous losses for the liquid140flowing through the wicking structure220, and to (2) exhibit maximal capillary forces in the evaporator region. In many practical thermal ground plane embodiments, minimal viscous losses and maximal capillary forces are difficult to achieve simultaneously. Introducing an intermediate substrate110with a plurality of microstructures112, configured as appropriate in each of the three regions, could provide a means by which the thermal ground plane could have reduced viscous losses in some regions, while exhibiting increased capillary forces in other regions, compared to earlier TGP's with more or less the same structure over a majority of the interior. In some embodiments, supporting pillars (standoffs) are used to mechanically support the spacing between the backplane120and the wicking structure220and/or intermediate substrate110. In some embodiments, the supporting pillars (standoffs) provide controlled spacing for the vapor chamber300.
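The evaporator/condenser energy balance just described can be made concrete with a minimal sketch (assumed values, not from the disclosure) relating a heat load Q to the liquid mass flow that the wick must return to the evaporator, via Q = ṁ·h_fg:

```python
# Minimal sketch (assumed values): liquid resupply rate needed in the
# evaporator for a given heat load, from Q = mdot * h_fg.

h_fg_water = 2.45e6  # latent heat of vaporization of water near room temperature, J/kg

for q_watts in (10.0, 20.0, 30.0):
    mdot = q_watts / h_fg_water  # kg/s of liquid evaporated (and condensed)
    print(f"Q = {q_watts:4.1f} W -> mdot = {mdot * 1e6:6.2f} mg/s")
```

Even tens of watts correspond to only milligrams per second of liquid, which is why capillary pumping through a thin wick can sustain the cycle.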
The supporting pillars (standoffs) could be microfabricated using chemical wet etching techniques or other fabrication techniques (as described above). Accordingly, the backplane may include standoffs that are in communication with the intermediate substrate and/or the metal substrate, for structurally supporting the thermal ground plane. FIG.4depicts structural components of an embodiment where the different structural components are located in an evaporator region, an adiabatic region, and a condenser region: (A) shows an evaporator region of an embodiment where the intermediate substrate110comprises a plurality of microstructures112that are positioned to increase the effective aspect ratio of the wicking structure220. The fingers (microstructures112) from the intermediate substrate110are interleaved with channels in the wicking structure220, thereby creating double the number of higher aspect ratio features, compared to the lower aspect ratio features of the wicking structure220without the intermediate substrate110. The term interleaved should be considered to mean that the microstructures112occupy the interstices between channels in the wicking structure220. (B) shows an adiabatic region of an embodiment where the intermediate substrate110is positioned in close proximity to the wicking structure220, and (C) shows a condenser region of an embodiment, where the wicking structure220is in direct communication with the vapor chamber300. (D) shows the intermediate substrate110as a whole. Accordingly, the thermal ground plane may have an evaporator region, an adiabatic region, and a condenser region. The intermediate substrate, in turn, may have a different topography in the different regions, and in particular in the evaporator region relative to an adiabatic region. FIG.4(A)depicts an embodiment where the intermediate substrate110comprises a plurality of microstructures112that are interleaved with the wicking structure220of the metal substrate210. By interleaving the microstructures112of the intermediate region with the wicking structure220of the metal substrate210, the interface between the solid and liquid can be substantially increased. This could increase the capillary forces that are applied to the liquid, and could increase the amount of heat transferred from the metal solid to the liquid. FIG.4(B)shows an adiabatic region of an embodiment where the intermediate substrate110is positioned in close proximity to the wicking structure220. A solid intermediate substrate110could be used to isolate the vapor chamber300from the wicking structure220. By isolating the vapor chamber300from the wicking structure220, the solid-liquid interface area could be increased, and the liquid could substantially fill the wicking structure220without a meniscus occupying the channel, which could provide a higher mass flow rate for the liquid with less viscous pressure drop, compared to the earlier TGP's where the liquid in the wicking structure220could be exposed directly to the vapor in the vapor chamber300with a meniscus residing at the liquid/vapor interface. FIG.4(C)shows a condenser region of an embodiment where the wicking structure220is in direct communication with the vapor chamber300. When the wicking structure220is in direct communication with the vapor chamber300, vapor could more easily condense onto the wicking structure220.
Furthermore, in regions, such as the condenser, there might not be significant differences in pressure between the liquid and vapor phases, and an intermediate substrate110may not provide significant advantages. However, in other embodiments, if the condenser region were relatively large and there were a significant pressure difference between the liquid and vapor phases, an intermediate substrate110could provide advantages in the condenser region as well. FIG.4(D) shows an illustrative implementation of an intermediate substrate110as described above. The evaporator region of the intermediate substrate110includes rows of wedge-shaped fingers supported across each end, such that when the TGP is assembled, the fingers interleave with the substrate wicking microstructures112as shown inFIG.4(A), where the interleaved structures are exposed to the vapor chamber300. The adiabatic region of the intermediate substrate110is a cover that overlays a portion of the wicking microstructures112, as shown inFIG.4(B). The condenser region may not require an intermediate substrate110component in some embodiments, as shown inFIG.4(C). Thus, the addition of the intermediate substrate110allows for optimization of the wicking structure220in each of the three operational regions of the cooling device, and in a way that could be compatible with micromachining processes, such as wet etching techniques, and assembly techniques. Without loss of generality, the wicking structure220could be formed by dry etching, wet chemical etching, other forms of micromachining, macromachining, sawing with a dicing saw, and many other types of processes. In some embodiments, dry etching could provide high aspect ratio channels, where the depth is comparable to or perhaps even larger than the width of the channels. However, dry etching may be limited to smaller regions and may not be desirable for large-scale manufacturing, compared to wet etching processes. Mask-based wet etching could be desirable as it could be applicable to relatively large etch regions, could be cost effective, and could be compatible with high-volume manufacturing.
In some embodiments, photolithography-based methods could be used to define the patterns for dry or wet etching. In some embodiments the wicking structure220could be formed by standard wet chemical etching techniques. In some embodiments, wet chemical etching can limit the aspect ratio, which is the ratio of the wicking channel depth to the wicking channel width. In some embodiments that use wet etching, the wicking channel width can be at least 2 to 2.5 times wider than the wicking channel etch depth. In embodiments where the wicking channel width is at least 2 to 2.5 times the wicking channel etch depth, the resulting low aspect ratio wicking channels could carry significant disadvantages. The pressure difference between the vapor and liquid phases can be described by the Laplace pressure, ΔP=Pv−Pl=2σ/R, where Pv is the vapor pressure, Pl is the liquid pressure, σ is the surface tension, and R is the radius of curvature of the surface. A high-pressure difference between the liquid and vapor phases could be obtained by decreasing the radius of curvature, R. Generally, a smaller radius of curvature can be achieved by having material surfaces that exhibit low contact angles, and by forming geometries with relatively small geometric dimensions. In many embodiments, it may be desirable to have low viscous losses for the liquid flowing through the wicking structure220. Small geometric dimensions in the wicking structure220can significantly increase the viscous losses of liquid flowing through the wicking structure220. Therefore, in some embodiments, it may be difficult to achieve low viscous losses, and have a meniscus with a small radius of curvature that can support a high-pressure difference between the vapor and liquid phases. The current application discloses a means by which some embodiments can be configured for maximum capillary forces and can support large pressure differences between the liquid and vapor phases, for example in the evaporator region. The current application discloses a means by which some embodiments can be configured to minimize viscous losses of the liquid flowing in the wicking structure220, by using different structures in the different regions. FIG.5shows profile views of structural components of an illustrative embodiment where the structures are non-wetted (i.e. dry) and are wetted by a liquid: (A) non-wetted structural components in the evaporator region, (B) wetted structural components in the evaporator region, (C) non-wetted structural components in the adiabatic region, (D) wetted structural components in the adiabatic region, (E) non-wetted structural components in the condenser region, (F) wetted structural components in the condenser region. FIG.5(A)shows a profile view of an illustrative embodiment where the intermediate substrate110comprises a plurality of microstructures112that are interleaved with the wicking structure220of the metal substrate210. FIG.5(B)shows a profile view of an illustrative embodiment where the intermediate substrate110comprises a plurality of microstructures112that are interleaved with the wicking structure220of the metal substrate210, and where the microstructures112and wicking structure220are wetted by a liquid140. By interleaving the microstructures112of the intermediate substrate110with the wicking structure220of the metal substrate210, the interface area between the solid and liquid140could be substantially increased.
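The Laplace relation above can be illustrated numerically. This is a minimal sketch, not part of the disclosure: it assumes water as the working fluid and approximates the meniscus radius of curvature in a channel of width w with contact angle θ as R = w/(2 cos θ); the channel widths and contact angle are assumed for illustration.

```python
# Hedged sketch of the Laplace pressure dP = Pv - Pl = 2*sigma/R from the text,
# approximating the meniscus radius in a channel as R = w / (2*cos(theta)).

import math

sigma = 0.0728            # surface tension of water, N/m
theta = math.radians(30)  # assumed contact angle on a hydrophilic surface

for w in (100e-6, 50e-6, 10e-6):       # assumed channel widths, m
    R = w / (2.0 * math.cos(theta))    # approximate meniscus radius of curvature, m
    dP = 2.0 * sigma / R               # maximum supportable pressure difference, Pa
    print(f"w = {w * 1e6:5.0f} um -> dP = {dP / 1e3:6.2f} kPa")
```

The trend is the point: shrinking the channel width by an order of magnitude raises the supportable pressure difference by an order of magnitude, at the cost of higher viscous losses.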
This could increase the capillary forces that are applied to liquid140, and could increase the amount of heat transferred from the metal solid to liquid140. FIG.5(B)shows the meniscus180at the liquid-vapor interface. In some embodiments, gaps between the plurality of microstructures112contained in the intermediate substrate110and the wicking structure220could be formed so that they are substantially smaller than the depth of the wicking structure220. In some embodiments the relatively small gaps between the plurality of microstructures112contained in the intermediate substrate110and the wicking structure220could provide effectively higher aspect ratio wicking channels, compared to some embodiments where the wicking structure220is formed by wet etching a single metal substrate210(as is common, and depicted inFIG.4(C)). In some embodiments, titanium could be used as a substrate material. The thermal conductivity of titanium is approximately kTi=20 W/(m K), and that of liquid water is approximately kW=0.6 W/(m K). Since the thermal conductivity of titanium is approximately 30 times higher than that of liquid water, the intermediate substrate110can provide additional thermal conduction pathways, which can decrease the thermal resistance between the outside surface of the thermal ground plane and liquid140located in the wicking structure220. Furthermore, the microstructures112contained within the intermediate substrate110could increase the solid-liquid interface area, which could decrease the thermal resistance, and increase the critical heat flux that can occur, between titanium solid and liquid140. In some embodiments, the combination of the wicking structure220and the intermediate substrate110can effectively increase the aspect ratio of the channels in the wicking structure220. Under very large pressure differences between the liquid and vapor phases, the meniscus180may be pushed down and not wet the top of the wicking structure220. However, in some embodiments, the shape of the composite wicking structure220formed by interleaving the microstructures112of the intermediate substrate110with the wicking structure220may be chosen such that under large pressure differences across the meniscus180, there is only partial dry out (or at least dry out could be substantially delayed) of the wicking structure220(so that the TGP continues to function), and the thermal ground plane does not undergo catastrophic dry out. In previous two-phase heat transfer devices, instabilities can occur due to evaporation and/or boiling as the liquid phase is converted to the vapor phase. These instabilities can cause local dry out of the wicking structure220and can degrade the performance of the thermal ground plane. These instabilities can be substantially decreased in some of the current embodiments. For example, in some embodiments, the shape of the wicking structure220formed by interleaving the microstructures112of the intermediate substrate110with the wicking structure220may be chosen such that there can be substantial viscous resistance to liquid flow in the wicking structure220. This viscous resistance can be advantageous as it can increase the stability of the evaporation and/or boiling process that may occur in the evaporator. FIG.5(C)shows a profile view of an adiabatic region of an illustrative embodiment, where the intermediate substrate110is positioned in close proximity to the wicking structure220. In some embodiments, the intermediate substrate110could be placed directly above the wicking structure220.
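The conductivity contrast quoted above (titanium roughly 30 times more conductive than liquid water) can be made concrete with a one-dimensional conduction estimate. This sketch is illustrative and not from the disclosure; the layer thickness and cross-sectional area are assumed.

```python
# Illustrative sketch using the conductivities quoted in the text:
# one-dimensional conduction resistance R = t / (k * A) of a thin layer.

k_ti, k_water = 20.0, 0.6  # W/(m K), from the text
t = 50e-6                  # layer thickness, m (assumed)
area = 1e-4                # 1 cm^2 cross-section, m^2 (assumed)

r_ti = t / (k_ti * area)        # K/W through a titanium layer
r_water = t / (k_water * area)  # K/W through an equal water layer
print(f"R_titanium = {r_ti:.3f} K/W, R_water = {r_water:.3f} K/W "
      f"(ratio ~ {r_water / r_ti:.0f}x)")
```

This is why added titanium conduction pathways through the intermediate substrate can noticeably lower the thermal resistance into the liquid.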
In some embodiments, the intermediate substrate110could be comprised of microstructures112. In some embodiments, a solid intermediate substrate110could be used to isolate the vapor chamber300from the wicking structure220. By isolating the vapor chamber300from the wicking structure220, the solid-liquid interface area could be increased, and the liquid140could substantially fill the wicking structure220, which could provide a higher mass flow rate of the liquid with less viscous pressure drop, compared to earlier wicking structures220. FIG.5(D)shows a profile view of an adiabatic region of an illustrative embodiment, where the intermediate substrate110is positioned in close proximity to the wicking structure220, and where the wicking structure220is wetted by liquid140. A solid intermediate substrate110could be used to isolate the vapor chamber300from the wicking structure220. By isolating the vapor chamber300from the wicking structure220, the solid-liquid interface area could be increased, and the liquid140could substantially fill the wicking structure220, which could provide a higher mass flow rate for the liquid with less viscous pressure drop, compared to earlier wicking structures220. In some embodiments, where high-performance thermal energy transfer is desired, it may be important to decrease viscous losses of the liquid in the adiabatic region. In some embodiments, an intermediate substrate110could be used to isolate the vapor chamber300from the liquid140in the wicking structure220. In some embodiments, where there is a large difference in pressure between the vapor and the liquid in the wicking structure220, the vapor chamber300can be isolated from the liquid in the wicking structure220by a solid intermediate substrate110, which could prevent the high difference in pressure from negatively affecting liquid flow in the wicking structure220. FIG.5(E)shows a profile view of a condenser region of an illustrative embodiment, where the wicking structure220is in direct communication with the vapor chamber300. When the wicking structure220is in direct communication with the vapor chamber300, vapor could condense more readily onto the wicking structure220. Furthermore, in regions, such as the condenser, there might not be significant differences in pressure between the liquid and vapor phases, and an intermediate substrate110may not provide significant advantages.
However, for a case where the condenser region is large, significant differences in pressure between the liquid phase and the vapor phase could exist and accordingly, the condenser region could conceivably benefit from at least one intermediate substrate110with microstructures112, whose effect is to increase the aspect ratio of the wicking structure220, thereby shortening the meniscus180length and thus increasing the amount of pressure that the meniscus180can support, as described above for the evaporation region. FIG.5(F)shows a profile view of a condenser region of an illustrative embodiment, where the wicking structure220is in direct communication with the vapor chamber300, where the wicking structure220is wetted by a liquid140. In some embodiments, there may not be a significant difference in pressure between the vapor chamber300and the liquid140in the wicking structure220, and an intermediate substrate110may not provide significant advantages. However, for a case where the condenser region is large, a significant pressure difference between the liquid phase and the vapor phase could exist and accordingly, the condenser region could conceivably benefit from microstructures112whose effect is to increase the aspect ratio of the wicking structure220and increase the amount of pressure that the meniscus180can support, as described above for the evaporation region. FIG.6shows pressure profiles as a function of axial location for an illustrative embodiment of a thermal ground plane. The curves show the pressure of the vapor phase in the vapor chamber300and the liquid phase in the wicking structure220. In an illustrative embodiment, the maximum pressure difference between the liquid and vapor phases could occur in the evaporator region. In an illustrative embodiment, the minimum pressure difference between the vapor and liquid phases could occur in the condenser region. Wicking structures220may be comprised of channels, pillars, or other structures. If these structures are formed by wet etching or other fabrication processes, they may be comprised of features with low aspect ratios. Earlier wicking structures220could be comprised of low-aspect ratio channels or pillars, and did not include an intermediate structure. In these earlier low-aspect ratio wicking structures220, a large pressure difference between the liquid phase and the vapor phase could cause the meniscus180between the two phases to extend towards the bottom of the channel, thereby decreasing the amount of liquid140occupying the channel and significantly decreasing the mass flow of the liquid. This in turn could cause poor heat transfer performance and possible dry out of the wicking structure220. As shown inFIG.6, the highest vapor pressure typically occurs in the evaporator region, and the vapor pressure, due to viscous losses, increases with the amount of heat transferred by the TGP. Further, it may be desirable to make the overall thickness of the thermal ground plane as thin as practically possible, which might be accomplished by making the vapor chamber300relatively thin. A relatively thin vapor chamber300could cause substantial viscous losses of the vapor flowing in the vapor chamber300from the evaporator through the adiabatic region to the condenser. High viscous losses of vapor flowing in the vapor chamber300can also contribute to a large difference in pressure between the liquid and vapor phases in the evaporator. 
An intermediate substrate110structure, which increases the aspect ratio of the wicking structure220, as described above, has the effect of decreasing the meniscus180length of the liquid/vapor interface, making the radius of curvature smaller in this part of the wicking structure220, thereby making the meniscus180more resistant to high pressure differences (FIG.5(B)) and making the TGP capable of supporting much higher pressures than previous implementations while minimizing viscous losses. Accordingly, at least one region of the intermediate substrate may have a plurality of microstructures that are interleaved with at least one region of the wicking structure to form high aspect ratio wicking structures, in at least one region of the thermal ground plane. Furthermore, at least one intermediate substrate may be in close proximity to the wicking structure, to isolate the liquid phase and vapor phase, in at least one region of the thermal ground plane. Supporting higher pressure differences between the liquid phase and the vapor phase allows for more heat to be transferred without drying out the wicking structure220, as well as making the TGP more resistant to viscous losses resulting from thinner designs. Thus, the addition of the intermediate substrate110may achieve both higher heat transfer and thinner ground planes simultaneously. In some embodiments, the thermal ground plane could be filled with a specified mass of saturated liquid/vapor mixture such that the difference in pressure between the vapor and liquid phases in the condenser is well controlled. In some embodiments the mass of the liquid/vapor mixture could be chosen so that part of the condenser region could contain liquid at a higher pressure than adjacent vapor. FIG.7shows temperature profiles as a function of axial location for an illustrative embodiment of a thermal ground plane, under heat transfer rates of Q=10, 20, and 30 W. In this illustrative embodiment, the evaporator is in the center, and there is an adiabatic region and a condenser region on each side. The results demonstrate the utility of an embodiment of a titanium thermal ground plane with an intermediate substrate110. FIG.8compares maximum heat transfer for titanium-based thermal ground planes for different vapor temperatures. The comparison is between an earlier titanium thermal ground plane, and an illustrative embodiment of the current thermal ground plane using an intermediate substrate110. An earlier titanium thermal ground plane with similar dimensions to embodiments tested forFIG.7might only be capable of transferring about 10 W of thermal energy before the wicking structure220exhibits dry out at an operating vapor temperature of 30° C., compared to 30 W for an illustrative embodiment of the current thermal ground plane using an intermediate substrate110. Similarly, as vapor temperature is increased, the maximum thermal energy transferred for an illustrative embodiment of the current thermal ground plane is increased to 35 W and 40 W, for operating vapor temperatures of 50° C. and 70° C., respectively. In all cases, the maximum thermal energy transferred for an illustrative embodiment of the current thermal ground plane is 15-20 W more than what is observed from an earlier thermal ground plane. FIG.9illustrates a flow chart of the formation of one or more embodiments of the current Ti-based TGP in accordance with one or more embodiments of the present invention.
In some embodiments, thermal energy can be transported by forming a plurality of metal microstructures in a metal substrate of the thermal ground plane to form a wicking structure in step S100. In step S110, a vapor chamber may be formed. In step S120, at least one structure and/or at least one microstructure is formed in an intermediate substrate that is in communication with the wicking structure and vapor chamber, wherein the intermediate substrate is shaped and positioned to increase the effective aspect ratio of the wicking structure in at least one region of the wicking structure. In step S130, a fluid may be contained within the thermal ground plane. In step S140, thermal energy may be transported from at least one region of the metal substrate to at least one other region of the metal substrate by fluid motion driven by capillary forces, resulting from the plurality of microstructures. FIG.10illustrates a flow chart of the formation of one or more embodiments of the current Ti-based TGP in accordance with one or more embodiments of the present invention. In some embodiments a metal-based thermal ground plane can be formed by the following process. In step S200, a first substrate is formed. In step S210, a second substrate is formed. In step S220, at least one intermediate substrate is formed. In step S230, the substrates are attached. In step S240, the thermal ground plane is formed. FIG.11shows illustrative embodiments of a wicking structure220in communication with an intermediate substrate110. The effective aspect ratio is defined as the ratio of the effective height, h, to the effective channel width w: (A) shows an illustrative embodiment where the microstructures112of the intermediate substrate110are interleaved with the wicking structure220, (B) shows an alternative embodiment where the microstructures112of the intermediate substrate110are positioned above the wicking structure220. The illustrative embodiments shown inFIG.11could provide effective aspect ratios that are higher than what might be obtained by the wicking structure220without including an intermediate substrate110. For example, if the wicking structure220is formed by a wet etching or other isotropic etching process, the aspect ratio h/w may be less than unity, or substantially less than unity. Using an intermediate substrate110, higher effective aspect ratios of the fluid channel between the wicking structure220and the intermediate substrate110may be achieved. For example, in some embodiments, h/w>1 wherein h is the effective height (or depth) of the fluid channel and w is the width. FIG.11(B)shows an alternative embodiment, which could have advantages when relatively low viscous losses are desirable. FIG.12shows an illustrative embodiment where the intermediate substrate310comprises a plurality of microstructures312that are interleaved with the wicking structure320. The interleaved microstructures312are mechanically connected to cross-members330. In some embodiments, the interleaving microstructures312and the cross-members330are formed from a single substrate. The cross-members330can be formed from a metal or other material. In some embodiments, metal cross-members330could be comprised of titanium, copper, aluminum, stainless steel, or other metal. In some embodiments, the interleaving microstructures312and cross-members330can be formed by chemical etching metal foil, such as a titanium metal foil, copper metal foil, stainless steel metal foil, aluminum metal foil, and the like.
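A minimal numeric sketch of the effective-aspect-ratio idea defined above (assumed dimensions, not from the disclosure): a shallow wet-etched groove alone has h/w below unity, while interleaving a finger from the intermediate substrate splits the groove into two narrower, effectively higher aspect ratio channels.

```python
# Sketch with assumed dimensions: effective aspect ratio h/w of a wicking
# channel, with and without an interleaved intermediate-substrate finger.

h = 100e-6    # groove depth, m (assumed)
w_g = 250e-6  # wet-etched groove width, m (assumed, ~2.5x the depth)
w_f = 150e-6  # interleaved finger width, m (assumed)

print(f"groove alone:      h/w = {h / w_g:.2f}")    # below unity
w_eff = (w_g - w_f) / 2.0  # each of the two gaps flanking the finger
print(f"with interleaving: h/w = {h / w_eff:.2f}")  # above unity
```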
In some embodiments, cross-members330can provide mechanical support to the interleaved microstructures312. In some embodiments, cross-members330can transfer thermal energy through thermal conduction between interleaving microstructures312or throughout the thermal ground plane. In some embodiments, the cross-members330can provide a wetting surface so that liquid can be transported through capillary forces along the cross-members330. This can provide fluid communication between interleaving microstructures. In some embodiments, cross-members330can provide surface area to facilitate condensation of vapor. FIG.13shows an illustrative embodiment where the intermediate substrate410comprises a plurality of cross-members430. Wicking structure412is formed from metal substrate420.FIG.13(A)shows an illustrative embodiment wherein microstructures414are in communication with cross-members430. In an illustrative embodiment, microstructures414and cross-members430can be positioned directly above the wicking structure412.FIG.13(B)shows an illustrative embodiment where cross-members430are positioned directly above the wicking structure412. In some embodiments, an intermediate substrate410could be configured with cross-members430and could be positioned in the condenser region of the thermal ground plane. In some embodiments, an intermediate substrate410could be configured with cross-members430and could be positioned in the adiabatic region of the thermal ground plane. In some embodiments, an intermediate substrate410could be configured with cross-members430and could be positioned in the evaporator region of the thermal ground plane. FIG.14shows a profile view of an illustrative embodiment where a vapor chamber can be comprised of one or more recessed regions540,542and544. Viscous flow of vapor in the vapor chamber can be described by Poiseuille flow, where for a given pressure drop, density and viscosity, the mass flow rate of vapor scales with the cube of the vapor chamber height ˜h3. For very thin vapor chambers, viscous losses can be substantial and limit the overall performance of the thermal ground plane. In some embodiments, vapor chambers300can be configured with one or more recessed regions540, thereby increasing the effective height of the vapor chamber, h, in chosen regions of the thermal ground plane. Since the mass flow rate of vapor can vary with h3, increasing the height of the vapor chamber in chosen regions can substantially increase the mass flow rate of vapor through the chamber, for a given pressure drop. In some embodiments, the one or more recessed regions544can be formed in the metal substrate and located adjacent to the wicking structure. In some embodiments, the one or more recessed regions540and542can be formed in the backplane530. In some embodiments, the one or more recessed regions can be formed in a combination of the metal substrate and backplane. In some embodiments, recessed regions can be configured to be in communication with other recessed regions, in order to minimize viscous losses in the vapor chamber. In some embodiments, recessed region540could be aligned with recessed region544, so that the overall depth of the vapor chamber in that region is increased by the combination of recessed region540and recessed region544. Vapor mass flow rate can vary with the vapor chamber height cubed, ˜h3. Therefore, the combination of recessed region540and recessed region544can have a non-linear effect on reducing viscous losses, and thereby increase overall mass flow rate.
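The ~h3 scaling quoted above follows from laminar (Poiseuille) flow between closely spaced plates, where the mass flow rate per unit width is ṁ′ = ρh³ΔP/(12μL). The following sketch is illustrative, not from the disclosure; the vapor properties, pressure drop, and geometry are assumed.

```python
# Hedged sketch of the ~h^3 scaling of vapor flow in a thin chamber,
# modeled as laminar flow between parallel plates.

def vapor_mass_flow_per_width(h, dp, length, rho=0.03, mu=1.0e-5):
    """Poiseuille flow between plates: mdot' = rho * h**3 * dp / (12 * mu * L)."""
    return rho * h**3 * dp / (12.0 * mu * length)

base = vapor_mass_flow_per_width(h=200e-6, dp=100.0, length=0.05)
recessed = vapor_mass_flow_per_width(h=400e-6, dp=100.0, length=0.05)
print(f"doubling the chamber height raises vapor flow by {recessed / base:.0f}x")  # 2^3 = 8
```

This cubic sensitivity is why aligning recessed regions to deepen the chamber locally has the non-linear benefit described above.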
FIGS.15and16provide an example of an intermediate substrate600that is an orthogonal lattice having apertures606formed within the intermediate substrate. The intermediate substrate600may be formed by standard wet chemical etching techniques. Other processes that can be used include dry etching, micromachining, sawing, or other types of processes that allow for formation of the orthogonal lattice type of intermediate substrate600. The orthogonal lattice of the intermediate substrate600is formed by cross members610that are orthogonal with respect to each other. The cross members610of the intermediate substrate are formed out of a monolithic material. The material of the intermediate substrate may be titanium, aluminum, copper, or stainless steel. The cross members610are formed by taking a continuous sheet of material and selectively removing the material to a depth D1or D2which represents a distance inward from the outer surfaces620that define the thickness of the material. As viewed inFIGS.15and16, the removed material from between the cross members610on the upper side of the intermediate substrate600extends to a consistent depth D1across the entire intermediate substrate600. Likewise, the removed material from between the cross members610on the lower side of the intermediate substrate600extends to a consistent depth D2across the entire intermediate substrate600. Frequently, the depths D1and D2are the same. D1and D2may be as small as 50 μm. In the case that D1is the same as D2, those distances must be at least half of the thickness T1of the intermediate substrate600so that the removed material areas between the cross members610intersect to form the apertures606bounded by the cross members610. The removed areas between adjacent cross members610form channels611,613in the location of the removed material and those channels611,613intersect to form the apertures606. When D1and D2are more than half of the thickness T1of the intermediate substrate600, the thickness of the cross members610is decreased. As shown inFIGS.15and16, the cross members610are half of the thickness T1of the intermediate substrate600. The intermediate substrate600being formed as described has one set of cross members610in one plane running one direction and another set of cross members610opposite to the first set in another plane and running orthogonal to the first set of cross members610. The distances between the cross members610are indicated as W1for the upper cross members610and W2for the lower cross members inFIG.16. These distances W1and W2define the lateral widths of the apertures606in orthogonal directions and W1often is equal to W2as shown inFIGS.15and16. The lateral widths W1and W2also define the widths of orthogonal channels611,613respectively that run in opposite planes within the intermediate substrate600. Thus, the apertures606form square openings when viewed from directly above. The square nature of the apertures606is shown inFIG.21which shows the intermediate layer600viewed from directly above. Multiple intermediate substrates600may be used to provide for a large aspect ratio structure. The intermediate substrates600may be located in various areas of the thermal ground plane within the vapor chamber300. Often, the intermediate substrates600as shown inFIGS.15and16are used in the evaporator region of the thermal ground plane.FIG.17shows multiple intermediate substrates600overlying the wicking structure220in the metal substrate210. This may be two or more intermediate substrates600.
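The etch-depth constraint described above can be captured in a small sketch (assumed dimensions, not from the disclosure): apertures form only where the upper channels (depth D1) and lower channels (depth D2) meet through the foil thickness T1, i.e. where D1+D2 is at least T1.

```python
# Sketch of the lattice geometry constraint for FIGS. 15-16 (assumed values):
# opposing etched channels must intersect through the foil for apertures to form.

t1 = 100e-6  # foil thickness T1, m (assumed)

for d1, d2 in ((50e-6, 50e-6), (40e-6, 40e-6), (60e-6, 60e-6)):
    forms = d1 + d2 >= t1  # channels meet, so apertures form
    print(f"D1={d1 * 1e6:3.0f} um, D2={d2 * 1e6:3.0f} um: apertures "
          f"{'form' if forms else 'do not form'}; remaining cross-member "
          f"thicknesses {(t1 - d1) * 1e6:3.0f}/{(t1 - d2) * 1e6:3.0f} um")
```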
As shown inFIG.17, two intermediate substrates600are stacked together and are in direct overlying contact with wicking structure220on the metal substrate210. The wicking structure220has grooves622that provide wicking of the working fluid through capillary action and the grooves622are separated by ridges619located between the grooves622. As discussed above, increasing the aspect ratio may enhance capillary action. Using the intermediate substrate600in combination with the wicking structure220or using multiple intermediate substrates600in combination with the wicking structure220can increase the aspect ratio and thereby increase capillary action. In the case of using a single intermediate substrate600with the wicking structure220, the apertures606overlying the grooves622cooperate to define a high aspect ratio fluid path whereby the working fluid must travel through the grooves622and through the apertures606so that the working fluid may evaporate into the vapor chamber300. Generally, the fluid path is the path that the working fluid must take from the metal substrate210to the vapor chamber. In the case shown inFIGS.15and16, the working fluid contacts the metal substrate210at the wicking structure220. Although the enclosure comprising the vapor chamber300is not shown inFIGS.17-20and22-24, the vapor chamber300is directly above the intermediate substrates600and opposite the wicking structure220. The intermediate substrates600are contained within the vapor chamber300. A single intermediate substrate600overlying the wicking structure220enhances the aspect ratio beyond that of just the wicking structure220. In some cases, a single intermediate substrate600may enhance the aspect ratio to a desired level. It is contemplated that the intermediate substrate600may itself serve as the wicking structure220and potentially eliminate the need for the wicking structure220on the metal substrate210. In addition to the intermediate substrate or layers600, microstructures112in the form of elongate members may be placed within the grooves622of the wicking structure220to enhance the aspect ratio within the wicking structure and this is shown inFIG.17. These microstructures112may be attached to the intermediate substrate600nearest the wicking structure220so that the fit of multiple microstructures is complementary with the grooves622as shown inFIG.17. In this way the intermediate substrate600spaces the microstructures112to fit within the grooves622of the wicking structure220. The cross members610connected to the microstructures112may be parallel to the microstructures as shown inFIG.17, or the cross members610connected to the microstructures112may be perpendicular to the microstructures112. Thus, the microstructures112fit in a complementary manner to the grooves622which are microstructures defining the wicking structure220. In either configuration, the intermediate substrate600may be used to set the pitch or spacing of the microstructures112. The manner in which the intermediate substrate600sets the spacing of the microstructures facilitates movement of fluid by capillary forces in at least two orthogonal directions along microstructures112and along said intermediate substrate600. Further, the intermediate substrate600facilitates fluid being driven by capillary forces in at least two orthogonal directions within the intermediate substrate600itself via its own channels611,613.
It is contemplated that the cross members610in the intermediate substrate need not be orthogonal and can be at any oblique angle with respect to each other. Such a non-orthogonal configuration will still cause capillary forces in two directions adjacent to the cross members610. This multidirectional wicking may facilitate using just an intermediate substrate for the wicking structure and may obviate the need for the microstructures112or the grooves622. FIG.17also shows that a second intermediate substrate600may be positioned above a first intermediate substrate600that is adjacent to the wicking structure220. Because the intermediate substrates600are made of a heat conducting metal, they add to the heat dissipation characteristics of the thermal ground plane and help thoroughly conduct heat into the working fluid. When the intermediate substrates600are stacked, it may be done so that the apertures606are aligned in a vertical direction as shown inFIGS.17and18. In other words, the apertures606are plumb with respect to each other. The alignment of multiple intermediate substrates600stacked with the apertures606aligned will leave openings as shown inFIG.21.FIGS.19and20further clarify the aligned nature of the apertures606in the stacked intermediate substrates600above the wicking structure220. The aspect ratio may be further enhanced by shifting adjacently stacked intermediate substrates600by an offset O1with respect to each other so that the apertures606are not aligned in a vertical direction. This is shown inFIG.22where three intermediate substrates600are used, although this can also be done with as few as two intermediate substrates600. Having multiple intermediate substrates600defines a fluid path that is through the apertures606in the intermediate substrates and into the vapor chamber300. It is contemplated that using only the two intermediate substrates600nearest the wicking structure220as shown inFIGS.22-24, which are offset with respect to each other, may provide the enhanced aspect ratio through the apertures606necessary for some applications. As can be seen, O1indicates a shift between the lowermost intermediate substrate600adjacent to the wicking structure220and the intermediate substrate immediately above it. The offset O1between intermediate substrates narrows the fluid path for the working fluid to less than the narrowest dimension of the apertures606in either of the intermediate substrates600. This narrower effective aperture width W3makes the narrowest path through the intermediate substrates600smaller than the smallest dimension of the apertures606in the intermediate substrates600and thereby increases the aspect ratio for the fluid path. The narrowness of the fluid path through the intermediate substrates600and the effective aperture width W3can be chosen by the amount of the offset O1between intermediate substrates600. The greater the offset O1is made to be, the narrower the effective aperture width W3becomes and the narrower the fluid path. It is also contemplated that the apertures606may be other than square shapes between cross members610. For instance, round holes in adjacent intermediate substrates could be shifted by an offset to shrink the effective diameter of the holes and thereby increase the aspect ratio by making the holes effectively smaller than the diameter of adjacent holes in adjacent intermediate substrates.
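The relationship between the offset O1 and the effective aperture width W3 can be sketched geometrically. This is an assumption-laden illustration, not from the disclosure: it takes the simplest square-aperture case in which the straight-through opening left by two identical offset lattices is W3 = W1 − O1.

```python
# Sketch (assumed geometry): effective aperture width left by two stacked,
# identical lattices shifted laterally by O1, taken here as W3 = W1 - O1.

def effective_aperture_width(w_aperture, offset):
    """Narrowest straight-through opening of two offset, identical lattices."""
    return max(w_aperture - offset, 0.0)

w1 = 200e-6  # aperture lateral width W1, m (assumed)
for o1 in (0.0, 50e-6, 100e-6, 150e-6):
    w3 = effective_aperture_width(w1, o1)
    print(f"offset O1 = {o1 * 1e6:5.1f} um -> W3 = {w3 * 1e6:5.1f} um")
```

Consistent with the text, the larger the offset O1, the narrower the effective aperture width W3 and the higher the aspect ratio of the fluid path.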
Offsetting intermediate substrates600as discussed above may serve the purpose of making the thermal ground plane more effective than certain fabrication processes would otherwise allow. Certain fabrication processes may only provide the ability to create apertures606in the intermediate substrates down to a particular minimum size. Therefore, having two intermediate substrates600adjacent to each other and having their apertures606offset could provide an effectively smaller aperture than the machining process producing the individual intermediate substrates would allow. It should also be noted that the offset O1need not be orthogonal as described above; the offset O1can be in any direction that shifts the alignment of apertures606between stacked intermediate substrates. FIG.26shows the overall exploded assembly of the intermediate substrates600as they are used in the thermal ground plane. The backplane120that defines the vapor chamber300is opposite the metal substrate210and the wicking structure220thereon. The intermediate substrates600are held within the vapor chamber300. The backplane120is hermetically sealed to metal substrate210to define the vapor chamber300. When that hermetic seal is made that defines the vapor chamber300, non-condensable gasses must be removed. During the evacuation process to remove the non-condensable gasses, there may be tremendous force exerted on the backplane120by atmospheric pressure. To prevent collapse of the backplane120, supports660may extend from the backplane120onto the intermediate substrate600nearest the backplane120. The intermediate substrates600are located between the supports660and the wicking structure220on the metal substrate210when the assembly is complete. FIG.27is an example of a configuration for the thermal ground plane wherein the intermediate substrates600themselves are the wicking structure. In this configuration, the intermediate substrates600provide enough wicking through the channels611,613between the cross members610. As can be seen inFIG.27the offset O1may enhance the aspect ratio of the fluid path through the apertures as shown above. In this configuration, the step of providing a wicking structure in the metal substrate210may be eliminated because the intermediate substrates600provide suitable capillary action in multiple directions through their channels611,613. The surface617of the metal substrate210in communication with the vapor chamber is flat and includes no wicking structure of its own. Thus, wicking of the working fluid throughout the ground plane can be tailored to a desirable level while using a simpler flat metal substrate as opposed to a metal substrate210having a wicking structure such as grooves622. Thus, the thermal ground plane shown inFIG.27has no wicking structure in the metal substrate210that conducts heat into or out of the thermal ground plane and the intermediate substrates600are the only wicking structure. As such, the thermal ground plane shown inFIG.27may provide a cost savings by having no wicking structure in the metal substrate210or in the backplane120. This construction inFIG.27illustrates having no wicking structure in any of the outermost structures (namely, the backplane120and the metal substrate210), yet provides a highly customizable construction through configuration of the intermediate substrates600.
Once a thermal ground plane is configured with its intermediate substrates600to have the desired wicking properties, the intermediate substrates600can serve as the only wicking structure in the entire thermal ground plane. While various details have been described in conjunction with the exemplary implementations outlined above, various alternatives, modifications, variations, improvements, and/or substantial equivalents, whether known or that are or may be presently unforeseen, may become apparent upon reviewing the foregoing disclosure. Accordingly, the exemplary implementations set forth above are intended to be illustrative, not limiting.
11859915
DETAILED DESCRIPTION It will be readily understood that the components of the present embodiments, as generally described and illustrated in the Figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the apparatus, system, and method, as presented in the Figures, is not intended to limit the scope of the embodiments, as claimed, but is merely representative of selected embodiments. Reference throughout this specification to “a select embodiment,” “one embodiment,” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “a select embodiment,” “in one embodiment,” or “in an embodiment” in various places throughout this specification are not necessarily referring to the same embodiment. The illustrated embodiments will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and processes that are consistent with the embodiments as claimed herein. With reference toFIG.1, a perspective view of a heat exchanger assembly (100) is provided. As shown, a secondary medium is illustrated in this example as a heat exchanger (110) provided with a base (112) and an attached fin field (114). A fluid mover (120) is provided in the assembly (100) to force fluid, e.g. air, through the fin field (114). In this assembly, the fluid mover (120) is a fan impeller, although this embodiment should not be considered limiting. It is understood in the art that motorized impeller fans have a high volumetric flow rate and pressure. One of the challenges with this type of fluid mover is the attachment to the heat exchanger (110). A blade assembly within the motorized impeller fan (120) is subject to rotation while a corresponding motor assembly is stationary. As shown herein, a housing (150) is provided and positioned relative to the heat exchanger (110) and the fluid mover (120). The housing (150) is shown with a plurality of fenestrations (152), e.g. openings. The fenestrations (152) are shown to have an arcuate shape, e.g. round or circular, although this shape should not be considered limiting. In one embodiment, the fenestrations (152) may have a different shape or configuration, as shown and described below inFIGS.11A,11B, and11C. The housing (150) is positioned to cover or otherwise extend over the fluid mover (120). An attachment mechanism (154) is shown to secure or otherwise attach the fluid mover (120) to the housing (150). The quantity of attachment mechanisms (154) should not be considered limiting. In addition, although the attachment mechanism (154) is shown as a screw, this embodiment of the attachment mechanism (154) should not be considered limiting, and in one embodiment, an alternative form of the attachment mechanism may be employed to secure or otherwise support the fluid mover (120) with respect to the housing (150). The housing (150) is shown secured or otherwise attached to the heat exchanger (110) through a plenum (130). In the example shown herein, the plenum (130) is attached to the base (112) of the heat exchanger (110) through an attachment mechanism (132). The quantity and form of the attachment mechanisms (132) should not be considered limiting.
In one embodiment, an alternative mechanism and/or an alternative quantity of mechanisms may be utilized. Accordingly, as shown herein, the housing (150) is positioned relative to the plenum (130) which is secured to the heat exchanger (110) and relative to the fluid mover (120). As the fluid mover (120) is subject to rotation, a fluid flow and corresponding exhaust is formed. The housing fenestrations (152) provide an avenue for the fluid to flow. The arrows (160)-(168) are provided to illustrate directional flow of the fluid in the form of egress through the housing fenestrations (152). It is understood that the fenestrations (152) may function as egress ports to exhaust fluid moved by the fluid mover (120) away from the corresponding secondary medium (110) in multiple directions (160)-(168). In one embodiment, the functionality of the fenestrations (152) and the fluid mover may be inverted, such that the fenestrations function as ingress ports to direct fluid flow toward the secondary medium (110). Whether functioning as ingress ports or egress ports, the housing fenestrations (152) function as a safety barrier from the fluid mover (120). Accordingly, the housing (150), together with the associated fenestrations (152), functions as a support to the impeller, while enabling a multi-direction flow of fluid with respect to the corresponding secondary medium (110). Referring toFIG.2, a perspective view (200) of the housing shown and described inFIG.1is provided. As shown, a planar region or surface (210) is provided with a plurality of fenestrations (212) placed across the surface (210). The fenestrations (212) are shown herein with an arcuate shape, although the shape of the fenestrations should not be considered limiting. A plurality of vertical or near vertical walls is shown in communication with the surface (210). Walls (230) and (240) are shown extending from oppositely disposed walls (214) and (216), respectively, and in communication with the surface (210). The walls (230) and (240) are parallel to each other, and both orthogonal to the surface (210). Each of the walls (230) and (240) has a length (232) and (242), respectively, extending from the surface (210). Walls (250) and (260) are shown extending from oppositely disposed walls (218) and (220), respectively, and in communication with the surface (210). The walls (250) and (260) are parallel to each other and orthogonal to the surface (210) and orthogonal to walls (230) and (240). Each of the walls (250) and (260) has a length (252) and (262), respectively, extending from the surface (210). As shown herein, the lengths (252) and (262) are relatively or substantially equal and less than the lengths (232) and (242), which are also relatively or substantially equal in length. Accordingly, as shown herein, the housing is provided with a plurality of walls having a plurality of fenestrations positioned across each of the walls. The plenum is provided in the housing assembly, although with limited visibility in the perspective view of the assembly shown inFIG.1. Referring toFIG.3, a perspective view (300) of one embodiment of a plenum for adaptation with the assembly ofFIG.1is shown. The plenum functions within the assembly to position the fluid mover and direct a corresponding fluid flow toward or away from a proximally positioned secondary medium, such as a heat exchanger, printed circuit board, computer chassis, etc. As shown, the plenum (310) has a shape commensurate with a shape of a secondary medium.
In one embodiment, the secondary medium may have a different geometric shape, and the shape of the plenum would be modified to match or correspond to the shape or profile of the assembly. As shown, the plenum (310) has a planar surface (320) with a primary aperture (322), also referred to herein as a centrally positioned aperture (322). In the embodiment shown herein, the primary aperture has a four-sided shape, which may be a square or rectangular shaped opening. Once assembled, the aperture (322) functions to receive the fluid mover. In one embodiment, the aperture (322) is sized to ensure that the received fluid mover does not interfere or cause friction with the perimeter (324) of the aperture (322). Accordingly, the aperture (322) is sized to receive the fluid mover. A plurality of walls is provided extending from and in communication with the planar surface (320), including a first pair of walls (330) and (340), and a second pair of walls (350) and (360), respectively. The first pair of walls (330) and (340) has a length (332) and (342), respectively. In one embodiment, the lengths (332) and (342) are the same. Similarly, in one embodiment, the lengths (332) and (342) are the same or relatively equal to the lengths (232) and (242), respectively. As shown, the walls (330) and (340) are provided with secondary apertures (336) and (346), respectively. Each of the secondary apertures (336) and (346) is configured to receive an attachment mechanism (not shown) to secure the plenum (310) to the proximally positioned secondary medium, as shown inFIG.1. The second pair of walls (350) and (360) has a length (352) and (362), respectively. In one embodiment, the lengths (352) and (362) are the same. Similarly, in one embodiment, the lengths (352) and (362) are the same or relatively equal to the length (252) and (262), respectively. In one embodiment, the second pair of walls (350) and (360) is referred to as a lip, as shown inFIG.1at (190) with limited visibility. Once assembled proximal to the secondary medium, the lip (350) and (360) function to prevent premature egress of the fluid flow from the secondary medium, which in the example of a heat exchanger prevents premature egress of fluid flow from the fin field (114). In one embodiment, the length (352) and (362) may be optimized through computational fluid dynamics (CFD) depending on the heat exchanger fin height and fan curve. It is understood that the size and shape of the primary aperture (322) of the plenum (310) should not be considered limiting. Referring toFIG.4, a perspective view (400) of one embodiment of a plenum for adaptation with the assembly ofFIG.1is shown. The plenum (410) has a shape commensurate with the shape of the assembly (100) and corresponding heat exchanger (110). In one embodiment, the assembly and corresponding heat exchanger may have a different geometric shape, and the shape of the plenum would be modified to match that of the assembly. As shown, the plenum (410) has a planar surface (420) with a primary aperture (422), also referred to as a centrally positioned aperture (422). In the embodiment shown herein, the primary aperture has an arcuate shape, which may be a circle or elliptical shaped opening. Once assembled, the central aperture (422) functions to receive the fluid mover. In one embodiment, the aperture (422) is sized to ensure that the received fluid mover does not interfere or cause friction with the perimeter (424) of the central aperture (422).
Accordingly, the aperture is sized to receive the fluid mover. Similar to the plenum shown inFIG.3, a plurality of walls is provided extending from and in communication with the planar surface (420), including a first pair of walls (430) and (440), and a second pair of walls (450) and (460), respectively. The first pair of walls (430) and (440) has a length (432) and (442), respectively. In one embodiment, the lengths (432) and (442) are the same. Similarly, in one embodiment, the lengths (432) and (442) are the same or relatively equal to the lengths (232) and (242), respectively. As shown, the walls (430) and (440) are provided with secondary apertures (436) and (446), respectively. Each of the secondary apertures (436) and (446) is configured to receive an attachment mechanism (not shown) to secure the plenum (410) to a proximally positioned secondary medium, as shown inFIG.1. The second pair of walls (450) and (460) has a length (452) and (462), respectively. In one embodiment, the lengths (452) and (462) are the same. Similarly, in one embodiment, the lengths (452) and (462) are the same or relatively equal to the lengths (252) and (262), respectively. In one embodiment, the second pair of walls (450) and (460) is referred to as a lip, as shown inFIG.1at (190) with limited visibility. When assembled proximal to the heat exchanger, the lip (450) and (460) function to prevent premature egress of the fluid flow from the fin field (114). In one embodiment, the lengths (452) and (462) may be optimized through computational fluid dynamics (CFD) depending on the heat exchanger fin height and fan curve. Accordingly, the plenum (410) has similar functionality to the plenum shown and described inFIG.3, with a different geometric characteristic of the primary aperture (422). Referring toFIG.5, a perspective view (500) of an assembly of the plenum and an associated fan impeller is shown. In the embodiment shown herein, the plenum (510) includes the properties of the plenum (310) shown and described inFIG.3. The plenum (510) includes a primary opening or aperture (520), which is shown receiving or in communication with a fluid mover (570). As shown and described inFIG.1, the fluid mover (570) is secured to the housing (not shown). A plurality of secondary apertures (572), (574), and (576) are shown provided with the fluid mover (570). The secondary apertures (572), (574), and (576) are configured to receive a corresponding attachment mechanism to secure the fluid mover (570) to the housing (150). When operating, the fluid mover (570) is subject to rotation within the aperture (520). In one embodiment and once secured, the fluid mover (570) is effectively suspended from the housing (150). Although the secondary openings (572), (574), and (576) are configured to receive screws, in one embodiment, any mechanical fastening mechanism or technique may be employed, including but not limited to welding or brazing. Accordingly, the assembly shown herein illustrates positioning and receipt of the fluid mover within the four sided primary aperture (520) of the plenum (510). Referring toFIG.6, a perspective view (600) of an assembly of the plenum and an associated fluid mover is shown. In the embodiment shown herein, the plenum (610) includes the properties of the plenum (410) shown and described inFIG.4. The plenum (610) includes a primary opening or aperture (620), which is shown receiving or in communication with a fluid mover (670).
As shown and described inFIG.1, the fluid mover (670) is secured to the housing (not shown). A plurality of secondary apertures (672), (674), and (676) are shown provided with the fluid mover (670). The secondary apertures (672), (674), and (676) are configured to receive a corresponding attachment mechanism to secure the fluid mover (670) to the housing. When operating, the fluid mover (670) is subject to rotation within the aperture (620). In one embodiment and once secured, the fluid mover (670) is effectively suspended from the housing. Although the secondary openings (672), (674), and (676) are configured to receive screws, in one embodiment, any mechanical fastening mechanism or technique may be employed, including but not limited to welding or brazing. Similarly, in one embodiment a different quantity of secondary openings may be provided. Accordingly, the assembly shown herein illustrates positioning and receipt of the fluid mover within the circular or arcuate shaped primary aperture (620) of the plenum (610). Referring toFIG.7, a schematic diagram (700) is provided to illustrate a plenum (730) and positioning relative to the secondary medium in the form of a heat exchanger (710). As shown, the plenum (730), also referred to herein as a plenum body, is positioned as an interface between the heat exchanger (710) and the housing (not shown in this illustration). The plenum (730) is shown attached or otherwise secured to the base (712) of the heat exchanger (710) through an attachment mechanism (718). The quantity and form of the attachment mechanism (718) should not be considered limiting. In one embodiment, an alternative mechanism and/or an alternative quantity of mechanisms may be utilized, including but not limited to solder, welding or brazing of the plenum (730) to the base (712). The plenum (730) is shown with an opening (750), hereinafter referred to as a primary opening. As shown, the primary opening (750) has an arcuate shape. It is understood that the shape of the opening should not be considered limiting, and in one embodiment, the primary opening (750) may have a different size and/or shape, as shown inFIGS.3and5. The primary opening (750) is sized and shaped to receive the impeller fan (not shown in this illustration). The plenum body (730) is provided with a wall or surface (732), referred to herein as a primary wall, surrounding the primary opening (750). As shown herein, the wall (732) and the opening (750) are co-planar, or in one embodiment relatively co-planar. In addition, the plenum body (730) is provided with a plurality of walls or surfaces extended perpendicular or relatively perpendicular with respect to the wall (732). A first set of secondary walls (734a) and (734b) are shown positioned perpendicular or relatively perpendicular to and in communication with the primary wall (732). In addition, a second set of secondary walls (736a) and (736b) are shown positioned in communication with the primary wall (732) and also orthogonal to both the primary wall (732) and the first set of secondary walls (734a) and (734b). Accordingly, the plenum body (730) is comprised of an arrangement of walls and a primary opening. As shown herein, the first set of secondary walls (734a) and (734b) are secured to the base (712) of the heat exchanger (710) via the attachment mechanisms (718). The first set of secondary walls (734a) and (734b) are parallel or relatively parallel to the fin field (714) of the heat exchanger (710).
As shown, the second set of secondary walls (736a) and (736b) do not extend to the base (712) of the heat exchanger (710). Rather, the secondary walls (736a) and (736b) extend over a select length (738) of the fin field (714). The length (738), which is also referred to herein as a lip, functions to prevent fluid flow egress from the entrance to the fin field (714). The size (738) of the lip (736a) and (736b) may vary, with the size determined through optimization. When positioned relative to the heat exchanger (710), the primary opening (750) of the plenum (730) functions to receive an impeller (not shown). A gap (770) is formed between the top (716) of the fin field (714) and the wall (732). In one embodiment, the minimum length of the lip (738) is equal to a size of the gap (770). Similarly, in one embodiment, the length of the lip (738) is determined through optimization. Accordingly, as shown herein, the plenum body (730) is sized and configured through optimization in order to receive the fluid mover and obtain an optimal fluid flow. Referring toFIG.8, a sectional view (800) is provided to illustrate a positioning of the plenum with respect to the secondary medium in the form of a heat exchanger. As shown, the wall (834a) of the plenum body (830) is secured to the base (812) of the heat exchanger (810) through a securement mechanism (816). Although not visible in this view, in one embodiment, a corresponding attachment of the oppositely positioned plenum wall (834b) to the base (812) may be provided. The fin field (814) is shown secured to the base (812). A gap (838) is formed between the top (818) of the fin field (814) and the primary wall (832) of the plenum body (830). The optimum dimension of the gap (838) may be determined through computational fluid dynamics (CFD), and in one embodiment through analytical analysis. As shown and described inFIG.7, in one embodiment there is a relationship between the dimension of the gap (838) and the length of the lip (836). Accordingly, the view (800) illustrates a sectional view of the assembly without the housing or impeller. As shown and described above, the assembly includes a fluid mover, such as an impeller or an alternative mechanism, to facilitate fluid flow. The primary opening in the plenum is configured to receive the fluid mover. Referring toFIG.9, a schematic diagram (900) is provided to illustrate an assembly of the plenum together with the fluid mover positioned proximal to the secondary medium in the form of the heat exchanger, and without the housing. As shown, the plenum (930) is shown attached to the heat exchanger (910), and more specifically, attached to the base (912) of the heat exchanger (910). The fluid mover (920) is shown positioned within the primary opening (950). In one embodiment, the size of the primary opening (950) is slightly larger than the fluid mover diameter to prevent physical contact between the fluid mover (920) and the plenum (930), thereby mitigating or eliminating friction and loss associated with friction. The fluid mover (920) is shown configured to be received by the primary opening (950) and positioned proximal to the top (116) of the fin field (114). As shown inFIG.1, the fluid mover (920) is secured to the housing (not shown), and in one embodiment is positioned in a suspended state from the housing and proximal to the secondary medium (910) in a suspended relationship.
In one embodiment, the fluid mover (920) extends into the gap, thereby creating a reduced opening between the top (116) of the fin field (114) and the fluid mover (920). Referring toFIG.10, a perspective view of an alternative housing assembly (1000) is provided. Similar to the assembly shown and described inFIG.1, a heat exchanger (1010) is provided with a base (1012) and an attached fin field (1014). As shown herein, a housing (1050) is provided and positioned relative to the heat exchanger (1010) and the fluid mover (1020). The housing (1050) is shown with a planar surface (1054) and a plurality of extending walls in communication with the planar surface (1054). The plurality of walls includes a first set of walls (1060) and (1062), and a second set of walls (1064) and (1066). In one embodiment, the first set of walls (1060) and (1062) are parallel or relatively parallel, and the second set of walls (1064) and (1066) are parallel or relatively parallel. A plurality of fenestrations (1052), e.g. openings, is positioned in the first and second sets of walls, although with a different arrangement than that shown and described inFIG.1. The fenestrations (1052) are shown herein to have an arcuate shape, e.g. circular, although this shape should not be considered limiting. In one embodiment, the fenestrations (1052) may have a different shape or configuration, as shown and described below inFIGS.11A,11B, and11C. The housing (1050) is positioned to cover or otherwise extend over the fluid mover (1020). An attachment mechanism (1058) is shown to secure or otherwise attach the fluid mover (1020) to the housing (1050) so that the fluid mover (1020) is suspended over the fin field (1014). Although three attachment mechanisms (1058) are shown herein, the quantity should not be considered limiting. In addition, although the attachment mechanism (1058) is shown as a screw, this embodiment of the attachment mechanism (1058) should not be considered limiting, and in one embodiment, an alternative form of the attachment mechanism may be employed to secure or otherwise support the fluid mover (1020) with respect to the housing (1050). The housing (1050) is shown secured or otherwise attached to the base (1012) of the heat exchanger (1010) through an attachment mechanism (1018). The quantity and form of the attachment mechanism (1018) should not be considered limiting. In one embodiment, an alternative mechanism and/or an alternative quantity of mechanisms may be utilized. Accordingly, as shown herein, the housing (1050) is positioned and secured to the heat exchanger (1010) and relative to the fluid mover (1020), and an arrangement of fenestrations (1052) is positioned across the secondary walls (1060)-(1066). The fluid mover (1020) is positioned within the primary aperture of the plenum that is positioned within the housing to direct fluid, e.g. air, toward or away from the secondary medium. It is understood in the art that motorized fluid movers have a high volumetric flow rate and pressure. One of the challenges with this type of fluid mover is the attachment to the heat exchanger (1010). A blade assembly within the fluid mover (1020) is subject to rotation while a corresponding motor assembly is stationary. The arrangement of fenestrations (1052) is different from the arrangement shown and described inFIG.1.
Namely, the housing (1050) includes a planar or relatively planar surface (1054) that is positioned parallel or relatively parallel to the planar or relatively planar surface (320) of the plenum shown inFIG.3. In the embodiment shown herein, the planar surface (1054) is also referred to herein as a primary wall (1054). As shown, the primary wall (1054) is solid, e.g. no fenestrations, with the exception of openings (not shown) to receive the attachment mechanism(s) (1058) to secure the fluid mover (1020). Secondary walls of the housing (1050) are shown in communication with the primary wall. Namely, the first set of secondary walls (1060) and (1062) is shown positioned perpendicular to the surface (1054) and parallel to the fin field (1014), and the second set of secondary walls (1064) and (1066) is shown positioned perpendicular to the surface (1054) and orthogonal to the first set of secondary walls (1060) and (1062). The plurality of fenestrations (1052) is shown positioned in both the first and second sets of secondary walls (1060)-(1066). The fenestrations (1052) are shown positioned relative to the height of the fin field (1014). An area (1052b) of the secondary walls (1060) and (1062) positioned proximal and parallel to the fin field (1014) is solid and does not include any fenestrations. In one embodiment, the area (1052c) of the secondary walls (1060) and (1062) with fenestrations (1052) extends from the planar surface (not shown) of the plenum to the primary surface (1054). Secondary walls (1064) and (1066) are configured with a similar selection and arrangement of fenestrations to that shown in secondary walls (1060) and (1062). Secondary walls (1064) and (1066) each have a length (1064a) that extends from the surface (1054) to the height of the fin field (1014). In one embodiment, the length of (1052c) and the length of (1064a) are the same, or substantially equal. Accordingly, the housing assembly shown herein has a similar arrangement of secondary walls to the housing shown inFIG.1, but with a different arrangement of fenestrations from the housing assembly shown and described inFIG.1. As shown and described above, the housing assembly is configured with fenestrations, and in one embodiment a different arrangement of fenestrations. Although the fenestrations are shown with a circular or arcuate shape, this shape should not be considered limiting. Referring toFIG.11A, a perspective view (1100) of the housing assembly fromFIG.2is shown with 4-sided fenestrations (1110), which may be square and/or rectangular shaped. Similarly,FIG.11Bis a perspective view (1120) of the housing assembly fromFIG.2with arcuate fenestrations (1130) having an elliptical shape. Referring toFIG.11C, a perspective view (1140) of the housing assembly fromFIG.2is shown with fenestrations (1150) in the form of louvers. In one embodiment, the fenestrations may be square, rectangular, round, oval, six-sided, eight-sided, etc., and as such the geometrical properties of the fenestrations should not be considered limiting. The housing assembly shown and described above can be applied to any size and shape of a corresponding heat exchanger. As shown inFIGS.1,5,6,9, and10, the fluid mover is smaller than the corresponding heat exchanger, such as heat exchanger (110). In one embodiment and as described in detail below, the fluid mover may be larger than a corresponding footprint of the heat exchanger.
Accordingly, the size and shape of the heat exchanger and proximally positioned fluid mover should not be considered limiting. The embodiments shown and described inFIGS.1-11Care directed at a heat exchanger having a four sided profile with a plenum and housing having a corresponding shape to complement the heat exchanger profile. It is understood in the art that the heat exchanger is not limited to a four sided profile. In one embodiment, the plenum and the housing may come in different profiles, with the scope of the embodiments directed at the assembly of the housing and/or plenum with respect to the heat exchanger, and as such, the scope of the embodiments should not be limited to the size and shape of the corresponding fluid mover. Referring toFIG.12, a perspective view (1200) of a four sided heat exchanger with an eight sided corresponding plenum is provided. As shown, the heat exchanger (1210) is comprised of four sides, including side0(1212), side1(1214), side2(1216), and side3(1218). An area is provided between each of the adjacent sides to receive an attachment mechanism to secure the heat exchanger (1210) to a secondary surface. The areas include area0(1222) positioned between side0(1212) and side1(1214), area1(1224) positioned between side0(1212) and side2(1216), area2(1226) positioned between side2(1216) and side3(1218), and area3(1228) positioned between side3(1218) and side1(1214), three of which are visible in this view. An opening (not shown) is provided in each of the respective areas and is configured to receive a corresponding attachment mechanism. As shown, attachment mechanism0(1232) is received in area0(1222), attachment mechanism1(1234) is received in area1(1224), attachment mechanism2(not shown) is received in area2(1226), and attachment mechanism3(1238) is received in area3(1228). Although the attachment mechanisms shown herein are mechanical screws, this embodiment should not be considered limiting, and in one embodiment, an alternative mechanism may be employed to attach and/or secure the heat exchanger (1210) to the secondary surface (not shown). Similar to the assembly shown inFIG.1, the partial assembly shown inFIG.12includes a plenum (1240) proximally positioned with respect to the heat exchanger (1210). As shown, the heat exchanger (1210) is provided with a base (1213) and a fin field (1215) in communication with the base (1213). The plenum (1240) has a profile that corresponds to the profile of the base (1213), including cut-outs for the defined areas (1222)-(1228). More specifically, the plenum (1240) is shown with four primary sides, including side0(1242), side1(1244), side2(1246), and side3(1248). Side0(1242) and side2(1246) are parallel to each other and parallel to the fin field (1215), and have a same or similar first length. Side1(1244) and side3(1248) are parallel to each other and perpendicular to the fin field (1215), and have a same or similar second length. Each of side0(1242) and side2(1246) is shown as a solid element that is fixed or otherwise secured to the heat exchanger (1210). The corresponding fin field (1215) is shown with a bar (1217), also referred to herein as a fixed element, extending perpendicular to the fin field (1215) and across a width (or in one embodiment a length) of the base (1213). The plenum (1240) is shown with an attachment mechanism (1219) secured to the bar (1217).
In one embodiment, a second attachment mechanism (not shown) is positioned on an opposite side of the assembly to further secure the plenum (1240) to the bar (1217). Similar to the plenum shown and described inFIG.1, side1(1244) and side3(1248) have the same or similar length and are parallel or relatively parallel to each other while orthogonal to the fin field (1215). Each of side1(1244) and side3(1248) extends across a top portion of the fin field (1215), with the length of the extension functioning to prevent premature egress of the fluid flow from the fin field (1215). In one embodiment, the lengths of side1(1244) and side3(1248) may be optimized through computational fluid dynamics (CFD) depending on the heat exchanger fin height and fan curve. Accordingly, the parallel positioning of side0(1242) and side2(1246) with respect to the fin field (1215) and the orthogonal positioning of side1(1244) and side3(1248) enables fluid to flow through the fin field, and functions as an avenue to enable a directed exit, e.g., exhaust, of the fluid from the fin field (1215). As further shown, the plenum (1240) has a plurality of apertures, including a primary aperture (1250) and a plurality of secondary apertures. The primary aperture (1250) is shown with an arcuate or circular shape, although the shape of the opening should not be considered limiting. Similar to the plenum shown inFIG.3, the primary aperture may have a four sided opening. The secondary apertures are proximally positioned relative to each of the sides (1242)-(1248) and are configured to receive the housing (not shown in this view). Each side of the plenum is shown with two secondary apertures, although the quantity of secondary apertures per side should not be considered limiting. In this example embodiment, side0(1242) is shown with secondary apertures (1262) and (1272), side1(1244) is shown with secondary apertures (1264) and (1274), side2(1246) is shown with secondary apertures (1266) and (1276), and side3(1248) is shown with secondary apertures (1268) and (1278). Accordingly, the primary aperture (1250) is configured to receive the fluid mover and the secondary apertures are configured to secure the housing to the plenum thereby forming the housing assembly. Referring toFIG.13, a perspective view (1300) of the heat exchanger assembly ofFIG.12is shown with the perforated housing removed. The heat exchanger (1310) is shown with a proximally positioned and attached plenum (1340). As shown, the plenum (1340) covers the boundary of the fin field (1314) of the heat exchanger (1310). The plenum (1340) is further shown with four corners removed, including area0(1382) positioned between side0(1342) and side1(1344), area1(1384) positioned between side1(1344) and side2(1346), area2(1386) positioned between side2(1346) and side3(1348), and area3(1388) positioned between side3(1348) and side0(1342). The areas (1382)-(1388) are positioned proximal to areas (1322)-(1328) for ease of attachment of the heat exchanger (1310) to a secondary surface (not shown). As further shown, an impeller fan (1390) is positioned within the primary aperture (not shown) of the plenum (1340). In this example, the fluid mover (1390) has a size and profile adapted to be received in the primary aperture of the plenum (1340). At the same time, the fluid mover (1390) has a profile larger than the profile of the heat exchanger (1310).
Accordingly, as demonstrated herein, the profile and size of the fluid mover (1390) may be greater than the profile and size of the heat exchanger. The partial assembly of the heat exchanger (1310) and the plenum (1340) shown and described inFIG.13is configured to receive a housing. Referring toFIGS.14A and 14B, perspective views of the housing are provided, including a top perspective view (1410) inFIG.14Aand a bottom perspective view (1450) inFIG.14B. As shown in these views, the housing (1420) is shown with a top surface (1430) with a plurality of fenestrations (1432), e.g. openings. The top surface (1430) shown herein includes an arrangement of the fenestrations positioned adjacent to the perimeter of the surface. This is merely one embodiment of the arrangement of fenestrations, and the arrangement should not be considered limiting. It is understood that the fenestrations (1432) function as ingress or egress ports to direct the fluid flow caused by the fluid mover (not shown) with respect to the corresponding heat exchanger (not shown) in multiple directions. In addition, the housing fenestrations (1432) function as a safety barrier from the fluid mover (not shown). In addition to the fenestrations, the surface (1430) includes a plurality of apertures (1434), (1436), and (1438), also referred to herein as housing primary apertures. Although three primary apertures (1434)-(1438) are shown, the quantity should not be considered limiting. The primary apertures (1434)-(1438) are positioned and configured to receive the fluid mover, and more specifically, to secure the fluid mover to the housing (1420). The housing (1420) is shown in the form of an octagon. As further shown in the top view (1410) ofFIG.14A, the housing (1420) is provided with a plurality of side walls, shown herein as wall0(1470), wall1(1472), wall2(1474), wall3(1476), wall4(1478), wall5(1480), wall6(1482), and wall7(1484). Each of the side walls (1470)-(1484) is perpendicular to the top surface (1430) and is positioned adjacent to the corresponding side of the octagon. In addition, as shown, each of the side walls (1470)-(1484) is shown with a plurality of fenestrations (1432) positioned across the respective side wall area. In one embodiment, a different arrangement of fenestrations (1432) may be provided across one or more of the side walls (1470)-(1484). The fenestrations (1432) shown herein have an arcuate or circular shape, although in one embodiment, the size and shape of the fenestrations may be different, including but not limited to four sided, elliptical, star-shaped, louvers, etc. A series of extensions (1460)-(1466) are provided on select sides of the octagon, and as shown herein, on alternating sides of the octagon. Each extension is positioned perpendicular to the respective side wall and parallel to the surface (1430). The extensions are sized and configured to be attached to a proximally positioned plenum. Details of the attachment are shown and described inFIG.15. A plurality of secondary apertures are provided to support and enable the attachment of the housing (1420) to the plenum (not shown).
Extension0(1460) is positioned proximal to wall0(1470) and shown with secondary apertures (1460a) and (1460b), extension1(1462) is positioned proximal to wall2(1474) and shown with secondary apertures (1462a) and (1462b), extension2(1464) is positioned proximal to wall4(1478) and shown with secondary apertures (1464a) and (1464b), and extension3(1466) is positioned proximal to wall6(1482) and shown with secondary apertures (1466a) and (1466b). Although two apertures are shown with each extension, this quantity should not be considered limiting. In one embodiment, a single secondary aperture may be provided with each extension, or an additional quantity of secondary apertures may be provided with each extension. In another embodiment, the extensions may not include any secondary apertures, and an alternative attachment mechanism may be provided to attach the housing to the plenum. Similarly, in one embodiment, the alternative mechanism may be utilized together with a mechanical attachment mechanism received by one or more of the secondary apertures. Accordingly, the extensions (1460)-(1466) are configured to attach and/or secure the housing to the plenum. Referring toFIG.15, a bottom perspective view (1500) is provided of the housing and a proximally positioned fluid mover. As shown, the housing (1520) has a similar shape and configuration to the housing shown and described inFIG.14, e.g. octagon. The fluid mover (1590) is shown positioned within an interior area (1592) formed by the fenestrated side walls (1570)-(1584). The fluid mover (1590) is shown herein with a wire(s) (1594) to deliver electricity to enable and facilitate rotation of the fluid mover (1590). In one embodiment, the wire(s) (1594) extends through one of the fenestrations in one of the side walls. As further shown in this view, alternating side walls are provided with a corresponding extension to secure the housing to the plenum (not shown). Extension0(1560) is positioned proximal to wall0(1570) and shown with secondary apertures (1560a) and (1560b), extension1(1562) is positioned proximal to wall2(1574) and shown with secondary apertures (1562a) and (1562b), extension2(1564) is positioned proximal to wall4(1578) and shown with secondary apertures (1564a) and (1564b), and extension3(1566) is positioned proximal to wall6(1582) and shown with secondary apertures (1566a) and (1566b). Accordingly, the walls of the housing (1520) form a cavity to receive the fluid mover (1590). Referring toFIG.16, a perspective view (1600) of a heat exchanger housing assembly is provided. As shown, a heat exchanger (1610) is provided with a fin field (1614) operatively coupled to a base (1612). The base (1612) is shown herein as a quadrilateral, e.g. 4-sided, with a heat exchanger aperture provided at the intersection of each adjacent side of the base. Three apertures are shown herein as (1630), (1632), and (1634). The fourth aperture is not visible in this view. Aperture0(1630) is positioned between side0(1620) and side1(1622), aperture1is not visible in this view, aperture2(1632) is positioned between side2(1624) and side3(1626), and aperture3(1634) is positioned between side0(1620) and side3(1626). Each of the apertures (1630)-(1634) is shown in receipt of a corresponding attachment mechanism. Specifically, aperture0(1630) is shown with attachment mechanism (1630a), aperture2(1632) is shown with attachment mechanism (1632a), and aperture3(1634) is shown with attachment mechanism (1634a).
Each of the attachment mechanisms (1630a), (1632a), and (1634a) is provided to attach and/or secure the heat exchanger (1610) to a secondary surface (not shown). The plenum (1640) is shown herein attached or otherwise secured to the heat exchanger (1610). As shown, the plenum (1640) extends over the fin field (1614) and has a first wall (1642) that extends the length of the fins and is positioned parallel to the fin field (1614). A similar plenum wall (not shown) is positioned on the opposite side of the plenum (1640) and also extends the length of the fins and is positioned parallel to the fin field (1614). The plenum wall (1642) has an aperture in receipt of an attachment mechanism to attach and/or secure the plenum to the heat exchanger (1610). The plenum is also provided with a second wall (1666) positioned perpendicular to the first wall (1642) that extends a partial length of the fins and is positioned perpendicular to the fin field (1614). As shown and described inFIG.1, the second wall (1666) is also referred to as the lip. Accordingly, as shown herein, the plenum (1640) is configured to at least partially enclose the heat exchanger (1610) while enabling fluid flow across the fin field and providing ingress or egress for the fluid flow. As shown and described, the plenum (1640) is configured with a primary aperture (not shown) to receive the fluid mover (1650). The housing (1670) is shown herein to envelop the fluid mover (1650). The housing (1670) is similarly configured to the housing shown and described inFIG.14, including extensions and corresponding apertures to align with the plenum and to receive corresponding attachment mechanisms. The housing (1670) is shown with a selection and arrangement of fenestrations. The top surface (1672) of the housing (1670) is planar or relatively planar, and is shown herein without fenestrations. This is merely an embodiment of the selection and arrangement of housing fenestrations and should not be considered limiting. As further shown, the top surface (1672) is provided with a plurality of apertures (1674) and corresponding attachment mechanisms (1676) to secure the fluid mover (1650) to the housing (1670). Although three apertures (1674) are shown, the quantity should not be considered limiting. Similarly, although a mechanical attachment mechanism (1676) is shown in the form of a screw, it is understood that in one embodiment an alternative attachment mechanism may be provided and as such, the mechanism (1676) shown herein should not be considered limiting. Accordingly, the assembly (1600) shown herein illustrates an embodiment with the profile of the fluid mover (1650) being larger than the perimeter of the heat exchanger (1610). Referring toFIG.17, a perspective view (1700) of an assembly of the fenestrated housing and a proximally positioned plenum is shown. As shown, the plenum (1720) has a planar surface (1722) with a centrally positioned primary aperture (1724). A plurality of walls is provided as part of the plenum (1720) and in communication with the planar surface (1722). The walls include a first set of walls (1732) and (1734) having the same or similar length, and a second set of walls (1736), with the opposite wall not visible in this view. The second set of walls (1736) and the oppositely positioned wall (not shown) have a second length shorter than the length of the first set of walls (1732) and (1734). Details of the lengths of the walls are shown and described in detail inFIGS.3and4.
As further shown, the fenestrated housing (1740) is shown in communication with the plenum (1720). The housing (1740) includes a primary surface (1742) positioned parallel or relatively parallel to the surface (1722) of the plenum (1720). Similarly, a first set of walls (1752) and (1754) are positioned proximal and parallel or relatively parallel to walls (1732) and (1734). Walls (1752) and (1754) are shown with a plurality of fenestrations (1760). A second set of walls (1756) and (1758) are positioned proximal and parallel or relatively parallel to wall (1736) and the oppositely positioned wall of the plenum (1720). The second set of walls (1756) and (1758) are sized relative to wall (1736), and are configured with a plurality of fenestrations (1760). The assembly shown and described inFIG.17is directed to the plenum (1720) and the proximally positioned fenestrated housing (1740). The secondary medium is not shown. It is understood that the secondary medium may be in the form of a heat exchanger or a printed circuit board, or any medium that may benefit from directional fluid flow. In the case of a printed circuit board, the first set of walls (1732) and (1734) of the plenum are attached or otherwise secured to the board. Similarly, in one embodiment, the assembly (1700) may be adapted for a computer chassis, with the chassis functioning as the plenum (1720) and the housing positioned proximal to the plenum. Referring toFIG.18, a top view (1800) of the fenestrated housing and proximally positioned plenum ofFIG.17is shown. The plenum (1820) is shown enveloped by the housing (1840). The centrally positioned primary aperture (1824) is shown with the proximally positioned planar surface (1822). The housing (1840) is shown with fenestrations (1860) positioned across the primary surface (1842). In addition, the first set of parallel or relatively parallel walls (1852) and (1854) and the second set of parallel or relatively parallel walls (1856) and (1858) are shown. Referring toFIG.19, a bottom view (1900) of the fenestrated housing and proximally positioned plenum ofFIG.17is shown. The plenum (1920) is shown with the centrally positioned primary aperture (1924). Due to the proximally positioned housing (1940), the fenestrations (1960) are shown in the primary surface (1942) positioned parallel or relatively parallel to the surface (1922) of the plenum (1920). Referring toFIG.20, a front view (2000) of the fenestrated housing and proximally positioned plenum ofFIG.17is shown, and referring toFIG.21, a rear view (2100) of the fenestrated housing and proximally positioned plenum ofFIG.17is shown. In the views (2000) and (2100), the plenum is shown at (2020) and (2120), respectively. The proximally positioned housing (2040) and (2140), respectively, is also shown, with the housing having fenestrations (2060) and (2160), respectively. As shown, the plenum (2020) and (2120), respectively, has oppositely positioned side walls (2022) and (2122), respectively, each positioned proximal to the side walls (2042) and (2142), respectively, of the fenestrated housing (2040) and (2140), respectively. The front side walls (2024) and (2124) of the plenum (2020) and (2120), respectively, are shown with a proximally positioned front side wall (2044) and (2144) of the housing (2040) and (2140), respectively. The front side walls (2024) and (2124) are each orthogonal to the respective side walls (2022) and (2122).
Accordingly, as shown, the front side walls (2044) and (2144) of the housing do not extend the entirety of the length of the respectively positioned side walls (2042) and (2142). Referring toFIG.22, a left side view (2200) of the fenestrated housing and proximally positioned plenum ofFIG.17is shown, and referring toFIG.23, a right side view (2300) of the fenestrated housing and proximally positioned plenum ofFIG.17is shown. As shown inFIG.22, the plenum (2220) is provided with a first set of walls (2222) and a second set of walls (2224). The oppositely positioned wall of (2222) is not visible in this view. The first set of walls (2222) has a first length, and the second set of walls (2224) has a second length, less than the first length. The housing (2240) is shown with fenestrations (2260) positioned across a first wall (2242), with the first wall positioned parallel or relatively parallel and proximal to the first wall (2222) of the plenum (2220). The housing (2240) is shown with a second set of walls (2244) positioned parallel or relatively parallel and proximal to the second set of walls (2224) of the plenum (2220). The first wall(s) (2242) has a length commensurate with the length of the first wall(s) (2222) of the plenum (2220), and the second set of walls (2244) has a length (2246) that extends to an area proximal to the primary surface (2226) of the plenum (2220).FIG.23has like parts toFIG.22designated by like numerals. Referring toFIG.24, a front perspective view (2400) of the fenestrated housing (1400) ofFIG.14is shown with a proximally positioned plenum. As shown, the housing (2440) is positioned to cover or envelop the plenum (2420). The housing (2440) is provided with a primary surface (2442) that is planar or relatively planar. The primary surface (2442) is shown with fenestrations (2460) positioned across the primary surface (2442). A plurality of secondary apertures (2444) are shown positioned in the primary surface (2442) and are configured to receive an attachment mechanism (not shown) to secure the fluid mover (not shown) to the housing (2440). The plenum (2420) is shown with a surface (2422) parallel or relatively parallel to primary surface (2442). The housing (2440) has a secondary surface (2446) positioned proximal to the plenum surface (2422). As shown, apertures (2446a) are provided to receive an attachment mechanism to attach or secure the housing (2440) to the plenum (2420). The plenum surface (2422) is positioned parallel or relatively parallel to the secondary surface (2446) of the housing (2440). In addition, as further shown, the plenum (2420) includes openings (2426) to support receipt of an attachment mechanism (1634a) to secure a base of a proximally positioned heat exchanger (not shown) to a secondary surface (not shown). The openings (2426) are shown to have an arcuate shape, although this shape should not be considered limiting. Referring toFIG.25, a top view (2500) ofFIG.24is shown. Similarly, referring toFIG.26, a bottom view (2600) ofFIG.24is shown, referring toFIG.27, a left side view (2700) ofFIG.24is shown, referring toFIG.28, a right side view (2800) ofFIG.24is shown, referring toFIG.29, a front view (2900) ofFIG.24is shown, and referring toFIG.30, a rear view (3000) ofFIG.24is shown. Referring toFIG.31, a front perspective view (3100) of the fenestrated housing (1600) ofFIG.16is shown with a proximally positioned plenum.
This embodiment is similar to the assembly ofFIG.24with the exception of the housing (3140) having a surface (3142) without fenestrations. A plurality of apertures (3146) are shown and are configured to receive an attachment mechanism (not shown) to secure a fan impeller (not shown) to the housing (3140). Referring toFIG.32, a top view (3200) ofFIG.31is shown. Similarly, referring toFIG.33, a bottom view (3300) ofFIG.31is shown, referring toFIG.34, a left side view (3400) ofFIG.31is shown, referring toFIG.35, a right side view (3500) ofFIG.31is shown, referring toFIG.36, a front view (3600) ofFIG.31is shown, and referring toFIG.37, a rear view (3700) ofFIG.31is shown. The configuration of the heat exchanger assembly with the fenestrated housing utilizes a motorized fluid mover to force fluid through the heat exchanger fins, e.g. fin field. Motorized fluid movers have a high volumetric flow rate and pressure. The fenestrated housing serves multiple purposes, as described above, including but not limited to attachment of the fluid mover to the fenestrated housing. Specifically, the fenestrated housing is configured with a surface to secure the housing structure of the fluid mover, while enabling the fan or corresponding blades within the housing structure to rotate and a corresponding motor to remain stationary. The fenestrations of the housing provide an ingress or egress venue for the fluid in multiple directions, while providing a limited barrier from the fluid mover. The Figures provided herein illustrate the architecture, functionality, and operation of possible implementations of the heat exchanger assembly utilizing the combinations of various configurations of the heat exchanger, plenum, and fenestrated housing. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present embodiments has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the embodiments in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the embodiments. The embodiments were chosen and described in order to best explain the principles and the practical application, and to enable others of ordinary skill in the art to understand the embodiments with various modifications as are suited to the particular use contemplated. Accordingly, the implementation pertains to a heat exchanger assembly with a fenestrated housing to optimize the cooling of heat sources in contact therewith.
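Several of the embodiments above note that the lip length may be optimized through CFD depending on the heat exchanger fin height and fan curve, and that the minimum lip length equals the fin-top gap. The following minimal Python sketch is illustrative only: it assumes millimetre units and a simple linear sweep of candidates, with the actual evaluation of each candidate left to a CFD tool; none of the names below come from the embodiments themselves.

```python
# Hypothetical sketch of the lip sizing noted above, assuming millimetre
# units and a linear candidate sweep standing in for a real CFD optimization
# against the fin height and fan curve.

def candidate_lip_lengths(gap_mm: float, fin_height_mm: float, steps: int = 5):
    """Enumerate candidate lip lengths, honoring the stated constraint that
    the minimum lip length equals the fin-top gap."""
    if not 0.0 < gap_mm < fin_height_mm:
        raise ValueError("expected 0 < gap < fin height")
    span = fin_height_mm - gap_mm
    return [gap_mm + span * i / (steps - 1) for i in range(steps)]

# Illustrative numbers only: a 3 mm fin-top gap and 20 mm tall fins.
for lip_mm in candidate_lip_lengths(3.0, 20.0):
    print(f"evaluate lip length {lip_mm:.2f} mm in CFD")
```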
It will be appreciated that, although specific embodiments have been described herein for purposes of illustration, various modifications may be made without departing from the spirit and scope of the embodiments. For example, the base of the heat exchanger in one or more embodiments may be configured with one or more embedded heat pipes or a vapor chamber to spread heat generated from a proximally positioned heat source. The heat pipes can have different shapes and patterns, including, but not limited to, straight, radial, u-shaped, etc. The heat pipes can be round and embedded into the base or flat and soldered to the base of the heat exchanger. Similarly, the embodiments shown and described include one or more mechanical attachment mechanisms in the form of screws. It is understood that this attachment mechanism should not be considered limiting, and in one embodiment may be replaced or provided in combination with an alternative attachment mechanism, such as an adhesive, soldering, welding, etc. Similarly, although shown in the form of a heat exchanger, the secondary medium can be any heat producing medium that would benefit from fluid flow. The size and shape of the plenum should not be considered limiting, and in one embodiment is restricted by the size and shape of the secondary medium. Similarly, the size and shape of the housing, and the arrangement of the fenestrations, should not be considered limiting. Accordingly, the scope of protection of the embodiments is limited only by the following claims and their equivalents.
11859916
DETAILED DESCRIPTION OF THE INVENTION The heat exchanger described in Patent Literature 1 is provided with a feed port and a discharge port for the second fluid in a distance of less than half the circumference of the outer cylinder in a circumferential direction. This causes a problem in that the second fluid fed from the feed port flows more easily through a shorter circumferential side flow path between the feed port and the discharge port than through a longer circumferential side flow path between the feed port and the discharge port, resulting in a lower heat recovery amount (heat exchange amount). The present invention has been made to solve the above problems. An object of the present invention is to provide a flow path member for a heat exchanger, and a heat exchanger, which can improve a heat recovery amount. According to the present invention, it is possible to provide a flow path member for a heat exchanger, and a heat exchanger, which can improve a heat recovery amount. Hereinafter, embodiments of the present invention will be specifically described with reference to the drawings. It is to be understood that the present invention is not limited to the following embodiments, and embodiments to which changes, improvements, and the like have been appropriately added based on the knowledge of a person skilled in the art, without departing from the spirit of the present invention, fall within the scope of the present invention. Embodiment 1 (1) Flow Path Member for Heat Exchanger FIG.1is a perspective view of a flow path member for a heat exchanger according to Embodiment 1 of the present invention.FIG.2is a top view of the flow path member for the heat exchanger inFIG.1.FIG.3is a cross-sectional view of the A-A′ line inFIG.1and the B-B′ line inFIG.2(a direction orthogonal to an axial direction of an outer cylinder and an inner cylinder). A flow path member100for a heat exchanger according to Embodiment 1 of the present invention includes: an inner cylinder10capable of housing a heat recovery member through which a first fluid can flow; an outer cylinder20having a feed port21capable of feeding a second fluid and a discharge port22capable of discharging the second fluid, the outer cylinder20being disposed so as to be spaced on a radially outer side of the inner cylinder10such that a flow path R1, R2for the second fluid is formed between the outer cylinder20and the inner cylinder10; a feed pipe30connected to the feed port21; and a discharge pipe40connected to the discharge port22. Further, the feed port21and the discharge port22of the outer cylinder20are provided so as to be located in a distance of less than half the circumference of the outer cylinder20in a circumferential direction. AlthoughFIG.1shows an example in which the inner cylinder10and the outer cylinder20are connected by a connecting member50, the inner cylinder10and the outer cylinder20may be directly connected by increasing diameters of both end portions of the inner cylinder10and/or decreasing diameters of both end portions of the outer cylinder20. Here,FIG.4shows a cross-sectional view of a flow path member for a conventional heat exchanger in a direction orthogonal to an axial direction of an outer cylinder and an inner cylinder.
In the flow path member for the conventional heat exchanger, a second fluid fed from the feed pipe30through the feed port21passes through any one of a flow path R1for the second fluid on the shorter circumference side between the feed port21and the discharge port22, and a flow path R2for the second fluid on the longer circumference side between the feed port21and the discharge port22, and is discharged from the discharge pipe40through the discharge port22. InFIG.4, the arrows indicate a flow direction D2of the second fluid. However, the second fluid has a higher rate at which it passes through the flow path R1for the second fluid on the shorter circumference side where a distance between the feed port21and the discharge port22is shorter, than through the flow path R2for the second fluid on the longer circumference side where the distance between the feed port21and the discharge port22is longer, so that there are fewer opportunities to bring the second fluid into contact with the inner cylinder10, which is one of the reasons for a decrease in the heat recovery amount. In an embodiment, the flow path member100for the heat exchanger according to Embodiment 1 of the present invention has a flow path resistance (a resistance of the flow path R1) for the second fluid on the shorter circumference side between the feed port21and the discharge port22, higher than a flow path resistance (a resistance of the flow path R2) for the second fluid on the longer circumference side between the feed port21and the discharge port22. By thus controlling the flow path resistance, a rate at which the second fluid passes through the flow path R2for the second fluid on the longer circumference side where the distance between the feed port21and the discharge port22is longer is increased as compared with the flow path R1for the second fluid on the shorter circumference side where the distance between the feed port21and the discharge port22is shorter, so that an opportunity to bring the second fluid into contact with the inner cylinder10can be increased, and the heat recovery amount can be increased. The flow path resistance for the second fluid on the shorter circumference side and the flow path resistance for the second fluid on the longer circumference side can be obtained, for example, by the following method. The flow path resistance for the second fluid on the shorter circumference side can be calculated from a pressure loss when the flow path for the second fluid on the longer circumference side is blocked and the second fluid (e.g., water) is circulated at 10 L/min. Also, the flow path resistance for the second fluid on the longer circumference side can be calculated from a pressure loss when the flow path for the second fluid on the shorter circumference side is blocked and the second fluid (e.g., water) is circulated at 10 L/min.
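For illustration only, the measurement just described reduces to a simple calculation. The following minimal Python sketch assumes a quadratic (turbulence-dominated) pressure-loss model, dp = K·Q², which the present description does not specify, and the pressure values are hypothetical.

```python
# Minimal sketch of the resistance calculation described above, assuming a
# quadratic loss model dp = K * Q**2 (the description specifies only that the
# resistance is calculated from a pressure loss at 10 L/min, not the model).

TEST_FLOW_L_PER_MIN = 10.0
TEST_FLOW_M3_PER_S = TEST_FLOW_L_PER_MIN / (1000.0 * 60.0)  # L/min -> m^3/s

def resistance_coefficient(pressure_loss_pa: float) -> float:
    """Return K in dp = K * Q**2 (Pa*s^2/m^6) from a measured pressure loss."""
    return pressure_loss_pa / TEST_FLOW_M3_PER_S ** 2

# Hypothetical measurements (illustrative numbers only):
k_r1 = resistance_coefficient(1800.0)  # R2 blocked, water circulated through R1
k_r2 = resistance_coefficient(900.0)   # R1 blocked, water circulated through R2
print(k_r1 > k_r2)  # True: the shorter-side resistance exceeds the longer-side one
```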
As a method of increasing the flow path resistance for the second fluid on the shorter circumference side between the feed port21and the discharge port22as compared with the flow path resistance for the second fluid on the longer circumference side between the feed port21and the discharge port22, a flow path resistance increasing structure portion23may be provided at the flow path R1for the second fluid on the shorter circumference side between the feed port21and the discharge port22, or a flow path resistance increasing member may be arranged in the flow path R1for the second fluid on the shorter circumference side between the feed port21and the discharge port22, or a combination of these may be used, although not particularly limited thereto. The flow path resistance increasing structure portion23can be provided at the inner cylinder10, the outer cylinder20, or both, which face the flow path R1for the second fluid. However, the flow path resistance increasing structure portion23may preferably be provided at the outer cylinder20in terms of productivity. Similarly, the flow path resistance increasing member may be provided at the inner cylinder10, the outer cylinder20, or both, which face the flow path R1for the second fluid. However, the flow path resistance increasing member may preferably be provided at the outer cylinder20in terms of productivity. The flow path resistance increasing structure portion23and the flow path resistance increasing member are different from each other in that the former is a portion formed by shaping the inner cylinder10and/or the outer cylinder20, whereas the latter is a member provided separately from the inner cylinder10and/or the outer cylinder20. Here, each ofFIGS.1to3shows an example of the case where the flow path resistance increasing structure portion23is provided at the outer cylinder20facing the flow path R1for the second fluid on the shorter circumference side between the feed port21and the discharge port22. Other examples are shown inFIGS.5to7. FIG.5is an example of the case where the flow path resistance increasing structure portion23is provided at the inner cylinder10facing the flow path R1for the second fluid on the shorter circumference side between the feed port21and the discharge port22. Each ofFIGS.6and7shows an example of the case where the flow path resistance increasing member60is arranged at the outer cylinder20facing the flow path R1for the second fluid on the shorter circumference side between the feed port21and the discharge port22. FIG.8is an example of the case where the flow path resistance increasing member60is arranged at the inner cylinder10facing the flow path R1for the second fluid on the shorter circumference side between the feed port21and the discharge port22. Each ofFIGS.5to8is a cross-sectional view of the flow path member for the heat exchanger in the direction orthogonal to the axial direction of the outer cylinder and the inner cylinder. The perspective views and the top views of the flow path member for the heat exchanger are omitted, because they are easily understood with reference toFIGS.1to3. It is preferable that the flow path resistance increasing structure portion23and/or the flow path resistance increasing member60are provided along the flow direction D1of the first fluid. 
Thus, the provision of the flow path resistance increasing structure portion23and/or the flow path resistance increasing member60can further increase the rate at which the second fluid passes through the flow path R2for the second fluid on the longer circumference side having the longer distance between the feed port21and the discharge port22, so that the heat recovery amount can be further increased. The flow path resistance increasing structure portion23and/or the flow path resistance increasing member60preferably have a structure capable of partially reducing the cross-sectional area of the flow path for the second fluid, as shown inFIGS.3and5-8. Such a structure can allow the flow path resistance for the second fluid to be increased. The structure capable of partially reducing the cross-sectional area of the flow path for the second fluid is not limited to any particular structure, and can be a variety of structures including shapes such as those shown inFIGS.3and5-8. The flow path resistance increasing member60as shown inFIGS.6-8may be divided into a plurality of parts, and its width, thickness, and the like may be adjusted as needed. Among these structures, a bellows structure as shown inFIG.6is preferred. Since the bellows structure has a larger surface area, the heat exchange easily takes place even in the flow path R1for the second fluid on the shorter circumference side having the shorter distance between the feed port21and the discharge port22, so that the heat recovery amount can be increased. Hereinafter, the flow path member100for the heat exchanger will be described in detail for each member. <Regarding Inner Cylinder10> The inner cylinder10is a cylindrical member capable of housing a heat recovery member through which the first fluid can pass. The inner cylinder10may have any shape such as a cylindrical shape having a circular cross section perpendicular to the axial direction, a rectangular cylindrical shape having a triangular, quadrangular, pentagonal, or hexagonal cross section, and an elliptical cylindrical shape having an elliptical cross section, although not particularly limited thereto. Among them, the inner cylinder10is preferably cylindrical. An inner peripheral surface of the inner cylinder10may be in direct or indirect contact with an outer peripheral surface of the heat recovery member in the axial direction (the flow path direction D1of the first fluid). However, in terms of thermal conductivity, it is preferable that the inner peripheral surface of the inner cylinder is in direct contact with the axial outer peripheral surface of the heat recovery member. In this case, a cross-sectional shape of the inner peripheral surface of the inner cylinder10coincides with a cross-sectional shape of the outer peripheral surface of the heat recovery member. Also, it is preferable that the axial direction of the inner cylinder10coincides with that of the heat recovery member, and a central axis of the inner cylinder10coincides with that of the heat recovery member. Diameters (outer and inner diameters) of the inner cylinder10are not particularly limited. However, it is preferable that the diameters of both end portions in the axial direction are increased. Such a structure can allow the inner cylinder10to be directly joined to the outer cylinder20, thus eliminating any need for a connecting member50.
Further, when an intermediate cylinder is provided between the inner cylinder10and the outer cylinder20, the intermediate cylinder can be provided directly on the outer peripheral surfaces of both diameter-increased end portions of the inner cylinder10in the axial direction. Since the heat of the first fluid circulating through the heat recovery member is transmitted to the inner cylinder10via the heat recovery member, the inner cylinder10is preferably formed of a material having good heat conductivity. Examples of a material used for the inner cylinder10include metals, ceramics, and the like. Examples of the metals include stainless steel, titanium alloys, copper alloys, aluminum alloys, and brass. The material of the inner cylinder10is preferably stainless steel because of its higher durability and reliability. <Regarding Outer Cylinder20> The outer cylinder20is a cylindrical member disposed so as to be spaced on a radially outer side of the inner cylinder10. The outer cylinder20may have any shape such as a cylindrical shape having a circular cross section perpendicular to the axial direction, a rectangular cylindrical shape having a triangular, quadrangular, pentagonal, or hexagonal cross section, and an elliptical cylindrical shape having an elliptical cross section, although not particularly limited thereto. Among them, the outer cylinder20is preferably cylindrical. The outer cylinder20may be arranged coaxially with the inner cylinder10. More particularly, an axial direction of the outer cylinder20may coincide with that of the inner cylinder10, and a central axis of the outer cylinder20may coincide with that of the inner cylinder10. It is preferable that an axial length of the outer cylinder20is set to be longer than that of the heat recovery member housed in the inner cylinder10. In the axial direction of the outer cylinder20, a center position of the outer cylinder20preferably coincides with that of the inner cylinder10. Diameters (outer and inner diameters) of the outer cylinder20are not particularly limited. However, it is preferable that the diameters of both end portions in the axial direction are decreased. Such a structure can allow the outer cylinder20to be directly joined to the inner cylinder10, thus eliminating any need for a connecting member50. Further, when an intermediate cylinder is provided between the inner cylinder10and the outer cylinder20, the intermediate cylinder can be provided directly on the outer peripheral surfaces of both diameter-decreased end portions of the outer cylinder20in the axial direction. The outer cylinder20can preferably be made of, for example, a metal or ceramics. Examples of the metal include stainless steel, titanium alloys, copper alloys, aluminum alloys, brass and the like. Among them, the material of the outer cylinder20is preferably stainless steel because of its higher durability and reliability. The outer cylinder20has the feed port21capable of feeding the second fluid and the discharge port22capable of discharging the second fluid. The positions of the feed port21and the discharge port22are not particularly limited as long as they are provided so as to be located within a distance of less than half the circumference of the outer cylinder20in the circumferential direction. For example, as shown inFIG.2, the feed port21and the discharge port22can be provided such that the feed port21and the discharge port22are located on the same circumference of the outer cylinder20.
More preferably, the feed port21and the discharge port22can be provided such that a central portion P1of the feed port21and a central portion P2of the discharge port22are located on the same circumference of the outer cylinder20. As used herein, the phrase “a central portion P1of the feed port21and a central portion P2of the discharge port22are located on the same circumference of the outer cylinder20” means that the central portion P1of the feed port21and the central portion P2of the discharge port22are located on one circumference line L orthogonal to the axial direction of the outer cylinder20. Further, the feed port21and the discharge port22may be provided such that the feed port21and the discharge port22are located on different circumferences of the outer cylinder20.FIG.9shows a top view of the flow path member for the heat exchanger according to such an embodiment. As used herein, the phrase “the feed port21and the discharge port22are located on different circumferences of the outer cylinder20” means that the central portion P1of the feed port21and the central portion P2of the discharge port22are located on two circumference lines L1and L2, respectively, which are each orthogonal to the axial direction of the outer cylinder20. By thus providing the feed port21and the discharge port22, the flow direction D2of the second fluid is opposed to the flow direction D1of the first fluid, so that the heat recovery amount can be increased. <Regarding Feed Pipe30and Discharge Pipe40> The feed pipe30and the discharge pipe40are tubular members through which the second fluid can flow. The feed pipe30and the discharge pipe40are connected to the feed port21and the discharge port22, respectively. The connection may be made by known methods, including, but not limited to, shrink fitting, press fitting, brazing, and diffusion bonding. Each of the feed pipe30and the discharge pipe40may have any shape such as a cylindrical shape having a circular cross section perpendicular to the axial direction, a rectangular cylindrical shape having a triangular, quadrangular, pentagonal, or hexagonal cross section, and an elliptical cylindrical shape having an elliptical cross section, although not particularly limited thereto. Among them, each of the feed pipe30and the discharge pipe40is preferably cylindrical. The axial direction of each of the feed pipe30and the discharge pipe40is not particularly limited. For example, in a cross section perpendicular to the axial direction of the outer cylinder20, the feed pipe30and the discharge pipe40may be configured such that the axial direction is oriented toward a central portion P4of the outer cylinder20as shown inFIG.10, or the feed pipe30and the discharge pipe40may be configured such that the axial direction is oriented toward the flow path R2for the second fluid on the longer circumference side, as shown inFIGS.3to8. Especially, by configuring the feed pipe30and the discharge pipe40such that the axial direction of each of the feed pipe30and the discharge pipe40is oriented toward the flow path R2for the second fluid on the longer circumference side, the second fluid flows more readily through the flow path R2on the longer circumference side, so that an opportunity to bring the second fluid into contact with the inner cylinder10can be increased, and the heat recovery amount can be increased.
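The effect of the two unequal arc paths can be illustrated with a simple parallel-resistance estimate. The following sketch is purely illustrative and is not taken from the specification: it assumes laminar flow and lumped flow resistances, and all names and numbers are hypothetical.

```python
# Illustrative only: the feed port and discharge port divide the annular gap
# into two parallel arc paths, R1 (shorter circumference) and R2 (longer
# circumference). For a common pressure drop dP, a lumped laminar model gives
# a flow rate Q_i = dP / K_i for each path, so the split depends only on the
# ratio of the flow resistances K.

def flow_split(k_short: float, k_long: float) -> tuple[float, float]:
    """Fractions of the second fluid taking the shorter and longer paths."""
    q_short = 1.0 / k_short
    q_long = 1.0 / k_long
    total = q_short + q_long
    return q_short / total, q_long / total

# Without any resistance-increasing structure the shorter path is less
# resistive, so most of the second fluid short-circuits through R1:
print(flow_split(k_short=1.0, k_long=3.0))   # ~(0.75, 0.25)

# Raising the resistance of R1 (e.g. with the structure portion 23 or the
# member 60) diverts the flow toward the longer path R2:
print(flow_split(k_short=10.0, k_long=3.0))  # ~(0.23, 0.77)
```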
Further, as shown inFIG.11, in the cross section perpendicular to the axial direction of the outer cylinder20, a buffer portion31may be provided at the end portion of the feed pipe30on the feed port21side, and the buffer portion31may be formed such that the second fluid preferentially flows through the flow path R2for the second fluid on the longer circumference side. AlthoughFIG.11shows an example in which the buffer portion31is provided at the feed pipe30, the buffer portion may be provided at the end portion of the discharge pipe40on the discharge port22side. Such a configuration can provide an increased opportunity to bring the second fluid into contact with the inner cylinder10, so that the heat recovery amount can be increased. The feed pipe30and the discharge pipe40can preferably be made of, for example, a metal or ceramics. Examples of the metal include stainless steel, titanium alloys, copper alloys, aluminum alloys, brass and the like. Among them, the material of each of the feed pipe30and the discharge pipe40is preferably stainless steel because of its higher durability and reliability. The feed pipe30and the discharge pipe40may be fitted into the feed port21and the discharge port22, respectively, via a flow adjustment portion70, as shown inFIG.12. When the feed pipe30and the discharge pipe40are directly fitted into the feed port21and the discharge port22of the outer cylinder20, the second fluid may stagnate and boil around the fitted portion of the feed pipe30and the discharge pipe40, causing problems such as 1) to 3) as described below:
1) The heat exchanger becomes locally hot, causing defects of the heat exchanger itself.
2) The heat is excessively recovered.
3) Generated bubbles (vapor) degrade the characteristics of other components.
By fitting the feed pipe30and the discharge pipe40into the feed port21and the discharge port22, respectively, via the flow adjustment portion70, the stagnation of the second fluid around the fitted portion of the feed pipe30and the discharge pipe40can be suppressed. The structure of the flow adjustment portion70is not particularly limited as long as it can adjust the flow of the second fluid, but it is preferable that the flow adjustment portion has a structure provided at a part of the outer cylinder20in the outer circumferential direction and expanding outward in the radial direction of the outer cylinder20. Such a structure can allow the stagnation of the second fluid around the fitted portion of the feed pipe30and the discharge pipe40to be stably suppressed. It is preferable that the flow adjustment portion70has at least one planar region, and the planar region is provided with the fitted portion of the feed pipe30and the discharge pipe40. Such a structure can provide easy joining of the feed pipe30and the discharge pipe40to the flow adjustment portion70. <Regarding Connecting Member50> The connecting member50is a tubular member that connects an upstream side of the inner cylinder10to an upstream side of the outer cylinder20, and a downstream side of the inner cylinder10to a downstream side of the outer cylinder20, as needed. As described above, it should be noted that it is not necessary to provide the connecting member50as long as the inner cylinder10and the outer cylinder20are directly connected to each other by increasing the diameters of the inner cylinder10on the upstream side and the downstream side, and/or decreasing the diameters of the outer cylinder20on the upstream side and the downstream side.
The connecting member50is preferably arranged coaxially with the inner cylinder10and the outer cylinder20. More particularly, the axial direction of the connecting member50may preferably coincide with that of each of the inner cylinder10and the outer cylinder20, and the central axis of the connecting member50may preferably coincide with that of each of the inner cylinder10and the outer cylinder20. The connecting member50has a flange portion for connecting the inner cylinder10to the outer cylinder20. The flange portion may have various known shapes, although not particularly limited. The material used for the connecting member50is not particularly limited, and the same materials as those illustrated for the inner cylinder10and the outer cylinder20may be used. <Regarding Intermediate Cylinder> The intermediate cylinder can optionally be provided between the inner cylinder10and the outer cylinder20. The intermediate cylinder may have any shape such as a cylindrical shape having a circular cross section perpendicular to the axial direction, a rectangular cylindrical shape having a triangular, quadrangular, pentagonal, or hexagonal cross section, and an elliptical cylindrical shape having an elliptical cross section, although not particularly limited thereto. Among them, the intermediate cylinder is preferably cylindrical. It is preferable that an axial direction of the intermediate cylinder coincides with that of each of the inner cylinder10and the outer cylinder20, and a center axis of the intermediate cylinder coincides with that of each of the inner cylinder10and the outer cylinder20. It is preferable that an axial length of the intermediate cylinder is longer than that of the heat recovery member housed in the inner cylinder10. In the axial direction of the intermediate cylinder, the central position of the intermediate cylinder preferably coincides with that of the outer cylinder20. The intermediate cylinder is arranged between the inner cylinder10and the outer cylinder20, and forms a first flow path which can allow the second fluid to flow between the outer cylinder20and the intermediate cylinder, and a second flow path which can allow the second fluid to flow between the inner cylinder10and the intermediate cylinder. The intermediate cylinder has at least one communication hole which can allow the second fluid to flow between the first flow path and the second flow path. Such a structure can allow the second fluid to be circulated in the second flow path. The shape of the communication hole is not particularly limited as long as it allows the second fluid to flow, and it can be, for example, various shapes such as a circular shape, an elliptical shape, and a polygonal shape. Further, a slit may be provided as the communication hole along the axial direction or the circumferential direction of the inner cylinder. The number of communication holes is not particularly limited, and there may be a plurality of communication holes in the axial direction of the inner cylinder. In general, the number of communication holes may be appropriately set depending on the shape of the communication hole. When the second flow path is filled with the liquid second fluid, the heat of the first fluid transmitted from the heat recovery member to the inner cylinder10is transmitted to the second fluid in the first flow path via the second fluid in the second flow path.
On the other hand, when a temperature of the inner cylinder10is higher and vapor (bubbles) of the second fluid is generated in the second flow path, the thermal conduction to the second fluid in the first flow path via the second fluid in the second flow path is suppressed. This is because thermal conductivity of a gaseous fluid is lower than that of a liquid fluid. That is, a state where heat exchange is promoted and a state where heat exchange is suppressed can be switched depending on whether or not the second fluid in the gaseous state is generated in the second flow path. The states of heat exchange do not require any external control. Therefore, providing the intermediate cylinder can allow for easy switching between promotion and suppression of heat exchange between the first fluid and the second fluid without external control. It should be noted that the second fluid may be a fluid having a boiling point in a temperature range in which heat exchange is to be suppressed. In another embodiment, the flow path member100for the heat exchanger may have the following configuration: A flow path member100for a heat exchanger, including: an inner cylinder10capable of housing a heat recovery member through which a first fluid can flow; an outer cylinder20having a feed port21capable of feeding a second fluid and a discharge port22capable of discharging the second fluid, the outer cylinder20being disposed so as to be spaced on a radially outer side of the inner cylinder10such that a flow path R1, R2for the second fluid is formed between the outer cylinder20and the inner cylinder10; a feed pipe30connected to the feed port21; and a discharge pipe40connected to the discharge port22, wherein the feed port21and the discharge port22are provided so as to be located within a distance of less than half the circumference of the outer cylinder20in a circumferential direction, wherein the feed port21and the discharge port22are located on the same circumference of the outer cylinder20, and wherein the flow path member100includes at least one of a flow path resistance increasing structure portion23provided at the flow path R1for the second fluid on a shorter circumference side between the feed port21and the discharge port22, and a flow path resistance increasing member60provided at the flow path R1for the second fluid on the shorter circumference side between the feed port21and the discharge port22. The flow path member100for the heat exchanger having such a configuration can also improve the heat recovery amount. The flow path member100for the heat exchanger according to Embodiment 1 of the present invention having the above structure can be produced according to a known method. More particularly, the flow path member for the heat exchanger according to Embodiment 1 of the present invention can be produced as follows: First, the inner cylinder10is prepared. When the flow path resistance increasing structure portion23is provided on the outer peripheral surface of the inner cylinder10, the flow path resistance increasing structure portion23is formed by a forming process or the like. When the flow path resistance increasing member60is arranged on the outer peripheral surface of the inner cylinder10, the flow path resistance increasing member60is placed on the outer peripheral surface of the inner cylinder10and fixed by welding or the like. Examples of the forming process include pressing and embossing. Similarly, the outer cylinder20provided with the feed pipe30and the discharge pipe40is prepared.
When the flow path resistance increasing structure portion23is provided on the inner peripheral surface of the outer cylinder20, the flow path resistance increasing structure portion23is formed by a forming process or the like. When the flow path resistance increasing member60is arranged on the inner peripheral surface of the outer cylinder20, the flow path resistance increasing member60is arranged on the inner peripheral surface of the outer cylinder20and fixed by welding or the like. Subsequently, the inner cylinder10as described above is arranged in the outer cylinder20as described above and fixed by welding or the like. It should be noted that the above production method is merely illustrative, and the order of steps can be changed as needed. Since the flow path member100for the heat exchanger according to Embodiment 1 of the present invention has the structure as described above, the heat recovery amount can be improved. (2) Heat Exchanger The heat exchanger according to Embodiment 1 of the present invention includes the flow path member100for the heat exchanger as described above and a heat recovery member housed in the inner cylinder10. The heat recovery member is not particularly limited as long as it can recover heat. For example, a honeycomb structure can be used as the heat recovery member. The honeycomb structure is generally a pillar shaped structure. A cross-sectional shape orthogonal to an axial direction of the honeycomb structure is not particularly limited, and it may be a circle, an ellipse, a quadrangle, or other polygons. The honeycomb structure has an outer peripheral wall, and a partition wall which is arranged inside the outer peripheral wall and defines a plurality of cells forming flow paths each extending from a first end face to a second end face. The partition wall and the outer peripheral wall contain ceramics as main components. The first end face and the second end face are end faces on both sides of the honeycomb structure in the axial direction (a cell extending direction). Each cell may have any cross-sectional shape (a shape of a cross section perpendicular to the cell extending direction), including, but not particularly limited to, circular, elliptical, triangular, quadrangular, hexagonal and other polygonal shapes. Also, the cells may be radially formed in a cross section in a direction perpendicular to the cell extending direction. Such a structure can allow heat of the first fluid flowing through the cells to be efficiently transmitted to the outside of the honeycomb structure. The outer peripheral wall preferably has a thickness larger than that of the partition wall. Such a structure can lead to increased strength of the outer peripheral wall which would otherwise tend to generate breakage (e.g., cracking, chipping, and the like) by thermal stress or the like due to a difference between temperatures of the first fluid and the second fluid. A thickness of the partition wall is not particularly limited, and it may be adjusted as needed depending on applications. For example, the thickness of the partition wall may preferably be from 0.1 to 1 mm, and more preferably from 0.2 to 0.6 mm. The thickness of the partition wall of 0.1 mm or more can ensure a sufficient mechanical strength of the honeycomb structure. Further, the thickness of the partition wall of 1 mm or less can suppress problems that the pressure loss is increased due to a decrease in an opening area and the heat recovery efficiency is decreased due to a decrease in a contact area with the first fluid.
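The trade-off behind the 0.1 to 1 mm partition wall range can be made concrete with a rough open-area estimate. The square-cell geometry and the 2 mm pitch below are our own illustrative assumptions, not values from the specification.

```python
# Rough sketch under a square-cell assumption: for cell pitch p and partition
# wall thickness t, the open frontal area fraction is about ((p - t) / p)**2.
# Thicker walls add mechanical strength but shrink the opening, which raises
# pressure loss and reduces the contact area with the first fluid.

def open_area_fraction(pitch_mm: float, wall_mm: float) -> float:
    if not 0.1 <= wall_mm <= 1.0:
        raise ValueError("preferred partition wall range is 0.1 to 1 mm")
    return ((pitch_mm - wall_mm) / pitch_mm) ** 2

for t in (0.1, 0.2, 0.6, 1.0):   # range bounds and the preferred sub-range
    print(f"wall {t} mm -> open area fraction {open_area_fraction(2.0, t):.2f}")
# With a hypothetical 2 mm pitch, the open area falls from about 0.90 at
# t = 0.1 mm to 0.25 at t = 1.0 mm, which is the trade-off described above.
```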
The honeycomb structure can be produced as follows: First, a green body containing ceramic powder is extruded into a desired shape to prepare a honeycomb formed body. The material of the honeycomb formed body is not particularly limited, and a known material can be used. For example, when producing a honeycomb formed body containing a Si-impregnated SiC composite as a main component, a binder and water or an organic solvent are added to a predetermined amount of SiC powder, and the resulting mixture is kneaded to form a green body, which can then be formed into a honeycomb formed body having a desired shape. The resulting honeycomb formed body can then be dried, and the dried honeycomb formed body can be impregnated with metallic Si and fired in an inert gas under reduced pressure or in vacuum to obtain a honeycomb structure having cells serving as flow paths for the first fluid, defined by the partition wall. When the honeycomb structure is housed in the inner cylinder10, the honeycomb structure may be inserted into the inner cylinder10, arranged at a certain position, and then shrink-fitted. In this case, press fitting, brazing, diffusion bonding, or the like may be used in place of the shrink fitting. Since the heat exchanger according to Embodiment 1 of the present invention uses the flow path member100for the heat exchanger, the heat recovery amount can be improved. Embodiment 2 FIG.13is a cross-sectional view of a flow path member for a heat exchanger according to Embodiment 2 of the present invention in a direction orthogonal to an axial direction of an outer cylinder and an inner cylinder. It should be noted that, in the descriptions of a flow path member200for a heat exchanger according to Embodiment 2 of the present invention, the components having the same reference numerals as those appearing in the descriptions of the flow path member100for the heat exchanger according to Embodiment 1 of the present invention are the same as the corresponding components of Embodiment 1. Therefore, detailed descriptions of those components will be omitted. The flow path member200for the heat exchanger according to Embodiment 2 of the present invention is different from the flow path member100for the heat exchanger according to Embodiment 1 in the method of providing the higher flow path resistance for the second fluid on the shorter circumference side between the feed port21and the discharge port22than the flow path resistance for the second fluid on the longer circumference side between the feed port21and the discharge port22, and is otherwise the same as the flow path member100for the heat exchanger according to Embodiment 1. That is, in the flow path member200for the heat exchanger according to Embodiment 2 of the present invention, the inner cylinder10is eccentric such that the central portion P3of the inner cylinder10is located on the feed port21and discharge port22side relative to the central portion P4of the outer cylinder20in the cross section perpendicular to the flow direction D1of the first fluid.
Such an eccentric inner cylinder10can increase the flow path resistance for the second fluid on the shorter circumference side where the distance between the feed port21and the discharge port22is shorter, so that the rate of the second fluid passing through the flow path R2on the longer circumference side where the distance between the feed port21and the discharge port22is longer can be increased, thereby increasing the heat recovery amount. The flow path member200for the heat exchanger according to Embodiment 2 of the present invention can be produced by arranging the inner cylinder10inside the outer cylinder20such that the inner cylinder10is eccentric, and fixing them by welding or the like. The flow path member200for the heat exchanger according to Embodiment 2 of the present invention has higher productivity and lower production cost than those of the flow path member100for the heat exchanger according to Embodiment 1 of the present invention, because in the former, there is no need to provide the flow path resistance increasing structure portion23at the flow path R1for the second fluid on the shorter circumference side between the feed port21and the discharge port22, or to provide the flow path resistance increasing member60at the flow path R1for the second fluid on the shorter circumference side between the feed port21and the discharge port22. However, from the viewpoint of a fine adjustment of the rate of the second fluid passing through the flow path R1, R2for the second fluid, the flow path resistance increasing structure portion23may be provided at the flow path R1for the second fluid on the shorter circumference side between the feed port21and the discharge port22, or the flow path resistance increasing member60may be provided at the flow path R1for the second fluid on the shorter circumference side between the feed port21and the discharge port22. In another embodiment, the flow path member200for the heat exchanger according to Embodiment 2 of the present invention may have the following configuration: A flow path member200for a heat exchanger, including: an inner cylinder10capable of housing a heat recovery member through which a first fluid can flow; an outer cylinder20having a feed port21capable of feeding a second fluid and a discharge port22capable of discharging the second fluid, the outer cylinder20being disposed so as to be spaced on a radially outer side of the inner cylinder10such that a flow path R1, R2for the second fluid is formed between the outer cylinder20and the inner cylinder10; a feed pipe30connected to the feed port21; and a discharge pipe40connected to the discharge port22, wherein the feed port21and the discharge port22are provided so as to be located within a distance of less than half the circumference of the outer cylinder20in a circumferential direction, wherein the feed port21and the discharge port22are located on the same circumference of the outer cylinder20, and wherein, in a cross section orthogonal to a flow direction D1of the first fluid, the inner cylinder10is eccentric such that a central portion P3of the inner cylinder10is located on the feed port21and discharge port22side relative to a central portion P4of the outer cylinder20. The flow path member200for the heat exchanger having such a configuration can also improve the heat recovery amount. The heat exchanger according to Embodiment 2 of the present invention includes the flow path member200for the heat exchanger and the heat recovery member housed in the inner cylinder10. Since the heat exchanger uses the flow path member200for the heat exchanger as described above, the heat recovery amount can be improved.
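The geometric intuition behind Embodiment 2 can be sketched with a thin-gap model of an eccentric annulus. This model and its numbers are our own illustration, not part of the specification.

```python
import math

# Thin-gap approximation (illustrative only): with the inner cylinder offset
# by eccentricity e toward the feed/discharge ports, the local radial
# clearance of the annulus is roughly g(phi) = c - e * cos(phi), where
# c = R_outer - r_inner and phi = 0 on the port side. In a laminar
# parallel-plate analogy the local conductance scales with g**3, so the
# narrowed gap on the shorter-circumference side raises its flow resistance.

def local_gap_mm(c_mm: float, e_mm: float, phi_rad: float) -> float:
    return c_mm - e_mm * math.cos(phi_rad)

c, e = 5.0, 2.0  # hypothetical mean clearance and eccentricity, in mm
for deg in (0, 90, 180):
    g = local_gap_mm(c, e, math.radians(deg))
    print(f"phi = {deg:3d} deg: gap = {g:.1f} mm, conductance ~ g^3 = {g**3:.0f}")
# phi = 0 (port side, shorter path R1): 3.0 mm gap, conductance ~ 27
# phi = 180 (far side, longer path R2): 7.0 mm gap, conductance ~ 343
```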
DESCRIPTION OF REFERENCE NUMERALS

10: inner cylinder
20: outer cylinder
21: feed port
22: discharge port
23: flow path resistance increasing structure portion
30: feed pipe
31: buffer portion
40: discharge pipe
50: connecting member
60: flow path resistance increasing member
70: flow adjustment portion
100, 200: flow path member for heat exchanger
R1, R2: flow path for second fluid
D1: flow direction of first fluid
D2: flow direction of second fluid
11859917

REFERENCE NUMERALS IN THE DRAWINGS

10: Heat exchanger
20: Vapor collection pipe
21: Rising pipe
30: Liquid collection pipe
31: Dropping pipe
40: Exchange pipeline
41: Condensing section
42: Evaporation section
43: Transition section
44: Vapor-liquid separation pipe
45: Fin
46: Liquid return pipe
47: Heat insulation board
50: Separation board
51: First separation board
52: Second separation board
100: Cabinet
441: Main pipe body
442: Porous separation board
443: Vapor aggregation cavity
444: Liquid aggregation cavity
461: Liquid return path

DESCRIPTION OF EMBODIMENTS In the description of this application, it should be understood that directions or position relationships indicated by terms "length", "width", "up", "down", "top", "bottom", and the like are based on directions or position relationships shown in the accompanying drawings, which are used only for describing this application and for simplicity of description, but do not indicate or imply that an indicated apparatus or component must have a specific direction or must be constructed and operated in a specific direction. Therefore, this should not be understood as a limitation on this application. In addition, the terms "first" and "second" are merely intended for a purpose of description, and shall not be understood as an indication or implication of relative importance or implicit indication of the number of indicated technical features. Therefore, a feature limited by "first" or "second" may explicitly or implicitly include one or more features. In the descriptions of this application, "a plurality of" means two or more than two, unless otherwise specifically limited. In this application, terms "installation", "connect", "connection", "fix", and the like should be understood in a broad sense unless otherwise expressly stipulated and limited. For example, "connection" may be fixed connection, dismountable connection, or integrated connection; may be mechanical connection or electrical connection; or may be direct connection, indirect connection through an intermediate medium, or connection inside two components or a mutual relationship between two components. A person of ordinary skill in the art may interpret specific meanings of the foregoing terms in this application according to specific cases. For ease of understanding, this application first briefly explains a principle of a thermosyphon (TS). As shown inFIG.1, the thermosyphon has an evaporator1and a condenser2. A phase change working substance in the evaporator1absorbs heat from a hot environment, so that a liquid phase change working substance is evaporated to a vapor phase change working substance. The vapor phase change working substance rises to the condenser2under a pressure difference caused by the heat, and is converted to the liquid phase change working substance after heat release. The liquid phase change working substance returns to the evaporator1due to gravity. In an existing heat exchange device, the evaporator1and the condenser2need to be separately disposed. Because the evaporator1and the condenser2are independent components, an additional connection pipeline3needs to be disposed to connect the evaporator1and the condenser2. In this case, the heat exchange device has a large volume, the manufacturing costs are high, and manufacturing efficiency is not high.
Therefore, an embodiment of this application provides a heat exchanger10, a cabinet100, and a communications base station, so that the manufacturing costs of the heat exchanger10can be preferably reduced and manufacturing efficiency of the heat exchanger10can be improved. As shown inFIG.2toFIG.4, this embodiment of this application provides the heat exchanger10, the cabinet100, and the communications base station. The communications base station indicates an interface device by using which a mobile device accesses the Internet. The cabinet may be an outdoor cabinet that is in the communications base station and that is configured to accommodate a module of a related device. When a device in the cabinet100runs, heat is greatly dissipated. Therefore, heat accumulates in the cabinet100to form the hot environment. An air environment outside the cabinet100is a cold environment relative to the hot environment. In this embodiment, a specific temperature range is not limited for the hot environment and the cold environment. In this embodiment, the heat exchanger10mainly includes the following components: a vapor collection pipe20, a liquid collection pipe30, and an exchange pipeline40. The vapor collection pipe20is disposed in the cold environment. The liquid collection pipe30is disposed in the hot environment. Specifically, the exchange pipeline40includes a condensing section41, an evaporation section42, and a transition section43. An upper end of the condensing section41is connected to the vapor collection pipe20. A lower end of the condensing section41is connected to a first end of the transition section43. An upper end of the evaporation section42is connected to a second end of the transition section43relative to the first end. A lower end of the evaporation section42is connected to the liquid collection pipe30. With reference toFIG.10, the evaporation section42and the condensing section41respectively extend in directions opposite to each other. An included angle between an axis of the evaporation section42and an axis of the transition section43meets the following relationship: 60°≤θ1<180°. An included angle between an axis of the condensing section41and the axis of the transition section43meets the following relationship: 60°≤θ2<180°. Herein, θ1represents the included angle between the axis of the evaporation section42and the axis of the transition section43, and θ2represents the included angle between the axis of the condensing section41and the axis of the transition section43. An included angle thus exists between the transition section43and each of the evaporation section42and the condensing section41. In this way, the evaporation section42, the transition section43, and the condensing section41can form a pipeline like a "Z"-shaped pipeline or an "S"-shaped pipeline.
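The two angle conditions can be codified in a small validation helper. The function below is only an illustration of the stated ranges; its name and the degree units are our choices.

```python
# Checks the stated ranges 60° <= θ1 < 180° and 60° <= θ2 < 180° for the two
# bends of the "Z"- or "S"-shaped exchange pipeline.

def bend_angles_valid(theta1_deg: float, theta2_deg: float) -> bool:
    """theta1: evaporation section vs. transition section;
    theta2: condensing section vs. transition section."""
    return all(60.0 <= t < 180.0 for t in (theta1_deg, theta2_deg))

print(bend_angles_valid(135.0, 135.0))  # True: a gentle "Z" shape
print(bend_angles_valid(90.0, 60.0))    # True: compact near-right-angle bends
print(bend_angles_valid(180.0, 90.0))   # False: 180° would be a straight pipe
print(bend_angles_valid(45.0, 120.0))   # False: below the 60° lower bound
```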
When the heat exchanger10is applied to the cabinet100for heat dissipation, the liquid collection pipe30and the evaporation section42in the heat exchanger10are located in the hot environment in the cabinet body in the cabinet100, so that a phase change working substance flowing through the liquid collection pipe30and the evaporation section42rapidly absorbs heat. The vapor collection pipe20and the condensing section41in the heat exchanger10are located in the cold environment outside the cabinet body in the cabinet100, so that the phase change working substance flowing through the vapor collection pipe20and the condensing section41rapidly releases heat. The transition section43is located at a junction part between the hot environment and the cold environment. With reference toFIG.2, the heat exchanger10provided in this embodiment of this application is further described in the following. When the heat exchanger10works, the liquid collection pipe30is filled with the phase change working substance. The phase change working substance absorbs heat in the hot environment, and undergoes a transform process from a liquid state to a vapor state. Then, a phase change working substance in a vapor-liquid mixed state absorbs sufficient heat when passing through the evaporation section42, and then arrives at the transition section43after an accelerated liquid-vapor transform process. Because the transition section43is in the junction part between the hot environment and the cold environment, vapor-liquid separation can be relatively fully implemented for the phase change working substance flowing through the transition section43, and then the vapor phase change working substance can flow to the vapor collection pipe20along the condensing section41. In a process in which the vapor phase change working substance flows to the vapor collection pipe20through the condensing section41, the vapor phase change working substance in the condensing section41and the vapor collection pipe20can fully release heat to the cold environment, be transformed to the liquid phase change working substance after the heat release process, and then return to the liquid collection pipe30. The evaporation section42, the transition section43, and the condensing section41form the foregoing pipeline like the "Z"-shaped pipeline or the "S"-shaped pipeline. In this way, volume optimization of the heat exchanger10is also implemented, and assembling space required by the heat exchanger10is reduced. Moreover, the vapor-liquid separation process of the phase change working substance can be implemented using only the evaporation section, the transition section, and the condensing section. Therefore, heat exchange efficiency of the heat exchanger10is greatly improved, and structure simplification of the heat exchanger10is also implemented, to reduce the manufacturing costs of the heat exchanger10, reduce components in the heat exchanger10, reduce assembling difficulty and time consumption of the heat exchanger10, and improve manufacturing efficiency of the heat exchanger10.
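As a rough illustration of the cycle just described, the heat moved out of the cabinet scales with the circulating mass flow and the latent heat of the working substance. The formula and the placeholder property values below are our own back-of-envelope sketch, not data from the application.

```python
# Back-of-envelope: in steady state the phase change cycle transports roughly
#   Q = m_dot * h_fg
# watts from the hot environment to the cold environment, where m_dot is the
# circulating mass flow (kg/s) and h_fg the latent heat of vaporization (J/kg).

def transported_heat_w(m_dot_kg_s: float, h_fg_j_kg: float) -> float:
    return m_dot_kg_s * h_fg_j_kg

# Hypothetical refrigerant-like working substance with h_fg ~ 200 kJ/kg:
print(transported_heat_w(0.005, 200e3))  # -> 1000.0 W carried out of the cabinet
```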
The cabinet100provided in this embodiment of this application includes the cabinet body and the foregoing heat exchanger10. By using a combination of the vapor collection pipe20, the liquid collection pipe30, and the exchange pipeline40in the foregoing heat exchanger10, heat exchange efficiency of the heat exchanger10is greatly improved, and structure simplification and volume optimization of the heat exchanger10are also implemented, to reduce the manufacturing costs of the heat exchanger10. In this way, heat dissipation performance of a related device in the cabinet100is improved, the overall manufacturing costs of the cabinet100are reduced, and overall manufacturing efficiency of the cabinet100is improved. The communications base station provided in this embodiment of this application includes the foregoing cabinet100. By using the foregoing cabinet100, heat dissipation performance of a device in the cabinet100is improved by using the heat exchanger10disposed in the foregoing cabinet100. In this way, overall heat dissipation performance of the communications base station is improved, which helps improve overall performance of the communications base station. In another embodiment of this application, the evaporation section42, the transition section43, and the condensing section41are formed through bending a pipe. Specifically, the evaporation section42, the transition section43, and the condensing section41are formed through bending the pipe. In this way, a manufacturing process of the evaporation section42, the transition section43, and the condensing section41is simplified, so that the evaporation section42, the transition section43, and the condensing section41can be integrally manufactured, to reduce the manufacturing costs of the evaporation section42, the transition section43, and the condensing section41. In this way, manufacturing efficiency of the evaporation section42, the transition section43, and the condensing section41is improved, and an overall manufacturing process of the heat exchanger10is simplified. In another aspect, the evaporation section, the transition section, and the condensing section are formed through bending a pipe, to improve overall strength of the evaporation section, the transition section, and the condensing section and sealing performance of a connection part between the evaporation section, the transition section, and the condensing section. This greatly reduces a probability that the phase change working substance leaks from the connection part between the evaporation section, the transition section, and the condensing section. In another embodiment of this application, as shown inFIG.4, there are a plurality of exchange pipelines40. An upper end of each condensing section41in the plurality of exchange pipelines40is connected to the vapor collection pipe20. A lower end of one condensing section41in the plurality of exchange pipelines40is connected to a first end of one transition section43in the plurality of exchange pipelines40. An upper end of one evaporation section42in the plurality of exchange pipelines40is connected to a second end of the transition section43relative to the first end in the plurality of exchange pipelines40. A lower end of each evaporation section42in the plurality of exchange pipelines40is connected to the liquid collection pipe30. Specifically, to improve heat exchange efficiency of the heat exchanger10, there may be a plurality of exchange pipelines40. In this way, each condensing section41may be connected to the vapor collection pipe20in a radial direction of the vapor collection pipe20.
Each evaporation section42may be connected to the liquid collection pipe30in a radial direction of the liquid collection pipe30. Therefore, assembling space of the heat exchanger10is fully used, to dispose as many condensing sections41and evaporation sections42as possible, thereby implementing higher heat exchange efficiency of the heat exchanger10. Optionally, the evaporation sections42are disposed at intervals, so that each evaporation section42can fully contact the hot environment. In this way, the phase change working substance flowing in each evaporation section42can fully absorb heat from the hot environment. Likewise, the condensing sections41may also be disposed at intervals, so that each condensing section41can fully contact the cold environment. In this way, the phase change working substance flowing in each condensing section41can fully release heat to the cold environment. In another embodiment of this application, as shown inFIG.5andFIG.6, the exchange pipeline40further includes a vapor-liquid separation pipe44. The vapor-liquid separation pipe44is connected to each transition section43. The vapor-liquid separation pipe44is configured to separate the phase change working substance in each transition section43into the vapor phase change working substance and the liquid phase change working substance. Specifically, the vapor-liquid separation pipe44is disposed. In this way, when the vapor-liquid mixed phase change working substance passes through the transition section43, separation can be better implemented for the liquid phase change working substance and the vapor phase change working substance in the vapor-liquid separation pipe44, to better improve heat exchange efficiency of the heat exchanger10. In another embodiment of this application, as shown inFIG.7, the vapor-liquid separation pipe44includes a main pipe body441and a porous separation board442. The main pipe body441is connected to the transition section43and is connected to the liquid collection pipe30. The porous separation board442is disposed in the main pipe body441along an axis of the main pipe body441. The porous separation board442separates space in the main pipe body441into a vapor aggregation cavity443and a liquid aggregation cavity444. The vapor aggregation cavity443is connected to the vapor collection pipe20. The liquid aggregation cavity444is connected to the liquid collection pipe30. Specifically, the vapor-liquid separation pipe44includes the main pipe body441and the porous separation board442. In this way, when a part of the vapor-liquid mixed phase change working substance enters the main pipe body441, the vapor phase change working substance in the vapor-liquid mixed phase change working substance can pass through a hole in the porous separation board442, enter the vapor aggregation cavity443, and then enter the vapor collection pipe20through the vapor aggregation cavity443. The liquid phase change working substance remaining in the liquid aggregation cavity444may return to the liquid collection pipe30through the evaporation section42. In this way, flowing efficiency of the vapor phase change working substance and the liquid phase change working substance is improved. In addition, vapor-liquid separation efficiency of the phase change working substance is improved, to improve overall heat exchange efficiency of the heat exchanger10.
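The role of the porous separation board can be summarized as an ideal vapor-liquid mass balance. The sketch below is our own illustration of that balance; the notion of an ideal split is an assumption, not a claim of the application.

```python
# Ideal-separation mass balance (illustrative): a two-phase mixture entering
# the main pipe body 441 with mass flow m_dot and vapor quality x (vapor mass
# fraction) is split so the vapor passes the porous board into the vapor
# aggregation cavity 443 while the liquid stays in the liquid aggregation
# cavity 444.

def separate(m_dot_kg_s: float, quality: float) -> tuple[float, float]:
    """Return (vapor flow toward cavity 443, liquid flow toward cavity 444)."""
    if not 0.0 <= quality <= 1.0:
        raise ValueError("vapor quality must lie in [0, 1]")
    vapor = quality * m_dot_kg_s   # rises toward the vapor collection pipe 20
    liquid = m_dot_kg_s - vapor    # drains back toward the liquid collection pipe 30
    return vapor, liquid

print(separate(0.010, 0.4))  # -> (0.004, 0.006) kg/s under ideal separation
```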
In another embodiment of this application, as shown inFIG.7, the heat exchanger10further includes a rising pipe21of the vapor collection pipe20. The vapor collection pipe20is connected to the condensing section41. One end of the rising pipe21is connected to the vapor aggregation cavity443. The other end of the rising pipe21is connected to the vapor collection pipe20. Specifically, two ends of the vapor collection pipe20are sealed. When there are a plurality of condensing sections41, each condensing section41is arranged in a length direction of the vapor collection pipe20and is connected to the vapor collection pipe20. In this way, assembling space in the length direction of the vapor collection pipe20can be fully used to dispose a sufficient quantity of condensing sections41, to improve heat exchange efficiency of the heat exchanger10. Because of the existence of the rising pipe21, the vapor phase change working substance in the vapor aggregation cavity443can enter the vapor collection pipe20through the rising pipe21. The rising pipe21may be exposed in the cold environment. In this way, when entering the rising pipe21, the vapor phase change working substance can release heat to the cold environment, to improve overall heat exchange efficiency of the heat exchanger10. In another embodiment of this application, the vapor collection pipe20, the vapor-liquid separation pipe44, the evaporation section42, the condensing section41, and the transition section43are integrally welded. The vapor collection pipe20, the vapor-liquid separation pipe44, the evaporation section42, the condensing section41, and the transition section43are integrally welded, to improve processing and manufacturing efficiency of the heat exchanger10. For example, the integrated welding manner of the vapor collection pipe20, the vapor-liquid separation pipe44, the evaporation section42, the condensing section41, and the transition section43may be gas-shielded welding or aluminum brazing. The gas-shielded welding or the aluminum brazing can be used to reduce the welding costs and improve the microstructural stability of a welded part between the foregoing pipes, so that the integrally welded heat exchanger10has relatively good overall strength. In another embodiment of this application, the evaporation section42, the condensing section41, and the transition section43may be integrally formed. The integrated forming manner may be casting forming or extrusion forming. The evaporation section42, the condensing section41, and the transition section43may first be cast or extruded to form an entire pipeline. Then, the entire pipeline is bent to form the evaporation section42, the condensing section41, and the transition section43. In addition, the evaporation section42, the condensing section41, and the transition section43may be first bent, and then be integrally welded with the vapor collection pipe20and the vapor-liquid separation pipe44. Alternatively, the entire pipeline formed through casting or extrusion is integrally welded with the vapor collection pipe20and the vapor-liquid separation pipe44, and then the entire pipeline is bent to form the evaporation section42, the condensing section41, and the transition section43. In another embodiment of this application, as shown inFIG.7, the heat exchanger10further includes a dropping pipe31. One end of the dropping pipe31is connected to the liquid aggregation cavity444. The other end of the dropping pipe31is connected to the liquid collection pipe30. Specifically, like the manner of forming the vapor collection pipe20, two ends of the liquid collection pipe30are also sealed.
When there are a plurality of evaporation sections42, each evaporation section42is arranged in a length direction of the liquid collection pipe30and is connected to the liquid collection pipe30. In this way, assembling space in the length direction of the liquid collection pipe30can be fully used, to improve heat exchange efficiency of the heat exchanger10. In another embodiment of this application, the liquid collection pipe30, the vapor-liquid separation pipe44, the evaporation section42, the condensing section41, and the transition section43are integrally welded. Specifically, the liquid collection pipe30, the vapor-liquid separation pipe44, the evaporation section42, the condensing section41, and the transition section43are integrally welded, to improve processing and manufacturing efficiency of the heat exchanger10. Optionally, the vapor collection pipe20, the liquid collection pipe30, the vapor-liquid separation pipe44, the evaporation section42, the condensing section41, the transition section43, the rising pipe21, and the dropping pipe31may all be integrally welded, to better reduce the manufacturing costs of the heat exchanger10and improve manufacturing efficiency of the heat exchanger10. In another embodiment of this application, as shown inFIG.8andFIG.9, the heat exchanger10further includes a separation board50. The separation board50is disposed in the junction part between the hot environment and the cold environment, and is configured to separate the hot environment from the cold environment. The transition section43penetrates the separation board50. The evaporation section42and the liquid collection pipe30are located on one side of the separation board50(that is, the side that is of the separation board50and that faces the hot environment). The condensing section41and the vapor collection pipe20are both located on the other side of the separation board50(that is, the side that is of the separation board50and that faces the cold environment). Specifically, the separation board50can separate the hot environment from the cold environment, to avoid mutual impacts between the hot environment and the cold environment. In this way, a stable hot environment can be established for the evaporation section42and the liquid collection pipe30, so that the phase change working substance flowing in the evaporation section42and the liquid collection pipe30has relatively high efficiency of absorbing heat from the hot environment. In addition, a stable cold environment is also established for the condensing section41and the vapor collection pipe20, so that the phase change working substance flowing in the condensing section41and the vapor collection pipe20has relatively high efficiency of releasing heat to the cold environment. For example, in consideration of structure integrity, the separation board50may be connected to the vapor-liquid separation pipe44. In consideration of the manufacturing costs, the separation board50may also cooperate with the transition section43. When the heat exchanger10is applied to the cabinet100, the separation board50may be disposed at an opening that is provided on an outer wall of the cabinet100for protrusion of the transition section43, to seal an internal environment of the cabinet100. In this way, heat in the cabinet100does not escape from the opening and affect heat release of the phase change working substance in the condensing section41.
In another embodiment of this application, as shown inFIG.9, the separation board50includes a first separation board51and a second separation board52. The first separation board51and the second separation board52are both disposed along an axis of the vapor-liquid separation pipe44. An edge of the first separation board51and an edge of the second separation board52are separately connected to outer walls of two opposite sides of the vapor-liquid separation pipe44in a sealing manner. Specifically, when the separation board50is connected to the vapor-liquid separation pipe44, the separation board50may include the first separation board51and the second separation board52. The edge of the first separation board51and the edge of the second separation board52are separately connected to the outer walls of the two opposite sides of the vapor-liquid separation pipe44in a sealing manner. In this way, cooperation between the separation board50and the vapor-liquid separation pipe44is implemented. In addition, sealing is also implemented at a connection part between the separation board50and the vapor-liquid separation pipe44, to prevent hot air in the cabinet100from escaping to the outside from the connection part between the separation board50and the vapor-liquid separation pipe44. In another embodiment of this application, when space in a width direction in the cabinet body of the cabinet100is sufficient, the included angle between the axis of the evaporation section42and the axis of the transition section43in the heat exchanger10may be limited to 120°<θ1<180°. In this way, when the space in the width direction in the cabinet body is fully used, the evaporation section42is long enough to enable the phase change working substance in the evaporation section42to fully absorb heat. When the width space in the cabinet body of the cabinet100is limited and space in a height direction is insufficient, the included angle between the axis of the evaporation section42and the axis of the transition section43may be limited to 60°≤θ1≤90°. In this way, when the space in the width direction and the space in the height direction in the cabinet body are fully used, the length of the evaporation section42can also be long enough. In another embodiment of this application, when space outside the cabinet100in the cold environment is sufficient, the included angle between the axis of the condensing section41and the axis of the transition section43in the heat exchanger10may be limited to 120°<θ2<180°. In this way, the space in the cold environment can be fully used, and the condensing section41can be long enough to enable the phase change working substance in the condensing section41to fully release heat. When the space in the cold environment is insufficient, the included angle between the axis of the condensing section41and the axis of the transition section43may be limited to 60°≤θ2≤90°. In this way, the space in the cold environment can be fully used, and the length of the condensing section41can also be long enough. In another embodiment of this application, as shown inFIG.11andFIG.12, fins45are disposed on an outer wall of the evaporation section42. Specifically, the fins45are disposed on the outer wall of the evaporation section42, so that a contact area between the evaporation section42and the hot environment can be increased by using the fins45, to accelerate a heat absorption process of the phase change working substance in the evaporator. In this way, the phase change working substance can fully absorb heat.
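The space-based angle selection described above reduces to a simple rule of thumb, codified below purely for illustration (the boolean input and the return convention are our own).

```python
# Rule of thumb from the two preceding embodiments: ample space favors a
# shallow bend (120° to 180°) and a long straight section; tight space favors
# a sharp bend (60° to 90°). Applies to θ1 (evaporation side, judged by width
# space inside the cabinet body) and θ2 (condensing side, judged by space in
# the cold environment).

def recommended_angle_range(space_sufficient: bool) -> tuple[float, float]:
    return (120.0, 180.0) if space_sufficient else (60.0, 90.0)

print(recommended_angle_range(True))   # (120.0, 180.0): use the long run
print(recommended_angle_range(False))  # (60.0, 90.0): fold the section back
```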
In another embodiment of this application, as shown in the figures, fins45are disposed on an outer wall of the condensing section41. Specifically, the fins45are disposed on the outer wall of the condensing section41, so that a contact area between the condensing section41and the cold environment can be increased by using the fins45, to accelerate a heat release process of the phase change working substance in the condenser. In this way, the phase change working substance can fully release heat. In another embodiment of this application, as shown inFIG.11andFIG.12, fins45are disposed on both an outer wall of the evaporation section42and an outer wall of the condensing section41, to fully implement heat absorption and heat release of the phase change working substance, thereby better improving heat exchange efficiency of the heat exchanger10. For example, a plurality of fins45are sequentially disposed in a length direction of the outer wall of the evaporation section42and the outer wall of the condensing section41. The plurality of fins45may be arranged in parallel or in a sawtoothed manner, to make full use of assembling space on the outer wall of the evaporation section42and the outer wall of the condensing section41and dispose as many fins45as possible. The plurality of fins45corresponding to the evaporation section42may further be connected end to end and combined into an entity, so that the plurality of fins45and the evaporation section42are integrally welded or the plurality of fins45are assembled on the evaporation section42. In this way, assembling efficiency of the fins45can be improved, and an assembling process of the fins45can be simplified. In another embodiment of this application, as shown inFIG.6,FIG.11, andFIG.12, the exchange pipeline40further includes a liquid return pipe46. One end of the liquid return pipe46is connected to the vapor collection pipe20. The other end of the liquid return pipe46is connected to the liquid collection pipe30. Specifically, the liquid return pipe46is disposed. In this way, a specific path is established for a process in which the liquid phase change working substance converted from the vapor phase change working substance after heat release returns from the vapor collection pipe20to the liquid collection pipe30. Therefore, a cycle speed of the phase change working substance in the heat exchanger10can be improved, and heat exchange efficiency of the heat exchanger10can be further improved. For example, there may be two liquid return pipes46. Ends of the two liquid return pipes46are respectively connected to two opposite ends of the vapor collection pipe20. The other ends of the two liquid return pipes46are respectively connected to two opposite ends of the liquid collection pipe30. In this way, a cycle process of the phase change working substance in the heat exchanger10is as follows: The phase change working substance absorbs heat in the hot environment, passes from the liquid collection pipe30through the evaporation section42and the transition section43to the vapor-liquid separation pipe44, becomes the vapor phase change working substance, and then reaches the vapor collection pipe20along the condensing section41and/or the rising pipe21.
In this process, the vapor phase change working substance releases heat and then is converted to the liquid phase change working substance; and then, a part of or all of the liquid phase change working substance returns to the liquid collection pipe30through the liquid return pipe46, to implement the heat absorption/release cycle of the phase change working substance. (As shown inFIG.4, a hollow arrow represents the flow direction of the phase change working substance during heat absorption, and a solid arrow represents the process in which the liquid phase change working substance obtained after heat release returns to the liquid collection pipe30through the liquid return pipe46.) In another embodiment of this application, as shown inFIG.6, a heat insulation part is disposed on an outer periphery of the liquid return pipe46. In this way, the heat insulation part can isolate the liquid return pipe46from the surrounding hot environment, so that heat transferred from the hot environment to the liquid return pipe46can be greatly reduced. In this way, the liquid return pipe46can be in an environment of a relatively low temperature, so that the phase change working substance in the liquid return pipe46stays in the liquid state and returns to the liquid collection pipe30. In another embodiment of this application, as shown inFIG.6, the heat insulation part may be specifically a heat insulation board47. The heat insulation board47is disposed between the liquid return pipe46and the adjacent evaporation section42. Specifically, the heat insulation board47is disposed between the liquid return pipe46and the adjacent evaporation section42. In this way, the heat insulation board47can prevent the evaporation section42from radiating heat to the liquid return pipe46, so that an environment with a relatively low temperature is formed around a pipe segment of the liquid return pipe46in the hot environment. Therefore, most of the phase change working substance in the liquid return pipe46returns to the liquid collection pipe30in the liquid state, to improve the vapor-liquid cycle efficiency of the phase change working substance in the heat exchanger10. Optionally, when the heat insulation board47is disposed between the liquid return pipe46and the adjacent evaporation section42, the fins45are no longer distributed between the liquid return pipe46and the adjacent evaporation section42. In this way, sufficient assembling space can be reserved for the heat insulation board47, and heat radiated from the evaporation section42to the liquid return pipe46can also be reduced. In another embodiment of this application, as shown inFIG.12andFIG.13, the liquid return pipe46is connected to the adjacent evaporation section42. A pipe wall thickness of the liquid return pipe46meets the following relationship: 1 mm≤D≤2 mm. Herein, D represents the wall thickness of the liquid return pipe46(as shown by D inFIG.13). Specifically, the liquid return pipe46is connected to the adjacent evaporation section42, to reduce a space proportion in the heat exchanger10in an arrangement direction of each evaporation section42. Therefore, the heat exchanger10has a smaller assembling space occupation rate. The pipe wall thickness of the liquid return pipe46is limited to be greater than or equal to 1 mm and less than or equal to 2 mm, so that a sufficient distance can exist between a pipe wall of the liquid return pipe46and a pipe wall of the adjacent evaporation section42. Therefore, the heat exchanger10has a smaller assembling space occupation rate, and the evaporation section42can be prevented from radiating heat to the liquid return pipe46. In this way, most of the phase change working substance in the liquid return pipe46returns to the liquid collection pipe30in the liquid state, to improve vapor-liquid cycle efficiency of the phase change working substance in the heat exchanger10.
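As a worked check of the wall-thickness relationship 1 mm≤D≤2 mm stated above, the short sketch below validates a candidate wall thickness for the liquid return pipe46; the function name and the test values are assumptions added here for illustration.

def return_pipe_wall_ok(d_mm):
    # True when the wall thickness D of the liquid return pipe 46 satisfies
    # 1 mm <= D <= 2 mm: thick enough to keep a sufficient distance between
    # the pipe wall and the adjacent evaporation section 42, thin enough to
    # keep the assembling space occupation rate small.
    return 1.0 <= d_mm <= 2.0

for d in (0.8, 1.5, 2.3):
    print(d, return_pipe_wall_ok(d))  # 0.8 False, 1.5 True, 2.3 False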
In another embodiment of this application, the liquid return pipe46is a round pipe or an oblate pipe. Specifically, the liquid return pipe46may be set to a round pipe or an oblate pipe. When the liquid return pipe46is set to a round pipe, a flow rate of the phase change working substance in the liquid return pipe46can be increased, to further improve heat exchange efficiency of the heat exchanger10. When the liquid return pipe46is set to an oblate pipe, integrated manufacturing of the liquid return pipe46and the adjacent evaporation section42is facilitated. Likewise, the evaporation section42may also be designed as an oblate pipe, to reduce difficulty of integrated manufacturing of the liquid return pipe46and the evaporation section42. In another embodiment of this application, as shown inFIG.14, two or more liquid return paths461are formed in the liquid return pipe46. Specifically, two or more liquid return paths461are formed in the liquid return pipe46. In this way, a plurality of independent liquid return paths461exist in the liquid return pipe46. Therefore, separation is better implemented between the phase change working substance in the liquid return path461and the hot environment, so that most of the phase change working substance in the liquid return pipe46returns to the liquid collection pipe30in the liquid state, to improve vapor-liquid cycle efficiency of the phase change working substance in the heat exchanger10. The foregoing descriptions are merely example embodiments of this application, but are not intended to limit this application. Any modification, equivalent replacement, or improvement made without departing from the spirit and principle of this application may fall within the protection scope of this application. | 36,357 |
11859918 | DETAILED DESCRIPTION The present disclosure relates to a plate-fin heat exchanger. The plate-fin heat exchanger includes a first layer and a second layer. The first layer is configured for cold airflow while the second layer is configured for hot airflow. The second layer is further configured to direct hot air above or below the inlet for the first layer. The hot air above or below the inlet for the first layer helps prevent ice accretion on the inlet side of the first layer. The plate-fin heat exchanger will be described below with reference toFIGS.1-4. FIG.1is a perspective view of heat exchanger10. Heat exchanger10includes first end12, second end14, first side16, second side18, first layer20, second layer22, and parting sheet23. First layer20includes inlet24and outlet26. Second layer22includes melt pass passage or first passage28, last pass passage or second passage30, counterflow passage or third passage32, inlet34, and outlet36. Parting sheet23separates first layer20from second layer22and enables heat transfer therebetween. Inlet24of first layer20is at first end12and extends from first side16to second side18. Outlet26of first layer20is at second end14and extends from first side16to second side18. First passage28of second layer22is at first end12and extends from first side16to second side18. Inlet34of second layer22is at first side16of first passage28. Second passage30of second layer22is adjacent to first passage28of second layer22and extends from first side16to second side18. Outlet36of second layer22is at first side16of second passage30. Third passage32of second layer22extends from second end14toward second passage30. First passage28is fluidically connected to third passage32proximate second end14. Third passage32is fluidically connected to second passage30such that third passage32is fluidically connected in series between first passage28and second passage30. In the aspect of the disclosure shown inFIG.1, there are only two layers, first layer20and second layer22. In other aspects of the disclosure, heat exchanger10can include multiple layers alternating between first layer20and second layer22with parting sheet23between each layer. Heat exchanger10can be made from aluminum, stainless steel, titanium, or any other material suitable for heat exchangers. FIG.2is a cross-sectional view of heat exchanger10taken along line A-A inFIG.1, showing first layer20of heat exchanger10. First layer20includes first closure bar40, second closure bar42, plurality of fins44, plurality of passages46and cold flow FC. First closure bar40is on first side16and extends from first end12to second end14. Second closure bar42is on second side18and extends from first end12to second end14. Plurality of fins44are between first closure bar40and second closure bar42and extend from first end12to second end14. Plurality of fins44define plurality of passages46extending from first end12to second end14. In operation, cold flow FCenters heat exchanger10at inlet24of first layer20. Cold flow FCflows through plurality of passages46from first end12to second end14. Then cold flow FCflows out of heat exchanger10through outlet26of first layer20. As cold flow FCflows through plurality of passages46in first layer20, cold flow FCabsorbs heat from plurality of fins44and first closure bar40and second closure bar42. FIG.3is a cross-sectional view of heat exchanger10taken along line B-B inFIG.1, showing second layer22of heat exchanger10.
As discussed in reference toFIG.1above, second layer22includes first passage28, second passage30, and third passage32. Third passage32includes first portion50, second portion52, third portion54, first turn56, and second turn58. Second layer22also includes first closure bar60, second closure bar62, third closure bar64, fourth closure bar66, fifth closure bar68, and sixth closure bar70. Second layer22also includes first plurality of fins72, second plurality of fins74, third plurality of fins76, fourth plurality of fins78, fifth plurality of fins80, and hot flow FH. As shown inFIG.3, first passage28is upstream to first portion50of third passage32, and third portion54of third passage32is fluidically upstream to second passage30. First portion50of third passage32extends from first side16to second side18. Second portion52of third passage32extends from first portion50toward first end12. Third portion54of third passage32is between second passage30and second portion52and extends from first side16to second side18. First turn56is between first portion50and second portion52. Second turn58is between second portion52and third portion54. First closure bar60is on first end12and extends from first side16to second side18. Second closure bar62is between first passage28and second passage30and extends from first side16to second side18and separates first passage28and second passage30. Third closure bar64is between second passage30and third portion54of third passage32and extends from first side16to second side18. Third closure bar64separates second passage30and third portion54of third passage32. Fourth closure bar66is on second end14and extends from first side16to second side18. Fifth closure bar68is on first side16and extends from third closure bar64toward fourth closure bar66. Sixth closure bar70is on second side18and extends from fourth closure bar66toward third closure bar64. Fifth closure bar68and sixth closure bar70form the sides of second portion52of third passage32. In the aspect of the disclosure depicted inFIG.3, second closure bar62has a thickness equal to two closure bars. The extra thickness of second closure bar62improves the insulation between first passage28and second passage30. The insulation between first passage28and second passage30attenuates the heat transfer between hot air flow FHin first passage28and hot air flow FHin second passage30. The attenuated heat transfer between hot air flow FHin first passage28and hot air flow FHin second passage30helps control the temperature of hot air flow FHthroughout second layer22. Controlling the temperature of hot air flow FHby attenuating heat transfer between hot air flow FHin first passage28and hot air flow FHin second passage30reduces the likelihood of damage (e.g., warping or twisting) to second layer22from exposure to extremely high temperatures. First plurality of fins72is in first passage28and extends in a direction parallel to second closure bar62and extends from first side16to second side18. Second plurality of fins74is in second passage30and extends in a direction parallel to second closure bar62and extends from first side16to second side18. Third plurality of fins76is in first portion50of third passage32and extends in a direction parallel to fourth closure bar66. Fourth plurality of fins78is in the second portion52of third passage32and extends in a direction parallel to fifth closure bar68and sixth closure bar70. Fifth plurality of fins80is in third portion54of third passage32and extends in a direction parallel to third closure bar64.
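For orientation, the second-layer geometry just described, together with the hot-flow route detailed in the operational description that follows, can be tabulated in a short sketch. The data structure below is an illustration introduced here, not part of the disclosure; the segment order and flow directions summarize the text.

# Illustrative model of second layer 22: segment, fin group, and hot-flow
# direction, in the order traveled by hot flow FH (names are hypothetical).
hot_flow_route = [
    ("melt pass / first passage 28",        "fins 72", "first side 16 to second side 18"),
    ("third passage 32, first portion 50",  "fins 76", "entered at second end 14"),
    ("third passage 32, second portion 52", "fins 78", "second end 14 to first end 12 (counter to cold flow FC)"),
    ("third passage 32, third portion 54",  "fins 80", "toward second side 18"),
    ("last pass / second passage 30",       "fins 74", "second side 18 to first side 16 (to outlet 36)"),
]

for segment, fins, direction in hot_flow_route:
    print(f"{segment}: {fins}; {direction}")

This serpentine order places the hottest air at first end12, over inlet24of first layer20, and the coolest air at outlet36, which is the basis of the anti-icing behavior discussed below.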
In operation, hot flow FHenters heat exchanger10through inlet34of second layer22and first plurality of fins72guides hot flow FHthrough first passage28. Hot flow FHtravels in first passage28from first side16to second side18. As hot flow FHtravels in first passage28, heat is transferred from hot flow FHinto first plurality of fins72and parting sheet23to warm inlet24of first layer20and prevent ice accumulation at inlet24of first layer20. Hot flow FHflows out of first passage28at second side18and is routed into first portion50of third passage32at second end14of heat exchanger10. An insulated manifold, tube, or passage, none of which is shown inFIG.3, can connect first passage28to third passage32. In third passage32, third plurality of fins76directs hot flow FHthrough first portion50of third passage32. Hot flow FHturns at first turn56and fourth plurality of fins78directs hot flow FHthrough second portion52of third passage32. As hot flow FHtravels in second portion52, hot flow FHtravels away from second end14and toward first end12in a direction that is counter to the flow direction of cold flow FCin first layer20. Hot flow FHturns toward second side18at second turn58and fifth plurality of fins80directs hot flow FHthrough third portion54of third passage32toward second side18. Hot flow FHis then guided into second passage30. Hot flow FHcan be guided from third portion54of third passage32into second passage30by a turning manifold or tube (not shown) connected to second side18. Second plurality of fins74directs hot flow FHthrough second passage30. Hot flow FHtravels in second passage30from second side18toward first side16. Lastly, hot flow FHexits second passage30at outlet36on first side16. Because hot flow FHenters second layer22at first end12, then travels from second end14toward first end12and exits between first end12and second end14, first end12and second end14are warmer than outlet36of second layer22. Thus, if the temperature at outlet36of second layer22is controlled above freezing, the rest of heat exchanger10will be above freezing and prevent ice formation and accumulation throughout heat exchanger10. FIG.4is a cross-sectional view of another embodiment of heat exchanger10, showing second layer22of heat exchanger10. Second layer22of heat exchanger10, as depicted inFIG.4, includes all elements of heat exchanger10as shown inFIG.3, and is configured and functions similarly to heat exchanger10ofFIG.3with the addition of seventh closure bar82and insulation zone84. As shown inFIG.4, seventh closure bar82is between second closure bar62and second passage30and extends from first side16to second side18. Insulation zone84is defined by a space between second closure bar62and seventh closure bar82extending from first side16to second side18. Insulation zone84provides insulation between first passage28and second passage30. Insulation zone84decreases the heat transfer between hot air flow FHin first passage28and hot air flow FHin second passage30. The insulation between first passage28and second passage30attenuates the heat transfer between hot air flow FHin first passage28and hot air flow FHin second passage30. The attenuated heat transfer between hot air flow FHin first passage28and hot air flow FHin second passage30helps control the temperature of hot air flow FHthroughout second layer22.
Controlling the temperature of hot air flow FHby attenuating heat transfer between hot air flow FHin first passage28and hot air flow FHin second passage30reduces the likelihood of damage (e.g., warping or twisting) to second layer22from exposure to extremely high temperatures. In the aspects of the disclosure as shown inFIGS.1,3, and4, second layer22includes melt pass passage or first passage28, last pass passage or second passage30, and counterflow passage or third passage32. Each of first passage28, second passage30, and third passage32will be described further in the following paragraphs. As discussed above in paragraphs [0020] and [0022], hot flow FHenters second layer22of heat exchanger10at inlet34of first passage28. As hot flow FHenters second layer22of heat exchanger10at inlet34, hot flow FHis the hottest air in heat exchanger10. Therefore, the location of first passage28, on first end12extending from first side16to second side18, helps prevent ice accretion on the structure surrounding inlet24of first layer20. Eliminating ice accretion on the structure surrounding inlet24of first layer20mitigates undesirable restrictions to both cold flow FCand hot flow FHthroughout heat exchanger10. The location of last pass passage or second passage30is important as the location of second passage30enables first passage28to be proximate first end12to aid in preventing ice accretion on the structure surrounding inlet24of first layer20. Furthermore, the location of second passage30enables an increased surface area for third passage32to encourage heat transfer between first layer20and second layer22. Counterflow passage or third passage32improves the heat transfer between cold flow FCin first layer20and hot flow FHin second layer22through parting sheet23. Directing hot flow FHthrough third passage32, in a direction opposite to the cold flow FCin first layer20, improves the heat transfer between cold flow FCin first layer20and hot flow FHin second layer22. Furthermore, the configuration of third passage32decreases the pressure drop through heat exchanger10as third passage32is wider than first passage28and second passage30and contains fewer turns than traditional heat exchangers. DISCUSSION OF POSSIBLE EMBODIMENTS The following are non-exclusive descriptions of possible embodiments of the present invention. In one aspect of the disclosure, a heat exchanger includes a first end opposite a second end and a first side opposite a second side. The first side and the second side extend from the first end to the second end. The heat exchanger further includes a first layer and a second layer. The first layer includes an inlet at the first end of the heat exchanger and an outlet at the second end of the heat exchanger. The second layer includes a first passage at the first end of the heat exchanger. The first passage extends from the first side to the second side. The second layer further includes a second passage adjacent to the first passage. The second passage extends from the first side to the second side. The second layer further includes a third passage extending from the second end toward the second passage. The first passage is fluidically connected to the third passage proximate the second end and the third passage is fluidically connected to the second passage.
The heat exchanger of the preceding paragraph can optionally include, additionally and/or alternatively, any one or more of the following features, configurations and/or additional components:wherein the third passage includes: a first portion extending from the first side to the second side; a second portion extending from the first portion toward the first end; a third portion between the second passage and the second portion, wherein the third portion extends from the first side to the second side; a first turn between the first portion and the second portion; and a second turn between the second portion and the third portion, wherein the first passage is fluidically upstream to the first portion of the third passage, and wherein the third portion of the third passage is fluidically upstream to the second passage;wherein the second layer further comprises: an inlet of the second layer formed on the first passage at the first side; and an outlet of the second layer formed on the second passage at the first side;wherein the first layer further comprises: a first closure bar extending from the first end to the second end on the first side; a second closure bar extending from the first end to the second end on the second side; and a plurality of fins extending from the first end to the second end between the first closure bar and the second closure bar and defining a plurality of passageways;wherein the second layer further includes: a first closure bar at the first end and extending from the first side to the second side; a second closure bar extending from the first side to the second side between the first passage and the second passage; a third closure bar extending from the first side to the second side between the second passage and the third portion of the third passage; a fourth closure bar at the second end and extending from the first side to the second side; a fifth closure bar extending from the third closure bar toward the fourth closure bar on the first side; and a sixth closure bar extending from the fourth closure bar toward the third closure bar on the second side;wherein the second layer further includes: a first plurality of fins in the first passage extending in a direction parallel to the second closure bar; a second plurality of fins in the second passage and extending in the direction parallel to the second closure bar; a third plurality of fins in the first portion of the third passage and extending in a direction parallel to the fourth closure bar; a fourth plurality of fins in the second portion of the third passage and extending in a direction parallel to the fifth closure bar and the sixth closure bar; and a fifth plurality of fins in the third portion of the third passage and extending in a direction parallel to the third closure bar;wherein the first layer is a cold layer; and/orwherein the second layer is a hot layer. In another aspect of the disclosure, a heat exchanger includes a first end opposite a second end, a first side opposite a second side, a first layer, and a second layer. The first side and the second side extend from the first end to the second end. The first layer includes an inlet at the first end of the heat exchanger and an outlet at the second end of the heat exchanger. The second layer includes a first passage at the first end of the heat exchanger. The first passage extends from the first side to the second side. The second layer further includes a second passage adjacent to the first passage.
The second passage extends from the first side to the second side. The second layer further includes a third passage extending from the second end toward the second passage. The third passage is fluidically connected between the first passage and the second passage. The heat exchanger of the preceding paragraph can optionally include, additionally and/or alternatively, any one or more of the following features, configurations and/or additional components:wherein the third passage includes: a first portion extending from the first side to the second side; a second portion extending from the first portion toward the first end; a third portion between the second passage and the second portion, wherein the third portion extends from the first side to the second side; a first turn between the first portion and the second portion; and a second turn between the second portion and the third portion, wherein the first passage is fluidically upstream to the first portion of the third passage, and wherein the third portion of the third passage is fluidically upstream to the second passage;wherein the second layer further includes: an inlet of the second layer formed on the first passage at the first side; and an outlet of the second layer formed on the second passage at the first side;wherein the first layer further includes: a first closure bar extending from the first end to the second end on the first side; a second closure bar extending from the first end to the second end on the second side; and a plurality of fins extending from the first end to the second end between the first closure bar and the second closure bar and defining a plurality of passageways;wherein the second layer further includes: a first closure bar at the first end and extending from the first side to the second side; a second closure bar extending from the first side to the second side between the first passage and the second passage; a third closure bar extending from the first side to the second side between the second passage and the third portion of the third passage; a fourth closure bar at the second end and extending from the first side to the second side; a fifth closure bar extending from the third closure bar toward the fourth closure bar on the first side; and a sixth closure bar extending from the fourth closure bar toward the third closure bar on the second side;wherein the second layer further includes: a first plurality of fins in the first passage extending in a direction parallel to the second closure bar; a second plurality of fins in the second passage and extending in the direction parallel to the second closure bar; a third plurality of fins in the first portion of the third passage and extending in a direction parallel to the fourth closure bar; a fourth plurality of fins in the second portion of the third passage and extending in a direction parallel to the fifth closure bar and sixth closure bar; and a fifth plurality of fins in the third portion of the third passage and extending in a direction parallel to the third closure bar; and/orwherein the second layer further includes: a seventh closure bar, extending from the first side to the second side between the second closure bar and the second passage, wherein the seventh closure bar is spaced from the second closure bar in a direction perpendicular to the second closure bar, and wherein a space between the seventh closure bar and the second closure bar defines an insulation zone.
In another aspect of the disclosure, a method is provided for guiding a hot flow and a cold flow through a heat exchanger. The method includes directing the cold flow through an inlet of a cold layer at a first end of the heat exchanger and out an outlet at a second end of the heat exchanger opposite the first end. The method further includes directing the hot flow through an inlet of a hot layer and into a melt pass passage of the hot layer at the first end. The melt pass passage extends from a first side of the heat exchanger to a second side of the heat exchanger. The first side and the second side both extend from the first end to the second end of the heat exchanger. The method further includes directing the hot flow out of the melt pass passage, to the second end, and into a counterflow passage. The counterflow passage extends from the second end toward the first end between the first side and the second side of the heat exchanger. The method further includes directing the hot flow from the second end toward the first end in the counterflow passage and directing the hot flow out of the counterflow passage and into a last pass passage. The last pass passage is between the melt pass passage and the counterflow passage and extends from the second side to the first side. The method of the preceding paragraph can optionally include, additionally and/or alternatively, any one or more of the following features, configurations and/or additional components:the method further including: directing the hot flow out of the heat exchanger through an outlet of the hot layer connected to the last pass passage at the first side of the heat exchanger;the method further including: turning the hot flow at the second side between the counterflow passage and the last pass passage;wherein the hot flow is directed in a direction parallel to the first side and the second side in a majority of a length of the counterflow passage; and/orwherein the melt pass passage directs the hot flow over or under the inlet of the cold layer. While the invention has been described with reference to an exemplary embodiment(s), it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment(s) disclosed, but that the invention will include all embodiments falling within the scope of the appended claims. | 23,321 |
11859919 | DESCRIPTION OF EXEMPLARY EMBODIMENTS Exemplary embodiments of the disclosure shown in the drawings are explained in more detail in the following description, wherein the same reference numbers relate to identical, similar, or functionally equivalent components. A heat exchanger1, as shown for example inFIGS.1to3, is usually part of a circuit2, which is shown greatly simplified and in the manner of a circuit diagram inFIG.1. In the circuit2, a temperature control medium, for example a coolant or a refrigerant, circulates during the operation, which flows via an inlet3into the heat exchanger1and flows via an outlet4out of the heat exchanger1. The heat exchanger1is, furthermore, flowed through by a fluid which is fluidically separated from the temperature control medium, as indicated by a dashed arrow5inFIG.1. Thus, a heat exchange between the temperature control medium and the fluid takes place during the operation. The heat exchanger1can be an evaporator6for evaporating the temperature control medium or a condenser7for condensing the temperature control medium. Likewise, both an evaporator6and also a condenser7can be incorporated in the circuit2. Furthermore, the circuit2can comprise a conveying device which is not shown, for example a pump, for conveying the temperature control medium through the circuit2and an expander which is not shown for expanding the temperature control medium. The circuit2is in particular part of an air conditioning system8, which is employed in a motor vehicle9or in a building10. FIG.2shows a greatly simplified representation of the heat exchanger1. The heat exchanger1comprises at least one tube body11, which during the operation is flowed through by the temperature control medium, wherein the heat exchanger1shown inFIG.2purely exemplarily comprises four such tube bodies11. The tube bodies11extend in a longitudinal direction12and can be flowed through in the longitudinal direction12by the temperature control medium. Located opposite in the longitudinal direction12, two chambers13are provided, between which the tube bodies11are arranged and each received with a longitudinal end portion. The tube bodies11are connected with their longitudinal end portions to the respective chamber13in a firmly bonded manner, in particular soldered. An introduction of the temperature control medium into the tube bodies11and a discharging of the temperature control medium out of the tube bodies11are effected with the chambers13. In the shown example, one of the chambers13is connected with an inlet3to the circuit2and formed as a distributor14for distributing the temperature control medium into the tube bodies11. The other chamber13is connected with the outlet4to the circuit2and formed as a collector15for collecting the temperature control medium from the tube bodies11. FIG.3shows a cross section through one of the tube bodies11, i.e., a section through a plane which is defined by a width direction16running transversely to the longitudinal direction12and a height direction17running transversely to the longitudinal direction12and transversely to the width direction16. The tube body11in the shown example is formed as a flat tube18which has a width19running in width direction16, which is at least twice as great as a height20of the tube body11running in the height direction17. The tube body11has an outer cover21.
The outer cover21extends closed in the longitudinal direction12and in a circumferential direction22and thus limits an interior volume23which can be flowed through in the longitudinal direction12. Furthermore, the tube body11has two outer intermediate walls24located opposite in the width direction16and inner intermediate walls25arranged in the width direction16between the outer intermediate walls24. In the width direction16, the intermediate walls24,25are substantially spaced apart equidistantly relative to one another. Furthermore, the outer intermediate walls24are spaced apart in the width direction16relative to the outer cover21. Thus, the intermediate walls24,25limit passages26of the tube body11within the outer cover21which are separated from one another in the width direction16and which can be flowed through in the longitudinal direction12. A wall thickness27, running in the width direction16, of at least one of the inner intermediate walls25is greater than the wall thickness27of the respective outer intermediate wall24. In the shown example, both outer intermediate walls24have the same wall thickness27. In the shown example, the tube body11, furthermore, has an uneven number of inner intermediate walls25, wherein the shown tube body11purely exemplarily has seventeen such inner intermediate walls25. Thus, an intermediate wall25that is central and arranged in the middle exists in the width direction16, which in the following is also referred to as central intermediate wall28. The central intermediate wall28is such an inner intermediate wall25that has a greater wall thickness27than the wall thickness27of the respective outer intermediate wall24. Here, the wall thickness27of the respective inner intermediate wall25is maximally a third greater than the wall thickness27of the respective outer intermediate wall24. In particular, the wall thickness27of the central intermediate wall28is between 1.2 times and 1.3 times, in particular 1.26 times the wall thickness of the respective outer intermediate wall24. To better explain the size relationship, it is assumed in the following that the tube body11has a width19of 25.4 mm and a height of 1.3 mm. The wall thickness27of the respective outer intermediate wall24amounts to 0.27 mm for example. The central intermediate wall28for example has a wall thickness27of 0.34 mm. As is evident fromFIG.3, the tube body11of the shown example has an oval cross section. The outer cover21has even flat sides29located opposite in the height direction17and curved outsides30located opposite in the width direction16. The outsides30follow a curved course towards the inside so that they limit with the respective next-adjacent outer intermediate wall24a passage26with a cross section that is curved on the outside in the width direction16. The remaining passages26in the shown example have a basic shape that is square in the cross section with curved corners31. FromFIG.3, it is evident that the two passages26limited by the central intermediate wall28have corners31on their sides facing the central intermediate wall28each with a curvature radius32that is greater than a curvature radius33of the corners31of these passages26that are distant from the central intermediate wall28. In the exemplary embodiment shown inFIG.3, it is not only the central intermediate wall28that has a wall thickness27that is greater than the wall thickness27of the outer intermediate walls24, but also the intermediate walls25adjoining the central intermediate wall28in the width direction16.
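The example dimensions can be checked against the stated ratio directly: 0.34 mm/0.27 mm ≈ 1.26, which lies inside the 1.2 to 1.3 band and below the "maximally a third greater" bound. The short sketch below performs this arithmetic; the variable names are assumptions added here for illustration.

# Verify the example wall-thickness relationship of the tube body 11.
outer_wall_mm = 0.27    # outer intermediate wall 24
central_wall_mm = 0.34  # central intermediate wall 28

ratio = central_wall_mm / outer_wall_mm
print(round(ratio, 2))            # 1.26, as stated in the text
assert 1.2 <= ratio <= 1.3        # within the stated band
assert ratio <= 4.0 / 3.0         # at most a third greater than the outer wall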
The tube body11can be subdivided in the width direction16into a centrally arranged inner portion33and two outer portions34, wherein the respective outer portion34comprises an outer intermediate wall24and extends from the associated outer intermediate wall24as far as to the next-adjacent outside30of the outer cover21. The inner portion33is arranged in the middle between the outer portions34and in the example extends over a third of the width19of the outer cover21. Within the inner portion33, all inner intermediate walls25have a wall thickness27that is greater than the wall thickness27of the respective outer intermediate wall24. In the shown example, the wall thickness27of the inner intermediate walls25in the inner portion33decreases emanating from the central intermediate wall28in the width direction16towards the outside, so that the central intermediate wall28is that intermediate wall25with the maximum wall thickness27. The intermediate walls24,25are formed symmetrically with regard to a symmetry plane35extending in the longitudinal direction12and in the height direction17indicated inFIG.3, which thus runs through the central intermediate wall28. In the region between the inner portion33and the outer portions34, the inner intermediate walls25have the same wall thickness27as the respective outer intermediate wall24. In the shown example, the inner portion33exemplarily has seven inner intermediate walls25, i.e., the central intermediate wall28and a further six inner intermediate walls25. As is evident fromFIG.3, furthermore, the outer cover21likewise has a reinforced wall thickness36in its outsides30, which in the following is referred to as cover wall thickness36for the better distinction from the wall thickness27of the intermediate walls24,25. It is noticeable that the cover wall thickness36running in the width direction16is greater in the region of the outsides30than a height wall thickness36running in the height direction17in the region of the flat sides29. Here, the cover wall thickness36running in the width direction16is greater in the region of the outsides30than the respective wall thickness27of the intermediate walls24,25. The tube body11is extruded for example, wherein it is also conceivable to produce the tube body11from a flat strip material. It is understood that the foregoing description is that of the exemplary embodiments of the disclosure and that various changes and modifications may be made thereto without departing from the spirit and scope of the disclosure as defined in the appended claims. | 9,454 |
11859920 | DETAILED DESCRIPTION It should be understood at the outset that although illustrative implementations of one or more embodiments are illustrated below, the disclosed systems and methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, but may be modified within the scope of the appended claims along with their full scope of equivalents. Use of the phrase “and/or” indicates that any one or any combination of a list of options can be used. For example, “A, B, and/or C” means “A”, or “B”, or “C”, or “A and B”, or “A and C”, or “B and C”, or “A and B and C”. An active vortex generator adapts to a flow rate of fluid through and/or a heat flux applied through a heat exchanger channel to improve the heat transfer rate of the heat exchanger. The term “active” is used herein to denote that the vortex generator has at least one component that moves within the fluid channel of the heat exchanger. In some implementations, the movement of the active vortex generator may be induced by the fluid flow through the heat exchanger channel, such as with an “anchored” active vortex generator. For example, an anchored active vortex generator may be implemented as a normal or inverted flag in the heat exchanger channel. In some implementations, the movement of the active vortex generator may be induced through an externally applied force on the active vortex generator, or “actuated” active vortex generator. For example, an actuated active vortex generator may be implemented as an oscillating normal flag, a half-flexible oscillating normal flag, or a rigid oscillating plate. An actuated active vortex generator is particularly suited to heat exchangers with high heat flux dissipation requirements. In some implementations, a “high” heat flux is greater than or equal to 1 kW/cm2. For example, on a power laser diode, energy from the power laser diode may be concentrated in a 50×50 micrometer surface. Likewise, on a central processor unit (CPU), the locations of each core of a multi-core processor may present high heat flux dissipation requirements. Locating an actuated active vortex generator proximate to such high heat flux dissipation locations provides for improved heat transfer that can be activated when needed (e.g., upon operation of the power laser diode or a corresponding core of a multi-core processor). FIGS.1and2are cross-sectional views of a heat exchanger with anchored active vortex generators. For an anchored flag (e.g., normal flag or inverted flag), there is a minimum fluid velocity for initiating the generation of a vortex (e.g., causing the flag to “flap”). As understood by those of ordinary skill in the art, the minimum fluid velocity for generating a vortex with an anchored flag is based on one or more of the fluid density, the material properties of the flag (e.g., Young's modulus of the flag, thickness of the flag, and/or density of the flag), and the length of the flag. As the fluid velocity increases above the minimum fluid velocity, the anchored flag will generate vortices at an increasing frequency. FIG.1is a cross-sectional view of a heat exchanger100with an active vortex generator comprising an anchored normal flag102. The heat exchanger100comprises a fluid channel104through which a cooling fluid106flows.
In the example shown inFIG.1, the cooling fluid106flows in a direction of the inflow arrow (e.g., from left to right across the page). In various implementations, the cooling fluid106is a liquid, such as water or other cooling liquid. A heat transfer surface107of the fluid channel104is in thermally conductive contact with a substrate108to be cooled by the cooling fluid106. The substrate108comprises a heat source, such as one or more electronics components or a flow of hot fluid to be cooled. The anchored normal flag102comprises a flag110coupled to an anchor112. The anchor112is affixed across the fluid channel104in a direction perpendicular to the flow of the cooling fluid106and parallel to the heat transfer surface107. For example, when the fluid channel104has a rectangular cross-sectional shape in a direction perpendicular to the flow of the cooling fluid106, the anchor112may be affixed across lateral surfaces of the fluid channel104. Other cross-sectional shapes of the fluid channel104in a direction perpendicular to the flow of the cooling fluid106are contemplated by this disclosure, such as circular, oval, triangular, or any other desired shape. A leading edge of the flag110is affixed to the anchor112as a rotatable joint or hinge and the flag110extends from the anchor112in a direction of the flow of the cooling fluid106. A trailing edge of the flag110is free to move within the cooling fluid106. The flag110is a flexible material selected to wave within the cooling fluid106to produce a vortex114. For example, the flag110may be a thin metal plate, polymeric plate, textile sheet, or any other suitably flexible material. In an example, a length of the flag110is between 0.5-2.5 mm. A width of the flag110is substantially the same dimension as the length of the flag110. The width of the fluid channel104is 2-3 times the length of the flag110(e.g., 1-7.5 mm). A thickness of the flag110is less than 0.05 times the length of the flag110. As shown, the flag110produces a vortex114upon a change in a direction of the wave of the flag110. As such, a series of vortices114are produced at a top most (e.g., direction furthest from the heat transfer surface107) motion of the flag110as well as at a bottom most (e.g., direction closest to the heat transfer surface107) motion of the flag110. The anchor112is affixed across the fluid channel104at a location closer to the heat transfer surface107than a surface of the fluid channel104opposite to the heat transfer surface107. Accordingly, the vortices114generated by the flag110are produced in a viscous/thermal boundary layer along the heat transfer surface107to thereby increase the heat transfer rate of the heat exchanger100. The vortices114generated by the flag110promote mixing of the cooling fluid106to prevent temperature stratification about the heat transfer surface and increase the heat transfer rate from the substrate108into the cooling fluid106. As understood by those of ordinary skill in the art, the motion of the flag110is induced by the fluid flow of the cooling fluid106. As the velocity of the cooling fluid106is increased, the frequency at which the vortices114are generated likewise increases. Accordingly, in some implementations, the heat transfer rate of the heat exchanger100may be modulated based on controlling the flow rate of the cooling fluid106.
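The disclosure gives no formula for the onset velocity; one common way of expressing the stated dependence (fluid density, flag length, and the flag's Young's modulus, thickness, and density) in the flag-flutter literature is a nondimensional bending stiffness that falls with the square of the flow speed. The sketch below is an assumption-laden illustration of that scaling only, not the patented method; the material values and the critical threshold K_CRIT are placeholders.

# Hedged sketch: flow-induced flutter onset for an anchored flag.
# Nondimensional bending stiffness K_B = B / (rho_f * u**2 * L**3), with
# plate bending rigidity B = E * h**3 / (12 * (1 - nu**2)). Flutter (and
# hence vortex generation) is expected once K_B drops below a critical
# value; K_CRIT here is a placeholder, not a value from the disclosure.

E = 3.0e9       # Pa, Young's modulus of a polymeric flag (assumed)
h = 20e-6       # m, flag thickness (assumed; < 0.05 * L per the text)
nu = 0.35       # Poisson ratio (assumed)
L = 1.0e-3      # m, flag length (within the stated 0.5-2.5 mm range)
rho_f = 1000.0  # kg/m^3, water as the cooling fluid

B = E * h**3 / (12.0 * (1.0 - nu**2))

def k_bending(u):
    # Nondimensional bending stiffness at flow speed u (m/s).
    return B / (rho_f * u**2 * L**3)

K_CRIT = 0.1    # placeholder critical stiffness for flutter onset
for u in (0.5, 2.0, 5.0):
    print(u, round(k_bending(u), 3), k_bending(u) < K_CRIT)
# Only the fastest flow crosses the assumed threshold, illustrating why a
# minimum fluid velocity is needed before the anchored flag begins to flap.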
FIG.2is a cross-sectional view of a heat exchanger200with an active vortex generator comprising an anchored inverted flag202. The heat exchanger200is substantially the same as the heat exchanger100with the anchored normal flag102substituted for the anchored inverted flag202. A description of the common components between the heat exchanger100and the heat exchanger200is omitted and reference is made to the description of these components above. The anchored inverted flag202comprises a flag204coupled to an anchor206. The anchor206is affixed across the fluid channel104in a direction perpendicular to the flow of the cooling fluid106and parallel to the heat transfer surface107. For example, when the fluid channel104has a rectangular cross-sectional shape in a direction perpendicular to the flow of the cooling fluid106, the anchor206may be affixed across lateral surfaces of the fluid channel104. Other cross-sectional shapes of the fluid channel104in a direction perpendicular to the flow of the cooling fluid106are contemplated by this disclosure, such as circular, oval, triangular, or any other desired shape. A trailing edge of the flag204is affixed to the anchor206as a rigid joint and the flag204extends from the anchor206in a direction opposite the flow of the cooling fluid106. A leading edge of the flag204is free to move within the cooling fluid106. The flag204is a flexible material selected to wave within the cooling fluid106to produce a vortex114. For example, the flag204may be a thin metal plate, polymeric plate, textile material, or any other suitably flexible material. In various implementations, the flag204is more rigid than the flag110. In an example, a length of the flag204is between 0.5-2.5 mm. A width of the flag204is substantially the same dimension as the length of the flag204. The width of the fluid channel104is 2-3 times the length of the flag204(e.g., 1-7.5 mm). A thickness of the flag204is less than 0.05 times the length of the flag204. As shown, the flag204produces a vortex114upon a change in a direction of the wave of the flag204. As such, a series of vortices114are produced at a top most (e.g., direction furthest from the heat transfer surface107) motion of the flag204as well as at a bottom most (e.g., direction closest to the heat transfer surface107) motion of the flag204. The anchor206is affixed across the fluid channel104at a location closer to the heat transfer surface107than a surface of the fluid channel104opposite to the heat transfer surface107. Accordingly, the vortices114generated by the flag204are produced in a viscous/thermal boundary layer along the heat transfer surface107to thereby increase the heat transfer rate of the heat exchanger200. FIG.3is a cross-sectional view of a heat exchanger300with an active vortex generator comprising an oscillating normal flag302. The heat exchanger300is substantially the same as the heat exchanger100with the anchored normal flag102substituted for the oscillating normal flag302. A description of the common components between the heat exchanger100and the heat exchanger300is omitted and reference is made to the description of these components above. The oscillating normal flag302comprises a flag304coupled to an anchor306. The anchor306is positioned across the fluid channel104in a direction perpendicular to the flow of the cooling fluid106and parallel to the heat transfer surface107.
One or more actuators (not shown) are coupled to the anchor306and configured to move the anchor306in a direction shown by an arrow308, which is perpendicular to the heat transfer surface107while still maintaining the orientation of the anchor306parallel to the heat transfer surface107. The actuator(s) are configured to move the anchor306along an oscillation path between a top most position (e.g., a position in the oscillation path furthest from the heat transfer surface107) and a bottom most position (e.g., a position in the oscillation path closest to the heat transfer surface107). A leading edge of the flag304is affixed to the anchor306as a rigid joint and the flag304extends from the anchor306in a direction of the flow of the cooling fluid106. A trailing edge of the flag304is free to move within the cooling fluid106. The flag304is a flexible material selected to wave within the cooling fluid106to produce a vortex114. For example, the flag304may be a thin metal plate, polymeric plate, textile material, or any other suitably flexible material. In an example, a length of the flag304is between 0.5-2.5 mm. A width of the flag304is substantially the same dimension as the length of the flag304. The width of the fluid channel104is 2-3 times the length of the flag304(e.g., 1-7.5 mm). A thickness of the flag304is less than 0.05 times the length of the flag304. An amplitude of the oscillation of the anchor306along the oscillation path is 0.5-1 times the length of the flag304. The actuator(s) are configured to oscillate the anchor306along the oscillation path with a period of 0.05-0.2 seconds. As shown, the flag304produces a vortex114upon a change in a direction of the wave of the flag304. As such, a series of vortices114are produced at a top most (e.g., direction furthest from the heat transfer surface107) motion of the flag304as well as at a bottom most (e.g., direction closest to the heat transfer surface107) motion of the flag304. The oscillation path of the anchor306is along a surface of the fluid channel104at a location closer to the heat transfer surface107than a surface of the fluid channel104opposite to the heat transfer surface107. Accordingly, the vortices114generated by the flag304are produced in a viscous/thermal boundary layer along the heat transfer surface107to thereby increase the heat transfer rate of the heat exchanger300. In some implementations, the flag304may be sufficiently rigid to resist induced generation of the vortices114due to the flow of the cooling fluid106, but sufficiently flexible that motion of the anchor306caused by the actuator(s) causes the flag304to flex and produce a vortex114. Accordingly, the generation of the vortices114is controlled by activation of the actuator(s). The actuator(s) may manipulate a frequency at which the anchor306travels along the oscillation path to produce more or fewer vortices114, as desired.
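The dimensional envelope stated above for the oscillating normal flag302(and repeated for the actuated variants that follow) lends itself to a simple consistency check. The sketch below encodes the stated ranges; the function name, argument names, and sample design are assumptions added here for illustration.

def envelope_ok(length_mm, channel_mm, thickness_mm, amplitude_mm, period_s):
    # Stated envelope: flag length 0.5-2.5 mm; channel width 2-3x length;
    # flag thickness < 0.05x length; stroke amplitude 0.5-1x length;
    # oscillation period 0.05-0.2 s.
    return (0.5 <= length_mm <= 2.5
            and 2.0 * length_mm <= channel_mm <= 3.0 * length_mm
            and thickness_mm < 0.05 * length_mm
            and 0.5 * length_mm <= amplitude_mm <= 1.0 * length_mm
            and 0.05 <= period_s <= 0.2)

# A 1.5 mm flag in a 4 mm channel, 0.05 mm thick, 1 mm stroke, 10 Hz drive.
print(envelope_ok(1.5, 4.0, 0.05, 1.0, 0.1))  # True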
FIG.4is a cross-sectional view of a heat exchanger400with an active vortex generator comprising a half-flexible oscillating normal flag402. The heat exchanger400is substantially the same as the heat exchanger100with the anchored normal flag102substituted for the half-flexible oscillating normal flag402. A description of the common components between the heat exchanger100and the heat exchanger400is omitted and reference is made to the description of these components above. The half-flexible oscillating normal flag402comprises a flag404coupled to an anchor410. The flag404comprises a rigid portion406and a flexible portion408. The anchor410is positioned across the fluid channel104in a direction perpendicular to the flow of the cooling fluid106and parallel to the heat transfer surface107. One or more actuators (not shown) are coupled to the anchor410and configured to move the anchor410in a direction shown by an arrow412, which is perpendicular to the heat transfer surface107while still maintaining the orientation of the anchor410parallel to the heat transfer surface107. The actuator(s) are configured to move the anchor410along an oscillation path between a top most position (e.g., a position in the oscillation path furthest from the heat transfer surface107) and a bottom most position (e.g., a position in the oscillation path closest to the heat transfer surface107). A leading edge of the flag404is affixed to the anchor410as a rigid joint and the flag404extends from the anchor410in a direction of the flow of the cooling fluid106. The flexible portion408of the flag404is coupled to the rigid portion406as a rigid joint. A trailing edge of the flag404is free to move within the cooling fluid106. The flexible portion408of the flag404is a flexible material selected to wave within the cooling fluid106to produce a vortex114. For example, the flexible portion408of the flag404may be a thin metal plate, polymeric plate, textile material, or any other suitably flexible material. The rigid portion406of the flag404is selected to be sufficiently rigid to resist flexing as the anchor410is moved within the cooling fluid106. For example, the rigid portion406of the flag404may be a metal plate, polymeric plate, or other plate that is thicker or more rigid than the flexible portion408of the flag404. In an example, a length of the flag404is between 0.5-2.5 mm. A width of the flag404is substantially the same dimension as the length of the flag404. The rigid portion406of the flag404is half the length of the flag404. The width of the fluid channel104is 2-3 times the length of the flag404(e.g., 1-7.5 mm). A thickness of the flag404is less than 0.05 times the length of the flag404. An amplitude of the oscillation of the anchor410along the oscillation path is 0.5-1 times the length of the flag404. The actuator(s) are configured to oscillate the anchor410along the oscillation path with a period of 0.05-0.2 seconds. As shown, the flag404produces a vortex114upon a change in a direction of the wave of the flag404. As such, a series of vortices114are produced at a top most (e.g., direction furthest from the heat transfer surface107) motion of the flag404as well as at a bottom most (e.g., direction closest to the heat transfer surface107) motion of the flag404. The oscillation path of the anchor410is along a surface of the fluid channel104at a location closer to the heat transfer surface107than a surface of the fluid channel104opposite to the heat transfer surface107. Accordingly, the vortices114generated by the flag404are produced in a viscous/thermal boundary layer along the heat transfer surface107to thereby increase the heat transfer rate of the heat exchanger400. In some implementations, the flexible portion408of the flag404may be sufficiently rigid to resist induced generation of the vortices114due to the flow of the cooling fluid106, but sufficiently flexible that motion of the anchor410caused by the actuator(s) causes the flag404to flex and produce a vortex114. The rigid portion406of the flag404is sufficiently rigid to resist flexing even during motion of the anchor410caused by the actuator(s).
Accordingly, the generation of the vortices114is controlled by activation of the actuator(s). The actuator(s) may manipulate a frequency at which the anchor410travels along the oscillation path to produce more or fewer vortices114, as desired. FIG.5is a cross-sectional view of a heat exchanger500with an active vortex generator comprising a rigid oscillating plate502. The heat exchanger500is substantially the same as the heat exchanger100with the anchored normal flag102substituted for the rigid oscillating plate502. A description of the common components between the heat exchanger100and the heat exchanger500is omitted and reference is made to the description of these components above. The rigid oscillating plate502comprises a rigid plate504coupled to an anchor506. The anchor506is positioned across the fluid channel104in a direction perpendicular to the flow of the cooling fluid106and parallel to the heat transfer surface107. One or more actuators (not shown) are coupled to the anchor506and configured to move the anchor506in a direction shown by an arrow508, which is perpendicular to the heat transfer surface107, while still maintaining the orientation of the anchor506parallel to the heat transfer surface107. The actuator(s) are configured to move the anchor506along an oscillation path between a top most position (e.g., a position in the oscillation path furthest from the heat transfer surface107) and a bottom most position (e.g., a position in the oscillation path closest to the heat transfer surface107). A leading edge of the rigid plate504is affixed to the anchor506as a rigid joint and the rigid plate504extends from the anchor506in a direction of the flow of the cooling fluid106. The rigid plate504is positioned parallel to the heat transfer surface107. A trailing edge of the rigid plate504is maintained parallel to the leading edge of the rigid plate504within the cooling fluid106. The rigid plate504is a sufficiently rigid material selected to resist motion within the cooling fluid106. For example, the rigid plate504may be a metal, ceramic, or polymeric plate, or any other suitably rigid material. In an example, a length of the rigid plate504is between 0.5-2.5 mm. A width of the rigid plate504is substantially the same dimension as the length of the rigid plate504. The width of the fluid channel104is 2-3 times the length of the rigid plate504(e.g., 1-7.5 mm). A thickness of the rigid plate504is less than 0.05 times the length of the rigid plate504. An amplitude of the oscillation of the anchor506along the oscillation path is 0.5-1 times the length of the rigid plate504. The actuator(s) are configured to oscillate the anchor506along the oscillation path with a period of 0.05-0.2 seconds. As shown, the rigid plate504produces a vortex114upon a change in a direction of the rigid plate504as caused by the oscillation of the anchor506by the actuator(s). As such, a series of vortices114are produced at a top most (e.g., direction furthest from the heat transfer surface107) motion of the rigid plate504as well as at a bottom most (e.g., direction closest to the heat transfer surface107) motion of the rigid plate504. The oscillation path of the anchor506is along a surface of the fluid channel104at a location closer to the heat transfer surface107than a surface of the fluid channel104opposite to the heat transfer surface107.
Accordingly, the vortices114generated by the rigid plate504are produced in a viscous/thermal boundary layer along the heat transfer surface107to thereby increase the heat transfer rate of the heat exchanger500. Because the rigid plate504is sufficiently rigid to resist induced generation of the vortices114due to the flow of the cooling fluid106, the generation of the vortices114is controlled by activation of the actuator(s). The actuator(s) may manipulate a frequency at which the anchor506travels along the oscillation path to produce more or fewer vortices114, as desired. FIG.6is a cross-sectional simulation of heat transfer and fluid dynamics across a heat exchanger without a vortex generator.FIG.7is a cross-sectional simulation of heat transfer and fluid dynamics across the heat exchanger400with an active vortex generator comprising the half-flexible oscillating normal flag402.FIG.8is a cross-sectional simulation of heat transfer and fluid dynamics across the heat exchanger500with an active vortex generator comprising the rigid oscillating plate502.FIG.9is a cross-sectional simulation of heat transfer and fluid dynamics across the heat exchanger300with an active vortex generator comprising the oscillating normal flag302.FIGS.6-9show the active vortex generators at a top most position along the oscillation path. FIG.10is a cross-sectional simulation of two-phase heat transfer and fluid dynamics across a heat exchanger without a vortex generator.FIG.11is a cross-sectional simulation of two-phase heat transfer and fluid dynamics across the heat exchanger400with an active vortex generator comprising the half-flexible oscillating normal flag402.FIG.12is a cross-sectional simulation of two-phase heat transfer and fluid dynamics across the heat exchanger500with an active vortex generator comprising the rigid oscillating plate502.FIG.13is a cross-sectional simulation of two-phase heat transfer and fluid dynamics across the heat exchanger300with an active vortex generator comprising the oscillating normal flag302. In each ofFIGS.10-13, the enclosed shape represents a bubble, such as a bubble of steam caused by the boiling of water.FIGS.10-13show the active vortex generators at a bottom most position along the oscillation path. FIG.14is a graph1400of the Nusselt number1402over time1404for heat exchangers without an active vortex generator (control)1406and with a vortex generator comprising the half-flexible oscillating normal flag402, the rigid oscillating plate502, or the oscillating normal flag302. As shown, a first plot1408shows a baseline level of heat transfer provided by a heat exchanger without an active vortex generator. A second plot1410shows the heat transfer provided by the heat exchanger300, described above. A third plot1412shows the heat transfer provided by the heat exchanger400, described above. A fourth plot1414shows the heat transfer provided by the heat exchanger500, described above. As shown in each of the second, third, and fourth plots1410,1412,1414, there is a periodicity to the heat transfer induced by the oscillation of the respective anchors306,410,506along the oscillation path by the actuator(s). At a transition time1418, the heat transfer transitions from a single phase heat transfer to a two-phase heat transfer (e.g., boiling). As shown, the rigid oscillating plate502provides the greatest amount of heat transfer with the half-flexible oscillating normal flag402closely tracking to within about 90-96% of the level of heat transfer provided by the rigid oscillating plate502.
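For context onFIG.14, the Nusselt number plotted there is the standard dimensionless measure of convective heat transfer; the definitions below are the conventional textbook forms and are not recited in this description:

```latex
\mathrm{Nu} = \frac{hL}{k}, \qquad q'' = h\,(T_s - T_f)
```

where h is the convective heat transfer coefficient, L a characteristic length (here on the order of the flag length), k the thermal conductivity of the cooling fluid106, q'' the heat flux through the heat transfer surface107, and T_s and T_f the surface and fluid temperatures. A higher Nusselt number at a fixed flow therefore corresponds directly to a higher heat transfer rate.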
However, the amount of work required to oscillate the half-flexible oscillating normal flag402is 70% of the amount of work required to oscillate the rigid oscillating plate502. Likewise, the amount of pressure drop caused by the half-flexible oscillating normal flag402is 75% of the amount of pressure drop caused by the rigid oscillating plate502. After the transition time1418, the heat transfer provided by the rigid oscillating plate502and the half-flexible oscillating normal flag402are substantially the same. The oscillating normal flag302provides about 50-60% less heat transfer than the rigid oscillating plate502and the half-flexible oscillating normal flag402during single phase heat transfer, but still provides five times or more the heat transfer of a heat exchanger without a vortex generator during single phase heat transfer. At the same time, the amount of work required to oscillate the oscillating normal flag302is 20% of the amount of work required to oscillate the rigid oscillating plate502. Likewise, the amount of pressure drop caused by the oscillating normal flag302is 50% of the amount of pressure drop caused by the rigid oscillating plate502. After the transition time1418, the heat transfer provided by the oscillating normal flag302jumps to ten times or more the heat transfer provided by a heat exchanger without a vortex generator. At the same time, the heat transfer provided by the oscillating normal flag302closes to within about 70-75% of the heat transfer provided by the rigid oscillating plate502and the half-flexible oscillating normal flag402during two-phase heat transfer. Looking back toFIGS.7,8,11, and12, oscillating the flag404with the rigid portion406or oscillating the rigid plate504produces vortices114closer to the heat transfer surface107than with the flexible flag304. Accordingly, for the rigid plate504and semi-rigid flag404, the flow velocity of the cooling fluid106is increased near the heat transfer surface107. As such, in the two-phase heat transfer shown inFIGS.11and12, the bubbles are smaller and pushed away from the heat transfer surface107faster than the bubbles shown inFIGS.10and13. The increased flow velocity and closer placement of the vortices114to the heat transfer surface107allow for continued boiling before reaching a critical heat flux. Specifically, the increased flow velocity and closer placement of the vortices114to the heat transfer surface107operate to prevent the transition to film boiling. In operation, oscillating the flag404with the rigid portion406or oscillating the rigid plate504results in higher heat transfer, but requires more work by the actuator(s) and results in a greater pressure drop across the heat exchangers as compared to the flexible flag304. Accordingly, the flag404with the rigid portion406and the rigid plate504are particularly suited to higher heat flux heat exchangers that are not impacted by the larger pressure drop. For example, on a power laser diode, energy from the power laser diode may be concentrated in a 50×50 micrometer surface. Likewise, on a central processor unit (CPU), the locations of each core of a multi-core processor may present high heat flux dissipation requirements.
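The relative figures recited above can be combined into simple figures of merit. The sketch below is illustrative arithmetic only: single mid-range values are assumed from the quoted ranges, normalized so the rigid oscillating plate502is 1.0 in every column.

```python
# Illustrative arithmetic only: single mid-range values are assumed from the
# ranges quoted above, normalized so the rigid oscillating plate 502 is 1.0
# in every column (single-phase heat transfer, actuator work, pressure drop).

designs = {
    "rigid plate 502":        (1.00, 1.00, 1.00),
    "half-flexible flag 404": (0.93, 0.70, 0.75),  # ~90-96% heat transfer, 70% work, 75% dp
    "flexible flag 302":      (0.45, 0.20, 0.50),  # ~50-60% less heat transfer, 20% work, 50% dp
}

for name, (q, work, dp) in designs.items():
    print(f"{name}: heat transfer/work = {q / work:.2f}, "
          f"heat transfer/pressure drop = {q / dp:.2f}")
```

On this normalization the flexible flag302delivers the most heat transfer per unit actuator work, consistent with the description's point that it suits pressure-sensitive, lower-flux applications.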
Locating a heat exchanger with the active vortex generator including the flag404with the rigid portion406or the rigid plate504proximate to such high heat flux dissipation locations provides for improved heat transfer that can be activated when needed (e.g., upon operation of the power laser diode or a corresponding core of a multi-core processor). Likewise, because the oscillating normal flag302results in a substantial increase in heat transfer while at the same time limiting the pressure drop across the heat exchanger300, the oscillating normal flag302is suited to implementations that are more sensitive to pressure drops. For example, the oscillating normal flag302may be more suited for inclusion at the inlet of a tube-in-tube heat exchanger. Other implementations are contemplated by this disclosure. FIGS.15-18are views of an implementation of a heat exchanger1500with an active vortex generator1502comprising an oscillating normal flag1504. The heat exchanger1500comprises a cooling fluid channel1506. As shown, the fluid channel1506comprises a top surface1508, a bottom surface1510, a first side surface1512, and a second side surface1514. The first and second side surfaces1512,1514extend between the top and bottom surfaces1508,1510to enclose the fluid channel1506. The bottom surface1510is in contact with a heat source which imparts a heat flux1516. In the example shown, the heat flux1516leads to two-phase heat exchange as indicated by the presence of the bubble1518. The active vortex generator1502comprises a first actuator1520and a second actuator1522positioned about the first and second side surfaces1512,1514, respectively. The first actuator1520comprises a first solenoid1524coupled to a first drive arm1526. Likewise, the second actuator1522comprises a second solenoid1528coupled to a second drive arm1530. The first drive arm1526comprises a linkage1532that is affixed to a first end of an anchor1534for the flag1504. In the example shown, the anchor1534is a rod, though other rigid support structures are contemplated by this disclosure, such as a bar, tube, beam or any other sufficiently rigid support structure to anchor the flag1504in the fluid channel1506. The second drive arm1530likewise comprises a linkage1536that is affixed to a second end of the anchor1534. Following the example ofFIG.3above, the flag1504is coupled to the anchor1534at a rotatable joint and the flag1504is adapted to move freely within a cooling fluid passing through the fluid channel1506. The anchor1534passes through a first aperture1538in the first side surface1512and a second aperture1540in the second side surface1514. A first seal1542is positioned within the first side surface1512and around the anchor1534to prevent the cooling fluid from escaping from the fluid channel1506through the first aperture1538. Likewise, a second seal1544is positioned within the second side surface1514and around the anchor1534to prevent the cooling fluid from escaping from the fluid channel1506through the second aperture1540. The first and second seals1542,1544are larger than the first and second apertures1538,1540and extend within the first and second side surfaces1512,1514beyond the first and second apertures1538,1540. In operation, the solenoids1524,1528are instructed by a controller (not shown) to move the drive arms1526,1530in and out between a first position and a second position and thereby move the anchor1534along an oscillation path, as discussed above.
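The control scheme just described (a controller driving the solenoids1524,1528between two positions) can be sketched as follows. This is a minimal illustration with a hypothetical driver class; the names and print statements stand in for hardware calls and are not from the patent.

```python
# Minimal control-loop sketch with a hypothetical driver class; the names and
# print statements stand in for hardware calls and are not from the patent.

import time

class SolenoidPair:
    """Stand-in driver for the first and second solenoids 1524, 1528."""
    def extend(self):  print("drive arms out -> anchor at first position")
    def retract(self): print("drive arms in  -> anchor at second position")

def oscillate(solenoids: SolenoidPair, period_s: float = 0.1, cycles: int = 3):
    """Each full cycle sheds two vortices: one per change of direction."""
    for _ in range(cycles):
        solenoids.extend()
        time.sleep(period_s / 2)
        solenoids.retract()
        time.sleep(period_s / 2)

oscillate(SolenoidPair())
```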
As the anchor1534moves within the apertures1538,1540, the first and second seals1542,1544move within the first and second side surfaces1512,1514to continue to prevent cooling fluid from escaping from the fluid channel1506through the apertures1538,1540. Additionally, as the anchor1534moves along the oscillation path, the flag1504waves within the cooling fluid to generate the vortices114, as described above. In an example, a length of the flag1504is between 0.5-2.5 mm. A width of the flag1504is substantially the same dimension as the length of the flag1504. The width of the fluid channel1506is 2-3 times the length of the flag1504(e.g., 1-7.5 mm). A thickness of the flag1504is less than 0.05 times the length of the flag1504. An amplitude of the oscillation of the anchor1534along the oscillation path is 0.5-1 times the length of the flag1504. The actuators1520,1522are configured to oscillate the anchor1534along the oscillation path with a period of 0.05-0.2 seconds. While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods may be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted or not implemented. Also, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component, whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein.
11859921

The images in the drawings depict the heat exchanger for "left hand" installation. The heat exchanger may alternatively be manufactured for "right hand" installation. In such case, the figures for the "right hand" installation will be mirror images of the "left hand" installation figures. There may be slight differences in design between "left hand" and "right hand" heat exchangers, but the major design concepts of the two units are identical. The images in the drawings are simplified for illustrative purposes. Within the descriptions of the figures, similar elements are provided with similar names and reference numerals to those of the previous figure(s). The specific numerals assigned to the elements are provided solely to aid in the description and are not meant to imply any limitations (structural or functional) on the invention. The appended drawings illustrate exemplary configurations of the invention and, as such, should not be considered as limiting the scope of the invention, which may admit to other equally effective configurations. It is contemplated that features of one configuration may be beneficially incorporated in other configurations without further recitation. DETAILED DESCRIPTION OF THE INVENTION Heat Exchanger with Single Heat Exchanging Unit FIGS.24,25,26and27illustrate an exemplary embodiment of an aircraft heat exchanger1000comprising a single heat exchanging unit1100used, for example, to cool bleed air from an aircraft engine. However, the invention is not limited to the exemplary embodiment contained inFIGS.24,25,26and27. The embodiment of the single heat exchanging unit1100depicted inFIGS.24-27is advantageous insofar as it pre-cools bleed air of the aircraft engine in a single, compact, light-weight unit. The heat exchanging unit1100is made from stainless steel or a higher temperature alloy such as one of the high temperature nickel alloys. In one embodiment, the stainless steel is 304 stainless steel; in another embodiment, the nickel alloy is Inconel 625. Other suitable materials may be used. The stainless-steel or nickel alloy design allows the elements of the unit to be welded to one another (as compared to using stainless steel and aluminum—which cannot be welded together). Welding is especially advantageous in designing a heat exchanger1000for an aircraft due to the shock and vibration incurred during aircraft use. The single heat exchanging unit comprises two tanks—an inlet tank1101above the core2307and an outlet tank1102below the core2307. An inlet tube sheet1103separates the inlet tank1101from the core2307and an outlet tube sheet1104separates the outlet tank1102from the core2307. Additionally, an inlet port1105is connected to the inlet tank1101and an outlet port1106is connected to the outlet tank1102. FIGS.24-25further depict the elements of the core2307(the inlet tube sheet1103, the microtubes2302, midplates2301, sideplates2300and solid rods2306(if present) and the outlet tube sheet1104). The microtubes2302are attached to the inlet tube sheet1103at the tops and to the outlet tube sheet1104at the bottoms. Also, midplates2301are spaced along the length of the microtubes2302to provide additional support to the microtubes2302. Solid rods2306are also located along the leading face6000of the core2307, at which air enters the heat exchanging unit1100(and, optionally, at the trailing face of the core to prevent damage during handling of the single heat exchanging unit1100).
The bleed air (or other fluid to be cooled) enters the heat exchanging unit1100through the inlet port1105and flows to the inlet tank1101. From the inlet tank1101the bleed air (or other fluid to be cooled) flows through the inlet tube sheet1103and into the microtubes2302. Upon leaving the microtubes2302, the bleed air (or other fluid to be cooled), now cooled, flows through the outlet tube sheet1104, into the outlet tank1102and out through the outlet port1106. Heat Exchanger with Multiple Heat Exchanging Subunits FIGS.1,2and3illustrate an exemplary embodiment of a heat exchanger1000for an aircraft—specifically for the Bell Boeing V-22 Osprey—used to cool the fluids of the engine, gearbox and hydraulic systems. However, the invention is not limited to the exemplary embodiment contained inFIGS.1,2and3. The V-22 Osprey's propulsion system consists of dual counter rotating proprotors attached to gearboxes driven by turboshaft engines. The engines, proprotor gearboxes, tilt-axis gearboxes, and proprotor controls are all housed in the rotating nacelle on the end of each wing. FIGS.1,2and3depict certain features of an exemplary embodiment of heat exchanger1000for a V-22 Osprey, namely, a PRGB subunit1003, TAGB/GEN subunit1004and HYD subunit1005that are secured to a baseplate1001via mechanical means (such as screws, bolts, or similar means known in the industry). The embodiment of the heat exchanger1000depicted inFIG.1is advantageous insofar as it cools multiple lubricating oil systems for the Osprey aircraft—the oil system for the tilt axis gear box (TAGB), the engine (GEN), prop-rotor gearbox (PRGB) and the hydraulic (HYD) system—in a single, compact, light-weight unit. The baseplate1001further includes a shaft aperture1002. When installed in a V-22 Osprey, the engine shaft (not depicted) will pass through the shaft aperture1002, and it is the engine fan that draws surrounding air through the heat exchanger1000. The PRGB subunit1003, TAGB/GEN subunit1004and HYD subunit1005of the heat exchanger1000are made from stainless steel. In one embodiment, the stainless steel is 304 stainless steel. This stainless-steel design allows the elements of these units to be welded to one another (as compared to using stainless steel and aluminum—which cannot be welded together). Welding is especially advantageous in designing a heat exchanger1000for an aircraft due to the shock and vibration incurred during aircraft use. In one embodiment, the baseplate1001is made of 5052 aluminum and the PRGB subunit1003, TAGB/GEN subunit1004and HYD subunit1005are mechanically fastened (using screws, bolts or other means known in the industry) to the baseplate1001via the exterior supports2304. HYD Subunit: FIGS.4,5,6and7depict an exemplary embodiment of the HYD subunit1005. The heat exchanger core2307of HYD subunit1005comprises the HYD port tube sheet2103, the microtubes2302, midplates2301, sideplates2300and solid rods2306(if present) and the HYD turn tube sheet2201. Joined to the top side of the HYD subunit1005core2307are two parallel portside tanks2100—a HYD inlet tank2101and a HYD outlet tank2102. A HYD inlet port2104is located above the HYD inlet tank2101and the oil to be cooled is flowed through the HYD inlet port2104into the HYD inlet tank2101. A HYD outlet port2105is located above the HYD outlet tank2102and the oil, after it is cooled, flows out of the HYD outlet tank2102, through the HYD outlet port2105and away from the heat exchanger1000. The HYD turnside tank2200is attached to the bottom of the core2307.
The HYD subunit1005further comprises a HYD oil bypass valve port2107located above the portside tanks2100, and a HYD bleed port2106connected to the HYD outlet port2105. As shown inFIG.6, from the HYD inlet tank2101, the oil flows through the HYD port tube sheet2103and into the microtubes2302(further described below) below the HYD inlet tank2101. Oil then flows through the HYD turn tube sheet2201into the HYD turnside tank2200. From the HYD turnside tank2200the oil flows through the HYD turn tube sheet2201and into the microtubes2302located below the HYD outlet tank2102. Finally, the cooled oil flows out the microtubes2302of the core2307, through the HYD port tube sheet2103, into the HYD outlet tank2102and through the HYD outlet port2105. FIG.6further depicts the elements of the core2307for the HYD subunit1005. The microtubes2302are attached to the HYD port tube sheet2103at the tops and to the HYD turn tube sheet2201at the bottoms. Also, midplates2301are spaced along the length of the microtubes2302to provide additional support to the microtubes2302. Solid rods2306are also located along the leading face6000of the core2307, at which air enters the heat exchanger1000(and, optionally, at the trailing face of the core2307to prevent damage during handling of the HYD subunit1005). TAGB/GEN Subunit: FIGS.8,9,10,11and12depict an exemplary embodiment of the TAGB/GEN subunit1004. The TAGB/GEN subunit1004comprises two side-by-side subunits—the TAGB subunit3400and the GEN subunit3500. The cores2307for each of the TAGB subunit3400and GEN subunit3500comprise the port tube sheet3103, the microtubes2302, midplates2301, sideplates2300and solid rods2306(if present), and the turn tube sheet3201. In practice, the core2307which serves both the TAGB and GEN subunits of the TAGB/GEN subunit1004is one assembly. Connected above the portion of the core2307associated with the TAGB subunit3400are two parallel TAGB portside tanks3100—a TAGB inlet tank3109and a TAGB outlet tank3110. A TAGB inlet port3112is located above the TAGB inlet tank3109and the oil to be cooled is flowed through the TAGB inlet port3112into the TAGB inlet tank3109. A TAGB outlet port3113is located above the TAGB outlet tank3110and the oil, after it is cooled, flows out of the TAGB outlet tank3110, through the TAGB outlet port3113and away from the heat exchanger1000. Connected to the bottom of the TAGB subunit3400(below the core2307associated with the TAGB subunit) is the TAGB turnside tank3202. Similar to the port tube sheet3103, the turn tube sheet3201separates the core2307from the TAGB turnside tank3202. The TAGB subunit3400further comprises a TAGB bypass valve port3114located above the TAGB inlet tank3109and a TAGB oil filter port3115. Connected above the portion of the core2307associated with the GEN subunit3500are two parallel GEN portside tanks3100—a GEN inlet tank3101and a GEN outlet tank3102. A GEN inlet port3104is located above the GEN inlet tank3101and the oil to be cooled is flowed through the GEN inlet port3104into the GEN inlet tank3101. A GEN outlet port3105is located above the GEN outlet tank3102and the oil, after it is cooled, flows out of the GEN outlet tank3102, through the GEN outlet port3105and away from the heat exchanger1000. The port tube sheet3103separates the GEN inlet tank3101and GEN outlet tank3102from the core2307of the GEN subunit3500. At the bottom of the GEN subunit3500(below the core2307) is the GEN turnside tank3200. Similar to the port tube sheet3103, the turn tube sheet3201separates the core2307from the GEN turnside tank3200.
The GEN subunit3500further comprises a GEN bypass valve port3107located above the GEN inlet tank3101. For each of the GEN subunit3500and the TAGB subunit3400, from the inlet tanks3101,3109, the oil flows through the port tube sheet3103and into the microtubes2302(further described below) below the inlet tanks3101,3109. Oil then flows through the turn tube sheet3201into the turnside tanks3200,3202. From the turnside tanks3200,3202the oil flows through the turn tube sheet3201and through the microtubes2302below the outlet tanks3102,3110. Finally, the cooled oil flows out the microtubes2302, through the port tube sheet3103into the outlet tanks3102,3110and through the outlet ports3105,3113. FIG.12further depicts the elements of the cores2307. The microtubes2302are attached to the port tube sheet3103at the top and to the turn tube sheet3201at the bottom. Also, midplates2301are spaced along the length of the microtubes2302to provide additional support to the core2307. Solid rods2306are also located on the front face of the cores2307at which air enters the heat exchanger1000. PRGB Subunit FIGS.13,14,15,16,17and18depict an exemplary embodiment of the PRGB subunit1003. PRGB subunit1003comprises two PRGB cores2307—the inlet core4300(comprising the microtubes2302transporting oil from the PRGB inlet tank4101to the PRGB turnside tank4200) and the outlet core4301(comprising the microtubes2302transporting oil from the PRGB turnside tank4200to the PRGB outlet tank4102). The inlet core4300is located beside the outlet core4301. PRGB subunit1003further comprises two PRGB portside tanks—a PRGB inlet tank4101located above the PRGB inlet core4300and the PRGB outlet tank4102above the PRGB outlet core4301. The PRGB turnside tank4200and PRGB turn tube sheet4201extend below both the PRGB inlet core4300and the PRGB outlet core4301. The PRGB subunit1003further comprises a PRGB oil filter port4107and a PRGB bypass valve port4108. A PRGB inlet port4104is located above the PRGB outlet tank4102and the oil to be cooled is flowed through the PRGB inlet port4104into the PRGB inlet tank4101through an inlet pipe4203. A PRGB outlet port4105is located above the PRGB outlet tank4102and the oil, after it is cooled, flows out of the PRGB outlet tank4102, through the PRGB outlet port4105and away from the heat exchanger1000. Each of the PRGB inlet and outlet cores4300,4301comprises microtubes2302, midplates2301, sideplates2300and solid rods2306. As shown inFIG.17, from the PRGB inlet tank4101, the oil flows through the PRGB inlet port tube sheet4103and into the microtubes2302(further described below) of the PRGB inlet core4300. Oil then flows through the PRGB turn tube sheet4201into the PRGB turnside tank4200. From the PRGB turnside tank4200the oil flows through the PRGB turn tube sheet4201below the PRGB outlet core4301and into the microtubes2302of the PRGB outlet core4301. Finally, the cooled oil flows out the microtubes2302of the PRGB outlet core4301, through the PRGB outlet tube sheet4202, into the PRGB outlet tank4102and through the PRGB outlet port4105. FIG.17further depicts the elements of the PRGB inlet and outlet cores4300,4301. The microtubes2302are attached to the PRGB inlet and outlet port tube sheets4103,4202at the top and to the PRGB turn tube sheet4201at the bottom. Also, midplates2301are spaced along the length of the microtubes2302to provide additional support to the cores4300,4301.
Solid rods2306are also located along the leading face of the core, at which air enters the heat exchanger1000(and, optionally, at the trailing face of the core to prevent damage during handling of the PRGB subunit1003). Fluid Flow As shown inFIG.21D, the single unit heat exchanging unit1100comprises a cross-flow, one-pass design. This figure further shows the pathway of the bleed air (F) through the heat exchanging unit1100—entering through the inlet port1105and flowing to the inlet tank1101. From the inlet tank1101the bleed air flows through the inlet tube sheet1103and into the microtubes2302. Upon leaving the microtubes, the bleed air flows through the outlet tube sheet1104, into the outlet tank1102and out through the outlet port1106. At the same time that the bleed air (F) is taking this path, the surrounding air (A) is drawn through the core2307and around the microtubes2302. The heat of the bleed air (F) is transferred through the walls of the microtubes2302to the surrounding air (A) thereby resulting in the cooling of the bleed air (F). As shown inFIGS.21Aand21B, the HYD subunit1005and the TAGB/GEN subunit1004each comprise a cross flow—counter flow, two-pass design. In a cross flow—counter flow design, the first fluid flow (i.e., the flow of the oil) makes at least two passes through the heat exchanger, with each pass being progressively closer to the front face of the heat exchanger. Macroscopically, the oil enters the core near the back of the heat exchanger and exits near the front. The air, on the other hand, enters the front face of the heat exchanger and exits the back face. Each individual pass defines cross flow (the fluid velocities of the two fluids are orthogonal to each other), but macroscopically, the oil flows from the back of the heat exchanger forward, while the air flows in the opposite direction. This is the definition of counter flow. As is typical of two pass cross flow-counter flow heat exchangers, one fluid enters and exits on the same end of the heat exchanger1003-1005. FIG.21Ashows the pathway of the hydraulic fluid (B) through the HYD subunit1005—entering through the HYD inlet port2104, into the HYD inlet tank2101, down through the microtubes2302(located underneath the HYD inlet tank2101) of the core2307, into the HYD turnside tank2200, up through the microtubes2302(located underneath the HYD outlet tank2102) of the core2307, into the HYD outlet tank2102and out through the HYD outlet port2105. At the same time that the hydraulic fluid (B) is taking this path, the surrounding air (A) is being drawn through the core2307and around the microtubes2302. The heat of the hydraulic fluid (B) is transferred through the walls of the microtubes2302to the surrounding air (A) thereby resulting in the cooling of the hydraulic fluid (B). FIG.21Bshows the pathway of two separate fluids—the GEN fluid (C) and the TAGB fluid (D)—through the TAGB/GEN subunit1004. The GEN fluid (C) enters through the GEN inlet port3104, into the GEN inlet tank3101, down through the microtubes2302(located underneath the GEN inlet tank3101) of the core2307, into the GEN turnside tank3200, up through the microtubes2302(located underneath the GEN outlet tank3102) of the core2307, into the GEN outlet tank3102and out through the GEN outlet port3105.
The TAGB fluid (D) enters through the TAGB inlet port3112, into the TAGB inlet tank3109, down through the microtubes2302(located underneath the TAGB inlet tank3109) of the core2307, into the TAGB turnside tank3202, up through the microtubes2302(located underneath the TAGB outlet tank3110) of the core2307, into the TAGB outlet tank3110and out through the TAGB outlet port3113. At the same time that the GEN fluid (C) and/or the TAGB fluid (D) is taking these paths, the surrounding air (A) is being drawn through the core2307and around the microtubes2302. The heat of the GEN fluid (C) and the TAGB fluid (D) is transferred through the walls of the microtubes2302to the surrounding air (A) thereby resulting in the cooling of the GEN fluid (C) and the TAGB fluid (D). As shown inFIG.21C, the PRGB subunit1003comprises a transverse (u-turn), two-pass design;FIG.21Cshows the pathway of the PRGB fluid (E) through the PRGB subunit1003. Unlike the cross flow—counter flow design, where the macroscopic flow of the fluid in progressive cross flow passes is in a direction opposite the air velocity, in a transverse cross flow design the macroscopic flow of fluid in progressive cross flow passes typically is in a direction orthogonal to the direction of the air flow. The PRGB fluid (E) enters through the PRGB inlet port4104, through the inlet pipe4203, into the PRGB inlet tank4101, down through the microtubes2302(located underneath the PRGB inlet tank4101) of the core2307, into the PRGB turnside tank4200, up through the microtubes2302(located underneath the PRGB outlet tank4102) of the core2307, into the PRGB outlet tank4102and out through the PRGB outlet port4105. At the same time that the PRGB fluid (E) is taking this path, the surrounding air (A) is being drawn through the core2307and around the microtubes2302. The heat of the PRGB fluid (E) is transferred through the walls of the microtubes2302to the surrounding air (A) thereby resulting in the cooling of the PRGB fluid (E). The cores2307of each of the single heat exchanging unit1100, the PRGB subunit1003, the TAGB/GEN subunit1004and the HYD subunit1005comprise a plurality of microtubes2302. Microtubes2302laser welded to an exemplary tube sheet are shown inFIG.19and an exemplary in-line pattern of an array of laser welded microtubes2302is shown inFIG.20. By way of example, an ytterbium fiber laser (such as the YLR-MM laser sold by IPG Photonics) may be used for the laser welding. In one embodiment a thin layer of epoxy is also deposited on the backside of the laser-welded joints thereby providing a redundant mechanism for additional strength and creation of a leak tight joint. The microtubes2302transport a first fluid (i.e., lubricating oil or bleed air) to be cooled while a second fluid (i.e., air) flows over the outer surface of the microtubes2302. The temperature differential between the hotter first fluid and the cooler second fluid results in the exchange of heat between the fluids. The number of microtubes2302provided will depend on the design chosen and the performance requirements desired. In certain embodiments, the heat exchanger1000will utilize thousands, tens of thousands, or even millions of microtubes2302. In one embodiment, the heat exchanging unit1100comprises between 5,000 and 15,000 microtubes.
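The tube-wall heat exchange just described is conventionally quantified by the log-mean temperature difference relation. The formula below is the standard textbook form (with a cross-flow correction factor F); it is not a relation recited in this description:

```latex
Q = U A F \,\Delta T_{\mathrm{lm}}, \qquad
\Delta T_{\mathrm{lm}} = \frac{\Delta T_1 - \Delta T_2}{\ln\!\left(\Delta T_1 / \Delta T_2\right)}
```

where U is the overall heat transfer coefficient, A the total outer surface area of the microtubes2302, and ΔT_1, ΔT_2 the fluid temperature differences at the two ends of the core. For a fixed frontal area, thousands of small-diameter tubes maximize A, which motivates the large tube counts quoted next.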
In one embodiment the PRGB subunit1003comprises around 4,000 microtubes2302in the first pass (from PRGB inlet tube sheet4103to PRGB turn tube sheet4201) and around 7,000 microtubes2302in the second pass (from PRGB turn tube sheet4201to PRGB outlet tube sheet4202); the TAGB/GEN subunit1004comprises around 4,000 microtubes2302(around 2,500 microtubes2302in the TAGB subunit3400and around 1,500 in the GEN subunit3500); and the HYD subunit1005comprises around 5,000 microtubes2302. Microtubes2302may have an outer diameter of less than 3.5 mm, but most commonly the outer diameter is between around 0.5 mm (0.020 inches) and 2.0 mm (0.08 inches). Microtubes2302may typically be made from polymers or metal alloys. Such metal alloys may include steel, nickel alloy, aluminum or titanium. In one embodiment the microtubes2302are made of 304 stainless steel or Inconel 625. The plurality of microtubes2302are substantially parallel to each other and are substantially perpendicular to the inlet or port tube sheets1103,2103,3103,4103,4202and the outlet or turn tube sheets1104,2201,3201,4201. The microtubes2302are also substantially perpendicular to one or more midplates2301. The midplates2301are located between an inlet or port tube sheet1103,2103,3103,4103,4202and an outlet or turn tube sheet1104,2201,3201,4201with the microtubes2302extending through the midplates2301. The number of midplates2301(or the existence of them at all) and the location of the midplates2301will be a design consideration dependent, amongst other considerations, on the physical characteristics of the heat exchanger1000. The inlet tube sheets, outlet tube sheets, port tube sheets, turn tube sheets and midplates (when present) each comprise an array of tube apertures5000in a certain pattern. The pattern of the array of tube apertures5000for corresponding inlet tube sheets, outlet tube sheets, port tube sheets, turn tube sheets and midplates shall be substantially identical. The pattern of the array of tube apertures5000defines the spacing/position of the microtubes2302in the heat exchanger1000. In one embodiment of the instant invention, the microtubes2302are circular in cross-section and configured in a rectangular (sometimes referred to as "in-line") pattern, as depicted inFIG.22. In one exemplary embodiment, the microtubes2302have an outer diameter of less than 3.5 mm. In another exemplary embodiment, the microtubes2302have an outer diameter between around 0.5 mm and 2.0 mm. In such a rectangular pattern, the distance between the centers of the microtubes2302in the longitudinal direction (or, in other words, in the same direction as the air flow A) is referred to as the longitudinal distance (SL) and the distance between the centers of the microtubes2302in the transverse direction (or, in other words, in the direction perpendicular to the direction of the air flow A) is referred to as the transverse distance (ST). In one embodiment, the longitudinal distance is less than the transverse distance, and the transverse distance is greater than two times the outer diameter of the microtube. In one embodiment, the longitudinal distance is one and one-half (1.5) times the outer diameter of the microtube2302and the transverse distance is three (3) times the outer diameter of the microtube2302. In another embodiment, the longitudinal distance is between 1.25 and 1.75 times the outer diameter of said microtube and said transverse distance is between 2.0 and 5.5 times the outer diameter of said microtube.
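The in-line array geometry recited above lends itself to a short generator. The sketch below is illustrative: the tube counts and the 1.0 mm outer diameter are assumptions, while the 1.5× longitudinal and 3× transverse spacing ratios restate one recited embodiment.

```python
# Illustrative sketch: the tube counts and 1.0 mm outer diameter are assumed;
# the 1.5x longitudinal and 3x transverse spacing ratios restate one recited
# embodiment of the in-line (rectangular) array.

def inline_array(outer_dia_mm=1.0, n_rows=4, n_cols=6,
                 sl_ratio=1.5, st_ratio=3.0):
    """Return tube-center coordinates: SL along the airflow, ST across it."""
    sl = sl_ratio * outer_dia_mm  # longitudinal pitch
    st = st_ratio * outer_dia_mm  # transverse pitch
    return [(row * sl, col * st) for row in range(n_rows) for col in range(n_cols)]

centers = inline_array()
print(f"{len(centers)} tubes; first three centers (mm): {centers[:3]}")
```

With ST greater than twice the outer diameter, the transverse gap between adjacent tube walls exceeds one tube diameter, which is consistent with the fouling-resistance argument in the next paragraph.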
The rectangular array pattern is advantageous because sand and dust easily pass around the microtubes2302thereby preventing fouling of the heat exchanger1000. The largest sand particle is generally less than 1 mm (approximately 0.040 inches), so we speculate that virtually all sand and dust will pass between the microtubes2302of the aforementioned embodiment. Additionally, the rectangular array pattern is relatively easy to clean, usually using a high-pressure air gun and without requiring removal of the heat exchanger1000from the aircraft. Rectangular array patterns of microtubes2302have enhanced thermal performance characteristics. It will seldom be the case that, within the spatial envelope of a given heat exchanger1000and with given flow rates of air and fluid, microtube heat exchangers cannot provide a superior, or at least very competitive, combination of heat transfer and air-side pressure drop compared to more conventional architectures (such as plate-fin heat exchangers). In other words, in the present microtube heat exchanger invention, the improved benefits with respect to fouling resistance or weight savings are achieved without sacrificing thermal performance and compactness. In summary, the combination of the use of the microtubes and the rectangular pattern is important and unique. As an additional benefit, when experiencing the same face velocities (the velocity of the air entering the heat exchanger), the rectangular microtube array pattern results in an air side pressure drop typically 30-40% lower than aluminum plate-fin products, thereby resulting in a more efficient heat exchanger. FIG.23also depicts the use of solid rods2306located along the leading face (or, in other words, the face at which the air enters the core) of the core2307. These solid rods2306are typically of the same diameter as the microtubes2302within the core2307and made of the same material as the core2307(usually stainless steel). The array of microtubes2302is shielded behind the solid rods2306, and thus the solid rods2306protect the microtubes2302from damage due to high velocity debris entering the heat exchanger1000. It should be noted that the stainless steel microtubes2302are tough, even without the protection provided by the solid rods2306, and resist damage fairly well from high velocity debris. Further, optionally, solid rods2306may be located at the trailing face of the core to prevent damage during handling of the heat exchanger1000. In an alternative embodiment, the solid rods2306may be replaced with thick-walled tubes wherein the walls of the thick-walled tubes are at least twice as thick as the walls of the microtubes2302. The foregoing description of the embodiments of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. The embodiments were chosen and described in order to explain the principles of the invention and its practical application to enable one skilled in the art to utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. This invention is susceptible to considerable variation in its practice.
Therefore, the foregoing description is not intended to limit, and should not be construed as limiting, the invention to the particular exemplifications presented hereinabove. Rather, what is intended to be covered is as set forth in the ensuing claims and the equivalents thereof as permitted as a matter of law.
11859922

DETAILED DESCRIPTION Disclosed herein is a selectively transparent film including polyacrylonitrile (PAN) nanofibers that can be seasonally deployed over existing surfaces to enable radiative cooling during the summer, while allowing solar heating when removed during the winter. As disclosed herein, the morphology of the PAN nanofibers is tailored to exhibit high solar cross-sections. Such morphology decreases the amount of material needed and, in turn, causes the film to exhibit high infrared transmittance despite PAN's intrinsic absorption in the 8-13 μm range. As disclosed herein, a beaded-nanofiber electrospun film boosts the total solar reflectance of an unpolished aluminum surface from ˜80% to nearly 99%. When scaled up and tested outdoors, the film shields a PDMS-coated aluminum sheet from solar radiation while allowing the PDMS-coated aluminum sheet to radiate heat to space, resulting in a 7° C. temperature drop under unoptimized sky conditions. Heat transfer modeling agrees with the outdoor experiments and predicts temperature drops exceeding 10° C. below ambient under standard sky conditions with the beaded nanofiber film. The flexible and freestanding nature of the film may allow it to be deployed seasonally in regions where it is favorable to reflect sunlight during warmer months but absorb solar heat during colder months. This would widen the geographical space where radiative cooling is applicable. Passive radiative cooling is one approach that has the potential to alleviate urban heat island effects and decrease the energy consumed for building thermal regulation. This approach takes advantage of the atmospheric transparency windows in the infrared (3.4 μm-4.1 μm, 8 μm-13 μm and 16 μm-28 μm) to allow terrestrial materials to radiate heat to Space (˜3 K). As used herein, selectivity with respect to a radiator means the radiator can emit heat to Space while preventing absorption of solar irradiation. As used herein, "mid-wavelength infrared" refers to electromagnetic radiation having a wavelength from about 3 μm to about 8 μm. As used herein, "long-wavelength infrared" refers to electromagnetic radiation having a wavelength from about 8 μm to about 15 μm. Before the present disclosure, radiative cooling approaches were best suited for climates with high atmospheric clarity and low humidity levels. In seasonal climates, a thermal management system with two operational modes, as disclosed herein, would use less energy. In a first operational mode, the thermal management system radiatively cools during the summer. In a second operational mode, the thermal management system absorbs sunlight during the winter. The bimodal approach of the present disclosure could offset cooling and heating demands that are otherwise met using renewable electricity or fossil fuels. According to the present disclosure, a selectively transparent film can be seasonally deployed over existing materials to lower their temperatures. A freestanding, selectively transparent film is a versatile device that can be placed in contact with an emitting/radiating surface or separated from the emitting/radiating surface by a transparent insulation layer. In examples, the film may include electrospun polyacrylonitrile (PAN) fibers, supported by thin polyethylene (PE) sheets.
By tailoring the hierarchical morphology of the PAN structures, examples of the freestanding PAN-based film of the present disclosure may achieve greater than 95% solar-weighted total reflectance (SR) and greater than 70% atmospheric-window-weighted total transmittance (AWT). As such, the selectively transparent film can be paired with emitting/radiating surfaces that have relatively low solar reflectance. It is to be understood that many materials used in urban settings (e.g., concrete, asphalt, roofs, etc.) have relatively low reflectance. During warmer months, examples of the PAN film of the present disclosure can be deployed to provide passive radiative cooling. During colder months, when a solar-absorbing surface is favorable, the PAN film of the present disclosure can be easily removed and stored for later use. In addition to being seasonally versatile, examples of the film of the present disclosure can provide optimized cooling regardless of the cooling power demand. When placed directly on the emitter/radiator, radiation can augment natural thermal regulation mechanisms such as conduction and convection to the environment. As disclosed herein, a thermal management system20having a combined operational mode, with conduction, convection, and radiation aspects, is useful for thermal management in applications where heat dissipation exceeds the radiative cooling power (e.g., solar panels). In low power applications, where sub-ambient temperatures are possible, a thermal break between the emitter/radiator and the environment can enable higher performance. In this scenario, the film35of the present disclosure can be a stand-alone cover or be paired in tandem with an IR-transparent insulator to decrease heat transfer between the emitter/radiator and the warmer environment. The seasonal versatility and cooling power versatility of examples of the present disclosure provide enhanced utility in a wider range of climates. Referring toFIG.2, in an example, a thermal management system20for a body17to be exposed to solar radiation19includes an infrared radiating element22and a solar-scattering cover24disposed on the infrared radiating element22. In some examples, the solar-scattering element can be integrated with the infrared radiating element. It is to be understood that emission may occur partly from the solar-scattering element and partly from the infrared radiating element, thereby increasing cooling power. As used herein, a "body" means any structure. By way of non-limiting example, a body may include: a building, a roof, a wall, a window, a door, a hatch, a boat, an automobile, an airplane, a lighter-than-air vehicle, a machine, a housing, an electronic device, a solar panel, a container, a greenhouse, a swimming pool, a water reservoir, or combinations thereof. In some examples of the thermal management system20, the solar-scattering cover24may be to scatter sunlight11diffusely or directionally away from the body17. The solar-scattering cover24may be substantially transparent to infrared radiation26. As used herein, substantially transparent to infrared radiation means that at least 75 percent of the infrared spectrum passes through the solar-scattering cover24with less than a 25 percent attenuation of intensity. The infrared radiating element22may be to emit infrared radiation26through the solar-scattering cover24. In some examples, the solar-scattering cover24comprises a nanostructured, IR-transparent polymer21.
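The SR and AWT figures quoted above are spectrum-weighted averages. The description does not give explicit formulas; a conventional definition consistent with the usage here weights the spectral reflectance R(λ) by the AM1.5 solar irradiance and the spectral transmittance τ(λ) by the atmospheric window transmittance:

```latex
\mathrm{SR} = \frac{\int R(\lambda)\, I_{\mathrm{AM1.5}}(\lambda)\, d\lambda}
                   {\int I_{\mathrm{AM1.5}}(\lambda)\, d\lambda},
\qquad
\mathrm{AWT} = \frac{\int \tau(\lambda)\, \tau_{\mathrm{atm}}(\lambda)\, d\lambda}
                    {\int \tau_{\mathrm{atm}}(\lambda)\, d\lambda}
```

Under these definitions, a high SR means most incident solar power is rejected, while a high AWT means the emitter beneath the cover can still radiate effectively through the 8-13 μm window.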
In some examples, the nanostructured, IR-transparent polymer21is nanostructured polyacrylonitrile (nanoPAN)29(FIG.3). Referring toFIG.4, in some examples, the solar-scattering cover24includes a film35; the film35includes the nanostructured, IR-transparent polymer21. In some examples, the film35includes a first layer36, a second layer37, and a third layer38. The first layer36may include a first flexible polymeric material41. The second layer37may be disposed on the first layer36. The second layer37may include the nanostructured, IR-transparent polymer21. The third layer38may be disposed on the second layer37. The third layer38may include a second flexible polymeric material42. In an example, the film35is to be removably installed on the infrared radiating element22. In some examples, the film35is further to be reinstalled on the infrared radiating element22after being removed from the infrared radiating element22. In some examples, the nanostructured, IR-transparent polymer21includes nanostructured polyacrylonitrile (nanoPAN)29. In some examples, the first flexible polymeric material41includes polyethylene. In some examples, the second flexible polymeric material42includes polyethylene. In examples, the first layer36may be a PE sheet18. In some examples, the third layer38may also be a PE sheet18. The first layer36and the third layer38may include the same materials, or they may include different materials. The first layer36and the third layer38may have the same thickness, or they may have different thicknesses. In some examples, the thermal management system20further includes a dielectric material disposed on or embedded in the nanostructured, IR-transparent polymer21to increase solar scattering of the solar-scattering cover24and to protect the nanostructured, IR-transparent polymer21from ultra-violet radiation. FIG.5is a photograph and infrared image of a solar-scattering infrared-transparent nanostructured cover depicted inFIG.4showing visible opacity on the left and infrared transparency on the right. The left side of a Block M was covered by a solar-scattering cover24(seeFIG.4). The solar-scattering cover24(FIG.4) includes the film35(FIG.4). The film35includes nanoPAN29. The left side ofFIG.5is a visible light photograph. The right side ofFIG.5is an infrared image.FIG.5demonstrates that the nanoPAN is opaque to visible light and substantially transparent to infrared light. As disclosed herein, fiber morphology affects the scattering and absorption properties of the selectively transparent films. The fiber morphology may be modified by varying the polymer concentration in the electrospinning solution. As disclosed herein, scattering mechanisms were unexpectedly and fortuitously discovered by experimental studies combined with electromagnetic simulations. To further illustrate the present disclosure, an example is given herein. It is to be understood that this example is provided for illustrative purposes and is not to be construed as limiting the scope of the present disclosure. Example To demonstrate the versatility of the present disclosure, the selectively transparent cover is combined with an unpolished aluminum sheet. For outdoor testing, the aluminum sheet is coated with polydimethylsiloxane (PDMS) to increase its thermal emittance. Without the nanoPAN film, the unpolished aluminum sheet has ˜80% SR. With the PAN film, the SR increases to nearly 99%. The morphology of polymer fibers may be controlled by tuning various spinning parameters in an electrospinning process.
Voltage, polymer concentration, spin time, stage height, flowrate, and syringe gauge are all parameters that can affect the resulting electrospun fiber. In an example, the polymer solution concentration and spin time may be varied while keeping all other variables constant. These two parameters may be the simplest to tune to change fiber morphology and film thickness. PAN concentration directly influences the solution viscosity, which influences the morphology of the spun fibers. Spin time influences the mass deposited, which, in turn, affects the optical thickness. Ideally, the film needs to be thick enough to attenuate solar rays before they reach the emitter, but thin enough to be transparent in the infrared and enable heat exchange with Space. In some examples, high-purity polyethylene (PE) may be used as a convection cover for passive radiative cooling. The simple chemistry of PE (C2H4)n means that absorption peaks only occur for C—H and C—C bonds, resulting in high transmission in the infrared. PAN is more absorptive in the IR than polyethylene due to the nitrile (C≡N) triple bond in its (C3H3N)n repeat unit and has never been reported as being used as a passive radiative cooling cover. Nonetheless, PAN was fortuitously chosen. PAN is compatible with electrospinning, unlike PE, which requires additional heating and solvent treatment to enhance its electrostatic properties. By tailoring the morphology using electrospinning, a decrease in the amount of material needed to achieve high SR while retaining high AWT was discovered. Four different concentrations of PAN dissolved in dimethylformamide (DMF) were prepared and electrospun onto transparent PE films: 2.5 wt %, 5 wt %, 7 wt %, and 9 wt %. The resulting nanoPAN films were qualitatively opaque in the visible region but transparent in the IR. These are advantageous traits for scattering solar radiation while allowing emission in the atmospheric windows. The 2.5 wt % and 5 wt % concentrations both resulted in a beaded fiber morphology, while the 7 wt % and 9 wt % concentrations produced cylindrical fibers. For low concentrations and viscosities, high surface tension causes instabilities in Taylor cone formation, resulting in droplets and bead formation. When the concentration is increased, viscous forces dominate, resulting in more uniform cylindrical fibers. To protect the nanoPAN films and prevent the nanoPAN from sticking during handling, the nanoPAN may be sandwiched between two transparent PE sheets. The resulting freestanding film can either be used as a convective cover or applied directly on the emitter depending on the cooling application. In both applications the transparency of the fiber films in the infrared enables the heat exchange with Space. The AWT decreases with increasing concentrations of PAN. This effect may be attributed to the increase in area density (i.e., mass per area) of PAN with increasing concentration (electrospinning time is held constant). The increase in area density may lead to a decrease in infrared transparency, consistent with the Beer-Lambert law. This effect is also corroborated by the AWT results with increasing fiber thickness for a fixed PAN concentration. In addition to the infrared properties, solar reflectance of the PAN film enhances daytime radiative cooling and seasonal thermal management.
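The Beer-Lambert behavior noted above can be written explicitly. One common form, stated here as an illustration rather than as recited in the description, expresses spectral transmittance in terms of the deposited area density:

```latex
\tau(\lambda) = e^{-\mu(\lambda)\,\rho_A}
```

where μ(λ) is a mass attenuation coefficient of the PAN and ρ_A is the area density (mass per area) of the deposited fibers, which increases with both PAN concentration and spin time. This is why AWT falls as either parameter is increased.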
UV-Vis measurements show that a beaded 5 wt % PAN fiber (freestanding film) has a solar reflectance of 95%, which is the highest across the samples in the present disclosure, despite the intermediate area density. The infrared and solar specular measurements for the 5 wt % PAN, 720 μm thick film match both the atmospheric transparency and solar irradiance spectrum. Although the beaded 5 wt % PAN film had nearly the same area density as a cylindrical 7 wt % film, its SR and AWT values were substantially higher. To demonstrate why the beaded fiber morphology exhibits higher solar reflectance, a SCUFF-EM simulation was used to compare the electromagnetic response of this composite structure to its constituent structures (bead, cylinder). The beaded morphology results in a higher scattering cross-section compared to a uniform cylindrical fiber. The dielectric nanostructures exhibit scattering resonances when their size is comparable to the wavelength of light, consistent with Mie theory. Increasing the diameter leads to a red shift of the scattering peak for cylinders. In the case of a beaded morphology, the resulting cross-section can be largely explained by a sum of the individual cross-sections of the bead and cylinder. It is to be understood, however, that the combination of structures and length scales in the beaded fiber morphology results in a higher solar-weighted scattering efficiency than either the fiber or bead alone. This may be because the geometrical cross-section of the beaded fiber is smaller than the sum of those of the constituent structures (due to overlapping volumes in the beaded fiber). The cylindrical fibers scatter shorter wavelengths due to their thinner diameters, while the beads primarily scatter in the near-IR due to their larger characteristic length scale. Thus, electrospinning provides a means to include cylindrical and bead morphologies in a mechanically interconnected system and take advantage of both dielectric micro/nanostructures. In addition to morphological effects, polydispersity can also be responsible for broadening the overall solar reflectance. Both the 7 wt % (cylinder) and 5 wt % (beaded) films feature relatively broader particle size distributions compared to the 9 wt % (thicker cylinder) and 2.5 wt % (thinner beaded) samples. However, the 5 wt % beaded morphology exhibits notably higher SR than the 7 wt % cylindrical geometry, suggesting that polydispersity cannot entirely explain the difference in SR. Outdoor Tests Outdoor tests were conducted to validate heat transfer models and to test whether the nanoPAN-based configurations can outperform conventional roofing materials and existing commercial films under realistic sky conditions. Based on the UV-Vis and Fourier Transform Infrared (FTIR) results discussed above, the beaded morphology (5 wt % PAN) was chosen as the best candidate for daytime cooling. An unpolished aluminum sheet (Al) was chosen as a conventional roofing surface that is unoptimized for radiative cooling.
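Before the outdoor results, the morphology-dependent scattering discussed above can be illustrated with the standard Mie size parameter x = πd/λ: scattering is strongest when x is of order unity, so larger features scatter longer wavelengths. The diameters in the sketch below are assumed values for a thin cylindrical fiber and a larger bead, not measurements from this disclosure.

```python
# Illustrative sketch using the standard Mie size parameter x = pi*d/lambda:
# scattering is strongest near x ~ 1, so larger features scatter longer
# wavelengths. The two diameters below are assumed, not measured values.

import math

def resonant_wavelength_um(diameter_um: float) -> float:
    """Wavelength (um) at which the size parameter pi*d/lambda is ~1."""
    return math.pi * diameter_um

for d_um in (0.3, 0.8):  # assumed thin cylindrical fiber vs. larger bead
    print(f"d = {d_um} um -> strongest scattering near "
          f"{resonant_wavelength_um(d_um):.2f} um")
```

On this heuristic, thin fibers scatter most strongly in the visible while larger beads scatter in the near-IR, consistent with the combined broadband response of the beaded fiber morphology.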
The stagnation temperature of three emitter systems (Table 1) was simultaneously monitored under clear sky conditions in Ann Arbor, Michigan.

TABLE 1
Description of emitter-cover systems tested outdoors

System:   I                    II                      III
Label:    unpolished Al        with nanoPAN            ESR control
Emitter:  PDMS-coated          beaded PAN nanofibers   ESR film adhered to
          unpolished           deposited onto an       an aluminum sheet
          aluminum sheet       Al-PDMS emitter

FIG. 6 is a graph depicting reflectance spectra for the samples listed in Table 1 above. FIG. 6 depicts total reflectance of the ~6 cm2 emitters fabricated using electrospinning. These results were used in the "standard clear sky" heat transfer model of cooling power versus emitter temperature for the (I) unpolished Al, (II) with nanoPAN, and (III) ESR control samples listed in Table 1. FIG. 6 shows that sample II (with nanoPAN) had the highest SR, and sample I (unpolished Al) had the lowest SR.

Outdoor Test Results

The outdoor test results show that the unpolished aluminum (sample I) exhibited the highest temperature (20.1° C.) due to its low solar reflectance (SR = 80%). In contrast, the coldest average temperature was achieved with the "with nanoPAN" configuration (system II), which also resulted in lower absolute temperatures than the Enhanced Specular Reflector (ESR) control (sample III). Furthermore, by shielding the unpolished aluminum with the nanoPAN film, a ~7° C. temperature drop was observed because of the high SR of the nanoPAN film. Further, placing the nanoPAN film over a calibrated blackbody surface under AM1.5G 1-sun irradiation resulted in a 38.5° C. reduction in stagnation temperature.

The outdoor stagnation temperature measurements agree with the semi-empirical heat transfer model. Inputs to the heat transfer model include measured optical properties of the covers and emitters (for example, see FIG. 6), measured thermal insulation of the housing (i.e., thermal resistance between the emitter and the surroundings), ambient temperature, and the solar orientation. Small differences between the model and the experimental stagnation temperatures can be attributed to humidity and cloud coverage, which are not explicitly taken into account in the model. It is to be understood that these results should not be interpreted as the best possible cooling performance that the present disclosure can achieve because, during scale-up from 5.9 cm2 samples, which had ~99% SR, to wafer-scale covers (38.5 cm2) used for outdoor measurements, a decrease in SR due to non-uniform deposition was observed. Furthermore, the average ambient temperature during the reporting window was 14° C., which suppresses the radiative cooling power relative to conditions with warmer ambient temperatures. Nevertheless, the results demonstrate that the addition of scattering fibers, albeit unoptimized, onto an unpolished surface resulted in better performance than the highly reflective ESR control.

Fabrication of Polymer Films

PAN fibers were fabricated using a home-built electrospinning setup. PAN powder with a MW of 200,000 (Polysciences, Inc.) was dissolved in dimethylformamide (Sigma) at 2.5, 5, 7, and 9 wt % concentrations and mixed at 40° C.-50° C. overnight or until the powder was fully dissolved. Below 2.5 wt %, the solution was not viscous enough to support fiber formation, while above 9 wt %, the solution was too viscous. The solution was loaded into a syringe with a 25-gauge blunt tip needle and placed in a syringe pump to ensure a constant flowrate.
The PAN solution was electrospun at a flow rate of 0.4 mL/hr and a stage height of 11.5 cm for 10, 20, 40, and 60 minutes. The voltage was adjusted for each concentration to ensure formation of a Taylor cone. The substrates consisted of PDMS on an aluminum emitter and PE plastic wrap placed over aluminum for grounding. Post-fabrication treatment included leaving the films to rest overnight and carefully placing a clean PE plastic wrap on top of the exposed PAN fibers as a protective layer.

Fabrication of PDMS Emitters

The sides of aluminum weigh boats were removed and the boats were used as substrates for the unpolished aluminum surfaces. A 10:1 base elastomer to curing agent ratio was used to make the pre-cured PDMS mixture. A 100 μm layer of PDMS was deposited on the aluminum substrate by spin coating at 700 rpm for 30 seconds and curing at 80° C. for 24 hours. Commercially available 3M Vikuiti™ ESR reflective emitters were purchased and placed over an aluminum substrate as a control.

Optical Measurements and Microscopy

The optical properties of the film were measured using UV-Vis and FTIR spectrometers with integrating sphere attachments. Total reflectance was measured from 0.3-1.2 μm using a Shimadzu UV-3600 Plus UV-Vis spectrometer. Total infrared transmittance and reflectance were measured from 2-18 μm using a Cary 670 benchtop FTIR. Optical measurements were taken for both free-standing PAN fibers and emitters. Fiber morphology was visualized using a TESCAN MIRA3 scanning electron microscope. Bead and fiber diameter distributions were analyzed using the TESCAN images.

SCUFF-EM Model

The scattering cross-sections of the cylindrical and beaded fiber morphologies were computed numerically with an open-source software implementation of the boundary-element method (BEM). Mesh refinement was completed to ensure accurate results at smaller wavelengths. The BEM was verified by comparing the results to an analytical solution for Mie scattering by a PAN microsphere.

Outdoor Measurements

Outdoor tests were taken over 24-hour periods in Ann Arbor, Michigan, when there was minimal cloud coverage. The emitter temperature, ambient temperature, and humidity were logged as a function of time for the emitter samples and a 3M ESR emitter control. Emitter temperatures were measured using T-type thermocouples and an Extech SDL200 datalogger, while transient ambient temperatures and humidity were logged using an OMEGA OM-24 logger. The emitters were placed in a foam enclosure to prevent bottom and side heating, and the outside of the foam enclosure was wrapped with reflective Mylar®. An infrared-transparent cover consisting of polyethylene (Glad® Cling Wrap) was placed taut over the emitter as a convective cover.

Examples of the thermal management system of the present disclosure may be combined with solar panels (i.e., photovoltaic (PV) modules). A Radiation-Assisted PV Thermal (RAPT) management system is disclosed herein. Examples of the RAPT system disclosed herein may help regulate PV module temperature while enhancing back-side illumination levels for bifacial PV modules. Examples of the RAPT system disclosed herein may continuously maintain solar modules/panels at 7.5 (+/−2.5)° C. above the average daytime ambient temperature. According to the present disclosure, the RAPT system accomplishes this by (i) disposing a solar-scattering radiator as disclosed herein between the rows of tracking solar arrays (equal in area to the arrays) and (ii) advantageously applying stored nighttime radiative cooling/convection using a liquid reservoir.
The radiator provides an average cooling rate of about 120 W/m2 by emitting heat through the atmosphere's infrared (IR) transparency bands. The radiator works in tandem with natural convection from the above-ambient PV modules, which provides an additional average cooling rate of about 160 W/m2. Assumptions for this calculation are shown in Table 2:

TABLE 2
Assumptions for calculations

Temperatures:
  proposed radiator/reservoir/panels: 32.5 +/− 2.5° C.;
  average daytime ambient: 25° C.;
  average nighttime ambient: 18° C.;
  current panel (power-output-weighted): 49° C.
Radiator:
  80% radiator capacity factor (daytime and nighttime);
  radiating area matches panel area.
Liquid loop and reservoir:
  pressure drop calculated based on 8 m pipe length per 1 m2 of panels (¼ in. diameter);
  coolant: glycol-water mixture;
  5 cm reservoir depth;
  reservoir: aluminum container;
  85% liquid pump efficiency.
Natural convection:
  heat transfer coefficient (20 W/m2K) is based on empirical data for on-sun temperature rise (~25° C.) and on-sun heat dissipation rates (~500 W/m2) of representative panels.
Solar panels/farms:
  15% average solar reflectance;
  20% power conversion efficiency at room temperature (projecting to 2025);
  50% panel area coverage;
  30% solar capacity factor;
  all-in installation costs $0.7/We (projecting to 2025).

Together, these two nearly continuous modes of heat transfer exceed the on-sun heat gain by the solar panels as long as excess nighttime cooling energy is stored in the liquid reservoir and circulated during the day. The pumped coolant exchanges heat with the panels using a rear-mounted serpentine tube (covering <6% of the panel area for compatibility with bifacial cells). Flexible tubing connects the reservoir to the rear-mounted coolant lines and allows the panels to track freely. The power consumed for circulating the coolant is less than about 5% of the panel output power.

In an example, the RAPT system is applied for PV system cooling. The RAPT system may decrease average module temperatures by 15-20° C. (power-output weighted) and buffer temperature swings. The decreased average module temperatures may translate into a 6-8% increase in module power output and efficiency (based on the crystalline Si temperature coefficient). Based on the temperature drop, RAPT may also improve PV module reliability and extend lifetimes beyond Department of Energy (DOE) targets (>30 years).

A Levelized Cost of Energy (LCOE) analysis was performed to determine the allowable additional cost for the RAPT system while maintaining a baseline LCOE of 0.056 $/kWh (25-year baseline lifetime). The LCOE analysis applied an LCOE calculator developed by the National Renewable Energy Laboratory (NREL) and a market overview organized by NREL to establish the baseline cost for a photovoltaic megawatt (MW) facility. The estimated 7.2% efficiency gain and prolonged lifetime (30 years) are worth over $50/m2 (or ~25 cents/Watt), even when accounting for higher operation and maintenance costs ($2.00/kW/year increase). The projected cost of goods for the RAPT system disclosed herein is about 3-5 US Dollars ($)/m2. The projected cost of goods allows a sufficient budget for installation and maintenance if those tasks are coordinated/integrated with the overall solar farm. Further economic benefits are expected for next-generation high-efficiency PV modules. Bifacial solar modules can particularly benefit from the RAPT system as disclosed herein by leveraging sunlight scattered by the cover for an additional 5-10% relative increase in efficiency.
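The claim that these two cooling modes exceed the on-sun heat gain can be checked with a back-of-envelope daily energy balance built from the Table 2 assumptions. The sketch below is illustrative only; the 500 W/m2 dissipation rate and the 30% solar capacity factor come from Table 2, while treating the capacity factors as simple duty cycles is a simplifying assumption:

# Back-of-envelope daily energy balance per m^2 of panel area, using the
# Table 2 assumptions. Treating the 30% solar capacity factor as an on-sun
# duty cycle is a simplification for illustration.

HOURS = 24.0
radiator_w = 120.0          # average radiative cooling rate, W/m^2
convection_w = 160.0        # average natural-convection cooling, W/m^2
radiator_capacity = 0.80    # radiator capacity factor (day and night)
on_sun_dissipation_w = 500.0
solar_capacity = 0.30       # fraction of the day with on-sun heat gain

cooling_wh = (radiator_w + convection_w) * radiator_capacity * HOURS
heating_wh = on_sun_dissipation_w * solar_capacity * HOURS

print(f"daily cooling : {cooling_wh:7.0f} Wh/m^2")
print(f"daily heating : {heating_wh:7.0f} Wh/m^2")
print(f"margin        : {cooling_wh - heating_wh:+7.0f} Wh/m^2")
# Cooling (~5.4 kWh/m^2) exceeds on-sun heat gain (~3.6 kWh/m^2) only if the
# nighttime surplus is banked in the liquid reservoir and spent during the day.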
Furthermore, panel thermal stability may also facilitate the commercialization of emerging PV technologies, such as tandems and perovskites, which are expected to be more susceptible to temperature swings. In some examples, the RAPT system of the present disclosure may be implemented in off-the-grid applications requiring cooling, such as atmospheric dew harvesting, cold storage, etc. In some examples, the RAPT system of the present disclosure may be implemented in passive cooling of roofs for thermal management and air conditioning of buildings. In some examples, the RAPT system of the present disclosure may be implemented in energy-efficient cooling of wireless infrastructure.

The inventors of the present disclosure have unexpectedly and fortuitously discovered the solar-scattering radiator and the integrated cooling storage of the RAPT system as disclosed herein. Some existing radiative cooling approaches can be characterized as (Type A) photonic/optical modifications of the PV module or architecture, or (Type B) stand-alone radiative cooling systems. Existing PV-integrated systems (Type A) have not demonstrated more than about 1° C. temperature reduction compared to conventional glass covers. The existing Type A systems may typically include undesirable modification of the panel manufacturing process. The overall benefits of existing Type A approaches may be limited by the fact that the instantaneous solar heating rates (~0.5 Suns) are significantly higher than radiative cooling rates (~0.1 Suns). As disclosed herein, RAPT systems may overcome this unfavorable ratio of instantaneous solar heating to radiative cooling rates and improve temperature stability. As disclosed herein, the RAPT system utilizes the area between the PV panel rows and extends the duration of cooling by advantageously applying stored nighttime radiative/convective cooling.

Some existing stand-alone radiative cooling systems (Type B) may rely on thermally-emitting thin coatings on top of a solar-reflective substrate. Such Type B radiators may reflect sunlight at (or below) the thermal emitter, leading to parasitically absorbed sunlight and limited cooling rates. As shown in FIG. 2, unlike the stand-alone radiative cooling systems, some examples of the RAPT radiator disclosed herein include a ZnS-coated nanostructured polyethylene (nanoPE) layer to scatter sunlight near the top of the cover 24. Examples of the radiative cooler disclosed herein may include a solar-scattering infrared-transparent nanostructured cover 24 (ZnS-coated nanoPE) which blocks solar heat from reaching the thermal emitter (infrared radiating element) 22 (PDMS-coated mirrored Al). In some examples, PDMS can be interchanged for a combination of other polymers such as TPX, polyimide, or rubber, as well as oxides such as silica and alumina, while maintaining the desired near-blackbody thermal emittance. The inverted structure of the RAPT radiator (top: scatters sunlight; bottom: emits IR) partially decouples regions that scatter sunlight from regions that emit heat. The high refractive index of ZnS enhances the scattering properties of the nanoPE. The nanoPE may exhibit relatively high total solar transmittance (~40%) without ZnS enhancement, based on preliminary measurements as disclosed herein. Examples of the RAPT system disclosed herein, compared to existing non-radiative systems, may advantageously be dry (unlike evaporative cooling) and more passive (i.e., have negligible power consumption).
Examples of the RAPT system disclosed herein may be integrated with other (i.e., non-RAPT) thermal management systems. Referring to FIG. 2, in some examples, the cover 24 has a low thermal conductivity coefficient and blocks incoming solar radiation with wavelengths <8 μm. The nanoparticle size and layer thickness of the cover are tuned to allow transmission in the infrared spectrum between 8-13 μm, which is ideal for transmission of the waves through the atmosphere and into Space. The cover material can include BaF2, ZnS, and polyethylene. TiO2 and polyacrylonitrile (PAN) may also be included in the cover 24. In examples of the present disclosure, the radiators 22 can be tilted by 10-15 degrees without loss of radiative power to enhance roll-off of rainwater and particulates. In some examples, hydrophobic coatings may be applied to promote self-cleaning.

FIG. 1 depicts a schematic view of an example of a Radiation-Assisted PV thermal management system 20 as disclosed herein. The ground-based radiative cooler 46 provides continuous cooling of modules (solar panels 12). The liquid reservoir 34 stores the excess nighttime radiative/convective cooling energy and releases it during daytime operation. The liquid reservoir 34 may have thermal insulation disposed on at least portions thereof. The nanostructured cover and emitter are connected to a shallow liquid reservoir 34 placed behind a solar array. A pump circulates the coolant to tubes 27 behind the active components of the solar panel 12. The coolant returns to the liquid reservoir 34 and is cooled by the emitter (infrared radiating element), which passes the heat into the atmosphere and through the atmosphere to Space. The energy required to operate the pump and emitter is provided by the solar panel 12. With the radiative cooling system, the panel maintains an operational temperature of about 7° C. above the ambient temperature of 25° C. at times of peak solar exposure, during which an uncooled panel will reach temperatures in excess of 50° C. Additionally, the system moderates the temperature decrease of the panels during nighttime. The temperature moderation provides at least two distinct benefits: first, it enables the field use of photovoltaic materials that are sensitive to thermal cycling; and second, the reduction of thermal cycling extends the service life of the photovoltaic modules, which helps lower the LCOE by providing a larger discount timeframe.

Referring to FIG. 1 and FIG. 2, in examples, a thermal management system 20 for a photovoltaic (PV) power generator 10 may include an infrared radiating element 22, a solar-scattering cover 24 disposed on the infrared radiating element 22, and a thermal storage sub-system 30 in fluid connection with a solar panel 12 via thermal interconnections 14. In some examples, the solar-scattering cover 24 is to scatter sunlight 11 diffusely (as depicted at reference numeral 39) or directionally toward an underside 16 of the solar panel 12. The solar-scattering cover 24 may be substantially transparent to infrared radiation to allow the infrared radiating element 22 to emit infrared radiation through the solar-scattering cover 24. In some examples, the solar-scattering cover 24 may include a nanostructured, IR-transparent polymer 21. In some examples, the nanostructured, IR-transparent polymer 21 may be nanostructured polyethylene (nanoPE). In some examples, the nanostructured, IR-transparent polymer 21 may be nanostructured polyacrylonitrile (nanoPAN) 29.
In some examples, the thermal management system 20 further includes a dielectric material disposed on or embedded in the nanostructured, IR-transparent polymer 21 to increase solar scattering of the solar-scattering cover 24 and to protect the nanostructured, IR-transparent polymer 21 from ultra-violet radiation. In some examples, the dielectric material is deposited on the nanostructured, IR-transparent polymer 21 by physical vapor deposition or by a solution-based process, or embedded into the nanostructured, IR-transparent polymer 21 by electrospinning. In some examples, the thermal storage sub-system 30 is to shift and distribute a peak solar heat load over a twenty-four hour time period; the thermal storage sub-system 30 is to store excess off-peak cooling for use during peak hours; the thermal storage sub-system 30 is to store natural convection energy; and the thermal storage sub-system 30 comprises a container 32 to store a coolant. In some examples, the container 32 is located under the infrared radiating element 22, or the container 32 is thermally connected to the infrared radiating element 22 via the thermal interconnections 14. In some examples, the container 32 is connected to the solar panel 12 with a circulating coolant line 28 or with heat pipes. In some examples, the heat pipes may be stationary; in other examples, the heat pipes may be to oscillate. In some examples, the thermal interconnections 14 include a circulating fluid loop or a heat pipe. In some examples, the thermal interconnections 14 are passive. As used herein, "passive," with respect to thermal interconnections, means that the thermal interconnection is by natural circulation of air. The natural circulation of air may be directed by a structure (for example, a shroud or a vane), or the natural circulation may be undirected. In undirected natural circulation, there is open air between the container 32 and the infrared radiating element 22. In some examples, the dielectric material is selected from the group consisting of ZnS, ZnO, TiO2, and combinations thereof.

In some examples of the thermal management system 20, the solar panel 12 may be a member 43 of an array 13 of tracking solar panels arranged in rows 15. The infrared radiating element 22 may be a solar-scattering radiator 40 located between the rows 15 of the array 13 of tracking solar panels. A radiating area of the solar-scattering radiator 40 may be about equal to an area of the solar panel 12. The solar-scattering radiator 40 may be to work in tandem with natural convection 44 from the array 13 of tracking solar panels. The thermal storage sub-system 30 may include a ground-based liquid reservoir 34. In some examples, the thermal management system 20 further includes a sun-facing surface 23 defined on at least one member 43 of the array 13 of tracking solar panels, a distal surface 25 defined on the at least one member 43 of the array 13 of tracking solar panels opposite the sun-facing surface 23, and a heat exchanger 33 attached to the distal surface 25 of the at least one member 43 of the array 13 of tracking solar panels. In some examples, the heat exchanger 33 includes a serpentine tube 27, and the heat exchanger 33 is to obscure less than 20 percent of the distal surface 25 of the solar panel 12 to which the heat exchanger 33 is attached. In some examples, the array 13 of tracking solar panels includes at least one bifacial solar panel 12.
In some examples, the thermal interconnections 14 include flexible tubing to fluidly connect the ground-based liquid reservoir 34 to the serpentine tube 27, wherein the flexible tubing remains connected to the serpentine tube 27 throughout a range of motion of a tracking solar panel 12 in the array 13 of tracking solar panels to which the serpentine tube 27 is attached. In some examples of the thermal management system 20, the ground-based liquid reservoir 34 is covered by the solar-scattering radiator 40. In some examples, the solar-scattering radiator 40 includes: a layer of aluminum 31; a coating 45 disposed on the layer of aluminum 31, the coating 45 to absorb or emit radiation having wavelengths ranging from mid-wavelength infrared to long-wavelength infrared; and a solar-scattering cover overlaid on the coating 45, wherein the solar-scattering cover is substantially transparent to infrared radiation 26 to allow the solar-scattering radiator 40 to emit infrared radiation 26 through the solar-scattering cover 24. In some examples, the layer of aluminum 31 may have a mirror finish for specular reflection of the sunlight 11. In other examples, the layer of aluminum 31 may have a rough surface that causes diffuse reflection of the sunlight 11. As used herein, the reflection of light is categorized into two types: specular reflection, defined as light reflected from a smooth surface at a definite angle, and diffuse reflection, which is produced by rough surfaces that tend to reflect light in all directions.

In some examples, the coating 45 is selected from the group consisting of: polydimethylsiloxane (PDMS) or another polymer; an inorganic material; and combinations thereof. In some examples, the coating 45 is polydimethylsiloxane (PDMS). In some examples, the solar-scattering cover 24 includes a nanostructured, IR-transparent polymer 21. In some examples, the nanostructured, IR-transparent polymer 21 is nanostructured polyethylene (nanoPE). In some examples, the nanostructured, IR-transparent polymer 21 is nanostructured polyacrylonitrile (nanoPAN) 29. In some examples, the thermal management system 20 further includes a dielectric material disposed on or embedded in the nanostructured, IR-transparent polymer 21 to increase solar scattering of the solar-scattering cover 24 and to protect the nanostructured, IR-transparent polymer 21 from ultra-violet radiation. In some examples, the dielectric material is deposited on the nanostructured, IR-transparent polymer 21 by physical vapor deposition or by a solution-based process, or embedded into the nanostructured, IR-transparent polymer 21 by electrospinning. In some examples, the dielectric material is selected from the group consisting of ZnS, ZnO, TiO2, and combinations thereof.

It is to be understood that the terms "connect/connected/connection" and/or the like are broadly defined herein to encompass a variety of divergent connected arrangements and assembly techniques. These arrangements and techniques include, but are not limited to: (1) the direct communication between one component and another component with no intervening components therebetween; and (2) the communication of one component and another component with one or more components therebetween, provided that the one component being "connected to" the other component is somehow in operative communication with the other component (notwithstanding the presence of one or more additional components therebetween).
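The thermal storage behavior recited in the preceding examples, banking nighttime cooling in the ground-based liquid reservoir 34 and spending it during on-sun hours, can be sketched with a lumped-capacitance time step. The 5 cm glycol-water depth follows Table 2; the linearized cooling law, sink temperature, and square-wave solar load are assumptions invented for illustration:

# Lumped-capacitance sketch of the liquid reservoir 34 shifting nighttime
# cooling into the daytime. The 5 cm glycol-water depth is from Table 2;
# the linearized cooling law and square-wave solar load are assumptions.

RHO_CP = 4.0e6            # J/(m^3 K), approximate glycol-water mixture
C = RHO_CP * 0.05         # areal heat capacity for 5 cm depth, J/(m^2 K)
K_COOL = 16.0             # W/(m^2 K), linearized radiator+convection gain
T_SINK = 15.0             # deg C, assumed effective sink (stagnation) temp

T = 25.0                  # deg C, reservoir starts at daytime ambient
for hour in range(48):    # two days, to approach a periodic steady state
    q_load = 500.0 if 8 <= (hour % 24) < 16 else 0.0  # on-sun heat, W/m^2
    q_cool = K_COOL * (T - T_SINK)                    # cooling, W/m^2
    T += (q_load - q_cool) * 3600.0 / C               # explicit Euler step
    if hour % 4 == 3:
        print(f"hour {hour % 24:2d}: reservoir T = {T:5.1f} C")

# The reservoir relaxes toward the sink temperature overnight and warms
# under the on-sun load, buffering the panels' temperature swing relative
# to an uncooled module.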
In describing and claiming the examples disclosed herein, the singular forms "a", "an", and "the" include plural referents unless the context clearly dictates otherwise. It is to be understood that the ranges provided herein include the stated range and any value or sub-range within the stated range, as if such value or sub-range were explicitly recited. For example, a range of from about 8 μm to about 15 μm should be interpreted to include not only the explicitly recited limits of about 8 μm to about 15 μm, but also to include individual values, such as 9 μm, 11.8 μm, etc., and sub-ranges, such as from about 10 μm to about 12 μm, etc. Furthermore, when "about" or "~" is utilized to describe a value, this is meant to encompass minor variations (up to +/−10%) from the stated value. While several examples have been described in detail, it is to be understood that the disclosed examples may be modified. Therefore, the foregoing description is to be considered non-limiting. No language in this disclosure should be construed as indicating any unclaimed element as essential to the practice of the examples.
11859923

DETAILED EMBODIMENTS OF THE INVENTION

To make the description of the invention more specific and complete, the accompanying drawings and various examples may be referred to; the same numbers in the drawings represent the same or similar components. On the other hand, commonly known components and steps are not described in the examples to avoid unnecessarily limiting the invention. In addition, for the sake of simplifying the drawings, some known common structures and elements are illustrated in the drawings in a simple manner.

As shown in FIG. 2, in the first embodiment of the present disclosure, a cooling system comprises a heat exchanger, a converter, and a liquid cooling pipeline connecting the heat exchanger with the converter. A circulation pump 8 is provided in the liquid cooling pipeline. The cooling system further comprises a coolant tank 4, an injection pump 5, and a control unit. The coolant tank 4 is used to store a coolant (such as a liquid coolant), and the coolant tank 4 is equipped with a liquid level sensor 3. The liquid level sensor 3 detects the liquid level of the coolant tank 4 in real time. The injection pump 5 is connected to the coolant tank 4 and is used to inject the coolant from the coolant tank 4 into the liquid cooling pipeline during its operation. The control unit is used to turn on or turn off the injection pump 5, turn on or turn off the circulation pump 8, and set a dynamic hydraulic threshold and a static hydraulic threshold. The dynamic hydraulic threshold and the static hydraulic threshold correspond to the state of the coolant in the liquid cooling pipeline. When the coolant flows in the liquid cooling pipeline, the liquid pressure is substantially different from when the coolant is static, so the hydraulic thresholds are set respectively depending on the operating conditions of the system.

For example, when the liquid level in the coolant tank 4 reaches an upper liquid level threshold, the control unit is configured to turn on the injection pump 5 so that the coolant is injected into the liquid cooling pipeline by the injection pump 5. When the pressure of the coolant in the liquid cooling pipeline reaches an upper static hydraulic threshold, the control unit is configured to turn off the injection pump 5. Thereafter, the circulation pump 8 is turned on and off with a preset circulation period, and the circulation pump 8 forces the coolant to circulate in the liquid cooling pipeline when turned on.

In some embodiments, the coolant-injection means of the cooling system may be integrated into the cooling system, and the injection operation is controlled by the control unit of the cooling system. The liquid level sensor 3 is provided to detect the liquid level of the coolant within the coolant tank 4, thereby determining whether the amount of the coolant in the coolant tank 4 satisfies the requirement for injection. Further, a respiration valve 2 is disposed on the coolant tank 4 to communicate the coolant tank 4 with the exterior, maintaining the gas pressure in the coolant tank 4 in balance with the environment, facilitating the adjustment of the liquid level, and also facilitating the injection pump 5 drawing the coolant from the coolant tank 4. It can be understood that the respiration valve 2 communicates the inside and outside of the coolant tank 4, such that the coolant in the coolant tank 4 can automatically flow into the injection pump 5 under the action of gravity to complete the filling of the pump.
Thus, the cooling system realizes automatic filling of the injection pump in the initial coolant injection, and then turns on the injection process according to the detected pressure in the cooling loop, thereby realizing automatic injection for the cooling system. Additionally and/or alternatively, in other embodiments, the coolant tank 4 is provided with an inlet port 1 for realizing a coolant-adding operation to the coolant tank 4. It can be understood that the liquid cooling pipeline can be further provided with an exhaust valve. Additionally and/or alternatively, in other embodiments, a unidirectional valve 6 is provided between the injection pump 5 and the liquid cooling pipeline for preventing the coolant in the liquid cooling pipeline from flowing back into the coolant tank 4.

According to a further embodiment of the invention, in order to timely supplement the coolant consumed during normal operation of the device, the cooling system is further configured to realize an automatic coolant-supplementing function. The pressure of the coolant in the system is monitored in real time. Once the monitored pressure is less than a preset value for the current operating state, the injection pump can be timely turned on to supplement the coolant. Thereby, the heat dissipation efficiency of the cooling system is improved, and manual maintenance can be reduced. In addition, since no turning-off operation is needed, the utilization of the device is improved.

In particular, the control unit is configured to, when the circulation pump 8 is turned on (i.e., the coolant in the liquid cooling pipeline is in a flow state), turn on the injection pump 5 when the pressure of the coolant in the liquid cooling pipeline is less than a lower dynamic hydraulic threshold so as to supplement the coolant to the liquid cooling pipeline, allowing the pressure of the coolant to continuously increase, and turn off the injection pump 5 when the pressure of the coolant reaches an upper dynamic hydraulic threshold. Furthermore, the control unit is configured to, when the circulation pump 8 is turned off (i.e., the coolant in the liquid cooling pipeline is in a static state), turn on the injection pump 5 when the pressure of the coolant in the liquid cooling pipeline is less than a lower static hydraulic threshold so as to supplement the coolant to the liquid cooling pipeline, allowing the pressure of the coolant to continuously increase, and turn off the injection pump 5 when the pressure of the coolant reaches an upper static hydraulic threshold.

Additionally and/or alternatively, in other embodiments, the cooling system comprises a temperature sensor. The temperature sensor is disposed on the liquid cooling pipeline and is used to monitor the temperature of the coolant at an inlet end of the converter and output a temperature detection signal to the control unit. When the circulation pump 8 is turned on, the control unit is configured to turn on the injection pump 5 when the temperature of the coolant reaches an upper operating temperature threshold and the pressure of the coolant is less than the upper dynamic hydraulic threshold, so as to supplement the coolant to the liquid cooling pipeline and allow the pressure of the coolant to continuously increase. When the pressure of the coolant reaches the upper dynamic hydraulic threshold, the control unit is configured to turn off the injection pump 5, so that the heat dissipation capability of the cooling system can be quickly improved.
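The threshold-selection logic described above lends itself to a small state-based controller. The sketch below is a hypothetical illustration of that logic; the class, method names, and numeric threshold values are invented for the example and are not specified in the disclosure:

# Hypothetical sketch of the coolant-supplementing logic: the active
# pressure band depends on whether the circulation pump (8) is running,
# and an over-temperature condition forces supplementing toward the
# upper dynamic threshold. Threshold values are illustrative only.

class CoolantSupplementController:
    def __init__(self):
        self.bands = {                       # (lower, upper) thresholds, kPa
            "dynamic": (180.0, 250.0),       # circulation pump on
            "static": (120.0, 160.0),        # circulation pump off
        }
        self.temp_upper_c = 55.0             # upper operating temperature
        self.injection_on = False

    def update(self, circulating, pressure_kpa, temp_c):
        lower, upper = self.bands["dynamic" if circulating else "static"]
        if pressure_kpa >= upper:
            self.injection_on = False        # band satisfied: stop pump 5
        elif pressure_kpa < lower:
            self.injection_on = True         # below band: supplement coolant
        elif circulating and temp_c >= self.temp_upper_c:
            self.injection_on = True         # hot coolant: push to upper band
        return self.injection_on

ctrl = CoolantSupplementController()
print(ctrl.update(circulating=True, pressure_kpa=170.0, temp_c=40.0))   # True
print(ctrl.update(circulating=True, pressure_kpa=250.0, temp_c=40.0))   # False
print(ctrl.update(circulating=True, pressure_kpa=200.0, temp_c=60.0))   # True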
It can be understood that, during the coolant-supplementing of the cooling system, the operating state of the cooling system is first judged, and then the thresholds for coolant injection are set depending on the different operating states, thereby realizing optimum control of the pressure of the cooling system.

According to still another embodiment of the invention, referring to FIG. 3, the cooling system further comprises a first three-way valve 10 having a first end, a second end, and a third end, and a second three-way valve 11 having a first end, a second end, and a third end. The first end of the first three-way valve 10 is connected to the coolant tank 4. The second end of the first three-way valve 10 is connected to the injection pump 5. The third end of the first three-way valve 10 is connected to an external coolant source. The first end of the second three-way valve 11 is connected to the injection pump 5. The second end of the second three-way valve 11 is connected to the liquid cooling pipeline. The third end of the second three-way valve 11 is connected to the coolant tank 4. The control unit is configured to turn off the injection pump 5 and to add coolant to the coolant tank 4 when the liquid level of the coolant tank 4 is less than the lower liquid level threshold.

In detail, during the first coolant-adding operation to the coolant tank 4, the first three-way valve 10 and the second three-way valve 11 are switched to communicate pipes c and b. Coolant is added into the coolant tank 4 via the inlet port 1, and then automatically flows into the injection pump 5 under the action of gravity until coolant flows back to the coolant tank 4 through a pipe 12. Thus, the coolant-injection operation for the injection pump 5 is completed. Then, the automatic coolant-injection operation for the coolant tank 4 begins. A hose 9 is connected to an external coolant source. The control unit automatically switches the first three-way valve 10 to a pipe a. Subsequently, the injection pump 5 is turned on to transmit the coolant to the coolant tank 4 through the pipes a and b until the liquid level is higher than the upper liquid level threshold. Then, the injection pump 5 is turned off, and the first three-way valve 10 and the second three-way valve 11 are switched to communicate the pipes c and b. Thus, the automatic coolant-adding operation for the coolant tank 4 is completed. It can be understood that subsequent coolant-adding operations do not need the coolant-injection operation for the injection pump 5 anymore. The automatic coolant-adding operation for the coolant tank 4 can be performed directly through the hose 9. Once the first/initial coolant-adding operation for the coolant tank is completed, the coolant-injection operation and/or coolant-supplementing operation of the cooling system can be controlled automatically. Therefore, the three-way valves are used to switch between the pipeline for the coolant-injection operation and the pipeline for the coolant-supplementing operation, thereby realizing automatic coolant-adding to the coolant tank and significantly reducing manual participation.

According to another embodiment of the present invention, an automatic coolant-injection method for a cooling system is provided. The cooling system comprises a heat exchanger, a converter, and a liquid cooling pipeline connecting the heat exchanger with the converter and provided with a circulation pump 8.
The automatic coolant-injection method can be realized by the following steps:

providing a coolant tank 4 and performing a coolant-adding operation to the coolant tank 4;

providing a liquid level sensor 3 on the coolant tank 4 for detecting a liquid level of the coolant tank 4 in real time;

providing an injection pump 5 connected to the coolant tank 4 for injecting coolant from the coolant tank 4 into the liquid cooling pipeline when the injection pump 5 is turned on; and

providing a control unit for controlling turning-on and turning-off operations of the injection pump 5 and turning-on and turning-off operations of the circulation pump 8, and setting a dynamic hydraulic threshold and a static hydraulic threshold corresponding to states of the coolant in the liquid cooling pipeline.

In the above coolant-injection method, when the liquid level of the coolant tank 4 reaches an upper liquid level threshold, the control unit is configured to turn on the injection pump 5, allowing the coolant to be injected into the liquid cooling pipeline through the injection pump 5. When a pressure of the coolant in the liquid cooling pipeline reaches an upper static hydraulic threshold, the control unit is configured to turn off the injection pump 5 so as to execute the turning-on and turning-off operations of the circulation pump 8 with a preset circulation period. The circulation pump 8 forces the coolant to circulate in the liquid cooling pipeline when turned on.

Referring to FIG. 4, the flow diagram illustratively shows the coolant-injection operation for the cooling system. The coolant tank 4 is filled with coolant, and the liquid level sensor 3 detects the liquid level of the coolant tank 4. When the liquid level of the coolant tank 4 reaches a preset upper liquid level threshold and the pressure of the coolant is less than a lower static hydraulic threshold, the injection pump 5 is turned on to perform coolant injection. When the pressure of the coolant reaches an upper static hydraulic threshold, the injection pump 5 is turned off. Then, the circulation pump 8 operates according to a preset on-off cycle to exhaust gas/air from the pipeline. It can be understood that the coolant-injection operation is a preparation step before the initial use of the device. At this time, the pressure of the coolant in the pipeline of the cooling system is typically far less than the lower static hydraulic threshold. Therefore, when the liquid level of the coolant tank 4 reaches the upper liquid level threshold, the check of whether the pressure of the coolant is less than the lower static hydraulic threshold can be omitted depending on the situation, and the injection pump can be directly turned on for injecting coolant. Optionally, when the liquid level of the coolant tank 4 is less than a lower liquid level threshold, the injection pump 5 is turned off, a low liquid level alarm can be sent, and coolant is added to the coolant tank 4.

In order to timely supplement the coolant consumed during normal operation of the device, the cooling system further provides a coolant-supplementing operation. When the circulation pump 8 is turned on, the control unit is configured to turn on the injection pump 5 when the pressure of the coolant in the liquid cooling pipeline is less than a lower dynamic hydraulic threshold so as to supplement the coolant to the liquid cooling pipeline; the pressure of the coolant then continuously increases. When the pressure of the coolant reaches an upper dynamic hydraulic threshold, the control unit is configured to turn off the injection pump 5.
Furthermore, when the circulation pump 8 is turned off, the control unit is configured to turn on the injection pump 5 when the pressure of the coolant in the liquid cooling pipeline is less than the lower static hydraulic threshold so as to supplement coolant to the liquid cooling pipeline. The pressure of the coolant then continuously increases. When the pressure of the coolant reaches the upper static hydraulic threshold, the control unit is configured to turn off the injection pump 5.

Referring to FIG. 5, the flow diagram illustratively shows the coolant-supplementing operation of the cooling system. When the liquid level of the coolant tank 4 is greater than the lower liquid level threshold and the pressure of the coolant is less than a preset lower pressure threshold for the current operating state, the injection pump is turned on. When the pressure of the coolant reaches a preset upper pressure threshold for the current operating state, the injection pump is turned off. When the liquid level of the coolant tank 4 is less than the lower liquid level threshold, the low liquid level alarm is sent and coolant is added to the coolant tank 4.

Furthermore, the cooling system further comprises a temperature sensor. The temperature sensor is disposed on the liquid cooling pipeline. The temperature sensor monitors the temperature of the coolant at an inlet end of the converter and outputs a temperature detection signal to the control unit. In the automatic coolant-injection method, when the circulation pump 8 is turned on, the control unit is configured to turn on the injection pump 5 when the temperature of the coolant reaches an upper operating temperature threshold and the pressure of the coolant is less than the upper dynamic hydraulic threshold, so as to supplement the coolant to the liquid cooling pipeline. The pressure of the coolant then continuously increases. When the pressure of the coolant reaches the upper dynamic hydraulic threshold, the control unit is configured to turn off the injection pump 5.

In some embodiments, the control unit determines the upper and lower dynamic hydraulic thresholds and the upper and lower static hydraulic thresholds according to the operating states of the circulation pump, thereby keeping the pressure of the pipeline within a normal demand range. When the device is in normal operation, the control unit monitors the temperature of the coolant in the pipeline in real time, for example, by use of one or more temperature sensors. When the temperature is close to a derated operating temperature of the device, the coolant-supplementing operation is performed to bring the pipeline pressure to the maximum upper threshold, thereby improving the heat dissipation capability of the cooling system and ensuring full power operation of the device. Moreover, when the temperature of the coolant is in a normal range, the upper dynamic hydraulic threshold can be set depending on different temperature ranges. The coolant-supplementing operation can be performed in either the turned-on or turned-off state of the device.

Additionally or alternatively, in some embodiments, in the automatic injection method, when the circulation pump 8 is turned on, the cooling system gathers the gas in the liquid cooling pipeline along with the circulation of the coolant. When the circulation pump 8 is turned off, the cooling system exhausts the gathered gas via an exhaust valve. The exhaust valve may be disposed on the liquid cooling pipeline.
According to another embodiment of the invention, an automatic coolant-adding method for the coolant tank 4 is provided. Illustratively, the coolant-adding operation for the coolant tank 4 further comprises the steps of:

switching the first three-way valve 10 and the second three-way valve 11 by the control unit, to form a fluid loop including the coolant tank 4, the first three-way valve 10, the injection pump 5, the second three-way valve 11, and the coolant tank 4 in sequence;

adding coolant via the inlet port 1 disposed on the coolant tank 4; and

allowing the coolant in the coolant tank 4 to automatically flow into the injection pump 5 via the first three-way valve 10 under the action of gravity until the coolant flows back to the coolant tank 4, to complete the filling of the injection pump 5.

Additionally and/or alternatively, the control unit is configured to turn off the injection pump 5 when the liquid level of the coolant tank 4 is less than the lower liquid level threshold, so as to automatically add coolant to the coolant tank 4. Further, the control unit switches the first three-way valve 10 and the second three-way valve 11, and thus forms a fluid loop. The fluid loop sequentially includes an external coolant source, the first three-way valve 10, the injection pump 5, the second three-way valve 11, and the coolant tank 4. The control unit is configured to turn on the injection pump 5 to automatically add coolant to the coolant tank 4. When the liquid level of the coolant tank 4 reaches the upper liquid level threshold, the automatic coolant-adding operation for the coolant tank 4 is completed. The control unit is then configured to turn off the injection pump 5 and to switch the first three-way valve 10 and the second three-way valve 11 to form a fluid loop. The fluid loop includes the coolant tank 4, the first three-way valve 10, the injection pump 5, the second three-way valve 11, and the liquid cooling pipeline in sequence.

Based on the above, the present disclosure realizes automatic coolant injection for the pipeline of the cooling system before use, through the cooperation of the liquid level sensor 3, the injection pump 5, the circulation pump 8, and the control unit. In addition, the present disclosure also realizes an automatic coolant-injection function for the cooling system and an automatic coolant-adding function for the coolant tank 4 by communicating the coolant tank 4, the injection pump 5, the liquid cooling pipeline, and the external coolant source through the first three-way valve 10 and the second three-way valve 11.

Although the embodiments of the invention have been disclosed, they are not intended to limit the invention. Those skilled in the art may make various variations and modifications without departing from the spirit and scope of the invention, so the protection scope of the invention shall be defined by the appended claims.
11859924

DETAILED DESCRIPTION

In one aspect of the present disclosure, a cooling tower and related control system is provided. The control system monitors the condition of the evaporative liquid utilized by the cooling tower and may make operational changes in order to reduce the chance of microbial contamination, corrosion, and/or scaling during upset conditions while keeping the cooling tower operating efficiently between water treatment and cooling tower service visits. The evaporative liquid may be water or, in some embodiments, a mixture of water and one or more other liquids such as liquid treatment chemicals. Parameters of the evaporative liquid utilized by the cooling tower are continuously monitored, including conductivity, bioactive material, biofilms, pH level, plume, and drift. The control system can also be configured to continuously monitor operating parameters of the cooling tower, such as ambient temperature, spray water temperature, sump water levels, spray pump operation, sump sweeper pump operation, side stream UV pump operation, and UV lamp intensity on make-up, within the sump, and/or side stream loops, to provide input into a control algorithm of the control system.

For the purposes of this disclosure, the term "cooling towers" refers to, but is not limited to, open circuit direct evaporative cooling towers, closed circuit evaporative fluid coolers, evaporative condensers, adiabatic coolers such as spray and/or pad type units, adiabatic condensers, and related components.

The control system includes a controller having a normal operating mode and a failsafe operating mode. During the normal operating mode, the controller may be configured to, on a regular basis, automatically purge then flush the water-touched components of the cooling tower and/or add water treatment to keep the cooling tower evaporative liquid within specified tolerance levels to prevent microbial contamination and scaling while striving to conserve water and water treatment chemicals. If, however, a determination of inadequate evaporative liquid quality occurs, one or more attempts are made to automatically resolve the issue. If, after a prescribed number of attempts to correct the evaporative liquid quality issue have been performed, the measured evaporative liquid quality parameters remain in an unacceptable range, or if any of the sensors fail, the controller enters the failsafe mode. The number of attempts may be set by a user, such as three or five attempts, or may be set or adjusted by a remote computer, such as by a server computer that utilizes machine learning to determine the number of attempts based on the operation of similar cooling towers in similar geographical areas, as one example.

In some embodiments, the failsafe operating mode can be configured to keep the cooling tower and the nearby area or environment in a safer condition until service personnel arrive. The failsafe mode may involve operating cooling tower fans, pumps, and other components to either limit the possibility of biological contamination leaving the cooling tower in the case of a component or sensor failure, or take additional actions to improve operation, depending on need, including increased purge and flush cycles, limiting fan speed, increased water sterilization, or even complete water removal for dry operation.
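The normal-mode/failsafe-mode behavior described above reduces to a small attempt-limited loop. The sketch below is a hypothetical illustration; the method names and the default attempt limit are invented for the example (the disclosure leaves the limit settable by a user or a remote, machine-learning-informed computer):

# Hypothetical sketch of the normal-mode remediation loop: retry purge/flush
# (and optionally treatment) up to a prescribed number of attempts, then
# drop into failsafe. `tower` is an assumed hardware-abstraction object.

def remediate_water_quality(tower, max_attempts=3):
    """Return "normal" if quality recovers, else "failsafe"."""
    if tower.any_sensor_failed():
        return "failsafe"                  # sensor fault: failsafe immediately
    for attempt in range(1, max_attempts + 1):
        tower.purge_and_flush()            # drain, refill, recirculate
        tower.apply_water_treatment()      # optional chemical dose
        if tower.quality_within_tolerance():
            return "normal"                # upset cleared; resume normal mode
    tower.send_alarm("water quality unresolved after "
                     f"{max_attempts} attempts")
    return "failsafe"                      # keep tower and area safe until service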
In one embodiment, the failsafe operating mode operates the cooling tower using cooling tower parameters, such as sump pump on/off, pump speed, frequency of purge/flush cycles, and/or evaporative liquid treatment chemical application, that keep the cooling tower from damaging itself. For example, the failsafe operating mode may involve the controller refraining from running a pump without fluid and/or from operating a fan that is unbalanced. In one example in this regard, the cooling tower may include a fan assembly having an electronically commutated (EC) motor. The EC motor has a motor controller configured to detect excess vibration and send an alert to the controller of the cooling tower that there is an issue with the fan. The controller enters the failsafe operating mode in response to receiving the alert from the motor controller. In the failsafe operating mode, the controller and the fan motor controller cooperate to allow the fan to operate up to a threshold speed that results in a maximum permitted vibration. The controller and the fan motor controller inhibit operation of the fan beyond the threshold speed.

In some examples, the control logic includes a purge and flush cycle where the cooling tower water is drained, then refilled and recirculated through the sump, water distribution system, and evaporative heat exchangers to scrub the surfaces with fresh clean water. The purge and flush cycle can be run one or more times when attempting to remedy (or correct) a water quality issue while keeping the cooling tower running. The purge and flush cycle can be configured to reduce the amount of microbes and solids in the water, inhibit solids and contaminants from settling on the bottom and sides of the sump, and limit the potential for microbial contamination and scaling. While the subject disclosure is applicable to all cooling towers, cooling towers employing extremely low volume sumps limit the amount of water used during the purge and flush cycle. For example, if the sump is less than half the size of the cooling tower footprint, then only half of the water is purged as compared to prior cooling towers, which can be a significant water savings. Further, in some examples, the control logic may include a dry-out cycle that runs occasionally to dry out the water contact surfaces to further reduce the risk of microbial contamination. Removing the water from the water contact surfaces kills microbes on the water contact surfaces.

FIG. 1 shows an evaporative heat exchanger cooling tower 10. The cooling tower 10 has a spray pump 19, a fan assembly 26A including a fan 26 and a motor 25, and an evaporative liquid collector such as a water collection system 50. The cooling tower 10 further includes an indirect evaporative heat exchanger such as serpentine tube heat exchangers 23, an evaporative liquid distribution system such as a spray water distribution system 22, drift or mist eliminators 28, spray water nozzles 24, and a sump such as a spray water sump 39. The spray water sump 39 is less than half the size of the cooling tower 10 footprint, which reduces the volume of water used by the cooling tower 10 when the cooling tower 10 purges the water. In other embodiments, the sump may be any size up to and including the full size of the footprint of the cooling tower 10. Process fluid enters the serpentine tube heat exchangers 23 via a connection 29 and header 30.
The process fluid leaves the serpentine tube heat exchangers 23 conditioned, having passed through serpentine tubes 33, through outlet header 32, and then to connection 31. The flow of process fluid through the connections 29, 31 may be reversed in some cases. Specifically, the process fluid may enter the serpentine tube heat exchangers 23 via the connection 31 and exit the serpentine tube heat exchangers 23 via the connection 29.

During dry operation of the cooling tower 10, the spray pump 19 is turned off and the motor 25 rotates the fan 26 at a speed to achieve a setpoint requested by, for example, a HVAC system, an industrial process system, and/or a user. The fan 26 draws air into the cooling tower 10 and pressurizes dry plenums 36 and 37, which guide air up through the indirect heat exchangers 23 and out through the mist eliminators 28. The serpentine tube heat exchangers 23 shown are of the serpentine tube type, which is well known in the industry, but the heat exchanger utilized by the cooling tower 10 may be any type of evaporative heat exchanger, including indirect heat exchangers, such as tube and fin heat exchangers and/or plate-style heat exchangers, and/or direct heat exchangers such as fill.

During wet operation of the cooling tower 10, the spray pump 19 is turned on and pumps water from the sump 39 to distribution pipe 22A and then out of nozzles 24. The evaporative spray water forms small droplets as the water exits the nozzles 24 and cascades down onto and through the indirect heat exchangers 23. Water that evaporates during the heat transfer process, or water that is bled off via sump drain valve 48 to keep the solids content within acceptable limits, is replaced through make-up float valve assembly 34 of a water makeup supply 34A. The sump drain valve 48 may have a partially open configuration that permits a limited flowrate of water containing solids in the sump 39 to bleed off from the sump 39. The bleeding off of water containing solids and the subsequent refilling of the sump 39 with makeup water via the makeup water supply 34A functions to decrease the solids in the sump 39. The sump drain valve 48 may have a fully open configuration that permits a larger flowrate of water to exit the sump 39, the fully open configuration of the sump drain valve 48 being used to purge the sump 39. The cooling tower 10 may include a water level device that actuates a solenoid fill valve to keep water in the sump 39 at a set level.

Air is drawn in by the fan 26, which is rotated by the motor 25. The speed of the motor 25 is determined by the requested system control setpoint. Once water drops off the indirect heat exchangers 23, at least a portion of the water is caught by water collectors 50, and that water is guided away from the fan and towards the sump 39 by water baffles 12. Some of the water falling off the left side indirect heat exchanger 23 cascades down directly into the sump 39. In the cooling tower 10, a portion of the air travels through the water collectors 50 and through the water baffles 12, forming a dry zone plenum 36 and a wet zone plenum 37. During wet operation of the cooling tower 10, there is a dry air zone in plenum 36 and a wet zone formed in plenum 37. The combination of the water collectors 50 and sump wall 38 forms a smaller sump 39, typically less than half the full footprint of the cooling tower 10, which allows easier management of the sump water from a biologic and water waste standpoint. Other cooling tower configurations, including examples of water collectors, are provided in U.S. Pat. No. 10,677,543, which is hereby incorporated by reference in its entirety.
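The bleed-off strategy described above can be quantified with the standard cycles-of-concentration balance. The following is a back-of-envelope sketch under assumed flow rates, which are illustrative placeholders rather than values from the disclosure:

# Back-of-envelope solids balance motivating the bleed-off described above.
# Evaporation removes pure water, so dissolved solids concentrate until
# bleed (plus drift) balances the makeup. Flow values are illustrative.

evaporation_lpm = 10.0   # water evaporated, L/min (assumed)
bleed_lpm = 2.5          # bleed-off through drain valve 48, L/min (assumed)
drift_lpm = 0.1          # droplet carryover past the eliminators (assumed)

makeup_lpm = evaporation_lpm + bleed_lpm + drift_lpm
cycles = makeup_lpm / (bleed_lpm + drift_lpm)   # cycles of concentration
print(f"makeup = {makeup_lpm:.1f} L/min, cycles of concentration = {cycles:.1f}")

# Sump conductivity approaches roughly (makeup conductivity) x (cycles);
# raising the bleed rate lowers the equilibrium solids level that the
# conductivity sensor will read.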
With reference to FIG. 2, a cooling tower 20 is provided that is similar to the cooling tower 10, with similar reference numerals indicating similar components. The cooling tower 20 has a control system 21 including various sensors and a controller 52 to facilitate operation of the cooling tower 20. The cooling tower 20 includes an ambient temperature sensor 54 configured to sense outdoor ambient air temperature and a spray water temperature sensor 54A configured to sense the temperature of the water in the spray water distribution system 22. Signals from the ambient temperature sensor 54 and the spray water temperature sensor 54A are sent to the controller 52 for evaluation. The functionality of the controller 52 is shown in the logic flow diagrams of FIGS. 3A-3C and 4A-4B and discussed further below.

The cooling tower 20 has an evaporative liquid treatment system 27. A UV light 42A installed on the incoming make-up water line 34 may be used to reduce the microbial content entering the cooling tower 20 from the make-up water line 34. A UV lamp intensity sensor 43A may be used to signal when the lamp is not operating or not operating at the minimum allowed intensity, and to send an alarm that the UV light 42A needs to either be cleaned or replaced. A UV light installed in the sump 39, below the mist eliminators 28, in the wet air zone 37, or in the spray water distribution system 22 may also be employed.

In one embodiment, the evaporative liquid treatment system 27 includes a UV pump 41, a pH sensor 46, a UV light 42, a UV light sensor 43, a flow proving switch 41C, and a conductivity sensor 45. The UV pump 41 is configured to draw a side stream of water from the sump 39, through the pH sensor 46, through the UV light 42, across the flow proving switch 44, and through the conductivity sensor 45, then back into the sump 39. In another approach, the pump 41 is determined to be operating by using a pressure differential switch or transducer connected to the pump suction and pump discharge pipes, or via a current sensor. Whenever there is water present in the sump 39, as evidenced by sump float sensor 47, the UV side stream pump 41 will be operated continuously or intermittently to monitor the pH level via the pH sensor 46 and the conductivity level via the conductivity sensor 45, and to run the sump 39 water through the UV light 42 to reduce microbial contamination. Sump float sensor 47 may be a dual function sensor that also operates as a high-water level float sensor to sense that the water is too high and is being wasted. UV lamp intensity sensor 43 is used to signal when the lamp 42 is not operating or not operating at the minimum required intensity, and sends a status signal to the controller 52 to be evaluated.

The pH sensor 46 measures the pH of the sump water. Conductivity sensor 45 measures the dissolved solids, such as total dissolved solids, in the water of the sump 39. The controller 52 evaluates the conductivity level and the function of the conductivity sensor 45. Spray pump flow switch 49 determines whether the spray pump 19 is running and alerts the controller 52 of the status of the spray pump 19. Drift sensor 40, located above the mist eliminators 28, senses if the drift is greater than a threshold or accepted tolerance level and sends a signal to the controller 52 to be evaluated. Plume sensor 55, located above the mist eliminators 28, senses if the plume is greater than an accepted tolerance level and sends a signal to the controller 52 to be evaluated. Biofilm sensor 51 senses if there are biofilms forming in the sump 39. If there are biofilms present, the biofilm sensor 51 sends a signal to the controller 52 to be evaluated. Biofilm sensors may be mounted in other wet locations in the cooling tower 20.
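The side-stream treatment loop described above (run UV pump 41 whenever the float sensor proves water, verify flow, and alarm on low lamp intensity) can be sketched as a polling routine. The hardware-abstraction object, method names, and threshold below are hypothetical, invented for illustration:

# Hypothetical polling sketch of the side-stream loop: run UV pump 41 when
# sump float sensor 47 proves water, verify flow and lamp intensity, and
# forward readings to the controller. The threshold is illustrative only.

def poll_side_stream(io, min_uv_intensity=0.7):
    """One polling pass; `io` is an assumed hardware-abstraction object."""
    if not io.sump_has_water():            # float sensor 47
        io.uv_pump_off()
        return
    io.uv_pump_on()                        # continuous or intermittent duty
    if not io.flow_proven():               # flow proving switch
        io.alarm("side-stream flow not proven")
    if io.uv_intensity() < min_uv_intensity:        # UV intensity sensor 43
        io.alarm("UV lamp 42 needs cleaning or replacement")
    io.report(ph=io.read_ph(),                      # pH sensor 46
              conductivity=io.read_conductivity())  # conductivity sensor 45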
Sump drain valve48is controlled by controller52and may be fully open, fully closed or partially open as determined by controller52, as will be described later. Electrically operated emergency shut off water make-up valve56is set to be open unless the high-water level alarm from sump float sensor47senses that water is being wasted and the situation is evaluated by the controller52. The various sensors of the cooling tower20send data indicative of the associated sensed parameters to the controller52. The sensors may perform edge processing such that the sensors compare a sensed parameter to a threshold, range, and/or tolerance and send data to the controller52indicative of whether the parameter is unacceptable (or acceptable). In other approaches, one or more of the sensors communicate data indicative of the sensed parameters to the controller52and the controller52determines whether the parameters are unacceptable (or acceptable), such as the parameters being above/below a threshold, within/outside of a range or tolerance, etc. For the cooling tower20shown inFIG.2, the evaporative cooling equipment is shown as a forced draft, single-sided air inlet configuration with an indirect heat exchanger but it should be understood to be a non-limiting example. The fan system utilized may be any style fan system that moves air through the unit including but not limited to forced draft in a generally counterflow, crossflow or parallel flow with respect to the spray. The fan system may also be of an induced draft style in a counterflow, parallel, or crossflow orientation by way of non-limiting examples. The fan location and the direction of the air intake and discharge may be different for a particular application and are not a limitation to the embodiment presented. Additionally, motor25may be directly connected to the fan26as shown or be driven by a belt or gear arrangement. The process fluid direction may be reversed to optimize heat transfer and is not a limitation to the embodiment presented. It also should be understood that the number of circuits and the number of passes or rows of tube runs within an indirect heat exchanger23is not a limitation to embodiments presented. Furthermore, it should be understood that the type of evaporative heat exchanger utilized in the cooling tower10may be selected for a particular application. WhileFIG.2shows an indirect heat exchanger23, that evaporative heat exchanger could also be a direct heat exchanger, such as one with cooling tower fill by way of example. Cooling fill may include, for example, PVC sheets with raised features and/or blocks. Therefore, the cooling towers disclosed herein may utilize various types of evaporative heat exchangers, including but not limited to an indirect, direct, a combination of an indirect and a direct or an adiabatic air cooler, fluid cooler, or condenser. The controller52includes a processor52A, a non-transitory computer readable memory such as memory52B, and communication circuitry52C. The memory52B includes computer readable instructions such as source code to implement the logic ofFIGS.3A-3C and4A-4B. The communication circuitry52C is capable of wired and/or wireless communications. In one embodiment, the communication circuitry52C includes a network interface that communicates with one or more networks such as a local wired network (e.g., ethernet), a local wireless network (e.g., Wi-Fi), a wide area wireless network (e.g., cellular), and/or the internet.
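The split described above, between edge processing at the sensors and centralized evaluation at the controller52, lends itself to a short sketch. The Python below is a minimal, hypothetical illustration only (the patent discloses no source code); the names SensorReading, CONTROLLER_RANGES, and evaluate, and the example ranges, are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:
    """A message sent to the controller; 'acceptable' is pre-filled when
    the sensor itself performs edge processing against its own threshold."""
    name: str
    value: float
    acceptable: Optional[bool] = None  # None -> controller must evaluate

# Hypothetical acceptable ranges held by the controller for raw readings.
CONTROLLER_RANGES = {
    "conductivity_umho_per_cm": (0.0, 1000.0),
    "ph": (6.5, 9.0),
}

def evaluate(reading: SensorReading) -> bool:
    """Return True when the sensed parameter is acceptable."""
    if reading.acceptable is not None:
        return reading.acceptable  # edge-processed: trust the sensor's verdict
    low, high = CONTROLLER_RANGES[reading.name]
    return low <= reading.value <= high

# An edge-processed drift reading versus a raw conductivity reading.
print(evaluate(SensorReading("drift", 0.02, acceptable=True)))      # True
print(evaluate(SensorReading("conductivity_umho_per_cm", 1250.0)))  # False
```

Either arrangement yields the same pass/fail decision; edge processing simply moves the threshold comparison onto the sensor and reduces what the controller must know about each device.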
The control logic ofFIGS.3A-3C and4A-4Bmay be implemented by the processor52A, by a remote computing device such as a server computer (e.g., a cloud-based computing system) or a user device (e.g., a smartphone, tablet computer, or desktop computer) in communication with the processor52A via the communication circuitry52C, or by a combination of the processor52A and the remote computing device. The controller52has a normal operating mode300utilizing the control logic ofFIGS.3A-3Cand a failsafe operating mode400utilizing the control logic ofFIGS.4A-4B. The controller52is in the normal operating mode300when all the sensors and equipment are operating correctly and the water quality parameters are all within a tolerance of allowable operating ranges. If a water quality upset condition occurs, such as a parameter of the water falling outside the acceptable range, the controller52and/or a remote computing device makes a determination of inadequate evaporative liquid quality. The controller52, in the normal operating mode, will make several attempts to clear the upset condition. The attempts to correct the upset water quality condition may include, for example, a purge and flush cycle, a clean and disinfect cycle, or a combination thereof as described below. If, after a prescribed number of attempts to correct the water quality issue have been performed, the measured water quality parameters remain in an unacceptable range, or if any of the sensors fail, the controller52switches to the failsafe mode. The failsafe mode keeps the cooling tower water and environment in a safer condition until service personnel arrive. If any of the sensors of the cooling tower20are not reading in an acceptable range, or are interpreted by the controller52to be in a faulty condition, the controller52sends a notification such as an alarm to a remote computing device and controller52switches to the failsafe mode which is described in further detail with respect toFIG.4. The controller52may send the alarm to, for example, a HVAC system, a server computer, a service provider, and/or a user device. The alarm may be in the form of an email, an application notification, and/or an SMS message as some examples. In one embodiment, the controller52assigns different weights to different evaporative liquid parameters and addresses deviation in the evaporative liquid parameters differently. For example, the controller52may enter the failsafe operating mode400in response to the controller52determining a biofilm parameter in excess of a threshold. By contrast, the controller52may not enter the failsafe operating mode400in response to the controller52determining the pH of the evaporative liquid is beyond a threshold. Instead, the controller52communicates a warning to a remote device regarding the elevated pH level. In some embodiments, the controller52takes an average of readings of the sensors of the cooling tower20to confirm an upset condition exists before making a decision. The controller52may utilize machine learning with historical data for the cooling tower20and/or other cooling towers to identify thresholds, ranges, and tolerances used in determining whether a current parameter value is unacceptable. Alternatively or additionally, the controller52may compare different evaporative liquid parameters to determine an inadequacy of a given evaporative liquid parameter.
For example, before initiating the failsafe operating mode400based on an elevated pH level parameter, the controller52may consider the elevated pH level parameter in view of a biofilm parameter and a chlorine level parameter of the evaporative liquid. If fewer than all three parameters are outside of acceptable tolerances, the controller52may determine the evaporative liquid parameter is adequate for the time being. An occurrence of a similar out-of-tolerance reading after a set period of time may be grounds for the controller52to initiate the failsafe operating mode400. Referring now toFIGS.3A-3C, control logic for the normal operating mode300is provided. The normal operating mode300includes the controller52receiving302a request or call for cooling and initiating304a normal wet evaporative cycle. The controller52checks306whether the cooling tower20includes sump heaters. If there are sump heaters, then the evaporative equipment can typically operate wet regardless of the ambient temperature and the controller52proceeds to operation312. If there are not sump heaters, then controller52considers the ambient temperature sensed by temperature sensor54. The customer or operator can input, such as via a user interface of a HVAC system operably coupled to the cooling tower20, whether the cooling tower20can or cannot be allowed to run in freezing conditions. On some equipment, if the ambient temperature is below freezing (32° F.), the cooling tower20is kept from operating in the wet mode to eliminate the possibility of freezing and instead operates in the dry mode. The controller52communicates a low temperature alarm310to a remote device, such as a HVAC system or a user smartphone, upon the cooling tower20not having sump heaters and the ambient temperature being below a predetermined temperature such as 40° F. Alternatively or additionally, the controller52may monitor the temperature sensor54A in the spray water pipe and, as long as the spray water temperature remains above a preset level, typically 45° F. to 50° F., it is safe to operate the cooling tower in the wet mode. Referring again toFIG.3A, the controller52at operation312monitors a wet timer to keep track of how many hours the cooling tower20has operated in the wet mode. One reason to keep track of the time the cooling tower20has operated in the wet mode is that the controller52is programmed so that at a select interval of time (a changeable parameter, typically after operating wet for 24 hours), the cooling tower20can be run through a purge and flush cycle to reduce the risk of microbial contamination at a time that is convenient for the operator of the cooling tower20. The configuration of the unit plays a role in allowing a purge and flush cycle without wasting a large volume of water. While a flush cycle can be used on any size evaporative heat transfer equipment, for some applications it is advantageous to have as small a sump as possible. The sump may be less than half the size of the cooling tower footprint to minimize water usage. Referring toFIG.2, the sump39is smaller, for example, less than half the size of the footprint of the unit as shown by wall38. The purpose of the purge and flush cycle is to dispose of built-up solids, debris, contaminants, microbials and biofilms to help keep the tower sump floor and walls clean and to reduce microbial contamination. The controller52may perform a purge and flush cycle once daily (or after 24 hours of wet operation) while the controller52is in the normal operating mode300.
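The weighting of evaporative liquid parameters and the averaging of readings described above can be pictured compactly. The Python below is an assumed illustration, not the patent's logic; the parameter names, thresholds, and the POLICY table are invented for the example.

```python
from statistics import mean

# Hypothetical per-parameter policy: an out-of-tolerance biofilm reading sends
# the controller straight to the failsafe mode, while an out-of-tolerance pH
# reading only produces a warning to a remote device.
POLICY = {
    "biofilm":      {"threshold": 0.5, "on_excess": "failsafe"},
    "ph_deviation": {"threshold": 1.5, "on_excess": "warn"},
}

def react(parameter: str, recent_readings: list) -> str:
    """Average several readings to confirm an upset condition before acting."""
    if mean(recent_readings) <= POLICY[parameter]["threshold"]:
        return "normal"
    return POLICY[parameter]["on_excess"]

print(react("biofilm", [0.6, 0.7, 0.8]))       # 'failsafe'
print(react("ph_deviation", [1.9, 2.0, 1.8]))  # 'warn'
```

Averaging before acting is what keeps a single noisy reading from triggering a mode change, while the per-parameter policy carries the different weights the passage describes.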
When the controller52is operating in the failsafe operating mode400, the controller52will run purge and flush cycles more frequently because, when operating in the failsafe operating mode, the controller52has determined that there is an upset condition that could not be corrected under the normal operating mode and the controller52communicates a notification that the unit needs to be serviced. Further details regarding operation of the failsafe operating mode are discussed below. Referring again toFIG.3A, once the wet mode is on, the controller52turns on the make-up water and starts a fill timer at operation312. The controller52determines314whether the sump water has reached a minimum level as detected by a float sensor in the sump within a certain period as determined by the fill timer, and if the sump39is not filled within the maximum allowable fill time (which may be an adjustable parameter), then the controller52communicates316a low sump water alarm. The controller52refrains from operating the cooling tower20in the wet mode and waits for the make-up assembly to be repaired and the alarm to be reset. If, however, the sump float detects the sump water level is high enough, the controller52energizes the spray pump19and a spray pump start timer is energized at operation318. After the spray pump time period ends, the controller52checks320whether the sump water has exceeded a predetermined maximum level based on the sump float sensor47. If the sump water has exceeded the predetermined maximum level at operation320, the controller52communicates322a high sump water level alarm. The controller52determines324whether the spray pump19is on. The determining324may include, for example, checking whether a spray pump switch detects there is water flowing downstream of the spray pump19. If the spray pump switch49(seeFIG.2) does not detect water flow, the controller52communicates326a spray pump alarm. In one embodiment, the controller52is not operable in the normal operating mode300after one or more alarm communications (e.g., communications310,316,322,326) until the alarm(s) are cleared and the issue is repaired. The controller52operates the cooling tower20in the dry mode until the issue is repaired. Once the controller52determines the spray pump19is operating, the controller52starts the UV pump41and waits a predetermined time period such as ten seconds at operation328. At the end of the time period, the controller52determines330whether the UV pump41is running such as by checking whether a UV pump switch detects water flowing through the UV side stream loop97. If the controller52determines330that the UV pump is not running, the controller52communicates332a UV pump alarm, turns off the UV lamp, and enters the failsafe operating mode400. It is noted that there are different methods to prove the spray pump or UV pumps are pumping such as a flow switch, differential pressure switch, and/or a current sensor. It is also noted that once the sump float switch determines there is water in the sump39, in one embodiment the UV pump will always run to continually reduce microbial content in the sump water until such time that the float switch detects there is low or no water in the sump. This also allows continuous monitoring of all the water quality parameters. Once the UV pump flow switch41C detects water flow in the side stream water loop, controller52checks334an intensity sensor of the UV lamp.
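The fill, spray-pump, and UV-pump checks just described form a natural startup interlock sequence. The Python below is a hypothetical sketch under assumed (shortened) timings; the callables stand in for the float sensor47 and the pump flow switches, and the Alarm class and function names are invented.

```python
import time

class Alarm(Exception):
    """Stands in for an alarm communication (operations 316/326/332)."""

def start_wet_mode(sump_at_min, spray_flow_ok, uv_flow_ok,
                   fill_timeout_s=5.0, pump_wait_s=0.5, poll_s=0.1):
    """Fill the sump, then prove the spray pump, then prove the UV pump."""
    deadline = time.monotonic() + fill_timeout_s
    while not sump_at_min():                 # fill timer check (operation 314)
        if time.monotonic() >= deadline:
            raise Alarm("low sump water")
        time.sleep(poll_s)
    time.sleep(pump_wait_s)                  # spray pump start timer (318)
    if not spray_flow_ok():                  # spray pump switch 49 (324)
        raise Alarm("spray pump")
    time.sleep(pump_wait_s)                  # UV pump start delay (328)
    if not uv_flow_ok():                     # UV flow switch 41C (330)
        raise Alarm("UV pump; enter failsafe mode")
    return "wet mode running"

print(start_wet_mode(lambda: True, lambda: True, lambda: True))
```

Ordering matters in the sketch for the same reason it does in the passage: each pump is proved only after the condition it depends on (water present, upstream pump running) has been established.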
If the UV lamp has lost intensity past a minimum effective value (e.g., 8%), meaning that the lamp needs to be cleaned or is not working properly, then the controller52communicates336a UV bulb replacement alarm and the controller52changes from the normal operating mode300to the failsafe operating mode400. Referring again toFIG.3B, normally, once the controller52determines the UV lamp intensity to be acceptable at operation334, the controller52checks338whether the conductivity sensor45is operable. The controller52communicates340a conductivity alarm and enters the failsafe mode upon the conductivity sensor45not being operable. If the conductivity sensor45is operable, the controller52determines342whether the conductivity of the sump water is greater than a predetermined level such as 1,000 micromhos per centimeter. The conductivity levels utilized at operations342and346may be programmed into the controller52by a user. Cooling tower bleed-off is used to keep the level of dissolved solids within an acceptable range because, when water evaporates, solids contained in the water are left behind. The evaporative liquid treatment system27of the cooling tower20may include a chemical treatment system99that, in addition to adding chemicals into the water, takes primary responsibility for bleeding off water from the sump39. The chemical treatment system99may add solid or liquid chemicals to the water. Example chemicals include chlorine, bromine, halogen tablets, a corrosion inhibitor, a scaling inhibitor, and/or a non-oxidizing biocide. The chemical treatment system99may include, for example, a floating feeder and/or a brominator with a separate recirculating pump. Should the bleed-off function of the chemical treatment system99not operate correctly, controller52in the normal mode operates as a secondary control and functions as a back-up bleed-off by bleeding water from the sump39as needed. This helps to assure that the cooling tower can continue to run without the solids running out of control until the next service visit. So, as an example, the chemical treatment system99may open the bleed-off at 1,000 micromhos per centimeter and close the bleed-off at, say, 800 micromhos per centimeter. This differential can help assure a small amount of water is bled off while the makeup replaces the water that is bled off. Of course, these values can be changed to suit the needs of the installation. Continuing with the example, back-up conductivity set points for the controller52are set at 1,200 micromhos per centimeter bleed-off on and 1,000 micromhos per centimeter off, and the next set point is set at 1,500 micromhos per centimeter on and 1,000 micromhos per centimeter off. Thus, when controller52sees the conductivity of the water cross the 1,200 micromhos per centimeter point, in the normal operating mode the controller52performs344a bleed-off operation by opening the drain valve48of the sump39for a calibrated time period to prevent the spray pump from turning off. The open drain valve48drains water from the sump39and the make-up float valve assembly34will automatically fill the sump back up. Alternatively or additionally, the controller52may decide to open the drain valve48to bleed off water based on the load and/or the time of day. In one embodiment, the drain valve48can be proportionally controlled to allow a small amount of water to be bled off or a separate bleed off valve, not shown, can be installed, for example.
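The tiered set points above amount to hysteresis control with a purge escalation. The Python below renders the example numbers from the passage (bleed on at 1,200, off at 1,000, next set point 1,500); the function name and return labels are hypothetical, and the purge action at 1,500 anticipates the escalation described in the next paragraph.

```python
def backup_bleed_action(conductivity_umho_per_cm: float,
                        drain_open: bool) -> str:
    """Back-up bleed-off logic using the example setpoints: open the drain
    above 1,200, close it below 1,000, escalate to purge above 1,500."""
    if conductivity_umho_per_cm >= 1500:
        return "initiate_purge_and_flush"   # second controller high set point
    if conductivity_umho_per_cm >= 1200:
        return "open_drain_valve_48"        # timed back-up bleed-off
    if drain_open and conductivity_umho_per_cm <= 1000:
        return "close_drain_valve_48"
    return "no_change"

print(backup_bleed_action(1250, drain_open=False))  # open_drain_valve_48
print(backup_bleed_action(950, drain_open=True))    # close_drain_valve_48
print(backup_bleed_action(1600, drain_open=True))   # initiate_purge_and_flush
```

The gap between the on and off values is the differential the passage mentions; it prevents the drain valve from chattering open and closed around a single threshold.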
If, during the normal operating mode, the water conductivity falls below 1,000 micromhos per centimeter, then the drain valve48will be closed and controller52allows the chemical treatment system99to control the bleed-off provided by the sump drain valve48. If, however, the conductivity value continues to rise above the second controller high set point, 1,500 micromhos per centimeter in this example, then in the normal operating mode controller52takes control of the sump drain valve48and initiates348a purge-flush cycle384, which purges or drains all the sump water then refills the sump water. The purge and flush cycle384should immediately bring the solids content below the 1,000 micromhos per centimeter setting with proper differentials on each setpoint. It should be noted that, in some embodiments, the cooling tower20includes a sump sweeper system including a pump and piping. The sump sweeper system can run as part of the purge and flush cycle384to assist in churning the solids and any bioactivity to be purged from the cooling tower. If after a set number of purge and flush cycles the conductivity remains high, a high conductivity alarm is sent and controller52switches to the failsafe operating mode400which is described below. In addition, there is feedback from the conductivity sensor itself at operation338. If the feedback is that the conductivity sensor has malfunctioned or is not working, then a conductivity sensor failure alarm is sent and controller52changes the unit's operation from the normal operating mode to the failsafe operating mode. Referring again toFIG.3B, once the conductivity is within acceptable limits, controller52determines350whether the biofilm sensor51is operable. If not, the controller52communicates352a biofilm alarm and enters the failsafe mode. If the biofilm sensor51is operable, the controller52determines354whether there is any bioactivity or any biofilms forming in sump39. If bioactivity or a biofilm is detected, then the controller52in the normal operating mode initiates356the purge and flush cycle384which is run to clean out the bioactivity or biofilm in the sump water by flushing the sump39and associated water touch components. As an alternative or in addition to the purge and flush cycle384, the controller52may direct an emergency supply of shocking chemical to the cooling tower sump. As an example, if the chemical treatment system99provides chlorine or other chemicals to control bacterial growth and the chemical is depleted or their system fails to add the chemicals, the controller52in the normal operating mode300can act as a back-up system to reduce the risk of microbial contamination by either adding emergency chemicals to the cooling tower sump39to clean and disinfect, or by purging and flushing the sump water containing components, or both, until such time that service personnel arrive to fix the upset condition. It should be noted that a biofilm alarm is sent and the controller52changes from the normal operating mode300to the failsafe operating mode400after a number of purge and flush cycles, and after adding an emergency supply of chemicals, if the bioactivity or biofilm is still detected. In addition, there is a feedback at operation350from the bioactivity and/or biofilm sensor itself. If the feedback is that the sensor has malfunctioned or is not working, a biofilm sensor alarm is sent and the controller52changes from normal operating mode300to failsafe operating mode400.
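The bounded-retry pattern running through this passage, a limited number of corrective purge/flush (and optional chemical) attempts before falling back to the failsafe mode, can be sketched as below. This Python is an assumed illustration; the callables, the retry count, and the return labels are not from the patent.

```python
def respond_to_upset(still_detected, purge_and_flush, add_chemicals=None,
                     max_attempts=3):
    """Attempt corrections up to max_attempts, then report 'failsafe'."""
    for _ in range(max_attempts):
        if not still_detected():
            return "clear"
        purge_and_flush()
        if add_chemicals is not None:
            add_chemicals()  # optional emergency shock chemicals
    return "clear" if not still_detected() else "failsafe"

# Simulated biofilm condition that clears after two purge and flush cycles.
state = {"biofilm": True, "purges": 0}
def detected(): return state["biofilm"]
def purge():
    state["purges"] += 1
    if state["purges"] >= 2:
        state["biofilm"] = False

print(respond_to_upset(detected, purge))  # 'clear'
```

The same shape covers the conductivity case above and the pH case that follows: correct, re-check, and only escalate once the attempt budget is spent.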
Referring again toFIG.3C, once bioactivity or biofilm is not detected above acceptable setpoint levels, next the controller52checks358whether the pH sensor46is operable. If not, the controller52communicates360a pH sensor alarm and enters the failsafe operating mode400. If the pH sensor46is operable, the controller52determines362whether the pH level in the water is acceptable such as being within a predetermined range. If the value of pH is not acceptable, controller52can either add emergency back-up chemicals and/or activate364the purge and flush cycle384depending on the water quality of the make-up water and a manual input. As an example, if the incoming make-up pH is not within acceptable limits and chemicals are needed to be added to control the pH level, a manual input to the controller52identified at operation386causes the controller52to direct the chemical treatment system99to add chemicals at operation396instead of performing purge and flush operations388to control the pH. Thus, during the normal operating mode, the controller52will act as a back-up for the way the chemical treatment system99would maintain the pH level. The controller52sends a pH alarm and activates the failsafe operating mode400if, after a certain number of purge and flush cycles are attempted or chemicals are added in an attempt to bring the cooling tower water back into the acceptable pH range during the normal operating mode, the pH level remains unacceptable. In addition, there is a feedback from the pH sensor itself at operation358. It will be appreciated that, in some embodiments, an unacceptable pH parameter by itself is insufficient to cause the controller52to enter the failsafe operation mode400. If the feedback is that the pH sensor has malfunctioned or is not working, a pH sensor alarm is sent and the controller52changes from normal operating mode300to failsafe operating mode400at operation360. Referring toFIG.3C, the controller52receives feedback from the drift sensor40and determines366whether the drift sensor40is functional. The drift sensor40is configured to detect if there is an unacceptably high amount of drift. Drift is defined as water droplets or aerosol leaving the cooling tower20. As most cooling towers are designed to have a minimal amount of drift, there are a few upset conditions where drifting can occur such as in extreme wind conditions, when the cooling tower fill becomes damaged, the cooling tower eliminators are damaged or dislodged, or when a water distributing nozzle is dislodged or breaks. While these conditions are rare, this application describes techniques for controlling the cooling tower20to limit drift during these upset conditions while keeping the cooling tower20running. If the drift sensor40detects an unacceptable amount of drift, the controller52in the normal operating mode will attempt to reduce the drift rate in order to reduce the risk of microbial contamination to the environment. If the drift rate is not within proper limits and the degree of bioactivity or biofilm content is also high, then depending on the manual inputs, the controller52may shut the cooling tower down until service is performed to prevent microbial contamination to the nearby environment. If the drift is detected as being too high in the normal operating mode, the controller52will attempt to correct or reduce the drift rate if that was the preference entered in the manual inputs.
The controller52may also switch to a lower fan speed, shut the unit down, or switch to dry operation depending on the manual inputs and system requirements. For example, on multi-cooling tower installations, if one cooling tower has a drift issue, the decision can be made to turn the cooling tower off and call for service if the other cooling towers can handle the load. If, however, the customer needs to continue operating the cooling tower until such time that service personnel arrive, the controller52will decide to lower the fan speed to a level at which drift is known not to occur, typically 50% fan speed, and the controller52communicates372a drift alarm and changes the unit's operation from the normal operating mode to the failsafe operating mode. In addition, there is a feedback from the drift sensor40itself. If the controller52determines366that the drift sensor40has malfunctioned or is not working, controller52changes the unit's operation from the normal operating mode to the failsafe operating mode400and communicates a drift sensor alarm at operation368. Referring toFIG.2, the plume sensor55is configured to detect373if there is an unacceptable rate of plume leaving the cooling tower, a condition certain customers require be avoided. In some applications, plume is not desired because either the plume can be interpreted as an unsafe condition, the plume can block vision at an airport for example, the plume can freeze, or the plume can impinge on the surrounding buildings or structures and is therefore undesirable. Accordingly, some cooling tower customers ask that plume be limited or completely avoided. Cooling towers for these applications are typically equipped to abate plume. If plume sensor55detects373the plume is too high, then controller52will change unit operating parameters to reduce or eliminate the plume, such as adding heat from a waste heat source or other heat source. If after the adjustments an unacceptable amount of plume is still detected, controller52communicates375a plume alarm and changes from the normal operating mode to the failsafe operating mode. In addition, there is a feedback from the plume sensor55itself. If the controller52determines377that the plume sensor55has malfunctioned or is not working, controller52changes379from the normal operating mode300to the failsafe operating mode400and sends a plume sensor alarm. After the safety checks on the sump water system and cooling tower operation are completed, the controller52checks374whether the cooling tower20has operated above the wet timer setpoint which under normal conditions is typically set to 8 to 24 wet running hours. If the cooling tower20has run greater than the manually inputted wet time period, then controller52initiates376the purge and flush cycle384. The purge and flush cycle384may be set according to a user manual input (seeFIGS.5A and5B). The purge and flush cycle384includes the controller52directing operations388(FIG.3B) if treatment chemicals are not added at operation386. The operations388include turning off spray pump(s), turning off UV pumps, turning off UV light(s), closing the make-up water valve56, turning off the fan motor25, opening the sump drain valve48, and opening a drain41A associated with the UV light42. The operations388further include waiting a first predetermined time period, such as 30 seconds, followed by closing the drains. Next, the make-up water valve56is opened and the spray pump19, UV pump41, and sump sweeper pump39A (if equipped) are turned on.
The controller52runs the flush cycle and waits a second predetermined time period, such as 30 seconds, before again initiating a purge cycle including turning off spray pump(s), turning off UV pumps, turning off UV lights, closing the make-up water valve56, turning off the fan motor25, opening the sump drain valve48, and opening a drain41A associated with the UV light42. The controller52waits a third predetermined time period, such as 30 seconds, before closing the sump drain valve48and UV pump drain41A. The make-up water valve56is opened and the spray pump19, UV pump41, and sump sweeper pump39A are turned on. In some embodiments, the sump sweeper pump39A is connected to a filter or cyclonic separator which has its own flush cycle that may be controlled by controller52. The operations388conclude with enabling operation of the fan26and wet operation of the cooling tower20. If after running the purge and flush cycle384, any of the conductivity, biofilm or pH levels are not as expected as determined at operations390,392,394, one or more alarms are sent and the controller52changes from the normal operating mode300to the failsafe operating mode400at operations391,393,395. After the purge and flush cycle384, the controller52also looks at the dry run timer and will initiate380a dry cycle382when the duration of the dry run timer is above the manually inputted dry run timer period. The purpose of the dry cycle is to purge the sump water and run the fan26so that the sump39dries out for a manually inputted specified period to inhibit microbial contamination because many microbes will die once they are dry. Once the dry cycle382is complete, the system loops back to the beginning of the normal operating mode300. Another feature of the control logic of the normal operating mode300is the ability of the controller52to detect when there is an upset condition, send the appropriate alarm, and switch from the normal operating mode300to the failsafe operating mode400. Controller52continuously monitors cooling tower water quality parameters including but not limited to at least one of: conductivity level, existence of a bioactivity or a biofilm, pH level, excessive plume, and cooling tower drift. The controller52also continuously monitors the following: ambient temperature, spray water temperature, sump water level, spray pump operation, UV pump operation, UV lamp intensity on make-up and/or bypass loop, conductivity sensor, biofilm detection sensor, pH level sensor, plume sensor, and drift detection sensor. The controller52operates in the failsafe operating mode400should one or more of the sensors fail or after an attempt to bring the water quality back into the acceptable operating range such as after adding chemicals or activating a prescribed number of water purge and flush cycles460(seeFIG.4). During the purge and flush cycle operations464, the cooling tower water is purged then refilled with fresh water, then the water sump, water distribution system and evaporative heat exchangers are flushed. In some embodiments, the chemical treatment system99adds chemicals to the newly filled sump water after the water has been purged to aid in cleaning, flushing and disinfecting the water contact components. In one embodiment, the failsafe operating mode400increases the frequency of purge and flush cycles464as compared to the purge and flush cycles384during the normal operating mode300to keep the water quality parameters safer until the cooling tower20is properly serviced and the alarms are reset.
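The purge and refill sequencing of operations388 described above reads naturally as an ordered script of actuator steps with timed waits. The sketch below is a hypothetical Python rendering; the step strings, the injectable act callable, and the shortened default waits are assumptions (the passage uses 30-second waits).

```python
import time

PURGE_STEPS = ("spray pump 19 off", "UV pump 41 off", "UV light 42 off",
               "make-up valve 56 closed", "fan motor 25 off",
               "sump drain valve 48 open", "UV drain 41A open")
REFILL_STEPS = ("drains closed", "make-up valve 56 open", "spray pump 19 on",
                "UV pump 41 on", "sump sweeper pump 39A on")

def purge_and_flush(cycles=2, drain_wait_s=0.1, flush_wait_s=0.1, act=print):
    """Run the purge/refill sequence 'cycles' times, then re-enable the fan."""
    for _ in range(cycles):
        for step in PURGE_STEPS:
            act(step)
        time.sleep(drain_wait_s)   # let the water fully drain
        for step in REFILL_STEPS:
            act(step)
        time.sleep(flush_wait_s)   # circulate fresh water to scrub surfaces
    act("fan 26 enabled; wet operation resumed")

purge_and_flush(cycles=1)
```

Expressing the cycle as data plus a loop also makes the normal/failsafe difference a matter of parameters: the failsafe mode simply calls the same routine more frequently.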
As an example, in some prior art cooling towers that employ a conductivity sensor, the conductivity sensor measures the solids content in the water and the cooling tower opens the bleed off until the conductivity sensor reads an acceptable value. But if, after a certain time period, the conductivity does not drop below an acceptable value or if the solids content continues to rise, an alarm for service is turned on but there is no provision to continue operating the tower in a safer condition until the service is completed. To address this issue, and to keep the solids from running to a level which creates extreme heat exchanger fouling and loss of cooling tower capacity, the cooling tower20and control logic inFIGS.3A-3C and4A-4Bcause the unit to automatically purge then flush out the sump by turning on the purge and flush cycle384,460without wasting a large amount of water. It is worth noting that a small design for the sump39can help reduce water consumption during the purge and flush cycle. If, after attempting to purge, flush, and disinfect the sump water, the conductivity remains high, controller52switches to the failsafe operating mode400. In one approach, the manual inputs500(seeFIGS.5A and5B) include an instruction to shut down the cooling tower20in such a situation. The cooling tower water would then be drained and the cooling tower20turned off until service personnel service the cooling tower20and reset the alarms. As another example, in some prior art cooling towers that employ a pH sensor, that sensor would measure the pH and add chemicals to try to maintain the proper pH levels. But if, after a certain time period, the pH does not reach an acceptable value, other than turning on an alarm for service, there is no provision to continue operating the tower in a safer condition until service is completed. To address this issue and to keep the cooling tower from running at extremely unsafe and potentially corrosive pH levels, the cooling tower20and control logic inFIGS.3A-3C and4A-4Bcause the cooling tower20to automatically purge then flush out the sump39and to bring in fresh water to get the pH level under control. If after attempting to clean out the sump the pH remains at an unacceptable level, or if the pH sensor46fails to operate, a pH alarm is sent and the controller52switches to the failsafe operating mode. In one approach, the manual inputs500(seeFIGS.5A and5B) include an instruction to shut down the cooling tower20in such a situation. The cooling tower water is then drained and the cooling tower20turned off until service personnel service the cooling tower20and reset the alarms. In another example of a benefit provided by the cooling tower20, a contractor may from time to time add very acidic chemicals into the basin with the hopes of descaling the indirect heat exchanger. However, if not properly administered, the cooling tower water can be left with extremely corrosive pH levels. Under this extreme upset condition, controller52can be configured to continue to call for purge and flush cycles in an attempt to correct the situation and, after a certain number of purge and flush cycle attempts, if the pH level remained out of a safe operating condition, the controller52would send a pH alarm and activate the failsafe mode. In one approach, the manual inputs500(seeFIGS.5A and5B) include an instruction to shut down the cooling tower20in such a situation.
The cooling tower water would be drained and the cooling tower20turned off until service personnel service the cooling tower20and reset the alarms. As another example, in some prior art cooling tower applications employing a side stream of sump water with a UV light, or with a UV light installed on the make-up water line or in the sump, or both, the UV light will continue to kill bacteria as long as the UV lamp is clean and is operating at an acceptable intensity level. But if the UV lamp becomes dirty or nonoperational, other than sending an alarm for service, there is no provision to continue operating the cooling tower in a safer condition until service is completed. To address this issue and to reduce microbial contamination, under the control logic inFIGS.3A-3C and4A-4B, the controller52of the cooling tower20switches to the failsafe operating mode when the UV light is nonoperational or needs to be cleaned. In the failsafe operating mode, the sump39may be purged and flushed at a much higher rate and/or anti-microbial chemicals are added to reduce the chance of microbial contamination until service is completed on the cooling tower20and the alarms are reset. In some embodiments, users can provide water quality parameters to the controller52using a user interface of the cooling tower20or a remote device in communication therewith. The water quality parameters may include cooling tower conductivity, pH, bioactivity, biofilm, drift, and plume. The water quality parameters may be determined from testing with manual instrumentation. The manual inputs may be considered in the control logic in the same manner as if the water quality parameters had been autonomously gathered by the sensors of the cooling tower20. One advantage of the control logic ofFIGS.3A-3C and4A-4Bis that the control logic keeps the cooling tower20and environment safe by first trying to clean out the sump39automatically but then switching to a failsafe operating mode400once any problem is detected that could not be repaired under the normal operating mode300. As another example, in prior art cooling tower applications, occasionally a make-up valve or solenoid fill valve will stick wide open causing an excessive amount of water to be wasted. While some prior art cooling towers are equipped with a high-water level alarm, there is no provision to save water. In both the normal operating mode300and the failsafe operating mode400, should a high water level be detected and depending on the manual inputs500, while an alarm is communicated at operations322and422, in addition there is an option to turn off the water supply to the cooling tower20through an independent electrically operated emergency water valve56, which will still allow dry operation if so equipped but has the potential to conserve water that otherwise would be continuously drained from the sump39via a cooling tower overflow valve. As noted inFIG.3A, during the purge and flush cycle384, the make-up water is turned off, the spray pump19is turned off, the UV lamps42,42A and UV pump41are turned off, the sump drain valve48is opened, and the UV pump drain41A is opened allowing all the water in the cooling tower20to be purged. In operation388, the controller52sets a timer utilizing a time period (e.g., 30 seconds) determined by the controller52, or entered by a user, to permit the water to fully drain from the cooling tower.
Then, the sump drain valve48and the UV pump drains41A are closed, and the make-up is turned back on allowing fresh water to fill the sump39and associated piping. Once a minimum water level is detected in the sump39, such as the controller52detecting make-up float valve assembly34closing, the spray water pump and UV pumps are turned back on, which circulates fresh water to scrub and clean the surfaces within the sump39and the water-contacting surfaces of the spray water distribution system22and serpentine tube heat exchangers23, helping to scrub away any solids, debris, contamination, and microbes which may have accumulated. In another embodiment, the controller52detects the minimum water level via an electronic water level sensor. In one embodiment, the cooling tower20may be equipped with a two-speed or variable speed spray pump19. The controller52operates the spray pump19at the low speed for water recirculation during a wet evaporation mode of the cooling tower20and the controller52operates the spray pump19at a high speed during the purge and flush cycle384,460. This allows higher water flow rates to have more scrubbing action during the purge and flush cycles384,460. If so equipped, the fan26is typically stopped or run at a low speed to limit drift from occurring when the spray pump19is operated at high speed to flush out the water touched components. After a purge and flush cycle384,460, the water may be used immediately if the water quality is sensed as being in the acceptable range or after having run a few minutes. If the water quality is still not in the acceptable range, the water is purged again then the process starts over again. The number of purge and flush cycles384,460in the flush cycle mode is an adjustable parameter that can be manually set depending on environmental conditions as well as make-up water quality. Another feature of the control logic of the normal operating mode300is the ability to continue to run the cooling tower20in the wet evaporation mode during the purge and flush cycle384. This operability is set by a manual input so if the user has selected to keep the cooling tower20operating during the purge and flush cycle384, as maintaining fluid setpoint is paramount, the normal operating mode300will keep the fan26running. By the time the serpentine tube heat exchangers23start to dry, the purge and flush cycle384is terminated and the water is refilled. Stopping the purge and flush cycle384after a time period and before the serpentine tube heat exchangers23fully dry out keeps the evaporative heat exchanger23from fouling. The time period may be entered by a user or determined by the controller52. The time period is based on the configuration of the cooling tower20and the time required to refill the sump39. Referring now toFIG.4, the failsafe operating mode400of the controller52is activated when a problem with any of the sensors is found or when any of the measured and controlled water quality parameters is out of an acceptable range and attempts to correct it during the normal operating mode300have failed. One objective of the failsafe operating mode is to keep the cooling tower20and the environment safe during an upset condition until service is performed on the cooling tower20. Depending on the manual inputs provided to the controller52, during the failsafe operating mode, the cooling tower20may continue to operate, may be operated with limited capacity, can be operated in the dry mode if so equipped or can be shut down.
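The two-speed spray pump behavior described above, including the coupled fan action that limits drift at high pump speed, reduces to a small pairing rule. This Python sketch is an assumed illustration only; the mode strings are invented.

```python
def spray_pump_and_fan(mode: str) -> tuple:
    """Return (spray pump speed, fan action) for the given operating mode."""
    if mode == "purge_and_flush":
        # High pump speed for scrubbing; stop or slow the fan to limit drift.
        return ("high", "stopped or low speed")
    return ("low", "per cooling setpoint")

print(spray_pump_and_fan("wet_evaporative"))  # ('low', 'per cooling setpoint')
print(spray_pump_and_fan("purge_and_flush"))  # ('high', 'stopped or low speed')
```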
Referring again toFIG.4, once controller52has determined that the failsafe operating mode is required, on a call402for cooling and more specifically, when the cooling tower20needs to operate in a wet, evaporative state, the controller52initiates404the failsafe mode wet cycle and checks406to see if there are sump heaters. If there are sump heaters, then the evaporative equipment can usually operate wet regardless of the ambient temperature, but this is a manual input depending on the cooling tower configuration. If there are not sump heaters, then controller52considers the ambient temperature sensed by temperature sensor54. If the ambient temperature is below freezing and there are no sump heaters, the controller52communicates410a low temperature alarm and keeps the cooling tower20from operating in the wet evaporative mode to eliminate the possibility of freezing. Another option is to monitor the temperature sensor54A in the spray water pipe or outlet water pipe and as long as the water temperature remains above a preset level, typically 45° F. to 50° F., then it is safe to operate the cooling tower20in the wet evaporative mode. Referring again toFIG.4, once controller52allows the cooling tower20to operate in the wet mode, the controller52performs operation412that includes the controller52monitoring a wet timer to keep track of the time the cooling tower20has operated in the wet evaporative mode. The controller52keeps track of the time the cooling tower20has operated in the wet evaporative mode because the controller52runs a purge and flush cycle at a select interval of time (a changeable parameter), typically after operating wet for 4 hours in the failsafe operating mode. The purge and flush cycle will occur more often in the failsafe operating mode than in the normal operating mode to keep the cooling tower and environment safe until service can be performed on the cooling tower20. In operation412, the make-up water is turned on and a fill timer is started. If the controller52determines414the sump water has not reached a minimum level within the time period set by the fill timer, then a low water alarm is communicated416, and the controller52waits for the make-up assembly to be repaired. If the controller52determines414the water level is high enough via closing of the make-up float valve assembly34, the spray pump19is turned on and a spray pump start timer is started318. After the spray pump time period ends, the controller52determines320whether the water level exceeds a maximum level via the sump float sensor47and determines324whether the spray pump19is on via, for example, a spray pump switch. The controller52communicates322,326corresponding alarms if the water level is too high or the spray pump19is inoperable. In the failsafe operating mode, wet evaporative operation of the cooling tower20may not be permitted according to the manual inputs500until any alarms are cleared and the relevant components repaired. If the controller52determines324the spray pump19is on, the controller52turns on the UV pump41and starts a UV timer to measure a UV time period such as 10 seconds. Once the UV time period ends, the controller52determines330whether the UV pump41is running such as via a UV flow switch. If the UV flow switch does not detect water flowing from the UV pump41, the controller52communicates432a UV pump alarm, and the UV lamp42is turned off to keep the loop from overheating due to lack of flow.
Various approaches may be used to verify that the spray pump19and UV pump41are pumping, such as a flow switch, differential pressure switch, and/or a current sensor. Once the sump float sensor47determines there is water in the sump39, the UV pump may run continuously until such time that the sump float sensor47detects there is no water in the sump39. This allows continuous monitoring of some or all of the water quality parameters. If the UV pump flow switch41C detects water flow in the side stream water loop, controller52looks at the intensity sensor43of the UV lamp42. If the UV lamp42has lost intensity past a minimum effective value, meaning that the UV lamp42needs to be cleaned or is not working, then the controller52communicates436a UV lamp alarm. In the failsafe operating mode, controller52in one embodiment performs operation438wherein the controller52ignores data from the conductivity, biofilm, and/or pH sensors because, in the failsafe operating mode400, a service call has already been requested via an alarm communicated by the controller52, and the failsafe operating mode400purges and flushes the water at a much higher frequency than in the normal operating mode300. InFIG.4, the failsafe operating mode is shown bypassing consideration of data from these sensors, but the ability of the controller52to bypass consideration of the data from the conductivity, biofilm, and pH sensors is set by a manual input from a user. Next controller52receives feedback from the drift sensor40and determines440whether the drift sensor40is operating. If the feedback is that the drift sensor has malfunctioned or is not working, controller52communicates442a drift sensor alarm and may adjust a fan speed at operation442, such as by limiting the speed of the fan26to 50% of the maximum fan speed. If the drift sensor40is operating, the controller52determines444whether the measured drift is above a threshold. The drift sensor40detects if there is an unsafe amount of drift, which contains water droplets or mist leaving the cooling tower20, so that the risk of microbial contamination to the surrounding environment can be reduced. If the drift is determined444as being above an acceptable parameter and depending on the bioactivity parameter sensed by the bioactivity sensor, the controller52communicates446a drift sensor alarm and may adjust the fan speed at operation446. The controller52may adjust the fan speed to a level at which drift is known to be within tolerance, or the controller52may turn off the fan or operate in the dry mode depending on the customer manual inputs provided to the controller52. After the safety checks on the sump water system and cooling tower operation of operations408,414,420,424,430,434,440, and444are completed, controller52determines448whether the cooling tower20has operated for longer than the wet timer setpoint which, in the failsafe operating mode400, may be set to four wet running hours as an example. If the unit has run longer than the wet timer setpoint, then controller52will initiate450the purge and flush cycle460. After the purge and flush cycle460, the controller52also determines whether the dry run timer has exceeded a dry run timer setpoint and will initiate454a dry cycle456when the dry run timer has exceeded the dry run timer setpoint. The dry cycle456includes operations457to purge the sump water and run the fan26so that the sump39dries out for a specified period of time in a further attempt to inhibit microbial contamination.
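The failsafe-mode drift handling described above, capping fan speed when the drift sensor faults or reads high, reduces to a small rule. The Python below is a hypothetical sketch; the 50% cap follows the example in the passage, while the function and argument names are assumptions.

```python
def failsafe_fan_speed(drift_sensor_ok: bool, drift_high: bool,
                       requested_pct: float, cap_pct: float = 50.0) -> float:
    """Limit fan speed to a level at which drift is known not to occur."""
    if not drift_sensor_ok or drift_high:
        return min(requested_pct, cap_pct)
    return requested_pct

print(failsafe_fan_speed(True, False, 90.0))   # 90.0 (no drift issue)
print(failsafe_fan_speed(True, True, 90.0))    # 50.0 (drift above threshold)
print(failsafe_fan_speed(False, False, 90.0))  # 50.0 (sensor fault)
```

Treating a sensor fault the same as a confirmed high-drift reading is the conservative choice the failsafe mode embodies: without trustworthy feedback, the controller assumes the unsafe case.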
Once the dry cycle456is complete, the failsafe operating mode400loops back to the beginning of the process. The number of dry cycles permitted may be a manual input provided by a user. Like the purge and flush cycle384, the controller52upon starting the purge and flush cycle460may determine462whether to direct the chemical treatment system99to add chemicals at operation466instead of performing purge and flush operations464. The operation462may include the controller52making the decision based on the current unacceptable water parameter and a manual input. For example, if the pH of the water is outside of a first tolerance (causing initiation of the failsafe operating mode400) but still within a second tolerance, the controller52may determine462to add water treatment chemicals at operation466rather than performing the purge and flush operations464. Although the normal operating mode300and the failsafe operating mode400are discussed above as a flow of particular operations, it will be appreciated that the order of the operations may be changed, the operations combined or separated, and various operations added or omitted as desired for a particular application. As one example in this regard, the control logic of the modes300,400may utilize two or more related evaporative liquid parameters to make a given determination. For example, the normal operating mode300may have an operation wherein if the pH is greater than 10 and the total dissolved solids are outside of a predetermined range, the controller52initiates the failsafe operating mode400. The same operation in the normal operating mode300may further specify that if the pH is less than 10 and the total dissolved solids are outside of the predetermined range, the controller52remains in the normal operating mode for a set period of time to wait and see whether the normal operation of the cooling tower remedies the out-of-range total dissolved solids parameter. As discussed above, in some embodiments the controller52may utilize various manual inputs as part of the control logic implemented in the normal operating mode300and the failsafe operating mode400.FIGS.5A and5Bprovide example manual inputs500that may be used as part of the control logic.
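The combined-parameter rule described above, where pH and total dissolved solids are considered together before entering the failsafe mode, can be stated directly. The Python below is an assumed sketch; the pH cutoff of 10 comes from the example in the passage, while the dissolved-solids range is invented for illustration.

```python
def combined_rule(ph: float, tds: float,
                  tds_range: tuple = (200.0, 1500.0)) -> str:
    """Use pH to decide how to treat an out-of-range dissolved solids value."""
    tds_ok = tds_range[0] <= tds <= tds_range[1]
    if tds_ok:
        return "normal"
    if ph > 10:
        return "enter_failsafe"
    return "wait_and_recheck"  # let normal operation try to remedy the upset

print(combined_rule(ph=10.5, tds=1800.0))  # enter_failsafe
print(combined_rule(ph=8.0, tds=1800.0))   # wait_and_recheck
```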
The manual inputs500may include, for example:
Existence of sump heaters?
Minimum ambient temperature for wet operation?
Is cooling tower operable in a dry mode?
Whether to operate in dry mode below freezing ambient temperature?
Minimum allowable spray temperature for wet operation?
Is there a UV system installed on the make-up?
Is there a UV system installed within the tower?
Is there a UV system installed in a side stream?
Shut off water supply when high-water level alarm is present?
During failsafe mode, whether it is preferred to purge and flush more often regardless of water quality sensors?
Is water quality monitored offsite and is that information inputted into the controller?
Does a water treatment system control bleed off?
Is it desired to have the controller operate the bleed off when conductivity is too high?
Conductivity values for water treatment system?
Conductivity values for controller to take over bleed off control?
Minimum effective UV light intensity(s)?
Minimum acceptable bioactivity or biofilm level and differential?
Under upset condition, preference to continue operating unit or shut down?
Is cooling tower equipped with back-up antimicrobial chemicals?
Is adding chemicals more preferred than purge and flush cycles during upset bioactivity condition?
Number of purge and flush cycles before activating failsafe mode?
Proper value of pH and differential?
Is cooling tower equipped with pH controlling chemicals?
pH level of make-up water?
Is adding chemicals more preferred than purge and flush cycles during upset pH condition?
Maximum acceptable drift limit?
Preference to lower fan speed or shut off tower under unacceptable drift conditions?
Maximum acceptable plume rate?
Preference to operate plume abatement system, lower fan speed or shut off tower under unacceptable plume conditions?
Number of flush cycles during normal operating mode?
Number of flush cycles during failsafe operating mode?
Is unit equipped with a high-speed pump to aid in flushing operation?
Are dry cycles desired and at what frequency?
Drain the sump when demand for cooling is not present?
Uses of singular terms such as “a,” “an,” are intended to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms. It is intended that the phrase “at least one of” as used herein be interpreted in the disjunctive sense. For example, the phrase “at least one of A and B” is intended to encompass A, B, or both A and B. While there have been illustrated and described particular embodiments of the present invention, it will be appreciated that numerous changes and modifications will occur to those skilled in the art, and it is intended for the present invention to cover all those changes and modifications which fall within the scope of the appended claims. For example, although the control logic of normal and failsafe operating modes300,400is described with reference to cooling tower20, it will be appreciated that some or all of the normal and failsafe operating modes300,400may be implemented by a control system of the cooling tower10. | 67,759
11859925 | It should first of all be noted that the figures set out the invention in detail for implementing the invention, it being of course possible for said figures to serve to better define the invention if necessary. InFIG.1, a motor vehicle is equipped with an element1which has to be cooled or heated, for example in order to optimize its functioning. Such an element1is in particular an electric motor or combustion engine intended to at least partially propel the motor vehicle, a battery provided to store electrical energy, a device for storing heat and/or cold energy, or similar. To this end, the motor vehicle is equipped with an installation2which comprises a refrigerant circuit3within which a refrigerant4circulates, for example carbon dioxide or the like, and a heat-transfer liquid circuit5within which a heat transfer liquid6circulates, in particular glycol water or the like. The installation2comprises at least one heat exchanger11according to the present invention. The installation2is described below in order to better understand the present invention, but the features of the described installation2are not limiting for the heat exchanger11of the present invention. In other words, the installation2is able to have distinct structural features and/or operating modes different than those described, without the heat exchanger11departing from the rules of the present invention. The refrigerant circuit3comprises a compressor7for compressing the refrigerant4, a refrigerant/external air exchanger8for cooling the refrigerant4at constant pressure, for example placed at the front of the motor vehicle, an expansion member9to permit expansion of the refrigerant4, and a heat exchanger11which is arranged to permit thermal transfer between the refrigerant4and the heat-transfer liquid6. The element1is in communication with a thermal exchanger14, the thermal exchanger14being able to modify a temperature of the element1, in particular by direct contact between the element1and the thermal exchanger14, the thermal exchanger14being part of the heat-transfer liquid circuit5. The heat-transfer liquid circuit5comprises a pump15for making the heat-transfer liquid6circulate within the heat-transfer liquid circuit5. The heat-transfer liquid circuit5comprises the heat exchanger11, which is also part of the refrigerant circuit3. The heat exchanger11comprises at least one first circulation path21for the refrigerant4and at least one second circulation path22for the heat-transfer liquid6, the first circulation path21and the second circulation path22being arranged to permit a heat exchange between the refrigerant4present inside the first circulation path21and the heat-transfer liquid6present inside the second circulation path22. Preferably, the heat exchanger11has several first circulation paths21and several second circulation paths22. A first circulation path21is interposed between two second circulation paths22, and a second circulation path22is interposed between two first circulation paths21. The heat exchanger11thus has an alternating arrangement of first circulation paths21and second circulation paths22. Inside the heat-transfer liquid circuit5, the heat-transfer liquid6flows from the pump15to the heat exchanger11, then flows inside the heat exchanger11, using the second circulation paths22to exchange heat energy with the refrigerant4present inside the first circulation paths21, then flows inside the thermal exchanger14, then returns to the pump15. 
Inside the refrigerant circuit 3, the refrigerant 4 flows from the compressor 7 to the refrigerant/external air exchanger 8, then to the expansion member 9. The refrigerant 4 then flows inside the heat exchanger 11, using the first circulation paths 21 inside which the refrigerant 4 exchanges heat energy with the heat-transfer liquid 6 present inside the second circulation paths 22, then returns to the compressor 7. In FIG. 2, the heat exchanger 11 is parallelepipedal overall and comprises an end-plate 100 which is provided with a heat-transfer liquid admission point 101 by way of which the heat-transfer liquid 6 accesses the interior of the heat exchanger 11. The end-plate 100 is also provided with a heat-transfer liquid evacuation point 102 by way of which the heat-transfer liquid 6 is evacuated from the heat exchanger 11. The second circulation paths 22 extend between the heat-transfer liquid admission point 101 and the heat-transfer liquid evacuation point 102. The end-plate 100 also has a refrigerant admission point 103 by way of which the refrigerant 4 accesses the interior of the heat exchanger 11, and a refrigerant evacuation point 104 by way of which the refrigerant 4 is evacuated from the heat exchanger 11. The first circulation paths 21 extend between the refrigerant admission point 103 and the refrigerant evacuation point 104. The heat exchanger 11 is a plate-type exchanger which comprises a plurality of plates 105, such as the plate 105 illustrated in FIG. 3. The plates 105 are engaged one inside the other in order to jointly delimit a tube 123 which channels a circulation of the refrigerant 4 or else of the heat-transfer liquid 6. In other words, the two plates 105 forming the tube 123 jointly delimit a channel 111 dedicated to the circulation of the refrigerant 4 or of the heat-transfer liquid 6. More particularly, one side of a plate 105 borders the channel 111 for circulation of the refrigerant 4, and the other side of the same plate 105 borders the channel 111 for circulation of the heat-transfer liquid 6. Thus, the plates 105 are mutually arranged in such a way as to alternately configure the channels 111 for circulation of the refrigerant 4 and of the heat-transfer liquid 6. The plate 105 extends principally along an axis of longitudinal extent A1. The plate 105 comprises a bottom 106 and at least one raised edge 107 which surrounds the bottom 106. The bottom 106 extends within a bottom plane P5. The raised edge 107 is formed at the periphery of the bottom 106, and the raised edge 107 surrounds the bottom 106. The raised edge 107 intersects the bottom plane P5. It will be understood that the plate 105 is arranged as a generally rectangular tub, the bottom of the tub being formed by the bottom 106, and the edges of the tub being formed by the raised edge 107. Such plates 105 are intended to be stacked in such a way that the bottoms 106 of the plates 105 are arranged parallel to each other, with a spaced-apart superpositioning of the bottoms 106. The raised edges 107 of two plates 105 nested one inside the other are in contact and are intended to be soldered to each other in order to ensure leaktightness of the channel 111 that is thus formed between two adjacent plates 105. More particularly, the raised edge 107 comprises two longitudinal raised edges 108a, 108b, namely a first longitudinal raised edge 108a and a second longitudinal raised edge 108b, which are formed opposite each other. The raised edge 107 also comprises two lateral raised edges 109a, 109b, namely a first lateral raised edge 109a and a second lateral raised edge 109b, which are formed opposite each other.
In FIG. 4, the first lateral raised edge 109a extends in a first plane P1 which crosses the bottom plane P5 and which intersects the axis of longitudinal extent A1. Arranged longitudinally opposite the first lateral raised edge 109a is the second lateral raised edge 109b, which extends in a second plane P2, the second plane P2 crossing the bottom plane P5 and intersecting the axis of longitudinal extent A1. The first longitudinal raised edge 108a extends in a third plane P3 which crosses the bottom plane P5 and which intersects an axis of lateral extent A2 of the plate 105, the axis of lateral extent A2 being orthogonal to the axis of longitudinal extent A1 and parallel to the bottom plane P5. The second longitudinal raised edge 108b extends in a fourth plane P4 which crosses the bottom plane P5 and which intersects the axis of lateral extent A2 of the plate 105. By way of example, the first plane P1 forms, with the bottom plane P5, a first angle α of between 91° and 140°, preferably of between 91° and 95°. The second plane P2 forms, with the bottom plane P5, a second angle β of between 91° and 140°, preferably of between 91° and 95°. The third plane P3 forms, with the bottom plane P5, a third angle γ of between 91° and 140°, preferably of between 91° and 95°. The fourth plane P4 forms, with the bottom plane P5, a fourth angle δ of between 91° and 140°, preferably of between 91° and 95°. According to a design variant, the first angle α, the second angle β, the third angle γ and the fourth angle δ are equal, to within manufacturing tolerances. In FIGS. 3 and 4, the plate 105 comprises four openings 110, preferably circular openings, which are distributed in pairs at each longitudinal end of the plate 105, more particularly at each of the corners of the bottom 106 of the plate 105. Two of these openings 110 are configured to communicate with one of the first circulation paths 21 formed at one side of the bottom 106, and the two other openings 110 are configured to communicate with one of the second circulation paths 22 formed at another side of the bottom 106. Two of the openings 110 formed at the same longitudinal end of the plate 105 are each surrounded by a collar 120, such that these openings 110, encircled by this collar 120, extend in a plane that is offset with respect to the bottom plane P5 in which the bottom 106 is inscribed. The two other openings 110, situated at the other longitudinal end of the plate 105, extend in the bottom plane P5. The bottom 106 comprises a rib 113, which is arranged such that the channel 111 has a U-shaped profile. The rib 113 is parallel to a first direction D of extent of the longitudinal raised edges 108a, 108b, the first direction D of extent of the longitudinal raised edges 108a, 108b being preferably parallel to the axis of longitudinal extent A1 of the plate 105. The rib 113 extends between a first longitudinal end 114 and a second longitudinal end 115, the first longitudinal end 114 being in contact with the lateral raised edge 109a that the raised edge 107 comprises. The second longitudinal end 115 is situated at a first non-zero distance D1 from the raised edge 107, the first distance D1 being taken between the second longitudinal end 115 and the lateral raised edge 109b, measured along the axis of longitudinal extent A1 of the plate 105. The first longitudinal end 114 of the rib 113 and the second longitudinal end 115 of the rib 113 are aligned along the first direction D parallel to the axis of longitudinal extent A1 of the plate 105.
These arrangements are such that the channel 111 is shaped as a U whose branches are parallel to the longitudinal raised edges 108a, 108b of the plate 105 and are separated by the rib 113, while the base of the U lies next to the second lateral raised edge 109b, which is formed longitudinally opposite the first lateral raised edge 109a. The rib 113 is formed at an equal second distance D2 from the two longitudinal raised edges 108a, 108b of the plate 105, the second distance D2 being measured between the rib 113, taken at its center, and one of the longitudinal raised edges 108a, 108b, perpendicularly to the axis of longitudinal extent A1 of the plate 105. According to one design variant, the rib 113 is offset by a non-zero distance with respect to a median plane P6 of the plate 105, the median plane P6 being orthogonal to the bottom 106 and parallel to the axis of longitudinal extent A1 of the plate 105, the distance being measured between the rib 113, taken at its center, and the median plane P6, perpendicularly to the latter. In FIGS. 5 and 6, the rib 113 comprises two rib edges 141, which extend respectively between the bottom 106 and a summit 140 of the rib 113. The summit 140 is the part of the rib 113 at the greatest distance from the bottom 106. In other words, the summit 140 of the rib 113 is bordered longitudinally by the rib edges 141. The summit 140 is arranged as a plateau formed in a plane parallel to the bottom plane P5. The rib 113 is advantageously of a sinuous configuration. In other words, the rib 113 is of a sinusoidal shape overall. It will be understood that a first ridge 142 which separates the summit 140 from any one of the rib edges 141 has a sinuous shape in a plane parallel to the bottom plane P5 and containing the summit 140. It will also be understood that a second ridge 143 which separates the bottom 106 from any one of the rib edges 141 has a sinuous shape in a plane parallel to the bottom plane P5 and containing the bottom 106. The first ridge 142 and the second ridge 143 are not rectilinear. The first ridge 142 and the second ridge 143 of the same rib edge 141 are superposable on each other. It follows from this that each of the rib edges 141 is formed by an alternating sequence of humps and hollows. In other words, each of the rib edges 141 has the shape of a corrugated sheet. In other words too, each of the rib edges 141 comprises an alternating succession of convex portions 144 and concave portions 145, as can be seen in FIG. 5. More particularly, in FIG. 6, in a transverse plane P7 which is orthogonal to the bottom plane P5 and to the axis of longitudinal extent A1 of the plate 105, each of the rib edges 141 forms, with the bottom plane P5, a fifth angle σ of between 90° and 160°. In other words, the rib 113 has a trapezoidal profile in the transverse plane P7. A rib width X, taken between the two rib edges 141 and parallel to the bottom plane P5, is constant from one longitudinal end 114, 115 of the rib 113 to the other. Referring again to FIGS. 3, 4 and 5, the bottom 106 is provided with a plurality of protuberances 112 in order to disturb a flow of the refrigerant 4 or of the heat-transfer liquid 6 in the channel 111. These protuberances 112 form obstacles to a laminar flow of the refrigerant 4 or of the heat-transfer liquid 6 in the channel 111. Preferably, the protuberances 112 have a frustoconical profile in section in the transverse plane P7. In FIGS. 3 and 5, the protuberances 112 are organized in a plurality of rectilinear rows 124a of protuberances 112, the rectilinear rows 124a being formed along a second direction D′ which is parallel to the axis of lateral extent A2 of the plate 105.
The successive rectilinear rows 124a alternately traverse a convex portion 144 or a concave portion 145 of the rib 113. The rectilinear character of a rectilinear row 124a of protuberances 112 stems from the fact that the rectilinear row 124a of protuberances 112 is orthogonal to the axis of longitudinal extent A1 of the plate 105. The protuberances 112 are also organized in a plurality of oblique rows 124b of protuberances 112, the oblique rows 124b being formed along a third direction D″ which forms, with the second direction D′, a sixth angle φ, the sixth angle φ being the acute angle formed between the two directions D′, D″, which is of the order of 90°, to within manufacturing tolerances. The successive oblique rows 124b alternately traverse a convex portion 144 or a concave portion 145 of the rib 113. The oblique character of an oblique row 124b of protuberances 112 stems from the fact that the oblique row 124b of protuberances 112 is inclined by a non-zero angle with respect to the axis of longitudinal extent A1 of the plate 105. It will be noted that a first distance E1, taken between a crown 146 of a convex portion 144 of the rib 113 and the protuberance 112 laterally nearest to the crown 146, is between 200% and 300% of a second distance E2, taken between a hollow 147 of a concave portion 145 of the rib 113 and the protuberance 112 laterally nearest to the hollow 147. In other words, the crown 146 of a convex portion 144 of the rib 113 is farther from the protuberance 112 laterally nearest to the crown 146 than the hollow 147 of a concave portion 145 of the rib 113 is from the protuberance 112 laterally nearest to the hollow 147. The plate 105 is made of a metallic material able to be stamped in order to form in particular the protuberances 112 and the rib 113 by stamping of the plate 105, the metallic material being chosen from among the thermally conductive metallic materials, in particular aluminum or aluminum alloy. The invention as has just been described does indeed achieve its set objectives, making it possible to homogenize the exchanges of heat along the entire length of the plate, thereby avoiding zones of lesser exchange, for example along the rib 113 or along the longitudinal raised edges 108a, 108b, 208a, 208b. The invention is not limited to the means and configurations exclusively described and illustrated, however, and also applies to all equivalent means or configurations and to any combination of such means or configurations. In particular, whilst the invention has been described here in its application to a heat exchanger involving refrigerant and heat-transfer liquid, it goes without saying that it applies to any shape and/or size of plate or to any type of fluid circulating along the plate according to the invention. | 16,705 |
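As a compact recap of the dimensional relationships stated above for the plate 105 (a restatement of the disclosed ranges, not additional limitations), in LaTeX notation:

\[
\alpha,\ \beta,\ \gamma,\ \delta \in [91^\circ,\,140^\circ],\quad \text{preferably } [91^\circ,\,95^\circ],\qquad \alpha=\beta=\gamma=\delta \ \text{(design variant)}
\]
\[
\sigma \in [90^\circ,\,160^\circ],\qquad 2\,E_2 \le E_1 \le 3\,E_2
\]

Here α–δ are the angles of the raised edges to the bottom plane P5, σ is the angle of the rib edges 141 to P5, and E1, E2 are the crown-to-protuberance and hollow-to-protuberance distances, respectively.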
11859926 | A front end module 100 according to the present invention comprises at least one heat exchanger 4 provided with one or more pipes 17a, 17b, a support frame 1 for the heat exchanger 4, at least one ventilation duct 2 configured to interact with said frame, guide fresh air toward this frame and force it through the heat exchanger, at least one sealing device 3 arranged around a pipe in a passage area where the pipe passes through a wall of the support frame, and an associated housing 10 with a shape complementary to that of the sealing device 3, wherein the housing 10 may be formed in the support frame 1 or in the assembly formed by the support frame 1 and the ventilation duct 2, corresponding to a flow duct for an air stream, when the latter are assembled so as to adopt a “closed” configuration, thus forming an encapsulation casing. The support frame 1, also referred to as a holder frame, corresponds to a rigid structure, more specifically to a rigid plastic frame with four members delimiting a surface within which the heat exchanger 4 and possibly a motor-fan unit are arranged. In order to ensure the continuity of the flow duct 2, said ventilation duct 2 is attached to the support frame 1 in a sealed fashion. In other words, the holder frame ensures the continuity of the ventilation duct 2 or, in other words, the holder frame corresponds to part of the flow duct 2. Below, and as shown on the trihedrons in the figures, a longitudinal axis L will be defined as an axis parallel to the main direction of circulation of the air stream through the support frame and each heat exchanger, and the lateral Lt and transverse T orientations will be defined as orientations perpendicular to the longitudinal axis. Such a system 100 is shown in particular, schematically, in FIG. 1 and FIG. 2 in the closed and open configurations, respectively. The heat exchanger(s) 4 are housed in the support frame 1, which is configured to allow the attachment of the ventilation duct 2. The ventilation duct 2 has an air vent 101 open on the front end of the motor vehicle, thus allowing the entry of a fresh air stream which it redirects in the encapsulation, toward the heat exchanger 4. This ventilation duct has, at an end opposite the air vent, a rear end face, brought into contact, in the closed configuration shown in FIG. 1, with the support frame 1. The support frame 1 comprises two side walls 102 and two transverse walls 104, which define an open frame for accommodating, between the walls, one or more heat exchangers 4. A front end face 24 of the support frame is defined as being the face intended to be in contact with the ventilation duct 2, and more particularly with the rear end face of this ventilation duct. It is through this front end face 24 that, in the example shown, the fresh air is brought into the frame to pass through the exchangers. Each heat exchanger comprises an exchange surface 25 and at least one collector box arranged laterally with respect to this exchange surface, as well as at least one inlet pipe 17a and one outlet pipe 17b coming from the collector box and ensuring the circulation of a coolant. The coolant is caused to exchange calories with the air passing through the exchange surface. The pipes 17a, 17b protrude from the collector box of the exchanger, substantially in the main plane of extension of the exchanger, that is to say perpendicular to the side walls defining the frame.
As a result, the pipes, allowing the connection of the exchanger to a coolant circuit not shown here, are arranged so as to pass through the support frame 1 in the passage areas 26 defined when the exchanger is assembled on the frame. In order to ensure the sealing of the encapsulation of the front end module and thus prevent any leakage of fresh air, i.e. prevent air from exiting the casing without passing through the heat exchanger or exchangers 4, or any recirculation of hot air from outside the casing to the inside thereof, which would in both cases be detrimental to the thermal performance of the front end module 100, the system is equipped with at least one sealing device 3 in the passage areas 26 where the inlet 17a and outlet 17b pipes of each heat exchanger 4 pass through the support frame 1. This sealing device 3 will be described in more detail later in the description. For illustrative purposes, FIG. 1 and FIG. 2 show two of the embodiments of the sealing device 3 of the present invention, when they are integrated in a suitable front end module. The invention is nevertheless in no way limited to this example of use, and, for reasons relating to production costs, the same embodiment of the sealing device 3 could be implemented at the incoming 17a and outgoing 17b pipes of the heat exchanger 4. The sealing device 3 according to the present invention consists of a plate 5 allowing a pipe 17 of the heat exchanger 4 to pass through, said sealing device 3 being able to interact with a housing 10 located in the support frame 1 of the heat exchanger 4. The sealing device 3 is characterized in that it comprises at least two snap-fastening means 8, able to interact with complementary fastening elements 9 integrated in the support frame 1 of the heat exchanger 4, and in that the sealing device 3 comprises an elastically deformable sealing means 6, 13, arranged at a central recess 12 formed in the plate 5. This sealing device 3 is intended to be inserted in the housing 10 as shown in FIG. 10 or FIG. 11, said housing 10 being located in a passage area where a pipe passes through a wall of the support frame 1. The snap-fastening means 8, integrated in the sealing device 3, interact with the complementary snap-fastening elements 9, integrated in the support frame 1, and more particularly in the housing 10 or in grooves 11 surrounding this housing 10. For each pair made up of a snap-fastening means 8 and a complementary fastening element 9, intended to interact with one another, one consists of a male element forming a protrusion, while the other consists of a female element of corresponding shape to that of the protrusion, for example a slot. FIG. 3 shows a first embodiment of the sealing device 3, which consists in particular of a plate 5 extending, when the sealing device is mounted in the housing around the appropriate pipe, in a plane defined by the longitudinal axis and by the lateral axis, parallel to the main plane of extension of the side wall of the frame in which the housing of the sealing device is formed. The plate 5 has a central recess 12, capable of allowing the pipe 17 of the exchanger to pass through, and it participates in forming or supporting an elastically deformable sealing means 13 dimensioned to be engaged around the pipe and to seal this passage area 26. The plate 5 is also equipped with at least two snap-fastening means 8 protruding from the plate 5, arranged on opposite end edges of the plate 5 and located in the main plane of extension X of said plate 5.
These protruding snap-fastening means 8 extend over a dimension smaller than that of the plate 5 and are configured to interact with complementary snap-fastening elements 9 forming slots, as will be described in detail below, located in the support frame 1, more precisely in the housing 10. Alternatively, the arrangement of the male and female elements forming the snap-fastening means could be reversed, and a plate 5 equipped with slots could be provided while the support frame 1 contains protrusions of complementary shape and dimensions. In an alternative not shown, provision could be made for the snap-fastening means 8 of this first embodiment to be arranged at 90° relative to the configuration shown and to extend from an end edge of the plate, substantially perpendicular to the main plane of extension X of the plate. Here again, the fastening means 8 extend over a dimension less than the length of the plate. FIG. 4 and FIG. 5 show alternative embodiments of the sealing device. In a second embodiment, shown schematically in FIG. 4, the sealing device 3 has a more complex arrangement: the sealing device comprises a plate 5, surrounded by a peripheral zone 20, which is axially offset so as to form a platform arranged in a separate plane parallel to the plane X defined by the plate 5. The plate 5 and the peripheral zone 20 are connected by side walls 19 extending in a plane orthogonal to that of the plate 5. The sealing device 3 comprises at least two snap-fastening means 8, each being arranged on a side wall 19. As shown, these fastening means may more particularly be arranged on an external face of this side wall, that is to say a face of the side wall oriented toward the outside of the part or, in other words, facing away from the plate 5. Thus, the snap-fastening means 8 are kept compact at the central recess 12 of the plate 5, and therefore where the pipes 17 pass through. The fastening means 8 according to the present configuration extend in a plane parallel to the plane X of the plate 5, over a length less than the length of the side edge of said plate 5. As shown, the snap-fastening means 8 integrated in the sealing device 3 take the form of protrusions, and therefore form the male snap-fastening element. They are thus configured to interact with complementary snap-fastening elements 9 integrated in the support frame 1, and more particularly in the housing 10, taking the form of slots. As specified above, the reverse configuration can nevertheless be envisaged, so that the sealing device 3 comprises the snap-fastening means 8 forming slots, and the support frame 1 integrates the protruding fastening means. According to a third embodiment, shown in particular in FIG. 5, FIG. 6, FIG. 7 and FIG. 8, each distal end of the peripheral zone 20 is equipped with a section of material, orthogonal to the plane of said zone, forming a return wall 21. It is understood that the distal end of the peripheral zone is the end opposite the side wall which it extends perpendicularly. The assembly comprising the return walls 21, the peripheral zone 20 and the side walls 19 forms a clearance zone 22, comparable to a channel in which the various snap-fastening means 8 are arranged, so as to reinforce the sealing of the sealing device 3 at its fastening means 8. For this third embodiment, the fastening means can again adopt different configurations.
According to a first configuration, shown in FIG. 5 and FIG. 6, the snap-fastening means 8 are arranged on the side walls 19 so as to extend in a plane parallel to the plane X defined by the plate 5, and be housed within the channel formed by the clearance zone 22. According to an alternative configuration, the snap-fastening means 8 have the same features, but this time are each arranged on a return wall 21, from an internal face thereof extending in the opposite direction to what is shown in FIG. 5, so as to be again housed within the clearance zone 22, and extend toward the plate 5. Alternatively, the snap-fastening means 8 of the sealing device 3 may protrude from the external faces of the return walls so that they are not housed in the clearance zone 22 and extend in a plane parallel to the plane X defined by the plate 5 in the direction away from said plate 5. As has already been mentioned, for each of the configurations described for this third embodiment, the snap-fastening means 8, integrated in the sealing device, may have the male form of a protrusion or the female form of a slot, while the complementary snap-fastening element 9 will have the opposite shape. In all of the above, and below, note that the term “slot” means both blind holes comprising an end wall against which or facing which the protrusion comes, and holes passing through the wall in which these slots are formed. When the sealing device 3 comprises, in accordance with this third embodiment, such return walls 21, the housing 10 of complementary shape has a particular shape, visible in particular in FIG. 9. The housing 10 is surrounded by at least two grooves 11, able to accommodate the return walls 21, which will be discussed in more detail below. Thus, the plate 5 of the sealing device 3 interacts with the housing 10, while the return walls 21 interact with the grooves 11. The return walls 21 thus help to lock the sealing device 3 by abutting against the grooves 11 accommodating them, so as to prevent any movement, in the plane X defined by the plate 5, within the housing 10. FIG. 10 and FIG. 11 show the interaction between the sealing device 3, when it is produced according to the third embodiment, and the housing 10 associated therewith. The particular arrangement of said housing 10 is more apparent in FIG. 9, in the absence of the sealing device 3. The housing 10 presented takes the form of a window, formed in a side wall 102 of the support frame 1, encircling a cavity arranged so as to allow a pipe 17 to pass through when the heat exchanger 4 is inserted in the support frame 1. Below, the various faces of this window will be referred to as “sides of the housing” 10. The cavity, which forms a passage area 26 as mentioned above, thus allows on the one hand the passage of a pipe and on the other hand the integration of the sealing device 3. Whatever the embodiment considered, the housing 10 is arranged so as to adopt a shape complementary to that of the sealing device 3. More particularly, the sides of the housing 10 must be able to interact with the shape of the plate 5 so that, when the sealing device 3 is inserted in the housing 10, the sides of said housing 10 surround the plate 5. For more complex embodiments, such as the second or third embodiment, comprising side walls 19, the sides of the housing 10 are in contact with the side walls 19. According to the configurations adopted for the snap-fastening means 8 of the sealing device 3, the complementary fastening elements 9 are integrated on the sides of the housing 10.
This is for example the case for any configuration in which the snap-fastening means 8 are integrated in the side walls 19. In the third embodiment of the sealing device 3, the sealing device 3 comprises return walls 21. In order to be able to accommodate these return walls in the volume defined by the support frame 1, the housing 10 is surrounded, on at least two sides, by grooves 11 of a shape complementary to the shape of said return walls 21. These grooves 11 thus form a receiving rail for the return walls 21, extending parallel to the housing 10 over a length greater than the length of the housing, so as to be able to surround the latter. In the case shown in FIG. 10, the housing 10 is closed by four sides of the housing and it is completely encircled by grooves 11 which define a closed periphery. According to an alternative, shown in FIG. 11, the housing 10 is produced by a notch formed in a wall of the support frame, in this case a side wall 102, from the front end face 24. The housing again has a shape and dimensions complementary to those of the sealing device 3, more particularly to those of the plate 5. In other words, the housing 10 is open on at least one side, so that it opens onto the front end face 24 of the support frame 1. As will be described below, the housing is thus intended to be closed, once the sealing device has been inserted, by pressing the ventilation duct 2 on the support frame 1 when the system is in the closed configuration. In such an alternative, the grooves 11 are also open on at least one side. More particularly, the grooves are open at a longitudinal end so as to open onto the front end face 24 of the wall of the support frame. As shown, the grooves 11 are arranged on either side of the housing 10 and they open onto the same face as the opening of the housing. The grooves thus extend from the end of the side wall of the support frame 1 to beyond the periphery of the housing 10. A side edge 16 of the sealing device, located on the open side of the housing 10, is arranged in the extension of the front end face 24 of the wall of the support frame 1 when the sealing device 3 is assembled in the housing 10. The side edge 16 of the sealing device thus forms an area of contact with the ventilation duct 2, and more particularly the rear end face of this ventilation duct 2, when the front end module is in the closed configuration and the ventilation duct is attached to the support frame 1. Thus, when the ventilation duct 2 is attached to the support frame 1, thus forming an encapsulation casing, the rear face of the ventilation duct 2 is in continuous contact with the support frame 1/sealing device 3 assembly, thus ensuring, on the one hand, sealing of the casing, and on the other hand, locking of the sealing device 3 in its housing 10. As described above, the plate 5 participates in forming or supporting a sealing means, which is elastically deformable, arranged at the central recess 12 in the plate 5. This sealing means, making it possible to ensure the sealing of the encapsulation casing in the passage area where the pipe 17 of the heat exchanger 4 passes through, will be described below with reference to various embodiments. It will be understood that combinations other than those shown by way of example could be implemented in the context of the invention, with sealing devices which comprise one or other of the snap-fastening means described above and one or other of the elastically deformable sealing means which will be described below.
In a first embodiment, shown schematically in FIG. 8, the elastically deformable sealing means comprises a flexible sealing sheath 6, overmolded on the plate 5 around the recess 12 in its central zone. Said sealing sheath 6 is provided with a plurality of precut zones 7, each precut zone 7 corresponding to a different gauge of pipe. This particular feature makes it possible to facilitate assembly of the sealing device with the pipes 17 of the heat exchanger 4, and to use a standard sealing part for several sizes of pipes. The production of the elastically deformable sealing means in the form of a flexible sheath, with precut zones, makes it possible either to perforate the sealing sheath to the desired diameter before inserting the pipe 17, or to perforate the sealing sheath directly by tearing the precut zone as a result of the physical stress exerted by the pipe on the unperforated sealing sheath. In the example shown, the various precut zones are formed by the arrangement of the flexible sealing sheath 6 with a diameter which decreases in the direction away from the plate, and by the arrangement of fragile zones along the flexible sheath, each fragile zone location corresponding to the cutout of the sheath to correspond to a pipe of given diameter. More particularly, the sealing sheath may have a series of plateaus 60 the diameters of which decrease in the direction away from the plate, with each plateau, substantially parallel to the plane defined by the plate, connected to a neighboring plateau by a connecting ring 61 substantially perpendicular to the plateau(s). The junction of such a connecting ring with a plateau 60 forms a right angle facilitating the detachment of the plateau having dimensions corresponding to those of the pipe to be inserted. In such an embodiment, at least the sealing sheath 6 should be made of a flexible material, such as EPDM. In an alternative, the plate 5 and the snap-fastening means 8 will be made of a more rigid material capable of withstanding greater stresses, such as polyamide PA66, where appropriate reinforced with glass fibers, while the sealing sheath, overmolded on the plate around the central recess 12, will be made of EPDM. The flexibility of the material forming the sealing sheath allows it to deform as the pipe passes through, the elastic nature of this material helping to press the sheath around the pipe as it passes through, so as to seal the passage area. The elastic return to position of the material forming the sealing sheath can also make it possible to interact with an anti-disengagement stop means formed on the pipe. In a second embodiment, shown in FIG. 6 and FIG. 7 for example, the plate 5 participates in forming, in its plane of extension, the elastically deformable sealing means. This is formed in one piece with the plate 5, by at least two notches 13 extending from the central recess 12 toward the edges of the plate 5. These notches 13 make it possible to form deformable tabs 14, each of substantially trapezoidal shape, over the entire contour of the recess zone 12, thus making the sealing device 3 suitable for accommodating pipes 17 of variable diameter. When a pipe 17 is inserted in such a sealing device, the deformable tabs 14 are bent as the pipe passes through, and the elastic return to position due to their elastic nature allows these tabs to press against the pipe 17 so as to ensure sealing of the system.
Such an embodiment has, moreover, the advantage of simplifying as much as possible the assembly of the front end module, since it does not require the sealing device to be cut to a specific diameter beforehand, with a view to insertion of the pipe. In such an embodiment, the sealing device 3 may be made of a single material allowing elastic deformation of the tabs upon insertion of the pipe, such as an EPDM rubber. A method for assembling the heat exchange system comprising a sealing device as described above, more particularly according to the third embodiment, as shown in FIG. 7 or FIG. 8, will now be described. In a first step, the heat exchanger 4 is inserted in the support frame 1, so that its incoming 17a and outgoing 17b pipes pass through the wall of the support frame 1 at the housings 10 intended to accommodate the sealing devices 3. Next, the sealing device 3 is inserted in the system by sliding along the longitudinal axis defined by the pipe, until it is inserted in the housing 10, or more precisely, in the case of a sealing device according to the third embodiment, until the return walls 21 are inserted in the grooves 11 and the plate 5 is surrounded by the housing 10, and finally the snap-fastening means 8 of the sealing device 3 interact with the complementary fastening elements 9 of the housing 10. In the case of a sealing device 3 the sealing means of which is a sealing sheath 6, the sealing device 3 may be directly inserted on the pipe 17, so that the stress exerted by said pipe 17 perforates the sealing sheath 6 by tearing. Alternatively, the sealing sheath 6 may first be cut to the desired diameter, such a step being optional. Lastly, the ventilation duct 2 is fastened to the support frame 1, so as to close the front end module. The front end module 100 according to the invention may further comprise a shut-off device comprising a set of shut-off flaps capable of pivoting rotatably so as to vary the flow rate of the air stream, said shut-off device being arranged in the ventilation duct 2 upstream of the heat exchanger 4 relative to the flow of the air stream. The shut-off device further comprises a support frame having bearings so as to hold the shut-off flaps. The axes of rotation allow the shut-off flaps to switch from an open configuration to a closed configuration. The open configuration consists in placing (by rotation) the shut-off flaps so that they provide as little opposition as possible to the passage of the air stream while orienting it appropriately. The closed configuration consists in placing the shut-off flaps so that they provide, by means of their front surface, as much opposition as possible to the flow of the air stream F, in conjunction with the other shut-off flaps. According to an embodiment of the front end module 100 that is not shown, the heat exchanger 4 and the support frame 1 may be inclined relative to the shut-off device. In other words, the mid-planes of the support frame 1 and of the shut-off device form a non-zero angle, particularly an angle in an interval of 10° to 80°, more specifically in an interval of 30° to 60°. Such an arrangement makes it possible to reduce the spatial footprint of the front end module 100. It will be understood from reading the foregoing that the present invention proposes a sealing device, intended for a heat exchanger, this sealing device being configured to ensure the sealing of a front end module while facilitating its assembly and disassembly.
The presence of snap-fastening means contributes to easy insertion of said device, while facilitating assembly of the front end module, while the sealing means presented ensure both optimum sealing of the system and standardization of the sealing device. The invention is not limited to the means and configurations described and illustrated herein, however, and also extends to all equivalent means or configurations and to any technically operational combination of such means. In particular, the shapes of the snap-fastening means or the shape of the plate may be modified without detriment to the invention, as long as they fulfill the functions described in this document. The embodiments that are described hereinabove are thus entirely nonlimiting; it will be possible, in particular, to imagine alternative forms of embodiment of the invention that comprise only a selection of the features described above, in isolation from the other features described in this document, if this selection of features is sufficient to confer a technical advantage or to distinguish the invention from the prior art. | 24,991 |
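As an illustrative aside on the precut sealing sheath 6 described above (a sketch under assumed, hypothetical dimensions; the patent itself specifies no values): selecting which precut zone 7 to tear out for a given pipe gauge amounts to picking the narrowest plateau 60 that still accommodates the pipe, so that the elastic EPDM presses closely around the inserted pipe 17.

```python
# Illustrative sketch only; plateau diameters are hypothetical example values.
# The sheath 6 narrows away from the plate 5, each plateau 60 matching one
# pipe gauge, so tearing the right plateau leaves the closest-fitting opening.

PLATEAU_DIAMETERS_MM = [22.0, 18.0, 14.0, 10.0]  # nearest the plate 5 first

def plateau_to_tear(pipe_diameter_mm: float) -> int:
    """Index of the narrowest plateau whose opening still fits the pipe."""
    candidates = [i for i, d in enumerate(PLATEAU_DIAMETERS_MM)
                  if d >= pipe_diameter_mm]
    if not candidates:
        raise ValueError("pipe too large for this sheath")
    return max(candidates)  # deepest, i.e. narrowest, plateau that fits

# Example: a 12 mm pipe would tear out the 14 mm plateau (index 2).
assert plateau_to_tear(12.0) == 2
```

In practice the sheath may also be left uncut, the pipe itself tearing the appropriate precut zone 7 on insertion, as noted in the assembly method above.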
11859927 | The structure and functioning of the control element 10, breechblock stop lever 20, breechblock carrier 30, trigger 50 and the trigger assembly of a firearm that has a breechblock that moves longitudinally in the receiver, or the firearm with at least one of these elements, are explained below in reference to the drawings, which show examples. Not all of the reference symbols are inserted in all of the figures, for purposes of clarity. The same reference symbols apply, however, to all of the figures.

DETAILED DESCRIPTION

Position terms in this document such as “up,” “down,” “front,” “back,” etc. refer to an automatic weapon in which the bore axis is horizontal, and shots are fired toward the front, away from the shooter. FIG. 1 shows an example control element 10 in a schematic view from two perspectives. The control element can preferably be used with one or a combination of the assemblies described in greater detail in reference to FIGS. 2, 3, 4, 5 and 6, comprising the breechblock stop lever, breechblock carrier, continuous firing element, and trigger, for a trigger assembly described in greater detail in reference to FIG. 7. The control element 10 for controlling a breechblock stop lever 20 that can pivot about a first axis of rotation A is described in greater detail in reference to FIG. 2, wherein the breechblock stop lever can move between a standby position for releasing a breechblock carrier that has a control curve and a retaining position for retaining the breechblock carrier. The control element 10, shown in the form of a pivoting control lever, the axis of which is parallel to the first axis of rotation A for the breechblock stop lever 20, has a second axis of rotation B, and two arms 15, 16, which are axially spaced apart in the direction of the second axis of rotation B. A middle piece 18 is located between the first arm 15 and second arm 16, which defines an annular gap 14 with the arms 15, 16, that runs at least in part about the axis of rotation B. The legs of a torsion spring can be brought to bear on the middle piece 18 in this annular gap 14, in particular. A first control section 11 is formed on the first arm 15, which can be pivoted about the second axis of rotation B for the control element 10 by means of the control curve on the breechblock carrier 30. The first control section comprises two contact surfaces 11a and 11b for this, which can be selectively brought into contact with the control curve. The two contact surfaces 11a, 11b face away from one another; the contact surface 11b faces “forward,” while the contact surface 11a faces “backward.” The cross section of the first arm 15 is tapered from the radially inner end to the radially outer end in the region of the first control section 11. The first arm 15 is also stepped such that a thickness of the first arm 15 in the region of the first control section 11 decreases in the axial direction. The first control section 11 can also be regarded as a lever arm. In other words, the control element 10, and therefore the first control section 11, or the lever arm, can rotate about the second axis of rotation B in two directions, specifically in a first direction 91 and in an opposing second direction 92. There is also a second control section 13 on the first arm 15, which has a radial projection 13a that ends in a point, which can be controlled by a control surface 55a and a control edge 55b on a trigger 50 described in greater detail in reference to FIG. 6, in order to rotate it with the breechblock stop lever 20 coupled to the control element 10 about the first axis of rotation A.
The radial projection 13a has an edge that is parallel to the axis of rotation B. The first control section 11 and the second control section 13 are also opposite one another radially. There is a third control section 12 on the second arm 16, which can be brought into contact with the breechblock stop lever 20. The third control section 12 has a projection 19 that extends in the axial direction, which acts as a claw in the present case. Two contact surfaces 19a, 19b are formed on the claw, facing the direction of rotation for the control element. The claw 19 is formed on an end surface of the second arm 16 facing away from the first arm 15. The contact surface 19a acts in the direction of rotation 91, and the contact surface 19b acts in the opposite direction of rotation 92. The second section 13 has a first region 13a that extends radially, which bears on the control surface 55a, or control edge 55b, of the trigger 50. Such an “elongated” control element 10 enables the activation by the trigger 50, as well as the interruption or termination of this activation. The third control section 12 has a second radial projection 12a, which is eccentric or arched, and forms the lower end of the second arm 16, or the third control section 12. The second radial projection 12a, which can also be referred to as a radial downward extension, is used to brace against a non-rotating component, e.g. an element. The radial incline of the end region 12a of the third control section 12 is used to further bring the rear first 26 and second 27 stop arm sections of the breechblock stop lever 20 to the standby position when the breechblock moves forward, as shall be explained in greater detail in reference to the following figures. The eccentric third control section 12 transitions at its left end into another radial projection. Unlike the other projection 13a, this radial projection is blunt, or is semicircular, and does not come to a point. This second radial projection serves substantially as a lateral stop to keep the spring element, in particular the torsion spring, from slipping out of the annular gap 14. The control element 10 also has a hole 17 that is coaxial to the axis of rotation B. In other words, this hole 17 passes through both end surfaces. A fastening element such as a bolt can be received in this hole, which then supports the control element 10 on the breechblock stop lever 20. Such a control element 10 can hold the breechblock stop lever 20 in its standby position through the control of the breechblock carrier 30 when the breechblock carrier is moving forward, and allow the breechblock stop lever to rotate in this position during the return of the breechblock carrier. FIG. 1a shows a view of the outer end surface of the second arm 16 of the control element. This gives a better view of the eccentric or arched section 12a. The eccentric section 12a extends circumferentially along a curve over an angle α of approx. 120°. As can be readily seen, the middle M of the curve is outside of the axis of rotation B. The eccentric section 12a of the third control section 12 transitions at its left end, at the transition P, into another radial projection 12b, which acts as a lateral stop for a torsion spring. FIG. 2 shows an example breechblock stop lever 20 in a schematic perspective view (left) and in a schematic lateral section or longitudinal section (right). The breechblock stop lever 20 is shown with the control element 10 from FIG. 1, and is preferably intended for a breechblock carrier 30 described in greater detail in reference to FIG. 4.
The breechblock stop lever 20 has a first axis of rotation A, about which it is rotatably supported. In other words, the breechblock stop lever 20 can rotate in two directions about the first axis of rotation A, specifically in a third direction 93 and in an opposite, fourth direction 94. The breechblock stop lever 20 also comprises two parallel fastening arms 21, 22, which extend substantially radially to the first axis of rotation A, and receive a control element 10, which has a second axis of rotation B. Both a first arm 21 and a second arm 22 on the breechblock stop lever 20 have holes 23 on their respective ends. A bolt 81 is inserted through the holes 23 to support the control element 10 on the breechblock stop lever 20. Aside from the projection 19, the control element 10 is located entirely axially within the fastening arms 21, 22. In other words, the control element 10 is integrated in the breechblock stop lever 20. The second fastening arm 22 has two stops 24, 25. Both stops 24, 25 comprise a bearing surface, on which the projection 19 on the control element 10 can be brought to bear. The stops 24, 25 are formed by a material removal, wherein the material removal forms a C-shaped curve about the second axis of rotation B. The projection, which is substantially the same thickness as the second fastening arm, accordingly moves along a curved track about the second axis of rotation B, and can rotate freely between the stops 24, 25. The end of the breechblock stop lever 20 in the region of the stops 24, 25 is also referred to as the end of the fastening arm or breechblock stop lever on the control element side. If the control element 10 rotates in the first direction 91 to a certain degree, the projection 19 strikes the bearing surface of the stop 24 at its bearing surface 19a. If the control element 10 rotates in the other direction 92 to a certain degree, the projection 19 strikes the bearing surface of the stop 25 at its bearing surface 19b. According to this example, the range of rotation is approximately 160°. This means that the control element 10, and therefore the control section 11, can be rotated or pivoted 160° from one stop 24 to the other stop 25 and back. Other angular ranges are likewise conceivable. The breechblock stop lever also comprises a stop arm, which is divided into two stop arm sections 26, 27. The stop arm sections 26, 27 extend in substantially opposite directions to the fastening arms 21, 22 from the first axis of rotation A. The stop arm sections 26, 27 snap in place in corresponding catches in the breechblock carrier 30. Each of the stop arm sections 26, 27 has a respective stopping surface 26a, 27a for this. The stop arm sections 26, 27 are parallel to one another, and connected to one another at their ends by a web 28, which increases the strength of the structure. The web 28 is placed such that it can pass by the control curve of the breechblock carrier without coming in contact therewith. It can be readily seen that the web 28 is located in a lower region of the ends of the stop arm sections 26, 27. The cross section of the stop arm sections 26, 27 and the web forms an upright U. The distance between the two stop arm sections is greater than the width of the control curve. In other words, there is space, or a gap 29, between the stop arm sections 26, 27, that extends longitudinally, such that the control curve can extend into, or enter, the space 29 during the return of the breechblock. The end of the breechblock stop lever 20 in the region of the stopping surfaces 26a, 27a is also referred to as the end of the stop arm section or the breechblock stop lever at the stop arm side.
The breechblock stop lever 20 also has a hole that is coaxial to the first axis of rotation A, through which a bolt 82 is inserted. The breechblock stop lever is attached to a non-rotating component, such as a handle or receiver, or a part connected thereto, by means of the bolt, such that it can rotate about the first axis of rotation A. To ensure that the control element 10 does not rotate in an uncontrolled manner about its axis of rotation B, there is a torsion spring 8 wound around the first axis of rotation A, the legs of which at least partially encompass the middle piece 18 of the control element 10 in the annular gap 14 in the control element. To transfer forces effectively, the middle piece has two straight bearing surfaces, against which the legs come to bear. The control element 10 is shown in FIG. 2 in a middle position in which it is retained by the spring 8. Seen in a longitudinal section, the middle piece 18 ends in a triangular profile, wherein the point is at a side of the axis of rotation B facing away from the first axis of rotation A, as can be readily seen in the image on the right in FIG. 2. It can also be readily seen in the image on the right in FIG. 2 that the breechblock stop lever 20 has a hook or claw 26b on a side of the stop arm section 26 facing away from the breechblock carrier 30, which faces toward the first axis of rotation A, for receiving a corresponding claw on a continuous firing element 40. FIG. 3 shows the control element 10 supported in the breechblock stop lever 20 shown in FIG. 2, in further views, sections, and perspectives. The control element is in a middle position in the left-hand column. This corresponds to the position shown in FIG. 2 and explained in reference thereto. The left-hand column shows the breechblock stop lever 20 from top to bottom in a schematic side view and in a lateral section. The middle position is also referred to as the vertical position. In the middle column, the control element 10, or the first control section 11, is as far back as possible, i.e. it is rotated in the direction 91. The projection 19 comes in contact at its stop surface 19a in this position with the stop 24 on the breechblock stop lever 20. The control element 10, or the first control section 11, is at the front in the right-hand column, i.e. it is rotated in the direction 92. In this position, the projection 19 comes in contact at its stop surface 19b with the stop 25 on the breechblock stop lever 20. The right-hand column also shows a cross web 26c, on which a leg of a torsion spring can bear. The middle and right-hand columns show the breechblock stop lever from top to bottom in a schematic side view, in a lateral section, and in a perspective. FIG. 3 also shows the middle piece in a side view, in an example. As can be readily seen in this side view, the middle piece 18 may have a hexagonal cross section with partially rounded corners. The substantially parallel sides form the bearing surfaces referred to in reference to FIG. 2. The lower surface is longer than the upper surface. The profile ends at its left side in a triangular profile, as described in reference to FIG. 2, the point of which (front point of the triangle) is at a side of the axis of rotation B facing away from the first axis of rotation A. In other words, the respective left-hand ends of the upper and lower sides of the hexagonal profile are connected to one another via two side lines, which span an angle of approx. 80°-90°. At its right side, the hexagonal profile ends in a triangular profile with rounded corners.
In other words, the respective right-hand ends of the upper and lower sides of the hexagonal profile are connected to one another via two side lines spanning an angle of approx. 135°. The rounded corners are also referred to as rear triangle corners. The side line that connects the rear triangle point with the right-hand end of the upper side is in part at a clearly smaller distance to the second axis of rotation B than the side that connects the rear triangle point to the right-hand end of the lower side. The second axis of rotation B, or the second axle, is displaced clearly to the rear, i.e. the distance between the front triangle point and the second axis of rotation B is clearly greater than the distance between the second axis of rotation B and the rear triangle point. This geometry of the middle piece 18 affects the deflection of the control element 10, or the first control section 11, to both the rear and the front. The preferred geometry and the rear displacement of the second axis of rotation B define the respective points of force introduction, or define the leverages in interaction with the torsion spring 8, and result in having to overcome less spring force when deflected to the rear than when deflected toward the front. In other words, the return force of the spring when in the forward position is greater than when in the rear position. This is advantageous because the control element 10, or the first control section 11, can be quickly rotated back to its middle position after the firing of a shot and the associated acceleration of the breechblock toward the front. This enables a reliable control of the first control section 11 by the breechblock carrier 30, in particular at the start of the return of the breechblock. It can thus be readily seen in the three sectional illustrations that the torsion spring 8 is rotated in relation to the middle position in both the deflection to the rear and toward the front, wherein the rotation of the torsion spring 8 in the forward deflection is greater than in the deflection to the rear, resulting in the greater return force specified above. FIG. 4 shows the breechblock carrier 30 in an example, shown in a schematic view from a perspective. The perspective is aimed at the lower surface of the breechblock carrier, i.e. toward the side facing the breechblock stop lever. The breechblock carrier 30 has three catches 31, 32, 33 on its lower surface, each of which has a locking surface 31a, 32a, and 33a, respectively, by means of which the breechblock stop lever 20 can catch the breechblock carrier 30. The catches 31, 32, 33 have a triangular profile, which facilitates the retaining of the breechblock carrier 30. The locking surfaces 31a, 32a, 33a correspond to the contact surfaces 26a, 27a on the breechblock stop lever 20. The front locking surface 31a is the so-called main locking surface. It can be readily seen that the catches 31, 32, 33 are divided by two longitudinal grooves 34, 35a into two catch sections each. The rear groove 34 extends from the rear of the breechblock carrier 30 to the rear end of the rear projection 38, and divides the middle catch 32 and the rear catch 33 into two catch sections in each case, such that a left-hand and a right-hand catch section are formed in each case. The groove 35a divides the front catch 31 into two catch sections, such that a left-hand and a right-hand catch section are formed here as well. There is also a groove 35b between the front and middle projections 36 and 37. The grooves 35a and 35b are also referred to as a so-called double-groove.
Another groove 35c can also be seen in the projection, which begins at the front end of the front projection 36 and ends shortly thereafter at the front side of the breechblock carrier 30. The widths of the grooves 34, 35a, 35b and 35c are at least as wide as that of the first arm 15, such that the first arm 15 can extend at least in part into the grooves 34, 35a, b, c when controlled accordingly. The breechblock carrier 30 also has the aforementioned control curve for controlling the control element 10 on its lower surface. The control curve is formed by three projections 36, 37, 38 extending in the radial direction of the longitudinal axis. The projections 36, 37, 38 have a rectangular cross section, wherein the respective length s is greater than the respective width w thereof. The length s to width w ratio is greater than 4:1 in this example. The ratio of the width of the breechblock carrier 30 to the width w of the projections 36, 37, 38 is approx. 8:1. The three projections 36, 37, 38 are also arranged in a straight line. It can also be readily seen that the respective projections 36, 37, 38 are interrupted axially by the double-groove 35a, 35b, and end at the rear in the long groove, and at the front in the short groove 35c. The distances between the rear locking surface 33a and the rear double-groove 35a, the middle locking surface 32a and the front double-groove 35b, and the front locking surface, i.e. the main stopping surface 31a, and the short groove 35c are the same, and correspond to the maximum distance between the rear edge of the first arm 15, i.e. the first contact surface 11a of the control section 11, and the stopping surfaces 26a, 27a on the breechblock stop lever 20 when the control element is vertical. The projections 36, 37, 38 can slide through the space 29 defined by the stop arms 26, 27 during the return of the breechblock, without touching the rear part of the breechblock stop lever or impeding the movement of the breechblock carrier 30. The front projection 36 and the middle projection 37 are located in front of the first catch 31, and the rear projection 38 is located between the front catch 31 and the middle catch 32. The maximum distances between the front ends of the projections 36, 37, 38 (rear ends of the grooves 35c, 35b, and 35a) and the respective locking surfaces 31a, 32a, 33a are functionally the same as or less than the distance between the first arm 15, in particular the first contact surface 11a, and the stopping surfaces 26a, 27a on the breechblock stop lever 20 when the control element is vertical. As a result, the rear part of the breechblock stop lever 20 can be pushed up by the first torsion spring (cf. FIG. 10, time t2), and the rear part of the breechblock stop lever 20 can lock in place successively at its stopping surfaces 26a, 27a when the breechblock carrier 30 is manually slid back, after releasing the third control section 12 from the second element 6 (cf. FIG. 10, time t2) and releasing the second control section 13 from the control surface 55a of the trigger 50 (if the trigger is actuated), as soon as the spring-loaded rear part of the breechblock stop lever 20, located in its retained position, comes in contact with the catches 33, 32, 31 and passes over them. To also ensure such a retaining function of the breechblock stop lever 20 when the breechblock is in an intermediate position (e.g. if the forward movement of the breechblock is disrupted), the projections 38, 37, 36 forming the control curve are interrupted by the grooves 35a, 35b, and 35c.
The first arm15can enter these grooves vertically (relaxed), and can then be moved to the back again when the breechblock is returned manually. The third control section12and the second control section13can thus be released from the second element6, or from the control surface55aon the trigger50(when the trigger is actuated), resulting in the rear part of the breechblock stop lever20being released again in a spring-loaded manner in its retained position. FIG.5shows how a continuous firing element40can be used in a trigger assembly, in a schematic perspective view. The continuous firing element40has a third axis of rotation C, and three arms41,42,43that extend radially in relation to the third axis of rotation C. A first arm41has a substantially rectangular cross section, and ends at its radial end in the shape of a semi-circle. The first arm41comes in contact with a control surface on a trigger50, as described by way of example in reference toFIG.6. A second arm42has a substantially triangular cross section. The second arm42, which is located axially between the first arm41and the third arm43, can be retained in place or released by a safety. To prevent the continuous firing element40from rotating in the direction95, i.e. to engage the safety, a safety lever can be mechanically brought in contact with a first corner44of the second arm42. The second arm42has a projection46at its second corner45that extends in the axial direction of the third axis of rotation C, which engages with the hook or claw26bon the breechblock stop lever20. The projection46extends axially toward the third arm43. The projection46can likewise be called a claw. The third arm43has a substantially rectangular cross section and ends at its radial end in the shape of a semi-circle. The third arm43is intended in particular to be subjected to the force of a torsion spring to apply a constant torque to the continuous firing element in the direction95. Such a torsion spring is indicated by the reference symbol4inFIG.7. The direction counter to the spring force is indicated by the arrow96. FIG.6shows a trigger50that can be used in a trigger assembly, in a schematic perspective view. The trigger50has a main body51that can pivot about a fourth axis of rotation D, with a thickness y and a shape known per se. The trigger50has an actuating element57that has a bearing surface52on its rear surface for a leg of a torsion spring. Such a torsion spring is shown inFIG.7and indicated by the reference symbol3. The torsion spring pushes the trigger50forward in the known manner, i.e. in the direction97. An actuation of the trigger toward the rear results in a rotation about the fourth axis of rotation D in the direction98. The actuating element57can be brought to bear at the rear surface on an element in the handle housing in order to limit the rotation of the trigger when it is actuated. Such an element is shown by way of example inFIG.8, and indicated there as the fifth element9a. The trigger also has a first section53, which can be kept in place by means of a safety lever in order to prevent movement of the actuating element57. The first section forms an elongated and curved lever, or arm, which tapers toward the end53a. The tapered end53ais particularly suitable for preventing a collision with the stop, or projection61. The tapered end53acan also be designed such that it can be retained by a projection61in the shape of a claw on the safety lever. Such a safety lever is shown inFIG.7and indicated there with the reference symbol60.
The trigger50also has a second section54, which is used in turn for the direct control of the control element10and thus the indirect control of the breechblock stop lever20. Furthermore, the rotation of the continuous firing element40can be blocked and allowed by means of the second section54. The first section53and second section54are formed by a material removal in the main body51and likewise exhibit a thickness y. The second section54comprises a projection55that extends axially, with a greater thickness z than the main body51, for controlling the control element10. The projection55also has a control surface55aand a control edge55bon its upper surface, i.e. the side facing toward the control element10. The control surface55ais intended for guiding, or controlling, the control section13and in particular the projection13aon the control element. There is a hole56that is coaxial to the fourth axis of rotation D, through which a pin, rod, or bolt can be inserted in order to rotatably support the trigger50on a non-rotating component, e.g. a housing on a handle. This pin, rod, or bolt is preferably fastened to a wall of the handle on the left and right sides of the weapon, such that the trigger is held securely in place, and can thus be better guided. FIG.7shows an example trigger assembly70for an automatic weapon, not shown in greater detail, such as a machine gun, in a schematic view of a cross section thereof. According to this example, the trigger assembly70comprises the control element10shown inFIG.1, the breechblock stop lever20shown inFIG.2, the continuous firing element40shown inFIG.5, and the trigger50shown inFIG.6, such that reference can be made to the explanations above. The trigger assembly70also comprises a safety in the form of a safety lever60. There are also torsion springs2,3,4,8, and four elements, or stops5,6,7,9secured to the housing. In addition to the trigger assembly70, the breechblock carrier30shown inFIG.4is also shown, which is engaged with the trigger assembly70. The trigger50can be moved in the known manner between a non-actuated position and an actuated position. The stop arm sections26,27of the breechblock stop lever20are locked in place in a front catch31, which is the main catch, on the breechblock carrier30, such that the breechblock carrier30is held in place by the breechblock stop lever20. The control element10is supported on the breechblock stop lever20such that it can rotate about the second axis of rotation B, and can be controlled from below by the trigger50and from above by the breechblock carrier30. Because the control element10is rotatably supported on the breechblock stop lever20, the control element10can also rotate about the first axis of rotation A for the breechblock stop lever when the breechblock stop lever20moves. Such a control element10holds the breechblock stop lever20down when the breechblock moves forward, and releases the breechblock stop lever20when the breechblock returns, by interacting with the control surface55aon the trigger50and the element6on the handle housing. In the continuous firing mode, the breechblock stop lever20is held down by the continuous firing element40while shots are fired. Regarding the further construction of the trigger assembly70: A first leg of a first torsion spring2braces against a first element5, and presses with a second leg against the breechblock stop lever20.
The second leg of the first torsion spring2bears on a bearing surface on the breechblock stop lever20, formed in this example by a cross web26con the hook26bat the start of the first and second stop arm sections26,27(cf.FIG.3, lower right). The so-called "torsion spring for the breechblock stop lever" causes a torque in the direction94. In other words, the breechblock stop lever20is pushed upward by the torsion spring2into the retaining position. The first torsion spring2is wound around a third element7. A first leg of a second torsion spring3braces against a second element6and presses with a second leg against the trigger50. The second leg bears on the bearing surface52of the trigger50(cf.FIG.6). The so-called "torsion spring for the trigger" causes a torque in the direction97. In other words, the spring force of the second torsion spring3counteracts a movement of the trigger50. To retain the trigger in the non-actuated position, i.e. to generate a torque acting against the spring force of the torsion spring3, there is an element9fixed in place on the housing, which is located at an upper end of the actuating element57, against which the trigger50is braced. The second torsion spring3is likewise wound around the third element7. A first leg of a third torsion spring4braces against the element5, and presses with a second leg against the continuous firing element40. The second leg bears on a bearing surface formed by the third arm43on the continuous firing element40, in particular such that it encompasses the third arm in the axial direction. The so-called "torsion spring for the continuous firing element" is wound around the third axis of rotation C, in particular about a sleeve-like part of the continuous firing element40, and causes a torque in the direction95. The force of the first torsion spring2(torsion spring for the breechblock stop lever) is greater than the force of the second torsion spring3(torsion spring for the trigger), and the force of the second torsion spring3is greater than the force of the third torsion spring4(torsion spring for the continuous firing element). A fourth torsion spring8is wound around the first axis of rotation, and holds the control element10in a middle position. The control element10can pivot in both directions91,92about the second axis of rotation B, counter to the force of the fourth torsion spring8. After it has been actuated, the trigger50comes in contact with the second control section13, or the radial projection13aon the control element10, via its control surface55aformed on the projection55. The projection55prevents the continuous firing element40from rotating about its own axis of rotation C when the trigger is not pulled. If the trigger50is actuated, the bearing surface52on the actuating element57bears on a fifth element9afixed in place on the housing, which is shown inFIG.8, in order to limit the movement of the trigger50toward the rear. The control curve on the breechblock carrier30can deflect the first control section11counter to the force of the fourth torsion spring8in the direction92when it slides forward, and counter to the force of the fourth torsion spring8in the direction91when it slides back. A safety lever60that can pivot about its axis E, which is combined with the firing selection lever, holds the trigger50in place.
The safety lever60can also assume the settings "single shot" and "fully automatic firing." In the "single shot" setting, the safety lever60allows the trigger50to move, and also holds the second arm42of the continuous firing element40in place. In the "fully automatic firing" setting, the safety lever60releases the trigger50as well as the continuous firing element40. The state of the weapon, as shown inFIG.7, is loaded and secured. Starting from the first setting (secured) shown inFIG.7, the safety lever60can be rotated in the direction99to a second setting (single shot). From this setting, it can be rotated in the direction99to its third setting (continuous firing). Counter to the direction99, the safety lever can be rotated back from the third setting to the second setting, and from there to the first setting. FIG.8shows the trigger50shown inFIG.6, interacting with a safety60in the form of a safety lever, in other views, sections, and perspectives. The trigger is shown from the left inFIG.6, but inFIG.8, the trigger50and the safety lever60are shown from the right. The safety lever60can assume three settings: a first setting or position is shown in the left-hand column inFIG.8. In this setting, the safety lever60is behind the first section53of the trigger50, and mechanically prevents it from being fully actuated. The safety lever60is then in the "safety on" setting. A second setting is shown in the middle column inFIG.8. The safety lever60is in the setting for "single shot." The safety lever60releases the trigger50, and simultaneously holds the continuous firing element40(not shown) in its position. It can be readily seen how the stop61on the safety lever60releases the tapered end53aof the trigger50. The trigger50is fully actuated in the direction100, and bears with the rear surface of the actuating element57on an element9afixed in place on the housing. The element9aprevents the trigger50from further movement to the rear, in the direction100. A third setting is shown in the right-hand column inFIG.8. The safety lever60is in the setting "continuous firing." The safety lever60releases the trigger50, as well as the continuous firing element40(not shown). The trigger50is fully actuated in the direction100, and the rear surface of the actuating element57bears on the element9afixed in place on the housing. A sixth element9bin the form of a pin can be seen in all three illustrations, about which the trigger50can pivot on its axis of rotation D. The element9bcan also be referred to as an axle. FIG.9shows parts of the trigger assembly70shown inFIG.7, specifically the control element10, the breechblock stop lever20, the continuous firing element40, the trigger50, and the safety lever60, in a schematic view (from the left) and in a perspective. The safety lever, or firing selection lever60, is in the setting "fully automatic firing" in both columns. The trigger50is in a non-actuated setting in the left-hand column, and holds the continuous firing element40in place. As can be readily seen, the control element10is in its middle position. The trigger50is released, and braced on the element9. The right-hand column shows the trigger50in an actuated position. The continuous firing element40rotates in the clockwise direction (direction95) about its own axis of rotation C, due to the spring force of the torsion spring4. The claw46on the continuous firing element40and the hook26bon the breechblock stop lever20then engage in one another.
The breechblock stop lever20is held "down" until the trigger50is released. After the trigger50is released, it brings the continuous firing element40back into the starting position. While the trigger50is actuated, the control element10is moved back and forth by the breechblock carrier30. In the snapshot on the right inFIG.9, the control element10has been moved forward, i.e. rotated in the direction92. It can also be readily seen in the right-hand column that the axes of the three elements7,9, and9bare spaced apart vertically. There is therefore a distance n between the axis of the element7and the axis of the element9, and a distance m between the axis of the element9and the axis of the element9b. The dynamic interaction of the individual assemblies shall be described in greater detail below in reference toFIGS.10to17c, in particular in reference to the functions "loading in the secured state" (FIGS.10a,10b), "single shot" (FIGS.11ato11d), "disruption in forward movement of the breechblock" (FIGS.16a,16b), and "fully automatic firing" (FIGS.17ato17c). FIGS.10a,10bshow the loading sequence for the weapon, or the trigger assembly70shown inFIG.7, in the secured state, in reference to four successive points in time t1, t2, t3, and t4. Time t1: The safety lever60is in its first position, i.e. securing the trigger50, such that the safety is on. The control element10is in its middle position, and is not in contact with the breechblock carrier30. The third control section12is released from the second element6fixed in place on the housing. The breechblock stop lever20is rotated by the first torsion spring2in the counterclockwise direction94, until the second control section13and the third control section12on the control element10bear on the upper surface of the fourth axis of rotation D. The stopping surfaces26aand27aon the breechblock stop lever20are then in the "catch position." The breechblock, or the breechblock carrier30, is moved backward by hand, in the direction100. Time t2: While the breechblock carrier30is moved back, the catch33comes in contact with the stop arm sections26,27on the breechblock stop lever20, and pushes the breechblock stop lever20down, i.e. counter to the force of the torsion spring2, in the direction93. By pushing the back end of the breechblock stop lever20down with the catch33, the front end of the breechblock stop lever20, together with the control element10, is lifted briefly, i.e. temporarily. During this temporary lifting as a result of the catches33,32,31, the projections38,37,36forming the control curve successively pass over the control element10, pushing it backward each time. This has no effect on the breechblock stop lever20in the present case, however, because the control element10bears neither with its second control section13on the control surface55aon the trigger50, nor with its third control section12on the second element6, i.e. the control section11repeatedly rotates about its own axis of rotation B, in the direction91. Time t3: While the breechblock carrier30continues to move backward, the breechblock stop lever20catches in the rear catch33. If the breechblock is moved further back, the breechblock stop lever20catches in the same manner in the middle catch32, and then in the front catch31. As soon as the projection38no longer passes over the control element10, and it is in the rear groove35awith its first control section11, it is moved back in the second direction92to its starting position by the fourth torsion spring8. This pivotal movement (i.e.
deflection of the control element and pushing back of the control element to the middle position by the spring) of the first control section11on the control element10is repeated during the manual return of the breechblock, when the middle projection37, the front groove35b, the front projection36, and the groove35con the front side of the breechblock carrier30pass over the control section11. Time t4: The breechblock carrier30is now locked in place in its front catch31. The breechblock stop lever20holds the breechblock carrier30in this position. The control element10is in its middle position. The weapon is loaded and secured. FIGS.11a,11b,11c, and11dshow the single shot sequence of the weapon, or the trigger assembly70shown inFIG.7, based on nine successive points in time t4, t5, t6, t7, t8, t9, t10, t11, and t12. Time t4: The weapon is loaded and secured. Time t5: The safety lever60is in its second setting, the single shot position, which releases the trigger50, and holds the continuous firing element40in position. The trigger50is actuated in the direction100, and pushes the control element10upward, in the direction of the arrow, such that the stop arm sections26,27of the breechblock stop lever20rotate in the direction93, and are moved down, in the direction of the arrow. Time t6: The trigger50is fully actuated. The breechblock stop lever20is moved out of its retained position and into its standby position, in which the breechblock carrier30is then released. The breechblock carrier30accelerates forward as a result of the force of the closing spring, in the direction101. Time t7: The control section11is moved by the front projection36toward the front (movement92). The control element10slides with its projection13aand its second control section13over the control surface55aon the pressed trigger50, and moves with its eccentric third control section12, up to its projection12b, over the element6fixed in place on the housing. The projection19on the control element10bears with its stop surface19bon the stop25on the breechblock stop lever20. Because of the eccentric design of the control section12up to its projection12b, the leverage effect is increased, and the stopping surfaces26aand27aof the breechblock stop lever20are pressed down further, disengaging from the stopping surfaces31a,32a,33aon the breechblock carrier30in the third direction93, toward the standby position. Time t8: The breechblock carrier30is in its frontmost position. The control lever10is pushed into its middle position by the torsion spring8in the breechblock stop lever20. The control lever10braces at this point against the element6as well as the control surface55aon the actuated trigger50. There is therefore a spacing between the axis D and the control sections12,13. After the breechblock has moved forward as far as possible, and a cartridge has been loaded and fired, the breechblock carrier30is returned by the gas pressure from the fired cartridge, as shown in reference to the following point in time t9. Time t9: A cartridge is fired and the breechblock, or breechblock carrier30, returns, due to the gas pressure from the fired cartridge, in the direction100. The contact surface11bon the first control section11is moved back by the rear projection38(rotation91). The second control section13, in particular the projection13aon the control element10, slides over the control edge55bon the trigger50, and "falls" down. The breechblock stop lever20is rotated upward, toward its retained position.
Time t10: The breechblock carrier30moves in direction100toward a rear end stop, and is pushed forward again, in direction101, by the closing spring force. The breechblock stop lever20is then in its retained position. The breechblock reverses in its rearmost position and, in the single shot setting of the firing selection lever60, is retained by the breechblock stop lever20in its front catch31, as shown below at time t11. If the trigger50is released and the firing selection lever60set to "safety on," a new belt can be placed in an empty cartridge feed, such that the weapon is again ready to be fired. If the breechblock is returned while the trigger50is pressed, the projections36,37,38move the first arm15of the control lever back, at which point the second control section13of the control lever is moved with its projection13aover the control surface55aand the control edge55bof the pressed trigger50, and the control lever10tips down together with the front end of the breechblock stop lever20, while the rear part of the breechblock stop lever20tips up. The breechblock carrier30passing over the breechblock stop lever20is then retained in its front main catch31when it moves forward again, and can only be released again when the trigger50is actuated anew. Time t11: The breechblock stop lever20locks in place in the front catch31on the breechblock carrier30, and holds the breechblock carrier30in this position. Time t12: The trigger50is released. The torsion spring8pushes the control element10back into its starting position, specifically its middle position. The weapon is now loaded and not secured. The single shot firing can then be repeated. FIG.12shows the weapon at time t8, in an enlargement. This view shows, in particular, the distance between the axis D and the control section12or13. The distance is indicated by the letter k. The control lever10is braced at this point on both the element6(not visible inFIG.12) and the control surface55aon the actuated trigger50. FIGS.13and14show the breechblock stop lever20in a front position of the breechblock, with a released, i.e. not actuated, trigger50. The firing selection lever60is in the setting "single shot" inFIG.13, and it is secured inFIG.14. In bothFIG.13andFIG.14, the control lever10braces with its second and/or third control section13,12against the axis D, or the element9b, for the trigger50. The breechblock is then returned manually and subsequently released. FIG.15shows the breechblock stop lever20with the breechblock retained while moving forward, with the trigger50released, and the safety lever60engaged. The control lever10does not bear, as it does inFIGS.13,14, on the axis D for the trigger50with its second and third control sections13,12when the breechblock stop lever20is in the locking position, in order that the breechblock stop lever20can fully engage with its stopping surfaces26a,27ain the catches31,32,33on the lower surface of the breechblock carrier30, until reaching the stop. It can be seen how the stop arm section27bears with an upward facing surface of the stop surface27aentirely on the lower surface of the breechblock carrier30, and with a backward facing surface of the stop surface27aentirely on the stop, or catch31. In other words, the breechblock stop lever20braces against the breechblock carrier30, to ensure that it makes a full-surface contact in the catch31on the breechblock carrier30. The distance between the axis D and the control sections12,13is indicated by the letter k.
The distance k inFIG.15is less than the distance k inFIG.12. FIGS.16aand16bshow the sequence of a disruption in the forward movement of the breechblock on the weapon, or trigger assembly70shown inFIG.7, at five successive points in time t13, t14, t15, t16, and t17. Time t13: The trigger50is actuated (direction100). The breechblock is stopped from moving forward due to a disruption, in particular a loading disruption. Time t14: The trigger is released due to the disruption (and then moves in direction101), and the breechblock is returned by hand, i.e. slid back. As the breechblock, or breechblock carrier30, moves back in the direction100, the control section11extends into the intermediate space, or the second groove35a, which separates the front catch31into two catch sections, due to the spring force of the fourth torsion spring8. The weapon can also be secured during this procedure for correcting a disruption, and all other operations can be carried out in the secured state. Time t15: The middle projection37on the control curve controls the second contact surface11bon the first control section11, and rotates it in the clockwise direction (direction91). By rotating the control section11in direction91, the third control section12on the control element10, which has so far been on the upper surface of the second element6fixed in place on the housing, is then released from this element6, at which point the breechblock stop lever20, which is subjected to the tension of the first torsion spring2, can dip down at the control element end, and rise up at its stop arm end (fourth direction of rotation94). When the trigger50is not pressed, the control element end of the breechblock stop lever20can dip down directly, and the stop arm end of the breechblock stop lever20can rise up until this stop arm end bears on the lower surface of the breechblock carrier30. When the trigger50is pressed, the control section11is rotated as the breechblock continues to move back until the second control section13of the control element10lying on the control surface55aof the trigger50slides over the control edge55bof the trigger50with its projection13a, and is released from the actuated trigger50, after which the breechblock stop lever20can assume the same position that it is in when the trigger50is not pressed. Time t16: The control section11is moved away from the middle projection37, toward the back, until the third control section12on the control element10, which bears on the element6, slides down over the element6. As a result, the breechblock stop lever20is moved further upward, and the breechblock carrier30can lock in place in each of the three catches31,32,33during the backward movement. Time t17: The breechblock carrier30is in the starting position. The weapon is loaded and not secured. FIGS.17ato17cshow the fully automatic firing sequence for the automatic firearm, or the trigger assembly70shown inFIG.7, in nine successive points in time, t18, t19, t20, t21, t22, t23, t24, t25, and t26. Time t18: The safety lever60is in its third setting, i.e. the continuous firing position. Unlike in the single shot position, the continuous firing element40is now no longer blocked by the safety lever60. Only the spring force of the second torsion spring3that pushes the trigger50forward holds the continuous firing element40in position, counter to the spring force of the third torsion spring4. Time t19: The trigger50is actuated, and pushes the control element10upward. This results in a downward rotation of the breechblock stop lever, i.e.
in direction93. The continuous firing element40is rotated clockwise, i.e. in direction95, by the third torsion spring4when the trigger50is actuated. Time t20: The trigger50is fully actuated. The breechblock carrier30is released and moved forward by the force of the closing spring. Time t21: The front projection36on the breechblock carrier30comes in contact with the control section11, and moves it forward, i.e. in direction92. The third control section12of the control element10passes over the element6at this point. Because of the eccentric design of the section12aup to the projection12bin the third control section12, the breechblock stop lever20is moved downward at its stop arm end (direction93), i.e. the breechblock stop lever20dips down further at its stop arm end. The continuous firing element40engages in the claw26bon the breechblock stop lever20with its claw46, and retains the breechblock stop lever20in this lower position at its stop arm end, corresponding to the standby position of the breechblock stop lever20. Time t22: The fully automatic firing setting is now obtained: The trigger50is actuated. The breechblock stop lever20is held in its standby position by the continuous firing element40, and is in the lower position. The breechblock carrier30moves back and forth freely in the fully automatic firing mode. The control section11is rotated back and forth in both directions91,92by the control curve regions36,37,38, although the control section11has no function in this mode, and can swing freely. Time t23: The fully automatic firing function is stopped. The trigger50is released. The breechblock carrier30moves forward again after reaching the rear end stop. In this snapshot, the trigger50is released after the first catch31has been passed. By releasing the trigger50, the continuous firing element40is rotated in the counterclockwise direction, i.e. in direction96. As a result, the claw46on the continuous firing element40is rotated away from the claw26bon the breechblock stop lever20, and releases it. Time t24: The breechblock moves forward for the last time. The control section11on the control element10is moved forward in the rotational direction92(second rotational direction). The breechblock stop lever is moved downward at its stop arm end in the direction93. Time t25: The breechblock returns after firing the last shot. The breechblock can be locked in place in each of the three catches31,32,33, if the return is too weak. Time t26: The starting point, like that at time t18, is now obtained. Remarks regarding time t24: If the trigger50is released during the return movement of the breechblock, the breechblock stop lever20locks the breechblock carrier30in place after the buffer contact in the forward movement, and no further shots are fired. The functional sequences at times t23and t24no longer occur in this case, and the breechblock carrier remains retained in its front catch31, i.e. the main catch, as shown at time t26. FIG.18shows an automatic weapon MG, model MG5, with the trigger assembly70and the breechblock carrier30described above. The MG5 is an indirect gas-operated reloader, with a caliber of 7.62×51 mm. The weapon is shown in a side view, in which some of the structural details of the example are hidden by the handle G. The actuating element57for the trigger50can be seen. So can the safety or firing selection lever60. The safety or firing selection lever60can be operated ambidextrously.
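The three selector settings described above determine which parts the safety lever60blocks. The following minimal Python sketch condenses that behavior for reference; the identifiers and the lookup structure are illustrative assumptions made for the purpose of summary, not part of the patent.

```python
# Condensed restatement of the safety lever settings described for FIGS.7-9.
# All names are illustrative assumptions; the mapping only restates the text.
SAFETY_LEVER_SETTINGS = {
    # First setting: lever 60 sits behind section 53 and blocks the trigger.
    # Element 40 is then held via the non-actuated trigger's projection 55.
    "safety_on": {"trigger_blocked": True, "element_40_held_by_lever": False},
    # Second setting: trigger released, continuous firing element 40 held.
    "single_shot": {"trigger_blocked": False, "element_40_held_by_lever": True},
    # Third setting: both the trigger and element 40 are released.
    "continuous_firing": {"trigger_blocked": False, "element_40_held_by_lever": False},
}

def can_fire_full_auto(setting: str, trigger_pulled: bool) -> bool:
    """Continuous fire requires a free, pulled trigger and a free element 40."""
    s = SAFETY_LEVER_SETTINGS[setting]
    return (not s["trigger_blocked"]
            and trigger_pulled
            and not s["element_40_held_by_lever"])
```

In this condensed form, it is easy to see why single shot and continuous fire differ only in whether element40remains held by the lever.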
There is no need to go into further design features of the automatic machine gun MG5 in the framework of this disclosure, as they are not essential to the examples disclosed herein. The examples disclosed herein are not limited to a specific type of automatic weapon, and instead can be used with numerous different automatic weapons. In particular, existing automatic weapons can be retrofitted with the trigger assembly described above, and with the breechblock carrier described above. Based on the above, at least one goal of some of the disclosed examples is to create an improved control element. It is also a goal of some of the disclosed examples to create an improved breechblock stop lever. It is also a goal of some of the disclosed examples to create an improved breechblock carrier. It is also a goal of some of the disclosed examples to create an improved trigger. Moreover, it is also a goal of some of the disclosed examples to create an improved trigger assembly. It is also a goal of some of the disclosed examples to create an automatic weapon with any of the aforementioned components. As such, an automatic weapon should be obtained in particular, which allows for the settings "safety on," "single shot," and "fully automatic firing" as well as loading of the firearm when the safety is on, with at least one of the aforementioned components. These goals are achieved through the subject matter set forth in the examples herein. According to a first aspect, there is a control element for controlling a breechblock stop lever that can move about a first axis of rotation, wherein the breechblock stop lever can be moved between a standby position for releasing a breechblock carrier and a retaining position in which the breechblock carrier is held in place. The control element has a first arm that can pivot about a second axis of rotation. The first arm comprises a first control section that has a first contact surface and a second contact surface facing away from the first contact surface, which can be deflected about the second axis of rotation for the control element by means of a control curve on the breechblock carrier. In other words, the first control section comprises contact surfaces on its front and back sides, each of which can be controlled by the control curve located on the lower surface of the breechblock carrier. A controlling of the control element by the control curve results in a deflection (also referred to as a turning) of the control element about its own axis of rotation. If the control element is activated when the breechblock slides forward, it then turns toward the front, in one of the two directions of rotation. The deflection toward the rear corresponds to a first rotation, while the deflection toward the front corresponds to a second rotation about the second axis of rotation. The first arm also comprises a second control section that can be activated by means of a control surface on a trigger. The second control section, which is located in particular on a lower surface of the first arm, can be brought into contact with the control surface on the trigger, when the trigger is squeezed, to trigger, or release, the breechblock stop lever. After releasing the breechblock stop lever on the breechblock carrier, the example control element disclosed herein can retain the breechblock stop lever in its standby position while the breechblock carrier moves forward, and release the breechblock stop lever when the breechblock carrier slides backward.
Furthermore, the second control section can "slide" over the control edge on the trigger when the stop lever is released, such that the control element is then moved from a pivoted or twisted position, resulting from the movement of the breechblock carrier when it is moving backward, to a starting position. If the second control section "slides" over the control edge of the trigger, a spring element that presses the breechblock stop lever into its retaining position then triggers a "downward" movement of the control element. The control element can be understood in particular to be a type of coupling element, which couples the trigger in a trigger assembly to the breechblock stop lever. The control element can be controlled in a trigger assembly by the trigger via the control surface and by the breechblock carrier via the control curve. In other words, the control element can be controlled from "below," and from "above." The control element is rotated when it is controlled by the control curve on the breechblock carrier, and the second axis of rotation preferably moves in a purely vertical direction when it is activated by the trigger. In other words, when the trigger is actuated, it pushes or shoves the second axis of rotation, and therefore the control element, upward by means of its control surface. The functions, or settings, "safety on," "single shot," and "fully automatic firing," as well as a continuous loading of the automatic weapon when it is in the secured state, can be enabled by means of the control element according to the examples herein. The control element can form a control lever that can be pivoted, i.e. rotated, in particular in relation to the first axis of rotation for the breechblock stop lever, parallel to the axis thereof. To attach the control element to the breechblock stop lever, the control element can have a hole that is coaxial to the second axis of rotation, for receiving a fastening element, e.g. a pin, rod or bolt. If the hole is in the form of a single bore-hole, a single fastening element is sufficient. If, instead, the hole is formed by two blind holes, one on each end, then two fastening elements are necessary. In one example, the control element comprises a second arm, axially spaced apart from the first arm, which comprises a third control section that can be functionally connected to a first component on the housing in order to brace against a torque. When the trigger is actuated, and the breechblock moves forward after it has been released by the breechblock stop lever, the third control section moves toward an element connected to the housing of the handle, and braces against this element during the entire forward movement of the breechblock. Such a third control section that can brace against an element prevents the breechblock stop lever from catching in a notch, or in an intermediate position, on the breechblock carrier if the actuated trigger is prematurely released (intentionally or unintentionally) while the breechblock is moving forward. If the breechblock carrier catches while moving forward, it must then be manually pulled back to its starting position. In other words, the third control section on the second arm of the control element slides over the element on the housing, and retains the breechblock stop lever in its standby position while the breechblock is moving forward, even if the trigger is released during the forward movement of the breechblock carrier.
In one example, the second control section lies radially opposite the first control section, and comprises a radial projection, which can be controlled by means of the control surface on the trigger. The radial projection allows for a better control by the control surface on the trigger, in particular if the control surface on the trigger has a control edge. If the trigger is actuated, and the breechblock carrier moves backward, the second control section is reliably "slid" over the control edge by means of the radial projection. To increase the reliability of the sliding down of the radial projection, it may be preferred that the radial projection tapers to a point, and forms an edge that is parallel to the second axis of rotation. In another example, the first arm has a cross section that tapers radially outward in the region of the first control section. A control element that narrows toward the top allows for a more precise control by the control curve on the breechblock carrier. This also allows for a "finely adjusted" control curve on the breechblock carrier. This effect can be increased if the first arm is stepped at an end surface such that the first arm becomes thinner in the axial direction in the region of the first control section. This results, in other words, in a reduction in the longitudinal section. If there is a second arm, the end surface of the first arm facing away from the second arm may be the end that is stepped. In another example, the third control section preferably has a second radial projection, which is preferably eccentric or curved, and forms, in particular, the lower end of the second arm, or the third control section. An eccentric or curved radial projection, preferably extending downward toward the trigger, enables a reliable bracing of the third control section against a component on the housing. The reliable bracing on the housing component results in turn in a reliable retaining of the breechblock stop lever in the standby position while the trigger is actuated, and the breechblock is moving forward. In another example, the second arm has a projection extending axially, in particular a claw, on which a contact surface is formed facing the direction of rotation of the control element, wherein the projection is preferably formed on an end surface of the second arm facing away from the first arm. Such a projection allows for the formation of a contact surface that does not impair control by the breechblock carrier or the trigger. A projection directed axially outward can advantageously come to bear on corresponding stop or contact surfaces on the breechblock stop lever in both rotational directions of the control element about the second axis of rotation. Such an axially extending projection is then preferably used when the control element is supported axially inside two fastening arms on the breechblock stop lever with regard to the first axis of rotation for the breechblock stop lever, and only the axial projection extends into the plane of one of the two fastening arms formed by a longitudinal section. In another example, there may be a middle piece between the first arm and the second arm, which forms, along with the arms, an annular gap that runs at least in part about the second axis of rotation, in which a leg of a spring can be brought into contact with the middle piece.
To ensure that the control element rotates in a controlled manner about its own axis of rotation, the control element is also coupled with the breechblock stop lever by means of a spring element. The control element has a space for this, located axially between the first and second arms, e.g. in the form of an annular gap, in which a part of the spring element can come to bear. In a structurally simple example, a torsion spring is wound about the first axis of rotation for the breechblock stop lever, and clamped with both legs in the annular gap such that a force is constantly exerted that retains the control element in its middle position. The middle position is approximately in the middle, between the forward deflection and the backward deflection. As a result, the spring force must be overcome in order for the element to rotate in either the first or second direction. If the control element is deflected, the spring force of the spring element pushes the control element back into the middle position. The second arm may have a second radial projection, which forms a lateral stop that guides the spring element, in particular the torsion spring, in the annular gap. In other words, the lateral stop prevents the spring element from sliding out of the annular gap while the weapon is in operation. According to a second aspect, there is a breechblock stop lever for retaining and releasing a breechblock carrier, wherein the breechblock stop lever can pivot about a first axis of rotation, and has a stop arm for retaining the breechblock carrier. It is distinguished in that it has two fastening arms that substantially extend radially to the first axis of rotation for receiving a control element that has a second axis of rotation, wherein the two fastening arms are preferably parallel to one another. In order to support the control element on the fastening arms such that it can rotate, there is a hole on each end of the respective fastening arms, e.g. in the form of a bore-hole, for receiving a fastening element, e.g. a pin, rod, or bolt. The holes in the fastening arms correspond to the holes in the control element. In a further development of the breechblock stop lever, one of the two fastening arms, in particular the second fastening arm, has two stops, each of which has a stop surface, which can be brought into contact with the axial projection on the control element. The two stops are placed such that a first stop limits the rotation of the control element in the first direction of rotation, caused by the breechblock carrier, and a second stop limits the rotation of the control element in a second direction of rotation, caused by the breechblock carrier. The stops can be formed by the removal of material in the second fastening arm. In particular, the removal of material can at least in part form a ring, seen longitudinally, i.e. a surface with two concentric circles. The material removal preferably describes a C-shaped ring, at least in part. Accordingly, bearing surfaces are formed "in front of" and "behind" the second axis of rotation. These bearing surfaces are also formed on the lower surface of the second fastening arm. The lower surface is the side facing away from the breechblock carrier, or the side facing toward the trigger. This material removal enables a substantially circular rotation of the axial projection about the second axis of rotation.
In another example, the stop arm is formed by two stop arm sections extending in the opposite direction of the fastening arms, wherein the stop arm sections form a stopping surface on their respective ends. Two stop arm sections have the advantage over just one stop arm section in that, starting from the first axis of rotation, a longitudinal space or gap is formed between the two stop arm sections. The control curve can advantageously "dip into" this space with its projections, and thus pass by the stop arm with its axially spaced apart stop arm sections. This is particularly important if the breechblock stop lever is pushed upward, in particular by its torsion spring, when the breechblock returns, and the spacing between the control curve, or the projection forming the control curve, and the stop arm is reduced. To increase the strength of the stop arm sections, the respective stop arm sections are preferably connected to one another at their respective ends by a web. The web is preferably placed such that it can also pass by the control curve without coming in contact therewith. This can take place in that the web forms an opening with the ends of the stop arm sections directed toward the lower surface of the breechblock carrier, wherein the opening has, in particular, a semi-circular, U-shaped, rectangular, or V-shaped cross section. The control curve can pass by these geometries without coming in contact therewith, in particular during the return of the breechblock. In order to be able to hold down or lock the breechblock stop lever in place with a continuous firing element, the breechblock stop lever has a claw, which may be located on a side of a first stop arm section facing away from the breechblock carrier. According to a third aspect, there is a breechblock carrier for a breechblock that can move longitudinally in an automatic weapon. The breechblock carrier is distinguished in that it has a control curve on its lower surface for controlling a rotating control element on a breechblock stop lever for retaining and releasing the breechblock carrier. Such a breechblock carrier can move freely back and forth, interacting with a control element such as that described above, because the control element releases the breechblock stop lever in its retaining position (when the breechblock returns) and holds it in its standby position (when the breechblock moves forward) through the control of the breechblock carrier. A breechblock carrier in which the control curve is formed by at least one projection extending in the radial direction of the longitudinal axis is preferred. It has proven to be particularly advantageous to have at least two, preferably three radially extending projections, which are arranged sequentially in the longitudinal direction of the breechblock carrier, such that an empty space is formed between the projections. The projection can have a rectangular cross section. A length of the projection can be greater than its width. The length to width ratio of the projection is preferably greater than 2:1, more preferably greater than 3:1, and particularly preferably greater than 4:1. There can be numerous projections, e.g. two, three, four, or five projections. Three radially extending projections are particularly preferable, which are then arranged successively in the axial direction. Numerous successive projections make it possible, for example, to have numerous catches, in which the breechblock stop lever can lock in place when pulling back the breechblock.
The first control section of the control element can enter the empty spaces between the projections in an advantageous manner, e.g. in the event of disruptions and breechblock blockages during the forward movement of the breechblock, during the manual return of the breechblock necessary for removing these disruptions, after releasing the trigger, and during the return of the breechblock. The breechblock carrier may have at least one catch, which is divided into left and right catch sections by at least one recess extending in the longitudinal direction, in particular a groove. The groove is placed such that the first control section on the first arm of the control element can enter it. In other words, the first control section extends into the groove and passes by the catch section without touching it, i.e. without coming in contact therewith. The empty spaces on the projections and the at least one groove have the same function, specifically of giving enough space for the first control section during the manual return of the breechblock. If there are numerous catches, the projections forming the control curve may then be placed at appropriate spacings to the respective catches. The result is that the at least one groove is interrupted by one of the projections, such that there are then three grooves for three catches, in order to ensure a reliable locking in place, in particular of the first and second stop arm sections of the breechblock stop lever, in all three catches that are moved backward. A breechblock carrier that has three projections and three catches is a particular example, in which a first and second projection are placed in front of the three catches in the longitudinal direction of the breechblock carrier, and a third projection is placed between two catches in the longitudinal direction. Such a breechblock carrier makes it possible to control the control element or the first control section, and also enables locking in place in three positions while still obtaining a compact breechblock carrier. According to a fourth aspect, there is a trigger for controlling a control element in a trigger assembly for an automatic weapon. The trigger can move between a non-actuated position and an actuated position, and comprises an element that can pivot about a fourth axis of rotation. The trigger is characterized by a projection extending axially from the element, wherein the projection has a control surface that faces upward, which can be brought in contact with a corresponding control section of the control element to move the control element from a first position to a second position, and wherein the projection has a control edge that is substantially parallel to the fourth axis of rotation. Such a trigger can control the control element, i.e. move it from a first position, substantially vertically, to a second position, and the actuated trigger can also allow the control element to "slide down" over the control edge, after the control element has been deflected by the return of the breechblock. A trigger with a control surface profile that has a concave cross section is preferred. The concave shape makes it easier to control the control element that is to be moved. According to a fifth aspect, there is a trigger assembly for an automatic weapon. The trigger assembly comprises a control element such as that described above, a breechblock stop lever such as that described above, and a trigger such as that described above.
A trigger assembly can be obtained with these components that can be controlled by a breechblock carrier. Such a trigger assembly, interacting with a breechblock carrier that has a corresponding control curve, enables continuous loading in the secured state and the settings "single shot" and "fully automatic firing." The breechblock carrier can be the breechblock carrier described above, in particular. The trigger can pivot about a fourth axis of rotation, and is configured to control the control element, in particular the second control section of the control element. The trigger has a control surface for this, which moves the second control section upward when the trigger is actuated. The upward movement of the second axis of rotation results in a downward turning of the first and second stop arm sections, i.e. the breechblock stop lever is moved into its standby position. In other words, an activation of the control element by means of the trigger results in a rotation of the breechblock stop lever about the first axis of rotation for the breechblock stop lever. The perpendicular control element that is moved upward can then be controlled by the breechblock carrier with the control curve such that the control element can exert a torque on the breechblock stop lever in order to hold the breechblock stop lever in its standby position, and also to move it further into its standby position. If the perpendicular control element that has been moved upward is controlled by the breechblock carrier with its front control curve when the breechblock moves forward, the third control section on the second arm of the control element slides over the second element on the housing, as described above, and retains the breechblock stop lever in its standby position throughout the entire forward movement of the breechblock, even if the trigger were to be released during the forward movement of the breechblock carrier. The trigger assembly also comprises a continuous firing element and a safety, wherein the safety secures the trigger in a first setting, releases the trigger and secures the continuous firing element in a second setting, and releases both the trigger and the continuous firing element in a third setting. The continuous firing element comprises a third axis of rotation. If the safety forms a safety lever, it can then pivot about a fifth axis of rotation. The five axes of rotation, specifically the first axis of rotation for the breechblock stop lever, the second axis of rotation for the control element, the third axis of rotation for the continuous firing element, the fourth axis of rotation for the trigger, and the fifth axis of rotation for the safety lever, are preferably parallel to one another, resulting in the following sequence when seen longitudinally from the front: fourth axis of rotation, second axis of rotation, first axis of rotation, third axis of rotation, and fifth axis of rotation. The control element, breechblock stop lever, trigger and continuous firing element are each subjected to a spring force which can be provided in particular by springs in the form of torsion springs. The torsion springs are wound around axes of rotation formed on the handle housing or about elements or stops. There are preferably three elements for torque bracing of the torsion springs. A first torsion spring is preferably braced against a first element with its first leg and presses with its second leg against the breechblock stop lever (torsion spring for the breechblock stop lever).
The first torsion spring is wound around a third element. A second torsion spring is braced against a second element with its first leg and presses against the trigger with its second leg (torsion spring for the trigger). The second torsion spring is also wound around the third element. A third torsion spring is braced against the first element with its first leg and presses against the continuous firing element with its second leg. The so-called "torsion spring for the continuous firing element" is wound around the third axis of rotation and results in a torque toward the back, i.e. in the clockwise direction. In particular, the winding of the torsion spring about the third axis of rotation is understood to mean that the inner diameter of the torsion spring lies on a sleeve region of the continuous firing element, i.e. not directly on the axle. The force of the first torsion spring (torsion spring for the breechblock stop lever) is greater than the force of the second torsion spring (torsion spring for the trigger), and the force of the second torsion spring is greater than the force of the third torsion spring (torsion spring for the continuous firing element). A fourth torsion spring is wound around the first axis of rotation and holds the control element in a middle position (torsion spring for the control element). The control element can be pivoted about the second axis of rotation in both directions, counter to the spring force of the fourth torsion spring. The components of the trigger assembly are located in a handle housing. The handle housing forms a non-rotating component. In one example, the control element can rotate between two positions on the breechblock stop lever about the second axis of rotation. The first axis of rotation for the breechblock stop lever is also connected to a non-rotating component, in particular the handle housing, such that the control element can rotate about its own axis of rotation, and also on the axis of rotation for the breechblock stop lever. It may be preferred that the trigger assembly has an element connected to a non-rotating component for guiding the third control section. Such an element, which can also be referred to as an insert or stop, can be used to brace a torque of the third control section, in particular the radial projection. In an advantageous example, the element and the insert or stop are the same component. According to a sixth aspect, there is an automatic weapon that has a trigger assembly such as that described above and a breechblock carrier such as that described above. Such a weapon allows for continuous loading in the secured state and the settings "safety on," "single shot," and "fully automatic firing." Further examples can be derived by the person skilled in the art from the following claims and the attached drawings. | 79,843 |
11859928 | DETAILED DESCRIPTION Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. Methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present disclosure. While implementations will be described with respect to a firearm safety module, it will become evident to those skilled in the art that the implementations are not limited thereto. FIG.1is an illustration of an example firearm190. The firearm includes a trigger193that when pulled back towards a rear of the firearm190causes the firearm190to fire or discharge. In the example shown, the firearm190is a handgun. However, other types of firearms190may be used. The firearm190further includes a rail195. The rail195may allow a variety of accessories to be attached to the firearm190including lights, scopes, sights, etc. The rail195may be a standard rail such as a Picatinny rail, Weaver rail, or NATO rail. Any type of rail may be used. FIG.2is an illustration of the example firearm190including a safety module150. As shown, the safety module150includes a housing103that is adapted to couple with the rail195of the firearm190. The housing103may be made from a variety of suitable materials including plastic and metal. Other materials may be used. The housing103may be removably attached to the firearm190thereby allowing the safety module150to be used with multiple firearms190. The safety module150further includes a trigger bar115. As shown, the trigger bar115forms a loop that extends from the housing103and goes behind the trigger193. As will be described further below, when in a first or locked state, the trigger bar115may be immobilized to prevent an operator of the firearm190from fully engaging the trigger193and firing the firearm190. When in the second or unlocked state, the trigger bar115may be movable to allow the operator of the firearm190to fully engage the trigger193and fire the firearm190. The trigger bar115is designed to preclude or impede the rearward motion of the trigger. However, the trigger bar115does not block access to the trigger itself by the operator. Free access is always available for the operator to put their finger upon the trigger. The safety module150may further include one or more sensors116. In the example shown inFIG.2, the sensor116is a camera. However, other types of sensors may be used. The trigger bar115may be sized and shaped based on the particular make and model of the firearm190. In particular, the trigger bar115may be sized and shaped based on the particular location of the rail195on the firearm190and the location of the trigger193on the firearm190. The trigger bar115may be removable from the safety module150, allowing a user to use the safety module150with a variety of different firearms190by swapping out different trigger bars115. Depending on the embodiment, trigger bars115may be manufactured for a variety of different firearm190makes and models. For example, continuing toFIG.3, the safety module150is shown attached to a shotgun-type firearm190. Unlike the handgun-type firearm190ofFIG.2, the shotgun-type firearm190has a rail195on the top of the firearm190and there is a larger distance between the rail195and the trigger193. Accordingly, the trigger bar115has been lengthened and bent to accommodate the locations of the rail195and trigger193. FIG.4is an illustration of an environment400for using the safety module150.
As shown, the safety module150may include multiple components including, but not limited to, a housing103, processing component105, a locking mechanism110, a trigger bar115, one or more sensors116, one or more lights117, an alarm118, and a power source119. The processing component105may include a processor and a memory and may be located within the housing. An example processing component may be some or all of the computing system900illustrated with respect toFIG.9. The processing component105may execute a safety engine107. The safety engine107may transition the locking mechanism110between a first state and a second state. In the first state, the locking mechanism110may prevent the trigger bar115from moving and thereby prevent the trigger193of the firearm190from actuating. In the second state, the locking mechanism110may allow the trigger bar115to move and thereby allow the trigger193of the firearm190to be actuated. Depending on the embodiment, the locking mechanism110may be a solenoid, for example. Other types of locking mechanisms110may be used. To determine when to transition between the first state and the second state, the processing component105may store one or more authorized user codes109. Each authorized user code109may be a number that uniquely identifies a user of the firearm190. An operator of the firearm190may wear, carry, or hold what is referred to herein as a user device230. As shown, example user devices230may include rings, bracelets, cards, and implantable devices. Other types of devices may be used such as glasses, goggles, badges, gloves, watches, hats, helmets, etc. Each user device230may include a wireless transmitter and may be configured to transmit a code235to the safety module150when the user device230is within range of the safety module150. Depending on the embodiment, the user device230may include an RFID transmitter that may be energized and caused to transmit the code235when the RFID transmitter is within range of an RFID receiver associated with the safety module150. Other wireless technologies may be used to transmit the code235. When the safety module150receives a code235, the safety engine107may compare the received code235with one or more of the stored authorized user codes109. If the received code235matches one of the authorized user codes109(indicating that an authorized user is holding the firearm190or is in range of the firearm190), the safety engine107may place or transition the locking mechanism110into the second state (i.e., the firing state). As may be appreciated, to prevent an authorized user from giving the firearm190to an unauthorized user after placing the locking mechanism110in the second state, the safety engine107(while in the second state) may periodically receive the code235from the user device230and may redetermine whether the received code235matches an authorized user code109. In the event that a received code235does not match an authorized user code109, the safety engine107may place the locking mechanism110in the first state (i.e., the disabled or non-firing state). Furthermore, if a threshold amount of time passes and the safety engine107does not receive a code235, the safety engine107may place the locking mechanism110in the first state. The particular authorized user codes109stored by the safety engine107may be set by a user or administrator. In some embodiments, the authorized user codes109may be controlled by a manager210. The manager210may be connected to the safety engine107via a network130such as the internet.
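Purely as an illustration (no such code appears in this disclosure), the code-matching and periodic revalidation behavior described above can be sketched in a few lines of Python. The class name SafetyEngine, the method names, and the five-second revalidation timeout are hypothetical choices for the example; the disclosure does not specify data structures or timing values.

    import time

    LOCKED, UNLOCKED = "first_state", "second_state"  # disabled vs. firing state

    class SafetyEngine:
        """Illustrative sketch of the code-matching logic of the safety engine (107)."""

        def __init__(self, authorized_codes, revalidation_timeout=5.0):
            self.authorized_codes = set(authorized_codes)     # stored codes (109)
            self.revalidation_timeout = revalidation_timeout  # seconds (assumed)
            self.state = LOCKED
            self.last_valid_code_at = None

        def on_code_received(self, code, now=None):
            """Handle a code (235) transmitted by a user device (230)."""
            now = time.monotonic() if now is None else now
            if code in self.authorized_codes:
                self.state = UNLOCKED          # trigger bar (115) free to move
                self.last_valid_code_at = now
            else:
                self.state = LOCKED            # trigger bar immobilized

        def tick(self, now=None):
            """Periodic revalidation: relock if no valid code was seen in time."""
            now = time.monotonic() if now is None else now
            if self.state == UNLOCKED and (
                    self.last_valid_code_at is None
                    or now - self.last_valid_code_at > self.revalidation_timeout):
                self.state = LOCKED

In this sketch, on_code_received() would be called whenever the RFID receiver reports a code, and tick() would run on a periodic timer to enforce the threshold-time relock described above.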
The safety module150may include a networking component (e.g., WiFi, cellular, or other wireless technology) that allows it to periodically receive instructions215from the manager210. The received instructions215may either add authorized user codes109, remove authorized user codes109, or indicate that no change has been made to the authorized user codes109. The manager210may be a server, for example. Depending on the embodiment, each instruction215may include all of the authorized user codes109for the safety module150. Accordingly, when an instruction215is received by the safety module150, the safety engine107may replace all of the stored authorized user codes109with the authorized user codes109included in the instruction215. In some embodiments, the manager210may provide an application, or API, through which a user can control the authorized user codes109associated with their safety module150. For example, the user may register their safety module150with a user account or user profile associated with the manager210. The user may then use the user account to specify what authorized user codes109should be associated with their safety module150. The manager210may then provide an instruction215that includes the authorized user codes109to the safety module150through the network130. As may be appreciated, for the safety of the user and other users, it is important to ensure that the authorized user codes109used by the safety engine107are current and up-to-date. For example, if a user uses the manager210to remove an authorized user, it is very important that the corresponding authorized user code109is removed from the safety engine107as soon as possible. Accordingly, in some embodiments, the safety engine107may maintain a record of the last time that the safety engine107received an instruction215from the manager210. If the time is greater than a threshold time, the safety engine107may prevent the locking mechanism110from transitioning to the second state (i.e., the firing state) regardless of the received code235. This may prevent a previously authorized nefarious user from stopping the safety module150from receiving an instruction215that might revoke their status as an authorized user. The threshold time may be one minute, ten minutes, fifteen minutes, etc. The safety module150may further require that multiple authorized users be present when using the firearm190. For example, a user being trained to use a firearm190may be required to have an instructor present while using the firearm190. In such embodiments, the safety engine107may only allow the locking mechanism110to transition to the second state when a first code235is received from the user device230of the trainee and a second code235is received from the user device230of the trainer. This may ensure that the firearm190can only be fired by the trainee when the trainer is present. The safety module150may be configured for such operation by the manager210. There is no limit to the number of codes235that may be required to operate the firearm190. Multiple codes235could also be useful in group firearm190scenarios. For example, members of a group may each have their own firearm190. Each firearm190may include a safety module150that requires a first code235unique to the operator and a second code235that is associated with a leader of the group. The leader of the group could effectively disable all of the firearms190in the group by disabling or turning off their user device230. 
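Again purely as an illustrative sketch rather than the disclosed implementation, the instruction-freshness check and the multi-code (e.g., trainee plus trainer) requirement described above might be combined as follows. The class ManagedSafetyEngine, the method names, and the ten-minute threshold are assumptions made for the example.

    import time

    class ManagedSafetyEngine:
        """Sketch of the manager-instruction freshness and multi-code checks."""

        def __init__(self, required_codes, freshness_threshold=600.0):
            self.authorized_codes = set()                   # stored codes (109)
            self.required_codes = set(required_codes)       # e.g. {trainee, trainer}
            self.freshness_threshold = freshness_threshold  # seconds (assumed)
            self.last_instruction_at = None                 # last manager (210) contact

        def on_instruction(self, replacement_codes, now=None):
            """Each instruction (215) replaces the full set of authorized codes."""
            self.authorized_codes = set(replacement_codes)
            self.last_instruction_at = time.monotonic() if now is None else now

        def may_unlock(self, codes_in_range, now=None):
            """Unlock only with fresh instructions and all required codes present."""
            now = time.monotonic() if now is None else now
            stale = (self.last_instruction_at is None
                     or now - self.last_instruction_at > self.freshness_threshold)
            if stale:
                return False  # stored codes may be out of date; stay locked
            present = set(codes_in_range) & self.authorized_codes
            return self.required_codes <= present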
In the group scenario described above, because the code235of the leader is no longer received by each of the safety modules150of the group members, each firearm190is prevented from being fired by its respective safety module150. As may be appreciated, the above-described scenario could be applicable to police or military training exercises where a commander or ranking officer desires to selectively allow or prevent their trainees from operating their firearms190. The safety module150may further include one or more sensors116that allow the safety module150to count the number of rounds that are fired by the firearm190. For example, in one embodiment the safety module150may include an accelerometer that can be used to detect the recoil force caused by firing the firearm190. In another example, the safety module150may include a sensor116that senses the movement of the trigger bar115caused by the operator pulling the trigger193. The safety module150may further use one or more sensors116to collect other data regarding the firearm190. In one example, the sensors116may include a camera, and the safety module150may collect and store video data while the safety module150is in the second state (i.e., the firing state). In another example, the other data may include accelerometer data or position data that may indicate the position of the firearm190before and after the firearm190is fired. This data may be used to determine how the operator is holding the firearm190while firing. The safety module150may further use the one or more sensors116to collect biometric data about the user. The biometric data may be used to determine a physiological state of the user such as a heart rate or a breathing rate. Depending on the embodiment, some or all of the data collected by the sensors116may be provided by the safety engine107to the manager210through the network130. The data may be stored by the manager210and associated with the user profile or user account associated with the safety module150. The safety module150may use the data collected by the various sensors116to provide additional safety features. For example, with respect to the number of rounds fired, the safety engine107may count the number of rounds that have been fired by the firearm190and may determine if a total allowed number of rounds has been exceeded or if a firing rate has been exceeded. If so, the safety engine107may temporarily disable the firearm190by placing the locking mechanism110in the first state (i.e., the disabled or non-firing state). The safety module150may further perform one or more safety functions based on the biometric data. For example, the safety engine107may determine that the user is acting erratically based on their heart rate or breathing rate and may determine to place the locking mechanism110in the first state (i.e., the disabled or non-firing state). The safety module150may further use the collected data to critique or improve the performance of the operator of the firearm190. For example, with respect to the video data collected by the safety engine107, the safety engine107(or the manager210) may process the video data to identify a likely target of each shot and the location where the shot ultimately lands. The difference between the target and the location may be determined by the safety engine107and may be used to determine an accuracy rate for the operator of the firearm190. The determined accuracy rate may then be provided to the operator of the firearm190.
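Returning to the round-count and firing-rate limits described above, these also lend themselves to a brief sketch. The following is a hypothetical model only: the limits (100 total rounds, three shots within one second) are invented placeholders, and on_shot() is assumed to be driven by the accelerometer or the trigger-bar sensor116.

    import time
    from collections import deque

    class RoundCounter:
        """Sketch of round counting with total-round and firing-rate limits."""

        def __init__(self, max_rounds=100, max_per_window=3, window=1.0):
            self.max_rounds = max_rounds          # assumed total-round limit
            self.max_per_window = max_per_window  # assumed rate limit per window
            self.window = window                  # window length in seconds
            self.total = 0
            self.recent = deque()                 # timestamps of recent shots

        def on_shot(self, now=None):
            """Called when recoil or trigger-bar movement indicates a shot."""
            now = time.monotonic() if now is None else now
            self.total += 1
            self.recent.append(now)
            while self.recent and now - self.recent[0] > self.window:
                self.recent.popleft()

        def should_disable(self):
            """True when the locking mechanism (110) should drop to the first state."""
            return (self.total > self.max_rounds
                    or len(self.recent) > self.max_per_window)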
Based on the accuracy rate determined above, the safety engine107(or the manager210) may make recommendations to the operator of the firearm190to improve their accuracy. For example, based on data collected by the sensors116about how the operator is holding the firearm190, the safety module150may recommend adjustments to the operator to improve their performance. The recommendations may be provided to the operator through an application or webpage associated with the manager210, for example. The safety module150may further provide real-time or near-real-time feedback to help improve the accuracy of the user. For example, to help train the operator to fire the firearm between heartbeats, the safety engine107may use the biometric data to determine when the operator is between heartbeats and may prompt the operator to fire based on the determination. The prompts may include turning on one or more lights117or providing haptic feedback, for example. Other prompts may be used. In addition, the safety module150may transition the locking mechanism110between the first and second states based on the physiological state of the operator. For example, the safety module150may transition to the second state (i.e., the firing state) when the operator is between breaths, and may transition to the first state (i.e., the disabled state) when the operator is exhaling or inhaling. The safety module150may convey certain information to the operator of the firearm190using the lights117and/or the alarm118. For example, when the locking mechanism110is in the first state (i.e., the disabled state) the lights117may be red, and when the locking mechanism110is in the second state (i.e., the firing state) the lights117may be green. Similarly, when the locking mechanism110transitions to the first state from the second state, or when a code235is received that does not match an authorized user code109, the alarm118may sound. The behavior of the lights117and/or the alarm118may be set by a user or administrator. The safety module150may be powered by a power source119. Depending on the embodiment, the power source119may be a rechargeable battery that may be charged using a variety of wired (e.g., USB) and wireless (e.g., Qi) methods. Depending on the embodiment, when the battery charge drops below a threshold charge, the safety module150and/or the manager210may alert the operator. The manager210, in addition to configuring the authorized user codes109, may provide a social networking functionality to users of the safety modules150. For example, the manager210may allow users to create user profiles and share information such as video data collected by the safety module150from recent target shooting as well as calculated accuracy rates. Other functionality may be provided by the manager210. FIG.5is an illustration of an example method500for operating a safety module150on a firearm190. The method500may be implemented by the safety engine107of the safety module150. At510, a code is received. The code235may be received by the safety engine107from a user device230. The code235may be an RFID code and the user device230may include an RFID chip and/or transmitter. Examples of user devices230include rings and bracelets. The safety engine107may be part of a safety module150that prevents an associated firearm190from being fired by unauthorized users using a locking mechanism110and a trigger bar115. The locking mechanism110may be in a first state where the locking mechanism110prevents the trigger bar115from moving.
Because the trigger bar115extends behind a trigger193of the firearm190, the firearm190cannot be fired while the locking mechanism110is in the first state. At515, a determination of whether the code is associated with an authorized user is made. The determination may be made by the safety engine107comparing the code235with one or more stored authorized user codes109. If the code235matches any of the stored authorized user codes109, then the method500may continue at525. Else, the method500may continue at535. At525, the locking mechanism is placed in a second state. The locking mechanism110may be placed in the second state by the safety engine107upon determining that the received code235is associated with an authorized user. While in the second state, the trigger bar115is allowed to move by the locking mechanism110. Accordingly, the trigger193of the firearm190is allowed to move, and the firearm190can be fired while the locking mechanism110is in the second state. At535, the locking mechanism remains in the first state. Because the code235did not match any of the authorized user codes109, the safety engine107may keep the locking mechanism110in the first state, thereby preventing the firearm190from firing. Depending on the embodiment, the safety engine107may sound an alarm118to indicate that an unauthorized user may be trying to use the firearm190and may send an indication of the unauthorized usage to the manager210. The manager210may then notify one or more authorized users associated with the safety module150. FIG.6is an illustration of an example method600for operating a safety module150on a firearm190. The method600may be implemented by the safety engine107of the safety module150. At610, a time since a last instruction was received is determined. The determination may be made by the safety engine107of the safety module150. In some embodiments, the safety module150may periodically receive instructions215from a manager210through a network130. The instructions215may be to add one or more authorized user codes109, to remove one or more authorized user codes109, or to indicate that no changes have been made to the authorized user codes109. At615, whether the time is greater than a threshold is determined. The determination may be made by the safety engine107. The threshold may be set by a user or administrator. As may be appreciated, to ensure that any changes made to the authorized user codes109are adopted by the safety engine107, the safety engine107may check whether instructions215are regularly being received from the manager210. If the time is greater than the threshold, the method600may continue at625. Else, the method600may continue at635. At625, the locking mechanism is prevented from being placed in the second state. The locking mechanism110may be prevented from being placed in the second state by the safety engine107. Because the time was greater than the threshold time, the safety engine107may assume that the stored authorized user codes109may not reflect any changes made by the manager210through the instructions215. Accordingly, the safety engine107may prevent the locking mechanism110from being placed in the second state (i.e., the firing state) even when a code235that matches a stored authorized user code109is received. In some embodiments, the safety engine107may use the alarm118and/or lights117to indicate to the user that the locking mechanism110cannot be placed in the second state until an instruction215is received. At635, the locking mechanism is allowed to be placed in the second state.
The locking mechanism110may be allowed to be placed in the second state by the safety engine107. Because the time was less than the threshold time, in the event that a code235that matches an authorized user code109is received, the locking mechanism110may be placed in the second state by the safety engine107. FIG.7is an illustration of an example method700for operating a safety module150on a firearm190. The method700may be implemented by the safety engine107of the safety module150. At710, a number of rounds fired is determined. The determination may be made by the safety engine107of the safety module150. In some embodiments, the safety engine107may keep track of the number of rounds that have been fired by the firearm190that the safety module150is attached to. The safety engine107may determine the number of rounds based on movement of the trigger bar115or based on data received from one or more sensors116such as an accelerometer. At715, whether the number of rounds fired is greater than a threshold is determined. The determination may be made by the safety engine107. The threshold may be set by a user or administrator and may be received from the manager210. For example, as a safety precaution, the number of rounds that may be fired by the firearm190may be limited. Alternatively, rather than a threshold number of rounds, the safety module150may enforce a firing rate. If the number (or rate) is greater than the threshold, the method700may continue at725. Else, the method700may continue at735. At725, the locking mechanism is placed in the first state. The locking mechanism110may be placed in the first state by the safety engine107. As described above, the firearm190cannot be fired while in the first state. Depending on the embodiment, the locking mechanism110may return to the second state after some amount of time has elapsed since the locking mechanism110was placed in the first state (i.e., a cooling-off period). At735, the locking mechanism is allowed to be placed in the second state. The locking mechanism110may be allowed to be placed in the second state by the safety engine107. FIG.8is an illustration of an example method800for operating a safety module150on a firearm190. The method800may be implemented by the safety engine107of the safety module150. At810, a first code is received. The first code235may be received by the safety engine107of the safety module150from a user device230associated with a first user. The first user may be an operator of the firearm190associated with the safety module150. The first user may be holding the firearm190. The first code235may be received from an RFID chip associated with the user device230. At815, a second code is received. The second code235may be received by the safety engine107of the safety module150from a user device230associated with a second user. The second user may be a supervisor of the first user. For example, the second user may be an instructor who is required to be present when the first user is operating the firearm190. The second user may not be holding the firearm190but may be near enough to the firearm190that the safety engine107receives the second code235. At820, whether both the first code and the second code are associated with authorized users is determined. The determination may be made by the safety engine107. If both codes235are associated with authorized users, the method800may continue at825. Else, the method800may continue at830. At825, the locking mechanism is placed in a second state.
The locking mechanism110may be placed in the second state by the safety engine107upon determining that both the received first code235and the received second code235are associated with authorized users. The firearm190may be fired by the first user while the locking mechanism110is in the second state. At830, the locking mechanism remains in the first state. Because one or both of the first code235and the second code235did not match any of the authorized user codes109, the safety engine107may keep the locking mechanism110in the first state thereby preventing the firearm190from firing. FIG.9shows an exemplary computing environment in which example embodiments and aspects may be implemented. The computing system environment is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality. Numerous other general purpose or special purpose computing system environments or configurations may be used. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use include, but are not limited to, personal computers, servers, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, network personal computers (PCs), minicomputers, mainframe computers, embedded systems, distributed computing environments that include any of the above systems or devices, and the like. Computer-executable instructions, such as program modules, being executed by a computer may be used. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Distributed computing environments may be used where tasks are performed by remote processing devices that are linked through a communications network or other data transmission medium. In a distributed computing environment, program modules and other data may be located in both local and remote computer storage media including memory storage devices. With reference toFIG.9, an exemplary system for implementing aspects described herein includes a computing device, such as computing device900. In its most basic configuration, computing device900typically includes at least one processing unit902and memory904. Depending on the exact configuration and type of computing device, memory904may be volatile (such as random access memory (RAM)), non-volatile (such as read-only memory (ROM), flash memory, etc.), or some combination of the two. This most basic configuration is illustrated inFIG.9by dashed line906. Computing device900may have additional features/functionality. For example, computing device900may include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated inFIG.9by removable storage908and non-removable storage910. Computing device900typically includes a variety of tangible computer readable media. Computer readable media can be any available tangible media that can be accessed by device900and includes both volatile and non-volatile media, removable and non-removable media. Tangible computer storage media include volatile and non-volatile, and removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. 
Memory904, removable storage908, and non-removable storage910are all examples of computer storage media. Tangible computer storage media include, but are not limited to, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device900. Any such computer storage media may be part of computing device900. Computing device900may contain communications connection(s)912that allow the device to communicate with other devices. Computing device900may also have input device(s)914such as a keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s)916such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length here. It should be understood that the various techniques described herein may be implemented in connection with hardware or software or, where appropriate, with a combination of both. Thus, the methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter. In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. One or more programs may implement or utilize the processes described in connection with the presently disclosed subject matter, e.g., through the use of an application programming interface (API), reusable controls, or the like. Such programs may be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language and it may be combined with hardware implementations. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. | 31,033 |
11859929 | DETAILED DESCRIPTION With reference to the drawing figures, this section describes particular embodiments and their detailed construction and operation. Throughout the specification, reference to “one embodiment,” “an embodiment,” or “some embodiments” means that a particular described feature, structure, or characteristic may be included in at least one embodiment. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” or “in some embodiments” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the described features, structures, and characteristics may be combined in any suitable manner in one or more embodiments. In view of the disclosure herein, those skilled in the art will recognize that the various embodiments can be practiced without one or more of the specific details or with other methods, components, materials, or the like. In some instances, well-known structures, materials, or operations are not shown or not described in detail to avoid obscuring aspects of the embodiments. “Forward” will indicate the direction of the muzzle and the direction in which projectiles are fired, while “rearward” will indicate the opposite direction. “Lateral” or “transverse” indicates a side-to-side direction generally perpendicular to the axis of the barrel. Although firearms may be used in any orientation, “left” and “right” will generally indicate the sides according to the user's orientation, and “top” or “up” will be the upward direction when the firearm is gripped in the ordinary manner. Referring first toFIGS.1and2, therein is shown a precision bolt-action rifle10according to one embodiment of the present invention. The rifle10includes an upper receiver12which houses a bolt14. A barrel16and forearm18or barrel shroud extend from a forward end of the upper receiver12. Unlike many other bolt-action precision rifles, this rifle10does not mount in a chassis or stock. Instead, the upper receiver12is attached to a lower receiver20. The fire control mechanism or trigger assembly is attached to the upper receiver12and extends into the lower receiver20, which supports a detachable magazine22, a handgrip23, and butt stock26. Referring now also toFIG.4, according to a feature of one embodiment of the present invention, the lower receiver may include an integral hinge28that allows the butt stock26to fold to the side of the upper and lower receivers12,20. The trigger housing or lower receiver20may be adapted to attach any of a variety of AR-pattern (or other pattern) handgrips selected by the user. The upper receiver12may also include an accessory attachment rail30for mounting optical aiming devices or other accessories. The rail30may include MIL-STD-1913 pattern lugs or other standardized accessory mounting rail configuration. The rail30may be elevated, as shown, to better position optical aiming devices. It may also include openings32to reduce weight, increase surface area, and ventilate the riser to better dissipate heat transferred to the upper receiver12from the barrel16and/or bolt14. The lower receiver20may include an ambidextrous magazine release34positioned near the trigger guard36. The forearm18or barrel shroud can, if desired, be made from lightweight metal, a polymer material, or composite material. A material that shields or does not retain heat radiated from the barrel16may be desired.
The forearm may mount directly to the upper receiver12and may include accessory mounting features38according to a standardized pattern, such as M-LOK™, KeyMod™, or one or more integral or attachable MIL-STD1913accessory rails. Referring again toFIG.3, the rifle10is shown in a field disassembly condition. The ammunition magazine22is removed from the lower receiver. Forward and rear takedown pins120,122are displaced to the side to release forward and rear upper receiver lugs124,126to allow the upper receiver12to separate from the trigger housing or lower receiver20. The takedown pins120,122can be captive in the lower receiver20by well-known means. When the upper receiver12is disassembled from the lower receiver20, the bolt14may be removed from the rear of the upper receiver12. When assembled, as shown inFIG.2, the bolt handle128may be slid along a slot130in the upper receiver12that extends rearward into a rear portion of the lower receiver20to allow the bolt14to fully open. When disassembled, as shown inFIG.3, the slot130is open to the rear, allowing the bolt14to be removed. Referring now toFIGS.5and6, therein is shown an integral hinge28connecting a rear portion of the lower receiver20to the butt stock26in a way that allows it to be firmly locked in an extended position, yet easily folded to the side for more compact storage or transportation of the rifle10. The hinge28provides a knuckle that pivots on a hinge pin40. In the illustrated embodiment, the butt stock26includes a substantially cylindrical bore42that, when in the extended position, axially aligns with another bore44within a rear portion of the lower receiver20to accommodate rearward reciprocation of the bolt14beyond the upper receiver12. Unlike an automatic action firearm, a bolt-action firearm does not include a recoil spring, which is often housed behind the bolt or a bolt carrier assembly and may extend into a butt stock. In the illustrated embodiment, a rotatable latch member46mounted on the butt stock26engages a socket48in the rear of the lower receiver20, allowing the butt stock26to be firmly locked in the extended position. The latch member46may be mounted in a body portion50of the butt stock26and actuated manually by a lever52secured to the latch member46using a roll pin55. The rotational position of the latch member46may be held by a spring-biased ball detent54,56that engages detent grooves58on the body of the latch member46. A hook portion60of the latch member46will engage the socket48when pivoted to the closed position. The socket48may be formed in an insert piece62fitted into a cavity64in the rear end of the lower receiver20. The insert piece62and latch member46may be made of a material, such as steel, that is relatively harder and more durable than the materials used for the lower receiver20and/or body portion50of the butt stock26. Referring now also toFIGS.7-10, the hook portion60of the latch member46can enter the non-round socket48and then rotate to engage an inner cam surface66. The cam surface66can be configured to provide an increasing amount of tension as the latch member46is rotated toward the engaged or locked position. This assures a firm lock-up between the butt stock26and lower receiver20, eliminating any play or rattle in the hinged interface. Referring now also toFIGS.11-13, when the latch member46is in a latched position with the hook portion60bearing against the cam surface66, this frictional engagement maintains the latch member46firmly in the secured position.
When the latch member46is rotated to the unlatched position, the butt stock26is allowed to pivot on the hinge28. When moving the butt stock26back to the extended position, the latch member46must be in the unlatched position in order for the hook portion60to enter the socket48. Although the detent ball56generally retains the position of the latch member46, the lever52could be inadvertently bumped, moving the latch member46into a position where it will not enter the socket48when the butt stock26is extended. In order to hold the latch member46in the unlocked position whenever the hinge28is open, a locking plunger68carried by the body portion50of the butt stock26can engage a notch70on the lever52, as shown inFIG.12. The locking plunger68is biased by a spring72toward this locked position and will slide along the lever52until aligned with the notch70. With reference now also toFIG.5, as the hinge28is closed, moving the butt stock26toward the extended position, an extension of the locking plunger68contacts a rear surface of the lower receiver20causing it to be displaced against the spring72, as shown by arrow80, as illustrated inFIG.13. When the locking plunger68is disengaged from the notch70, the latch member46and lever52are released to be rotated toward the latched position (as illustrated by arrow74inFIG.13). This mechanical action operates automatically without user assistance. The butt stock26and/or lower receiver20may include one or more sling attachment features87. In the illustrated embodiment, the attachment features87are, by way of example, recesses for quick-release attachment of a single point sling swivel. Referring now toFIGS.14-16, a fire control mechanism88provides a compound lever mechanism to increase mechanical advantage and movement. It attaches to a pair of laterally spaced apart mounting lugs or flanges90on the bottom side of the upper receiver12using a single sear carrier pivot pin92. The safety selector94is shown inFIGS.14and16-18to provide context, but it is rotatably mounted in transversely opposed openings95in the lower receiver20. The safety selector94is shown in the safe position inFIGS.14,16, and17, and it is shown in the fire position inFIGS.18and19. Referring now also toFIG.17, this view shows the bolt14in an in-battery position. Inside the bolt14is a firing element in the form of a striker or firing pin96attached to a cocking piece98, such as with a transverse roll pin100. The firing pin assembly96,98,100is shown in a cocked position inFIG.17. The fire control mechanism88includes a sear carrier102pivotally secured to the flanges90of the upper receiver12by a pivot pin92and carrying a displaceable sear104biased into an extended position by an internal coil spring106. The required pull force (pull weight) of the trigger mechanism can be set by an adjustable trigger spring105that biases a plunger107against a bottom wall of the upper receiver12. The spring tension may be adjusted by a threaded sleeve109in the sear carrier102that carries the spring105and plunger107. A trigger body108is pivotally mounted via a pivot pin110to the sear carrier102. The trigger body108includes a trigger leg112, the shape or style of which may vary depending on user preference. An opposite extension of the trigger body108includes an adjustable bearing ball point114that bears against a bottom wall of the upper receiver12. The sear104engages the cocking piece98in this position and prevents release of the firing pin96. 
The trigger spring105biases the members of the fire control assembly88toward the “reset” or “cocked” position (FIG.17). The forward travel (i.e., “reset” position) of the trigger leg112may be adjusted by a set screw111that contacts a surface of the lower receiver20inside the trigger guard (as shown inFIG.20). In this embodiment, the sear carrier pivot pin92is the only fixed point of connection between the fire control mechanism88and the upper receiver12and the only fixed pivot point of the sear carrier102and trigger body108. Additionally, the fire control mechanism88does not require any additional housing or frame to support any other fixed pivot points, as is the case with most trigger mechanisms for bolt-action firearms. Referring now toFIG.18, therein the safety selector94is shown in a “fire” position, the fire control mechanism88is shown in a “pulled” position, and the firing pin assembly96,98,100is shown in a “released” position. The firing pin assembly96,98,100has been released and shifted forward by the firing pin spring116. The fire control mechanism88provides a compound lever system with a sliding fulcrum and/or fixed and moving pivot points. As illustrated, the trigger leg112has been pulled to the rear, as shown by arrow118. This causes the trigger body108to rotate on its pivot pin110relative to the sear carrier102. The upper extension of the trigger body108is leveraged against the bottom wall of the upper receiver12with the ball point114acting as a sliding bearing surface. The extension of the ball point114may be adjusted by rotation of its threaded socket in the sear carrier102. Leverage of the trigger body108against the bottom surface of the receiver12and its pivot110, in turn, causes it to rotate on its pivot pin110, as shown by arrows inFIG.18. The ball point114provides a sliding fulcrum against which the trigger body108displaces its pivotal connection to the sear carrier102. This, in turn, rotates the sear carrier102on its fixed-location pivot pin92, lowering the sear104, and releasing the cocking piece98. The distance (radius) of the sear104from the pivot pin92of the sear carrier102is greater than that of the trigger body pivot110, causing the sear104to be moved a greater distance than the trigger body pivot110when the trigger is pulled. The length of the trigger leg112from the ball point114may be greater than that of the trigger body pivot110. Thus, an appropriate force and length of trigger pull are compounded to produce the appropriate amount of force and sear movement to release the firing element (i.e., cocking piece98of the firing pin96). As shown inFIG.18, when the safety selector94is moved to the “fire” position, the rearward end of the sear carrier102that holds the sear104is allowed to move downward a sufficient distance to release the firing pin assembly96,98,100. In the “safe” position (FIG.17), the safety selector94blocks downward pivotal movement of the rear end of the sear carrier102.FIG.18also depicts an adjustable set screw113that can be adjusted on the safety selector94to limit travel of the sear carrier102(and trigger108,112). Referring now toFIG.19(which shows the lower receiver20), after firing (release of the firing pin96) and return of the fire control mechanism88to its “reset” position, the bolt14and firing pin assembly96,98,100can be retracted. The bolt can be retracted without resetting the trigger.
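To make the compounding concrete before returning to bolt retraction, the displacement relationships just described can be approximated with simple lever arithmetic under a small-angle assumption. The function and every dimension below are illustrative inventions, not values from this disclosure; the sketch only shows how a travel reduction at the trigger body's sliding fulcrum followed by an amplification at the sear carrier yields the net sear movement.

    def sear_travel(trigger_pull, leg_radius, pivot_radius_on_trigger,
                    pivot_radius_on_carrier, sear_radius):
        # Small-angle lever arithmetic; all dimensions hypothetical, in metres.
        # Stage 1: the trigger body pivots about the sliding ball-point fulcrum
        # (114), so its pivot pin (110) travels less than the trigger leg (112).
        pivot_travel = trigger_pull * (pivot_radius_on_trigger / leg_radius)
        # Stage 2: the sear carrier (102) pivots about the fixed pin (92); the
        # sear (104), farther from that pin, travels more than the pivot (110).
        return pivot_travel * (sear_radius / pivot_radius_on_carrier)

    # Invented numbers: 3 mm of pull, a 2:1 reduction, then a 2.5:1 amplification.
    print(sear_travel(0.003, leg_radius=0.040, pivot_radius_on_trigger=0.020,
                      pivot_radius_on_carrier=0.010, sear_radius=0.025))  # 0.00375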
During this retraction, as the catch tooth of the cocking piece98passes the sear104, the sear104can be displaced or deflected against its spring106without causing (or requiring) movement of the sear carrier102. This allows the bolt14and firing pin assembly96,98,100to be retracted without regard to whether the safety selector94is in the “safe” or “fire” position. FIG.20shows the bolt14and firing pin assembly96,98,100in a fully retracted position. In this position, the bolt extends rearwardly beyond the upper receiver12into an extension bore44of the lower receiver20and can extend into a bore42of the butt stock body50when the butt stock26is in the extended position. Some precision marksmen prefer to grip a firearm10with their dominant (trigger finger) hand in a manner in which the shooter's thumb of that hand remains on the same side of the firearm10as the other fingers, rather than wrapping the web of the thumb around the backstrap of the grip23. This can leave the user's thumb unsupported.FIGS.21and22illustrate a safety selector actuator/lever that is modified to provide a “strong side” thumb rest. Shown inFIG.22is a representation of a shooter's hand170in which the thumb of the dominant hand (in this case, the right hand) may be supported by the extended thumb rest172, which replaces the actuation paddle or lever174of the safety selector switch94, according to an embodiment of the invention. The thumb rest172includes a contact surface180presenting a shelf or support surface on or against which the user's thumb may be rested while gripping the firearm10in a shooting position. The extended thumb rest172also acts as a safety selector actuation paddle and may be provided on either or both sides of the lower receiver20. The thumb rest172may be attached to a stem176of the safety selector switch94that engages a socket in the same manner as an ordinary detachable actuation lever174, such as with an assembly cross pin178. While one or more embodiments of the present invention have been described in detail, it should be apparent that modifications and variations thereto are possible, all of which fall within the true spirit and scope of the invention. Therefore, the foregoing is intended only to be illustrative of the principles of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not intended to limit the invention to the exact construction and operation shown and described. Accordingly, all suitable modifications and equivalents may be included and considered to fall within the scope of the invention, defined by the following claim or claims. | 16,471 |
11859930 | DETAILED DESCRIPTION The problems exhibited by previously described trigger units can be solved by the use of a trigger unit having the features recited in the present disclosure. In other words, the sear with its sear axis and the trigger lever with its trigger axis form a common axis of rotation, wherein the sear has on its upper side a bearing recess for receiving a disconnector pivot formed on the underside of the disconnector and for its limited rotation about a disconnector axis, and wherein the bearing recess at least partially surrounds the disconnector pivot in the direction of rotation about the disconnector axis. In still other words, the parts are not positioned side by side as in US 2016/0363401 A1, but are nested one inside the other. Further, the disconnector is rotatably mounted on the sear, and not on the trigger. Finally, the sear axis and the trigger axis are one and the same, which is possible due to the nesting. In this way, the hammer, which is mounted rotatably about a hammer axis and can be prestressed by means of a hammer spring, is no longer blocked by the trigger in the struck state. The trigger lever, which is mounted rotatably about the trigger axis, integrally comprises a trigger and a trigger rear, which is designed to accommodate at least one disconnector. The design and arrangement according to the present disclosure, i.e. the interaction of the sear, disconnector, and trigger lever, allow an adjustment of the selector in the struck state up to the “safe position”, since the trigger rear is easily deflectable in this state. The bearing recess and the disconnector pivot are substantially complementary in shape to each other to allow rotation about the disconnector axis within limits. The assembly can be done relatively easily by pushing the parts together sideways, as explained in further detail with reference to the drawings. When installed, this also reduces the likelihood that any of the components of the trigger unit are lost. Throughout the description and the claims “front” or “(to the) front” are used as a direction towards the muzzle of the barrel, “(to the) rear” as a direction towards the stock, “(downwards) down” as the direction from the bolt towards the magazine, and “(upward) up” as a direction away from the magazine. The terms “weapon center plane,” “barrel core,” “barrel axis,” “core axis,” etc. have the usual meaning that the person skilled in the art attaches to them in the prior art. “Left” thus refers to the weapon center plane, “from left” corresponds to a movement, actuation, exertion of force in the direction of the center plane of the weapon, starting from a starting position to the “left” of it, etc. After a shot has been fired, the bolt is moved “to the rear” under the effect of the gases and then “to the front” again under the effect of a closing spring, etc. In the context of the present disclosure, a trigger unit which is suitable for placement in a firearm, preferably a rifle, is referred to as “2” in its entirety. This should explicitly include a “drop-in trigger unit,” i.e. an “installation or retrofit module,” which pre-assembles the trigger unit2according to the present disclosure in a trigger unit housing23and facilitates the installation in a firearm.
In the figures of the drawings, an attempt was made to designate everything that concerns trigger unit2as “2n,” as well as analogously “21n” for the hammer, “3n” for the disconnector, “4n” for the sear, “5n” for the continuous firing unit and “6n” for the selector. It is clear to the person skilled in the art that the embodiments depicted were chosen as schematic and/or exemplary representations and that it is easily possible for a person having ordinary skill in the art, with the benefit of the present disclosure, to transfer the connections according to the present disclosure also to embodiments not explicitly shown, which is why these implicitly disclosed embodiments can be gleaned both from the description of the figures and from the claims. FIGS.1to11primarily show exemplary embodiments of the present disclosure which are suitable for use in an AR15 or M4 rifle. Modifications can also be transferred to other types of rifles by the person skilled in the art with knowledge of the disclosure simply and without extensive or complex tests. FIG.1shows a schematic exploded view of a trigger unit2, shown as a drop-in trigger unit, prior to insertion into a lower housing1of a rifle. In the normal direction93(vertical) above the trigger unit2, a bolt carrier11is shown, which in the rest position of the weapon, i.e. before firing, is mounted above the trigger unit2in an upper housing (not shown). In addition, a grip12, a magazine catch14and a bolt catch lever15can be seen on the lower housing1when installed.FIG.1also shows an auto sear unit5and a selector6in exploded view, which are not to be seen as part of the trigger unit2according to the present disclosure, but are described here for their function. In addition, as can be seen fromFIGS.1and2B, the auto sear unit5usually comprises an auto sear51and a continuous fire spring52, as well as a sleeve and a pin for fixing in the housing1. Likewise, the selector6in the form shown comprises two actuating members and a control shaft61, whereby the control shaft61is arranged inside the housing1and can be adjusted by the two actuating members from the outside in its angular position, i.e. by rotation about the transverse direction92. The control shaft61has a geometry which, by forming differently shaped cams along the control shaft61, interacts with different parts of the trigger unit2depending on the position of the selector6. The control shaft61is substantially designed as can be seen, for example, from DE 20 2011 004 556 U1 or EP 2 950 033 B1. InFIG.2A, a composite drop-in trigger unit can be seen in a perspective view, which is shown in the rest position. At this point it should again be noted that the trigger unit2according to the present disclosure can theoretically also be installed without a trigger unit housing23, i.e. directly in the lower housing1, provided that smaller adaptations, such as a support for the sear spring41, are provided in the housing1. The illustrations show the preferred embodiment as a drop-in trigger unit. InFIG.2B, the rest position of the trigger unit2can be seen in a plan view and explained in conjunction withFIG.3: A hammer21, also often referred to as a striking pin, is mounted in the trigger unit housing23, or more precisely in a bearing sleeve24, so that it can rotate about a hammer axis212.
The disconnector3, which is located inside the trigger26, is also very clearly visible.FIG.2Balso shows the superimposed illustration of the trigger unit2with the auto sear unit5and the selector6, as it corresponds to the installation situation and becomes clear in conjunction withFIG.1. FIG.3shows another exemplary representation of the trigger unit2in an exploded view, whereby the dashed lines are to be seen as reference lines to illustrate the position of the components in relation to each other in the installation situation. From the illustration, the multi-part nature of the trigger unit2according to the present disclosure can be seen very clearly, whereby the trigger lever26in particular has no specific shape, i.e. no dedicated front section, in the barrel direction91to the front, as can very often be seen in the prior art. The mechanical engagement on the hammer21or its hammer cams215(e.g.FIG.5) does not take place directly with the trigger lever26, but indirectly via a separately designed sear4. According to the present disclosure, the sear4and the trigger lever26have a common axis of rotation in the installation situation which, accordingly, is designated both as the trigger axis262and as the sear axis43. In addition, according to the present disclosure, the sear4is connected to a disconnector3in such a way that the sear4has on its upper side a bearing recess42for receiving and limited rotation of a disconnector pivot32formed on the underside of the disconnector3. The bearing recess42encloses the disconnector pivot32at least partially in the direction of rotation about the disconnector axis35, which runs through the disconnector pivot32in the transverse direction92. In the installation situation, this permits a limited rotation of the disconnector about the disconnector axis35and, due to the formation of a common sear axis43or trigger axis262, the sear4and the disconnector3can be tilted individually and jointly or rotated within limits. The sear4and the disconnector3are at least partially laterally mounted by the trigger26. It can also be seen that in the installation situation a sear spring41is held on both sides of the trigger26by the bearing sleeve24in the trigger unit2. The curved rear continuous leg of the sear spring41engages on the underside of the trigger housing23in the exemplary embodiment shown (see also e.g.FIG.5A). This type of spring support can also be provided by the person skilled in the art in other ways, such as by means of appropriate support points on the inside of the lower housing1. However, according to the present disclosure, the two loose ends of the sear spring41are supported on the sear4on the underside of the sear spring supports412provided for this purpose. This causes a sear edge44of the sear4to be prestressed upwards in the direction of the hammer21. The hammer21is prestressed in the installation situation using the corresponding hammer spring211. The hammer spring211is stretched in the usual way against the hammer21with the center connecting piece from below and can be supported by the bearing sleeve24, which holds the trigger26. In the embodiment shown, e.g. in conjunction withFIG.3, projecting hammer spring supports261can be provided laterally on the trigger26, which act as abutments for the hammer spring211and thus prevent the hammer spring211from resting on the sear spring41.
Due to the support of the hammer spring211, according to the present disclosure, on the hammer spring supports261provided for this purpose, but basically also on the bearing sleeve24or the sear spring41, there is also a force transmission which pushes the trigger lever26with its trigger rear263downwards in the normal direction93. This connection is advantageous for the design of the trigger unit2according to the present disclosure, since it transmits a force to be overcome to the trigger26and thus noticeably to the shooter on the trigger264, which is perceived as the "first stage" and defines the resistance in the pull, which will be explained later. The analysis of the followingFIGS.5to11makes it clear to the person skilled in the art that the objects according to the present disclosure can be achieved by means of the one-piece components shown as examples, in particular the trigger26, the sear4, the disconnector3and the hammer21. It should be noted at this point that multi-part sears4and/or disconnectors3are also conceivable, which interact in an analogous way. InFIGS.4A and4B, the sear4and the disconnector3are shown enlarged. The disconnector3has a hook31on the upper side which interacts with the hammer hook213. At its rear end the disconnector3has a back end33, which in the transverse direction92, as shown, can have a smaller extension than the center or front section. This makes accommodation/insertion in the trigger rear263easier. The disconnector3can, as shown, form a kind of support lug in its front section for guiding along the upper side of the sear4. The guide and/or also the support on the upper side of the sear4can also be achieved by an alternative, functionally identical design of the pairing of bearing recess42and disconnector pivot32. Further, the distance442, d, between the axis262(which is the same as the axis43) and the outermost end of the sear4can be seen. The meaning of this distance will be explained in connection withFIGS.5A-5Dand the detail ofFIG.10. The disconnector3has a disconnector pivot32on its underside, which serves for accommodation and rotatable mounting on the upper side of the sear4and which defines a disconnector axis35in the transverse direction92. In addition, a receptacle for a disconnector spring34is provided on the underside of the disconnector3. The diameter and depth of this receptacle, which is better visible in cross-section e.g. inFIG.5A, are adapted to the disconnector spring34in such a way as to decrease the risk of it slipping out sideways. In a special embodiment, the sear4, as enlarged in detail C inFIG.4B, also has a spring recess46. This spring recess46is formed on the upper side, i.e. facing the disconnector3, and serves, like the receptacle in the disconnector3, to at least partially mount the disconnector spring34and protect it against loss. In the advantageous further embodiment shown, the spring recess46is partially open in at least one transverse direction92, which facilitates assembly, as the disconnector spring34does not have to be compressed to the point where it can be inserted into the recess or receptacle. A further aid for the assembly is provided by a ramp461provided at the side in the area of the opening to the spring recess46. Because the ramp461rises in the direction of the spring recess46, the disconnector spring34can be inserted more easily from the side, i.e. moved over it.
In all the cases described, however, the function of the disconnector spring34is the same in that it prestresses the disconnector3around the disconnector axis35, i.e. substantially upwards in the direction of the hammer hook213. The bearing recess42is substantially complementary in shape and function to the disconnector pivot32, whereby a partial rotation of the disconnector3, i.e. within defined limits, is made possible in addition to the mounting. The assembly of the sear4and the disconnector3is therefore carried out by shifting from one side in the transverse direction92, whereby an independent disassembly or disintegration during operation is avoided by the lateral limitation within the trigger unit housing23or also lower housing1of the firearm. FIGS.5to8describe the function of the trigger unit2in more detail. The sectional views of the different rest and working positions ofFIGS.5A,5C,6A,7A,8A, and8B correspond to a side view through the center plane along the section line A-A′, as shown as an example for the rest position inFIG.2B. The sectional views of the different resting and working positions ofFIGS.5B,5D,6B, and7Bcorrespond to a side view through the plane along the section line B-B′, as shown as an example for the rest position inFIG.2B, which substantially corresponds to a side view without the “left side wall” of the trigger unit housing23. The rest position71, the first trigger stage72(1st stage) and the second trigger stage73(2nd stage) are illustrated by dotted lines in the area of the trigger264and/or the trigger rear263. FIGS.5A and5Bshow the rest position71of the trigger unit. The hammer21is tensioned, i.e. the hammer spring211attempts to rotate the hammer head counter-clockwise around the hammer axis212(FIG.2) and rests on the hammer spring supports261. The hammer21has at least one hammer cam215on its outer surface in the area of the hammer axis212, which is held in the rest position by a sear edge44of the sear4. The sear edge44is prestressed by the sear spring41against the hammer21by engaging in the sear spring supports412. As shown, the trigger lever26is preferably integrally designed and has a trigger264which projects substantially downward in the normal direction93. In addition, the trigger lever26has an opening in its center section and in the rear direction in the rear263to accommodate the sear4and the disconnector3. InFIGS.5A and5Bit can be seen very clearly that the trigger264of the rest position71is prestressed by the hammer spring211, because the trigger lever26is pushed down. It can also be seen very clearly that the selector6is in the “safe” position, whereby the control shaft61in one section blocks the disconnector3on the back end33on the upper side and prevents a deflection upwards. A comparative examination ofFIGS.5C and5Dshows that a slight deflection to the rear is possible when a first force is applied to the trigger264, whereby the trigger rear263is rotated upwards until the trigger lever26comes into contact on its inner surface25with the underside of the sear4in the contact area of detail D. This slight idle travel is also referred to as pull and can be clearly perceived by the shooter through the retention force of the hammer spring211on the trigger26. This first trigger resistance is thus perceived between the rest position71and the end of the pull. The end of the pull is thus referred to as the first trigger stage72, which is also often referred to as the “first stage” in the Anglo-American linguistic area. 
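The two-stage force behavior just described can be summarized compactly. The following notation (a trigger travel coordinate x and stage forces F1 and F2) is not used in the patent itself and is only an illustrative sketch of the relationships described in connection withFIGS.5to7:

    \[
    F_{\text{trigger}}(x) \;\approx\;
    \begin{cases}
    F_{1}, & 0 \le x < x_{72} \quad \text{(pull, up to the first trigger stage)}\\
    F_{1} + F_{2}(x), & x_{72} \le x \le x_{73} \quad \text{(second stage)}
    \end{cases}
    \]

Here x runs from the rest position71to the second trigger stage73, x_{72} marks the first trigger stage72, F_{1} is the resistance produced by the hammer spring211pressing the trigger rear263downwards via the hammer spring supports261, and F_{2}(x) stands for the additional resistance required to rotate the sear4and disengage the sear edge44from the hammer cam215, as described further below.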
The first trigger stage72of this two-stage trigger unit2can be perceived, as shown inFIGS.5C and5D, for example. The design according to the present disclosure allows the same perception of the first trigger stage72even in the unsecured condition of the firearm, e.g. when the selector6is moved to the position "single fire" or "continuous fire," as a comparison withFIGS.6and7shows. InFIGS.5C and5D, the rest position71of the trigger264and the trigger rear263are clearly marked in dotted lines. A further deflection of the trigger264to the rear beyond the first trigger stage72is prevented in the "safe" position by the trigger rear263resting on both sides with its upper side against correspondingly designed sections of the control shaft61. Detail D fromFIG.5Ais shown enlarged inFIG.9. This shows particularly advantageous embodiments, which e.g. consist of a sear protrusion45formed on the underside of the sear4. This allows a defined contact position between the inner surface25of the trigger lever26and the underside of the sear4, whereby the friction can be minimized and the reaching of the first trigger stage72can be perceived better. A further embodiment is the incline of the inner surface25sloping backwards, as shown inFIG.9. This inclined surface can also have an advantageous influence on the force transmission between the trigger rear263and the sear protrusion45by being substantially at right angles to it, provided that the incline is formed at the corresponding angle. This allows a very precise triggering of the trigger unit and the reaching of the first trigger stage72. From the context and the description, it is easy to understand that different sears4can be provided, with sear protrusions45that protrude to different extents. As shown inFIG.9, these sear protrusions45can be integrally formed on sear4. In this way a fine adjustment of the pull can be carried out by selecting the desired remaining distance between the inner surface25and the sear protrusion45of the respective sear4. Similarly, an adjustment device451, preferably designed as an adjustment screw (e.g. a grub screw or worm screw) or as a prismatic adjustment member451, can serve to adjust how far the sear protrusion45protrudes from the underside. FIGS.11A and11Bshow two prismatic adjustment members451as examples, which can be inserted laterally into a recess of the sear4corresponding to the rough outer contour of the prism. Due to the differently rounded edges of the prismatic adjustment member451, a sear protrusion45projecting from the sear4on the underside to different extents can be formed by pushing it into the desired position, as a comparison ofFIGS.11A and11Bclearly shows. The adjustment members451are sufficiently wide in the transverse direction92to ensure a stable bearing in the corresponding recess of the sear4. The prismatic adjustment members451are shown as examples of three-sided prisms; four-, five- or even higher-sided prisms are basically also conceivable. A complementary, or also alternative, possibility for fine adjustment would be to provide different trigger levers26with correspondingly adapted inner surfaces25. FIG.6shows the situation where the selector6is put in the "single fire" position and the control shaft61with the corresponding sections allows a slight further rotation of the trigger rear263about the trigger axis262.
The function of the trigger unit2up to the point where the first trigger stage72is reached has already been sufficiently described in connection withFIG.5; when the trigger264is deflected further back, a second, usually higher trigger resistance is perceived. This second trigger resistance results in part from the direct force transmission of the trigger lever26on the sear4: after the inner surface25contacts the sear4, the two must be rotated together about the trigger axis262, while the hammer spring211still attempts to push the trigger26downwards. On the other hand, the sear edge44of the sear4must be disengaged from the hammer cam215of the hammer21. InFIGS.6A and6B, the rest position71and the first trigger stage72are therefore schematically indicated as dotted lines on the trigger264before the second trigger stage73is reached by releasing the sear edge44from the hammer cam215. As shown inFIG.5, the auto sear unit5is still in its rest position. A further deflection of the trigger264to the rear, i.e. a further upward movement of the trigger rear263, is limited by the control shaft61. When the hammer21is released, it rotates around the hammer axis212(see e.g.FIG.8A) and accelerates toward the firing pin within the central recess of the bolt carrier11. The disconnector3attempts to rotate upwards around the disconnector axis35due to the prestress of the disconnector spring34, which is made possible at least within certain limits by the position of the selector6, until the back end33contacts the corresponding section of the control shaft61at the top. Of course, this only applies in the case of a pulled trigger264; releasing the trigger would require the first trigger resistance to be overcome anew, and so on. Since the breech opens after the shot is fired and the bolt carrier11moves backwards, the hammer21rotates backwards again and is caught in this position with its hammer hook213by the hook31of the disconnector3. The bolt carrier11is moved forward again by a closing spring, whereby a new cartridge is fed from the magazine into the cartridge chamber of the barrel and the bolt head is locked with the barrel. The hammer21is thus caught by the disconnector3after each shot in "single fire." Before firing another shot, the trigger264must first be released forward until the sear edge44is again positioned in front of the hammer cam215. As the trigger264continues to move forward, the hook31is disengaged from the hammer hook213. Thus again, at least the second trigger resistance must be overcome to reach the second trigger stage73. Another situation is described byFIG.7, in which the position "continuous fire" of the selector6is set. Due to the (in most cases) slide-like design of the section of the control shaft61corresponding to the back end33, in this position the disconnector3is pressed down after the release of the hammer21. With the previously described shot firing in "single fire" mode, the disconnector3can engage with the hammer21, while with "continuous fire" an engagement of the hook31in the hammer hook213is suppressed. In order to prevent the hammer21from scraping along the underside of the bolt carrier11when the bolt carrier11moves forward in the case of "continuous fire," the auto sear unit5comes into play in a manner known to persons skilled in the art. In the "continuous fire" position, the prestressing of the auto spring52causes the auto sear51to engage briefly with the auto sear hook214of the hammer21during the return movement of the bolt carrier11.
When the bolt carrier11is advanced, the hammer21is held until the locking process is completed and the bolt carrier11strikes the bottom of the auto sear51, whereby the hammer21is automatically released again. A significant advantage of the trigger unit of the present disclosure is considered to be the possibility of moving the selector6into the "safe" position even when the hammer21is in the "struck" position and the trigger unit2is therefore not tensioned. This situation is illustrated inFIG.8. As can be seen fromFIG.8A, the hammer21is in the struck position, as can be the case with a misfire, i.e. a non-ignited cartridge. The selector6is shown in the "continuous fire" position, whereby the situation is analogous to the "single fire" position. Due to the design of the trigger unit2according to the present disclosure, i.e. due to the separation of the sear4and trigger lever26despite the use of a common trigger lever axis262or sear axis43, the trigger rear263can be moved downwards into the "safe" position when the selector6is adjusted, as shown inFIG.8B. In this way the sear4can bear from below against the hammer21under prestress without obstructing the hammer21during a new loading process, and it immediately engages the hammer cam215again. The auto sear unit5can also be brought back into the rest position unaffected by the position of the struck hammer21by adjusting the selector6. This would be impossible with a one-piece trigger lever, which would engage the hammer21directly at the front. The situation inFIG.8Bthus shows the selector6in the "safe" position, whereby the trigger264is deflected at least until reaching the second trigger stage73. Another embodiment of the present disclosure concerns the formation of the sear edge44, which has a special shape in the contact area with the hammer cam215. An enlarged, albeit schematic, representation of the detail E fromFIG.5Cshows the sear edge44, which preferably has an inclined and/or a convex shape on the surface facing the hammer cam215. A convex curvature of this surface makes it possible, during the substantially arcuate movement of the sear4about the trigger axis262once the first trigger stage72has been reached, for the reduction in the contact surface between the sear edge44and the hammer cam215to lead to a homogeneous increase in the second trigger resistance. The resulting surface pressure thus increases substantially linearly as the contact surface diminishes, whereas an inhomogeneous increase in the trigger resistance would occur with a sear edge44of right-angled design. It may be advantageous in certain cases if, as shown, the sear edge44has a convex curvature with a radius r441. Measured from the trigger axis262or sear axis43to the vertex of the curvature, the distance is d442. This radius r is approximately equal to the distance d, and preferably smaller than the normal distance d442between the vertex of the convex curvature and the sear axis43(seeFIG.10). In addition, for smaller radii, the vertex can also be located off-center on the sear edge44in the direction of rotation around the sear axis43. These correlations can be easily optimized by the person skilled in the art. Preferably, as shown inFIG.10, the surface of the sear edge44is convex with regard to an axis parallel to the sear axis43. Its radius r441, in relation to the distance d442(FIG.4A) between its apex and the trigger axis262or sear axis43(which is the same), lies in the range 0.8 d<r<1.2 d, preferably 0.85 d<r<1.1 d.
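The geometric relation between the radius r441and the distance d442can be restated compactly; the numeric example below is arbitrary and chosen only for illustration, it does not come from the patent:

    \[
    0.8\,d < r < 1.2\,d, \qquad \text{preferably} \quad 0.85\,d < r < 1.1\,d
    \]

For instance, for an assumed distance d442of 10 mm between the apex of the convex curvature and the sear axis43, the preferred window for the radius r441would be 8.5 mm to 11 mm.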
Especially preferred are such relations with r<d. The trigger unit2according to the present disclosure is primarily described as a drop-in trigger unit, wherein at least the hammer21, the hammer spring211, the disconnector3, the disconnector spring34, the sear4, the sear spring41and the trigger lever26are arranged in a trigger unit housing23according to the aforementioned exemplary embodiments to form a drop-in trigger unit. It has proved to be advantageous if socket set screws27, as shown for example inFIG.2A, are provided for bracing the drop-in trigger unit. These socket set screws27, penetrating the trigger unit housing23on the underside, are arranged so that they can be actuated from above, whereby the positional tolerance in the lower housing1of a firearm can be decisively reduced. The trigger units of the present disclosure are not restricted to the exemplary embodiment shown and described, but can be adapted and modified in various ways. This applies above all to the adaptation to other available weapons, but also to the dimension and geometry of the individual parts. The materials that can be used are the same as in the prior art; the same applies to the manufacturing processes. REFERENCE SIGN LIST
1 Lower housing
11 Bolt carrier
12 Grip
13 Magazine well
14 Magazine catch
15 Bolt catch
2 Trigger unit
21 Hammer
211 Hammer spring
212 Hammer axis
213 Hammer hook
214 Auto sear hook
215 Hammer cam
23 Trigger unit housing
24 Bearing sleeve
241 Bushing safety
25 Inner surface
26 Trigger lever
261 Hammer spring support
262 Trigger axis
263 Trigger rear
264 Trigger
27 Socket set screw
3 Disconnector
31 Hook
32 Disconnector pivot
33 Back end
34 Disconnector spring
35 Disconnector axis
4 Sear
41 Sear spring
412 Sear spring support
42 Bearing recess
43 Sear axis
44 Sear edge
441 Radius r
442 Distance d
45 Sear protrusion
451 Adjustable sear protrusion
46 Spring recess
461 Ramp
5 Auto sear unit
51 Auto sear
52 Auto spring
6 Selector
61 Control shaft
71 Rest position
72 1st trigger stage
73 2nd trigger stage
91 Barrel direction (front)
92 Transverse direction (left)
93 Normal direction (above) | 29,446
11859931 | DETAILED DESCRIPTION The present disclosure is directed to firearms that can receive and fire bullets from ammunition with different cartridge case sizes. For example, the firearm may fire a bullet from either a short, a medium, or a long cartridge after that cartridge is received by a receiver of the firearm. This firearm may include a receiver portion and a bolt portion that lock together at different relative locations when cartridges of different lengths are received by the receiver portion. The receiver portion may include a first type of alignment retention features (e.g. protrusions) and the bolt portion may include a second type of alignment retention features (e.g. recessions). Once a firearm cartridge is located inside of the firearm, it may be fired based on the receiver and the bolt portions being locked together via physical engagement of the different types of alignment retention features. FIG.1illustrates two different parts that may be used in a firearm consistent with the present disclosure.FIG.1includes a receiver portion110and a portion of a bolt or bolt carrier140. Receiver portion110includes a hole120and several sets of protrusions130. The hole120may extend all the way through the length of receiver110; the part of hole120located at the top ofFIG.1may be part of a barrel of the firearm. A lower portion of receiver110, not visible inFIG.1, may include the part of hole120that is configured to receive cartridges of different lengths. The portion of the bolt or bolt carrier140ofFIG.1includes various features on an internal surface of the bolt portion140. These features include recessions150that are designed to mate with protrusions130of receiver portion110when the recessions of the bolt portion and the protrusions of the receiver portion110are aligned. The portion of the bolt140may be sized to fit onto and over all parts of receiver portion110. When the recession features150are aligned with protrusions130, the bolt portion140and the receiver portion110may be locked together based on physical engagement of the protrusions130of receiver110and the recessions of the bolt portion. The process of locking mechanisms110&140together may include the rotation and movement of bolt portion140.FIG.1also includes a series of slots160. These slots could be included in the bolt portion140to reduce the mass of the bolt portion as compared to a similar bolt portion that does not include slots160. These slots may also allow relative movement of the bolt portion and receiver when the receiver110and the bolt portion140are not locked together. FIG.2illustrates side and cut out views of parts of a receiver and a bolt that have interlocking features.FIG.2includes receiver210that is similar to the receiver110ofFIG.1.FIG.2also illustrates cut out bolt portions230. Receiver210includes several sets of protrusions220that may engage with protrusion and recession features240. The several different cut out bolt portions230may represent a bolt that has been cut apart. The recessions240ofFIG.2may engage protrusions220when the recessions240are aligned with protrusions220. Here again operation of the bolt may include rotating the bolt portion to a position where the recessions240do not engage protrusions220. FIG.3illustrates a receiver and a bolt portion that are locked together in an orientation.FIG.3includes receiver310and a five-sided bolt portion340. The receiver includes protrusions320and a hole330that extends through the receiver310.
Even though recessions of the bolt portion340cannot be seen in this view, protrusions320may be engaged by recessions included in bolt portion340. FIG.4illustrates a series of parts of a firearm when a small cartridge is about to be locked into the firearm.FIG.4includes receiver410, cutout bolt portions430, a small sized ammunition cartridge460, and an assembly450that contains a firing pin. The cutout bolt portions430include recessions440that may engage with protrusions420of receiver410. FIG.5illustrates relative positions of the series of parts ofFIG.4when the small cartridge is locked into a receiver.FIG.5includes receiver510, cutout bolt portion520, firing pin assembly530, barrel540, and several cartridges (550,550A,560, &570).FIG.5includes the same type of protrusions in receiver510that were illustrated inFIGS.1-4; here, however, these protrusions are not numbered. The cutout bolt portion520ofFIG.5also includes recessions that engage the receiver protrusions at a first relative location. While not illustrated inFIG.5, cutout bolt portion520and firing pin assembly530may be part of a bolt carrier assembly that maintains a same relative position between the cutout bolt portion520and firing pin assembly530. FIG.5illustrates how far the small cartridge550A protrudes into receiver510when a bolt carrier assembly is locked onto receiver510. The firing pin assembly530pushes cartridge550A into receiver510as illustrated by double arrowed line560. The cartridges ofFIG.5also include a tapered part or shoulder580located at the end of the cartridges where a bullet (i.e. projectile)590is placed when the cartridges are manufactured. FIG.6illustrates relative positions of the series of parts ofFIG.5when a medium sized cartridge is locked into a receiver.FIG.6includes receiver610, cutout bolt portion620, firing pin assembly630, barrel640, and cartridge650.FIG.6includes the same type of protrusions in receiver610that were illustrated inFIGS.1-4; here, however, these protrusions are not numbered. The cutout bolt portion620ofFIG.6also includes recessions that engage the receiver protrusions at a second relative location. While not illustrated inFIG.6, cutout bolt portion620and firing pin assembly630may be part of a bolt carrier assembly that maintains a same relative position between the cutout bolt portion620and firing pin assembly630. FIG.6illustrates how far the medium sized cartridge650protrudes into receiver610when a bolt carrier assembly is locked onto receiver610. The firing pin assembly630pushes cartridge650into receiver610as illustrated by double arrowed line660. Note that the different length of cartridge650as compared to cartridge550A ofFIG.5results in different protrusions of the receiver of these figures engaging different sets of recessions of the cutout bolt portions. InFIG.5, all six rows of receiver protrusions engage recessions of the cutout bolt carrier portion, yet inFIG.6only five rows of the receiver protrusions engage recessions of the cutout bolt carrier portion. FIG.7illustrates relative positions of the series of parts ofFIGS.5-6when a larger sized cartridge is locked into a receiver.FIG.7includes receiver710, cutout bolt portion720, firing pin assembly730, barrel740, and cartridge750.FIG.7includes the same type of protrusions in receiver710that were illustrated inFIGS.1-4; here, however, these protrusions are not numbered.
The cutout bolt portion720ofFIG.7also includes recessions that engage the receiver protrusions at a third relative location. While not illustrated inFIG.7, cutout bolt portion720and firing pin assembly730may be part of a bolt carrier assembly that maintains a same relative position between the cutout bolt portion720and firing pin assembly730. FIG.7illustrates how far the larger cartridge750protrudes into receiver710when a bolt carrier assembly is locked onto receiver710. The firing pin assembly730pushes cartridge750into receiver710as illustrated by double arrowed line760. Note that the different length of cartridge750as compared to cartridges550A and650ofFIGS.5-6results in different protrusions of the receiver of these figures engaging different sets of recessions of the cutout bolt portions. InFIG.5all six rows of receiver protrusions engage recessions of the cutout bolt carrier portion, inFIG.6five rows of the receiver protrusions engage recessions of the cutout bolt carrier portion, and inFIG.7only four rows of the receiver protrusions engage recessions of the cutout bolt carrier portion. FIGS.5,6, and7illustrate that the firearms consistent with the present disclosure may be configured to receive cartridges that include shoulders. Even so, firearms consistent with the present disclosure may be configured to receive any type of firearm cartridges, whether those cartridges include shoulders or not. FIG.8illustrates three different images including an image of a receiver portion, an image of a bolt portion, and an image that includes the receiver portion surrounded by the bolt portion. Receiver portion810located on the left ofFIG.8includes a plurality of protrusions820. The middle image ofFIG.8includes a bolt portion830that includes slots840and firing pin assembly850. Firearm cartridge860is illustrated in the central image ofFIG.8to show a location where a firearm cartridge would be located when that cartridge is locked in place within receiver portion810. For clarity, this central image ofFIG.8does not show the receiver portion located within bolt portion830. The slots840in the center ofFIG.8may be locations where the bolt portion could move relative to the receiver portion810when the slots840are oriented in a direction that does not lock the receiver portion810and the bolt portion830together. Slots840are illustrated with dashed lines to indicate that these slots may not be voids that cut through a cross section of bolt portion830. The image on the right side ofFIG.8includes protrusions820of receiver portion810, slots840of bolt portion830, and firing pin assembly850, where most of these features are depicted with dashed lines that identify those portions that are contained within bolt portion830. Firing pin assembly850may be a sub-assembly that is attached to bolt portion830. Here a firing pin contained within firing pin assembly850may be used to strike cartridge860when that cartridge is fired.
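To make the length-dependent locking ofFIGS.5-7easier to follow, the sketch below restates it as code. The row counts (six, five, and four engaged rows for the short, medium, and long cartridges) come directly from the description ofFIGS.5-7; everything else, including the names and the assumption that rows are counted from the rearmost forward, is illustrative only and not part of the disclosed firearm.

    # Illustrative sketch only: restates the locking behavior described for
    # FIGS. 5-7, where a shorter cartridge lets the bolt lock further forward
    # so that more rows of receiver protrusions engage recessions of the bolt.

    # Row counts taken from the description of FIGS. 5-7.
    ENGAGED_ROW_COUNT = {
        "short": 6,   # FIG. 5: all six rows of receiver protrusions engaged
        "medium": 5,  # FIG. 6: five rows engaged
        "long": 4,    # FIG. 7: four rows engaged
    }

    def engaged_rows(cartridge_size: str) -> list[int]:
        """Return indices of the protrusion rows engaged for a cartridge size.

        Indexing rows 1..6 from the rear of the receiver is an assumption made
        for illustration; the figures only state how many rows are engaged.
        """
        count = ENGAGED_ROW_COUNT[cartridge_size]
        return list(range(1, count + 1))

    for size in ("short", "medium", "long"):
        print(size, engaged_rows(size))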
FIG.9includes two images that depict the operation of a mechanism consistent with the present disclosure.FIG.9includes receiver portion910, bolt portion920, firing pin sub-assembly930, cartridge case940, bullet950, gas tube960, cam970, piston980, and barrel990. The images ofFIG.9show changes in relative position of receiver portion910and bolt portion920shortly after a firearm cartridge has been fired. Note that in the left image ofFIG.9, bullet950has just passed an input of gas tube960that is attached to barrel990. As bullet950passes the input of the gas tube960, gas produced by the firing of gunpowder included in the firearm cartridge flows down gas tube960. The gas flowing into the tube is illustrated by an arrowed line marked with the capital letter "G." As the bullet950moves down barrel990, gas G pushes on piston980, forcing piston980down such that cam970turns and then pushes bolt portion920in a backward (downward inFIG.9) direction. Arrowed lines, one labeled D1in the left image ofFIG.9and another labeled D2in the right image ofFIG.9, show relative motion between bolt portion920and receiver910. This relative motion continues to location D2where cartridge case940is forced to exit the firearm. FIG.9also includes protrusions illustrated as black blocks and slots illustrated as black lines between which the protrusions (black blocks) ofFIG.9are located. The relative orientation of these protrusions inFIG.9allows the relative motion of bolt portion920and receiver portion910based on the protrusions ofFIG.9and recessions (not shown inFIG.9) included in bolt portion920being disengaged. WhileFIG.9illustrates the protrusions being located within the slots, relative motion of bolt portions and receiver portions does not require the protrusions being located exactly asFIG.9illustrates. Rotational motion to unlock a bolt from a receiver may only require a few degrees of rotation. FIG.10illustrates two different views of parts of a firearm consistent with the present disclosure.FIG.10includes a side view1000A located above a partial expanded perspective view1000B of a semi-automatic rifle capable of firing firearm cartridges of different sizes. Each of the views ofFIG.10includes a first set of gear teeth1005, a second set of gear teeth1010, gear1015, constant force spring1020, receiver1025, and bolt assembly1030. The upper firearm image1000A also includes barrel1035, gas tube1040, cartridge magazine1045, handle1050, trigger guard1055, and butt1060. After a cartridge is fired in the firearm, pressurized gas generated from the firing of the cartridge moves into the gas tube1040. This gas may force a piston in a (backward/left) direction opposite to the direction that a bullet exits (forward/right) barrel1035.FIG.10identifies that the backward end of the firearm ofFIG.10includes butt1060and that the forward end of the firearm is located at the right side of barrel1035. Force from the gas may force the piston, such as the piston ofFIG.9, to actuate a cam mechanism that may cause a portion of bolt assembly1030to rotate and that may also force the bolt assembly1030backward (to the left ofFIG.10). The cam mechanism ofFIG.10may be included in a back part of bolt assembly1030. Motion of the bolt assembly1030may also cause gear teeth1005to move backward (left) to a point where a cartridge case is ejected from the rifle ofFIG.10. The movement of gear teeth1005will force gear1015to rotate and engage gear teeth1010. This action will pull gear teeth1010toward the forward end of the rifle, stretching constant force spring1020. After the cartridge case is ejected from the rifle, force exerted by spring1020will force the bolt assembly1030forward. At this moment a new cartridge located in cartridge magazine1045may be pushed into receiver1025and the cam of bolt assembly1030will return to its original position, locking bolt assembly1030and receiver1025together.
The process of firing the rifle ofFIG.10may continue until all new cartridges in magazine1045have been fired. Alternatively, a cartridge locked into the rifle ofFIG.10may be removed by pulling on a lever (not illustrated) attached to bolt assembly1030. Note that the partial expanded perspective view1000B ofFIG.10illustrates that gear teeth1005may include two separate rows of parallel teeth. In such an instance, the rifle ofFIG.10may include two different gears, where each respective gear moves along either a row of teeth on the left side of the rifle or along a row of teeth on the right side of the rifle. The two gears may also engage two separate rows of teeth of gear teeth1010. FIG.11illustrates a gas tube that may be included in a firearm and that is coupled to a bolt assembly of the firearm. Item1110ofFIG.11is a gas tube that may include a piston as discussed in respect toFIGS.9-10.FIG.11also includes a bolt assembly including elements or features1120,1130,1135, &1140. Item1120may be a back portion of the bolt assembly that includes an internal cam mechanism. Item1130may be the bolt assembly portion discussed in respect toFIGS.1-9. Item1135may be a firing pin sub-assembly (discussed in respect toFIGS.4-9) that acts to push cartridges into a receiver. Sub-assembly1135may include a firing pin that strikes a primer included in a firearm cartridge. Item1145ofFIG.11is a recession of a portion of a bolt assembly discussed in respect toFIGS.1-8. When the bolt assembly ofFIG.11closes, the central rod (subassembly1135) on the bolt may push a round (cartridge) into the chamber until a shoulder of the round hits a matching taper in a chamber where the round is received. This may then cause the bolt to stop moving forward, while the bolt carrier continues forward under inertia and spring force. Those forces may cause a cam pin and surface to rotate the bolt into the locked position. The fit between the cartridge and chamber may be tight yet may not be perfect. When the cartridge fires, the ductile metal of the cartridge case (typically brass) expands, sealing the chamber and preventing or mitigating gas leakage. This process is called obturation. This may avoid gas leakage without having an interference fit between the cartridge and the walls where the round (cartridge) is chambered. The rounds (cartridges) and rifles may be made to close enough tolerances that the cartridge expanding under pressure can form a seal based on obturation of the ductile cartridge case metal. FIG.12illustrates a cam mechanism coupled to a piston that may be used to force a portion of a bolt to rotate and move in a backward direction after a cartridge is fired in a firearm consistent with the present disclosure.FIG.12includes images of a cam in three different positions1200-A1,1200-A2, and1200-A3.FIG.12also includes images1200-B1and1200-B2of a back end of the cam mechanism ofFIG.12. The cam mechanism ofFIG.12includes piston1210and groove1220. The top images1200-A1&1200-B1show the cam mechanism ofFIG.12in a resting position. The middle images1200-A2&1200-B2show piston1210moving in the direction of arrow1230(toward the right ofFIG.12) based on the movement of gas as discussed in respect toFIG.9. Note that piston1210moves along groove1220, forcing the cam mechanism to rotate in counterclockwise direction1240. The bottom image ofFIG.12illustrates movement of the piston continuing along the direction of arrow1230after piston1210has reached the end of groove1220.
After piston1210reaches the end of groove1220, further motion of the piston as indicated by arrow1230forces the entire cam mechanism ofFIG.12to the right along arrows1250. After a cartridge case has been ejected from the firearm based on movement of a bolt assembly attached to the cam mechanism, the cam mechanism and the bolt assembly may be forced back into an original position based on spring force as discussed in respect toFIG.10. WhileFIGS.10-12include features of an automatic or semi-automatic firearm, a firearm may include a manually operated bolt assembly. Here a person operating the firearm could chamber a round by grabbing a protrusion and moving that protrusion in an upward (or downward) direction to rotate a bolt assembly. The person could then manually move the bolt assembly backward, potentially ejecting a previously fired cartridge case, push the bolt forward, and then rotate the bolt assembly back to a locking position. This operation is very similar to the operation of conventional bolt action rifles (e.g. the M1903 Springfield rifle). Here, however, sets of recessions and protrusions consistent with the present disclosure would allow cartridges of different lengths to be locked into a receiver and fired. The foregoing detailed description of the technology herein has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the technology to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to best explain the principles of the technology and its practical application to thereby enable others skilled in the art to best utilize the technology in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the technology be defined by the claims. | 19,307
11859932 | The figures depict various embodiments of the present disclosure for purposes of illustration only. Numerous variations, configurations, and other embodiments will be apparent from the following detailed discussion. DETAILED DESCRIPTION Disclosed herein is a suppressor assembly having reduced gas back flow and a suppressor baffle for use in a suppressor assembly, in accordance with some embodiments of the present disclosure. In one example, a suppressor includes a baffle stack coaxially arranged within an outer housing, which can be cylindrical. The baffle stack has a plurality of generally conical or cone-like baffle structures connected to a baffle stack wall, which can also be cylindrical. The region within the baffle stack wall defines an inner volume that includes the path of the projectile through central openings of each baffle structure. An outer volume is defined between the baffle stack wall and the outer housing, such that the outer volume is concentric with and positioned radially outside of the inner chamber. Individual baffle structures taper proximally from the baffle stack wall to a central opening on the bore axis. At least some of the baffle structures define a through-opening located between the central opening and the baffle stack wall, providing an alternate flow path from baffle to baffle for gases in a radially outer region of the inner chamber. A conduit wall extends between and connects adjacent baffle structures so as to define a gas flow pathway in a radially outer portion of the inner volume. For example, the gas flow pathway passes around a proximal one of the adjacent baffle structures and through the opening defined in a distal one of the adjacent baffle structures. Conduits between adjacent baffle structures can be arranged in alternating sides of the inner chamber to promote a sinuous gas flow path. Flow-directing structures in the outer volume may include pairs of diverging vanes and pairs of converging vanes with respect to gases flowing distally through the suppressor. These flow-directing structures can promote gas flow between the inner chamber and outer volumes by creating localized regions of reduced or increased pressure. For example, converging vanes adjacent the proximal end of the baffle stack can direct gases from the outer volume into the inner chamber with a flow direction that crosses the bore axis. Pairs of diverging vanes can promote gas flow from the inner volume to the outer volume via ports defined in the baffle stack wall. In some embodiments, the suppressor can include an integrated flash hider in the distal end of the suppressor assembly to reduce the visible signature. In one example, the flash hider includes a first flash hider portion and a second flash hider portion. The first flash hider portion vents gases directly from the inner volume, such as gases flowing along the bore axis. The second flash hider portion is located radially outside of the first flash hider portion and can be configured to vent gases directly from the outer volume, from the inner volume, or both. In some embodiments, the flash hider includes a third flash hider portion arranged to vent gases directly from the outer volume in parallel with gases venting from the inner volume. In one example, the third flash hider portion includes ports distributed around a radially outer portion of the endcap that vent gases directly from the outer volume. 
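The semi-independent venting of the inner and outer volumes described above can be illustrated with a toy lumped-parameter model. Nothing in the sketch below comes from the patent: the first-order decay form, the vent coefficients, and all numeric values are assumptions chosen only to show why two parallel venting paths, coupled by ports, drain trapped pressure faster than a single path and thereby reduce the pressure available to drive gas back toward the receiver.

    # Toy lumped-parameter sketch (assumed model, not from the patent): two
    # chamber pressures vent in parallel toward ambient pressure (1.0), with
    # a cross-flow term standing in for the ports between the chambers.

    def simulate(p_inner=10.0, p_outer=6.0, k_inner=0.8, k_outer=0.5,
                 k_cross=0.3, dt=0.001, steps=5000):
        """Forward-Euler integration of two coupled first-order vents."""
        for _ in range(steps):
            cross = k_cross * (p_inner - p_outer)          # port flow analogue
            p_inner += (-k_inner * (p_inner - 1.0) - cross) * dt
            p_outer += (-k_outer * (p_outer - 1.0) + cross) * dt
        return p_inner, p_outer

    # With both paths open, the residual pressures decay toward ambient faster
    # than when the outer path is closed and all gas must leave via the inner path.
    print(simulate())
    print(simulate(k_outer=0.0))  # closing the outer path leaves more residual pressure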
When the firearm is discharged, the projectile travels through the suppressor along the bore axis, followed by combustion gases. Gases initially expand in a blast chamber in the proximal end portion of the suppressor. A first portion of combustion gases continues along the bore axis and enters the baffle stack through a central opening in the first baffle, sometimes referred to as the blast baffle. A second portion of combustion gases flows into the outer chamber between the baffle stack and outer housing. The second portion of gases may include gases deflected outward from the central axis in the blast chamber, for example. Gases in the outer chamber are largely isolated from and can vent semi-independently of gases flowing through the inner chamber. To more evenly fill the suppressor and to promote gas flow through most of the suppressor volume, some gases can be directed across the bore axis to create a sinuous flow. This elongated flow path delays the exit of gases from the inner chamber, which effectively reduces sound signature. In one embodiment, combustion gases are generally directed in an off-axis direction through the baffle stack as a result of one or more features. A baffle structure can have a central opening that is shaped to promote off-axis flow through the central opening. The central opening to the generally-conical baffle structure can have a step, an offset, a notch, or otherwise can define a non-circular opening, for example, to promote gas flow through the opening in a direction transverse to the central axis. In one such embodiment, the central opening is circular as viewed along the central axis, and has a first half of the opening that is axially offset from an opposite second half of the opening so as to provide an enlarged area as viewed transversely through the opening. Ports along the baffle stack wall direct gases from the inner chamber to the outer chamber, or vice versa. For example, the baffle stack can define ports so that gases near the radially outer portion of a baffle structure can pass into the outer chamber rather than stalling at a dead end between the cone and the outer wall of the baffle stack. Also, gases in a radially outer portion of the inner chamber can pass from one baffle to the next baffle via a conduit that extends between openings in the cone-like baffle structures. When used alone or in combination with other flow-directing features, the baffle stack promotes and/or amplifies a sinuous flow through the inner chamber. Features of the suppressor can be employed to amplify a sinuous or otherwise off-axis gas flow through the suppressor's inner chamber, a tortuous flow path through the outer chamber, and multiple gas flow paths through the flash hider. Various features can be used individually or in combination to provide suitable attenuation of the audible signature, attenuation of the visible signature, and reduction in back flow of pressurized gases into the firearm's receiver, particularly with some suppressors having an overall diameter of greater than two inches. Numerous variations and embodiments will be apparent in light of the present disclosure. General Overview As noted above, non-trivial issues may arise that complicate weapons design and performance of firearms. 
For instance, one non-trivial issue pertains to the fact that the discharge of a firearm normally produces an audible and visible signature resulting from rapidly expanding propellant gases and from the projectile leaving the muzzle at a velocity greater than the speed of sound. It is generally understood that attenuating the audible report may be accomplished by slowing the rate of expansion of the propellant gases. Reducing the visible signature or visible flash also can be accomplished by controlling the expansion of gases exiting the muzzle. Reducing flash is a function of temperature, pressure, barrel length, and the type of ammunition being fired, among other factors. However, attenuating muzzle flash can adversely affect the performance of sound attenuation and vice versa. Suppressors can have additional challenges associated with reducing visible flash and attenuating sound. In some suppressor designs, for example, slowing down the expansion and release of combustion gases from the muzzle can undesirably result in trapping and delayed release of pressurized gas from the suppressor, which results in a localized volume of high-pressure gases. As a natural consequence, the pressurized gases within the barrel take the path of least resistance to regions of lower pressure. Such condition is generally not problematic in the case of a bolt-action rifle because the operator opens the bolt to eject the spent casing in a time frame that is much greater than the time required for the gases in the suppressor to disperse through the distal (forward) end of the suppressor. However, in the case of a semi-automatic rifle, automatic rifle, or a machine gun, the bolt opens very quickly after firing (e.g., within 1-10 milliseconds) to reload the firearm for the next shot. In this short time, pressurized gases remain in the suppressor and the barrel. Some of the gases remaining in the barrel and the suppressor therefore follow the path of least resistance through the barrel and out through the chamber towards the operator's face rather than following the tortuous path through the suppressor. To avoid introducing particulates and combustion residue to the chamber, and to avoid combustion gases being directed towards the operator's face, it would be desirable to reduce the pressure build up within the suppressor and therefore reduce or eliminate back flow into the receiver of autoloading firearms. Thus, reducing the visible signature while also reducing the audible signature of a firearm presents non-trivial challenges. To address these challenges and others, and in accordance with some embodiments, the present disclosure relates to a suppressor having reduced gas back flow, a suppressor baffle for use in a suppressor assembly, and a suppressor with an integrated flash hider. Compared to traditional baffle-type suppressors, a suppressor of the present disclosure can reduce localized volumes of high-pressure gas and the resulting flow of combustion gases backward through the barrel and into the rifle's receiver after firing, such as may occur in semiautomatic and automatic rifles. The inner and outer chambers divide the gases into inner and outer volumes that can, in some embodiments, better expand to fill and flow through the entire suppressor volume. A suppressor (or a portion thereof) according to the present disclosure can be manufactured by molding, casting, machining, 3-D printing, or other suitable techniques. 
For example, additive manufacturing (also referred to as 3-D printing) can facilitate manufacture of complex geometries that would be difficult or impossible to make using conventional machining techniques. One additive manufacturing method is direct metal laser sintering (DMLS). As will be appreciated in light of this disclosure, and in accordance with some embodiments, a suppressor assembly configured as described herein can be utilized with any of a wide range of firearms, such as, but not limited to, machine guns, semi-automatic rifles, automatic rifles, short-barreled rifles, and submachine guns. Some embodiments of the present disclosure are particularly well suited for use with a belt-fed machine gun. Suitable host firearms and projectile calibers will be apparent in light of this disclosure. Although generally referred to as a suppressor herein for consistency and ease of understanding the present disclosure, the disclosed suppressor is not limited to that specific terminology and alternatively can be referred to as a silencer, a sound attenuator, a sound moderator, a signature attenuator, or other terms. Also, although generally referred to herein as a baffle structure, the disclosed baffles are not limited to that specific terminology and alternatively can be referred to, for example, as a baffle cone, a tapered wall, or other terminology, whether or not such structure follows a true conical geometry. Further, although generally referred to herein as a flash hider for consistency and ease of understanding the present disclosure, the disclosed flash hider is not limited to that specific terminology and alternatively can be referred to, for example, as a flash suppressor, a flash guard, a suppressor end cap, or other terms. Numerous configurations will be apparent in light of this disclosure. Example Suppressor Configurations FIGS.1and2illustrate front and rear perspective views, respectively, of a suppressor assembly100(or simply "suppressor"100), in accordance with an embodiment of the present disclosure. In this example, the suppressor100has a cylindrical shape that extends along a bore axis10from a proximal end portion12to a distal end portion14. The diameter of the outer housing102can be 1.5-3.0 inches in some embodiments, including 1.5-2.0 inches, 2.0-2.5 inches, and 2.5-3.0 inches. The cylindrical shape is not required, and other geometries are acceptable, including a cross-sectional shape that is hexagonal, octagonal, rectangular, oval, or elliptical, for example. An outer housing102extends between a distal housing end portion104and a proximal housing end portion106. The proximal housing end portion106optionally includes a threaded portion111that can be used to connect the suppressor100to an adapter or quick-disconnect assembly (not shown) suitable for attachment to a firearm barrel, for example. A flash hider200is retained in the distal end portion14. The proximal end portion12defines a blast chamber112. As can be seen inFIG.2, for example, the blast chamber112includes a diffusor cone114that tapers radially inward as it extends distally to meet the baffle structure126of a baffle122. The diffusor cone114defines a plurality of openings. In some embodiments, the blast chamber112is sized to accommodate a muzzle brake, flash hider, or similar muzzle attachment on the barrel of the firearm.
For example, the suppressor100is constructed to be installed over a muzzle attachment on the firearm barrel, where the muzzle attachment is received in the blast chamber112; however, no such muzzle attachment is required for effective operation of suppressor100. In one example embodiment, the blast chamber112has an axial length from 0.5 inch to about 3 inches. Numerous variations and embodiments will be apparent in light of the present disclosure. Referring now toFIGS.3-6, various perspective views show a baffle stack120in accordance with the present disclosure.FIG.3is a front and side perspective view of a baffle stack120with a diffusor cone114and flash hider200.FIG.4is a rear and side perspective view of a baffle stack120with the diffusor cone114and flash hider200.FIG.5is a rear and side perspective view of baffle stack120with the flash hider200.FIG.6is an exploded, top and rear perspective view of a baffle stack120with flash hider200. In some embodiments, the baffle stack120has three or more baffles122between a flash hider200and diffusor cone114. In the example shown, the baffle stack120has six baffles122a-122f, where the baffles122are arranged sequentially and with the central openings136on the central axis or bore axis10to define a projectile flow path therethrough. As shown inFIGS.3-4, a mounting portion116with diffusor cone114is positioned proximally of the first baffle122a. Note that the mounting portion116has a cylindrical portion116athat is generally the same size as the outer housing102(shown inFIG.1). The diffusor cone114tapers in size from the inside of the cylindrical portion116ato join the baffle wall segment124of the first baffle122a. The diffusor cone114defines openings to direct a portion of gases to the outer chamber of the suppressor100. In some embodiments, the baffle stack120includes a plurality of individual baffles122, each of which includes an annular (e.g., cylindrical) baffle wall segment124and one or more baffle structures126of generally conical shape that are connected to the baffle wall segment124and taper to a central opening. Other shapes of the baffle wall are acceptable including a rectangular, hexagonal, octagonal, oval, or other cross-sectional geometry. In other embodiments, the baffle stack120can be made as a single component, such as using additive manufacturing. In embodiments having individual baffles, the baffle wall segments124abut or connect to one another to define a tubular baffle stack wall125. The baffle wall segments124can be connected to one another by welding, a threaded interface, or an interference fit, for example. In other embodiments, the entire baffle stack120, or portions thereof, can be formed as a single monolithic structure. For example, the baffle stack120can be made using additive manufacturing techniques such as direct metal laser sintering (DMLS). In embodiments where the baffle stack120is a monolithic structure, the baffle stack wall125may not distinctly define individual baffle wall segments124, but the baffle stack120can be considered as having baffle portions corresponding to the equivalent structure formed as distinct baffles122. Principles discussed herein for a baffle stack120having distinct baffles122apply to a baffle stack120formed as a unitary structure and vice versa. The structure of individual baffles122is discussed in more detail below. The baffle stack120includes flow-directing structures130on the outside of the baffle stack wall125. 
In various examples, the flow-directing structures130can be connected to one or both of an outer surface of the baffle stack wall125and an inner surface of the outer housing102. The flow-directing structures130can be vanes, walls, ridges, partitions, or other obstructions that cause collisions with flowing gases and result in a non-linear gas flow through the outer chamber109. In some examples, flow-directing structures130can include alternating vanes130′ that extend part way between the outer housing102and the baffle stack wall125, where the alternating position of the flow-directing structures130can define an oscillating flow path for the gases as they flow toward the exit at the distal end of the suppressor100. In the examples ofFIGS.3-6, the flow-directing structures130are configured as vanes130′ having a planar or helical shape. The vanes130′ are on the outside of the baffle stack wall125and arranged in a zig-zag or herringbone-type pattern. For example, each baffle wall segment124has vanes130′, each of which extends transversely to the bore axis10and has an axial length roughly equal to the axial length of the baffle wall segment124. In some instances, part of a vane130′ may extend beyond the end of the baffle wall segment124, such as illustrated. Ends of adjacent vanes130′ can be directed towards each other to make a V shape or vertex132, though the ends of vanes130′ may or may not close the vertex132. As shown in this example, vanes130′ define a gap or opening137(also shown inFIGS.7A-7D) at the vertex132for gas flow therethrough. Each vertex132is positioned to point generally along the bore axis10either distally or proximally. In some embodiments, vanes130′ are generally arranged in a circumferential grid with vertices132arranged along lines that are parallel to the bore axis10, and in rows arranged circumferentially around the baffle stack120. Vanes130′ defining a vertex132pointing proximally can be referred to as diverging vanes130′ and vanes130′ defining a vertex132pointing distally can be referred to as converging vanes. In this example, the first baffle122adefines an initial gas port127located adjacent the vertex of converging vanes130′ on the first baffle122a. This initial gas port127is positioned to amplify the initial phase of a sinuous flow of gases within the inner chamber108by directing gases into the inner chamber108in a direction that crosses the bore axis10(e.g., downward as oriented inFIG.6). In the examples ofFIGS.3-6, the initial gas port127is shown as being on the top side of the baffle stack120for ease of discussion. Note, however, that the baffle stack120and suppressor100are not constrained to any particular rotational orientation and the initial gas port127can be on the side, bottom, or other location. When the initial gas port127is positioned along the top of the baffle stack120, for example, it directs gases downward across the bore axis10to reinforce or accentuate a sinuous flow pattern that is oriented in a vertical plane; in other rotational orientations of initial gas port127, the sinuous flow of gases may be similarly rotated about the bore axis10. Flow of gases through the suppressor100is discussed in more detail below.
In some embodiments, when the gas port127is along the top of the baffle stack120, additional gas ports128are positioned along sides of the baffle stack120. The gas ports128can be positioned to permit gases to pass into the outer chamber109rather than stall in a corner or similar region within the inner chamber108. In some embodiments, the pressure is greater in the outer chamber109, resulting in gas ports128functioning as inlet ports for gas flow from the outer chamber109to the inner chamber108. Note, however, that gas dynamics within the suppressor100depend on many factors and the gas flow through various ports could reverse directions during the firing cycle. For example, gases may flow in either direction between the inner chamber108and the outer chamber109. The flash hider200is installed adjacent the final baffle122(baffle122fin this example) with portions of the flash hider200received within the baffle wall segment124. The flash hider200can be secured to the baffle stack120by welding, threaded engagement, a frictional fit, or by engagement with the outer housing102. Optionally, the flash hider200defines recesses221in the distal end portion to facilitate engagement with a spanner or other tool used to assemble the suppressor100with the mount110, or to screw the suppressor100onto the barrel or barrel attachment. Example embodiments of a flash hider200are discussed in more detail below.

Referring now toFIGS.7A-7D, a baffle122is illustrated in a side view, a bottom view, a top and rear perspective view, and a front perspective view, respectively, in accordance with an embodiment of the present disclosure. Baffle122in this example is also shown as baffle122din the exploded view ofFIG.6. Baffle122has a cylindrical baffle wall segment124connected to a generally conical baffle structure126that extends rearwardly as it tapers in size from the baffle wall segment124to the central opening136aligned with the bore axis10. As noted above, the central opening136provides a pathway for a projectile along the bore axis10. In this example, the baffle structure126generally has a frustoconical geometry with a linear taper. In other embodiments, the baffle structure126can have a stepped profile or other non-linear taper, as will be appreciated. In other embodiments, the baffle structure126can have a polygonal cross-sectional shape, such as a rectangle, hexagon, or star. In this example, the central opening136has a stepped shape (as viewed from the side) such that a step134extends horizontally through the center of the central opening136, dividing the central opening136into a first portion136a(e.g., an upper half) and a second portion136b(e.g., a lower half), where the first portion136ais axially offset from the second portion136b. As a result of the step134, the cross-sectional area of the central opening136in the direction of the bore axis is circular, and it is smaller than the cross-sectional area of the opening in an oblique, transverse (e.g., downward) direction. This larger opening allows for less restrictive gas flow in a somewhat oblique, transverse direction to the bore axis, thus promoting a sinuous flow path.
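The relationship between the axial and oblique areas of the central opening136can be illustrated with a short geometric sketch. The following is not taken from the disclosure; it assumes an idealized thin wall and a circular opening of diameter d, and corresponds most closely to the angled-bore variant described below rather than to the stepped opening itself. Viewed along the bore axis10, the opening has area

\[ A_{\text{axial}} = \frac{\pi d^{2}}{4}, \]

while a bore cut at an angle \(\theta\) to the bore axis presents an elliptical opening with semi-axes \(d/2\) and \(d/(2\cos\theta)\), giving

\[ A_{\text{oblique}} = \frac{\pi d^{2}}{4\cos\theta}. \]

At \(\theta = 45^{\circ}\), for example, \(A_{\text{oblique}} \approx 1.41\,A_{\text{axial}}\), so the oblique direction sees roughly 40% more open area than the axial direction, consistent with the less restrictive transverse flow described above.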
In one embodiment, gases flow through the central opening136in a direction approximately parallel to the wall of the baffle structure126, such as an angle with the bore axis10from 15-60 degrees, including 30-50 degrees, 20-40 degrees, 25-35 degrees, about 30 degrees, about 35 degrees, about 40 degrees, or about 45 degrees. The step134can be formed, for example, by machining away the upper or lower part of the baffle structure126at the central opening136. In other embodiments, the central opening136can be bored at an angle with respect to the bore axis10, such as an angle of 30-60° to result in the central opening136having an oval shape. In yet other embodiments, the larger portion136acan have an enlarged cross-sectional area as a result of a crescent-shaped recess added to the bore area, a second bore formed at a downward angle and intersecting the central opening136to increase the size of part of the central opening136, a notch added to the area of the opening, or other approach. Between the first portion136aof the central opening136and the baffle wall segment124, the baffle structure126defines a first through opening142. Between the second portion136bof the central opening136and the baffle wall segment124, a radially outer portion of the baffle structure126defines a second through opening146. In the example shown, the first through opening142and second through opening146are oriented 180° from each other on opposite sides of the central opening136. A conduit144or chute extends between adjacent baffle structures126and directs a portion of gases through the inner chamber108using through openings142,146. For example, the conduit144extends rearwardly from the outside of one baffle structure126to the inside of an adjacent baffle structure126while also connecting to the baffle wall segment124. The second through opening146can direct gases into the conduit144that leads to the first through opening142of a distally located baffle122. In some embodiments, the second through opening146is bounded in part by the baffle wall segment124. When baffles122configured as shown in the example ofFIGS.7A-7Dare assembled sequentially with each baffle being 180° out of phase with the preceding baffle, the conduit144of one baffle122receives gases from the second through opening146of a preceding baffle and delivers those gases to the next baffle via the first through opening142of the subsequent baffle. The result is a sinuous gas flow within the inner chamber that crosses the bore axis10. This sinuous flow pattern can occur along a vertical plane or other plane as desired. Gas ports128along the sides of the baffle wall segment124direct gases into the outer chamber109from radially outer regions along the sides of the inner chamber108. Sample gas flow paths are discussed in more detail below. Flow-directing structures130configured as vanes130′ are on the outside of the baffle wall segment124. Vanes130′ are arranged in a zig-zag pattern moving circumferentially around the baffle wall segment124. As a result, circumferentially adjacent vanes130′ have either a diverging or converging arrangement, where the vertex132of each pair of vanes130′ is directed along the bore axis10. In this example, the vanes130′ defining each vertex132do not make contact (or do not make complete contact) so as to define an opening137between ends of the converging vanes130′ and to permit gases to flow through the vertex132. 
In some embodiments, the distal end of each vane130′ has a V-shaped notch while the proximal end of the vane130′ is substantially straight. In some embodiments, each vertex132can have an opening137of the same or different size compared to other vertices132. Also, openings between diverging vanes130′ or converging vanes130′ can be of the same or different size and geometry. Numerous variations and embodiments will be apparent in light of the present disclosure. The baffle wall segment124can define one or more gas ports128positioned between diverging vanes130′. When the gas port127is along the top of the baffle stack120, such as shown inFIG.5, gas ports128are positioned along the sides of the baffle wall segment124. Gas ports128in this example have a semicircular shape, but other shapes are acceptable. Optionally, the baffle wall segment124can define gas ports in various other locations. Further, a radially outer portion of a given baffle structure126may define one or more through openings142,146that permit passage of gases within the inner chamber108, such as gases moving between adjacent baffle structures126.

Referring now toFIGS.8and9, two front perspective views illustrate longitudinal sections of a suppressor100, in accordance with an embodiment of the present disclosure. Broken lines and arrows in these figures represent example gas flow paths. Note, however, that the arrows are for illustration only and may not represent all gas flows and may not accurately represent changes in gas flow patterns that may occur throughout the firing cycle, as will be appreciated. The suppressor100defines an inner chamber108radially inside of the baffle stack wall125and an outer chamber109between the baffle stack wall125and the outer housing102. As high pressure gases enter the suppressor100, the gases expand initially into the blast chamber112. A first portion of gases flows into the inner chamber108via the central opening136of the first baffle122a. A second portion of gases passes into the outer chamber109by flowing around the baffle structure126of the first baffle122aand through openings in the diffusor cone114. After entering the outer chamber109, gases generally continue to flow towards the distal end portion14where these gases vent through the flash hider200. A gas port127in the baffle wall segment124of the first baffle122a(or blast baffle) is positioned to direct gases into the inner chamber108in a direction crossing the bore axis10. The central opening136of the second baffle structure126bis stepped to direct gases across the bore axis10in the same general direction as the gas port127, which is downward in this example. Similarly, conduits144around baffle structures126direct gases in the radially outer portion of the inner chamber108to flow across the bore axis10through first through openings142. Some of the gases in the inner chamber108pass through gas ports128to the outer chamber109. For gases flowing through the inner chamber108, individual features of the baffle122can be included to promote flow in a direction across the bore axis10and disrupt gas flow along the bore axis10. In combination, these features promote one or more sinuous or non-linear gas flow paths through the inner chamber108.
These features include the gas port127in the first baffle122athat directs gases across the bore axis10, the stepped profile of the central opening136that causes gases to flow through the central opening136in a direction transverse to the bore axis10, and a conduit144and first through opening142in the baffle cone126. The suppressor100can also include features that result in low backpressure, which reduces the flow of gases back through the barrel and receiver during the firing cycle. One such feature is a second through opening146in the baffle structure126that allows gases to pass from one baffle to the next via conduit144. Another feature is a gas port128positioned to draw gases into the outer chamber109from the radially outer portion of the inner chamber108, rather than stalling in the corner between the baffle structure126and the baffle stack wall125. Yet another feature is the outer chamber109, in which a large portion of total combustion gases volume flows with generally less resistance than the more tortuous flow path through the inner chamber108. Further, a flash hider200is configured to vent gases from the outer chamber109either directly or after first entering the inner chamber108with less flow restriction than traditional baffle suppressors featuring a central opening only. Examples of a flash hider200are discussed below. For gases in the outer chamber109, vanes130′ in diverging and converging pairs increase turbulence and force a tortuous flow path to the distal end portion14. Collisions with the vanes130′ and other flow-directing structures130result in energy loss and transfer of heat from the gases. As noted above, diverging vanes130′ create a localized region of lower pressure that draws gases out of the inner chamber108via gas ports128. Conduits144on opposite sides of the inner chamber108are positioned sequentially to amplify a sinuous or alternating gas flow path through the inner chamber108. The conduits144direct gases through the crossflow opening142in the baffle structure126and across the bore axis10. Baffles122in the baffle stack120need not have the same features in all embodiments. For example, only the first baffle122adefines a gas port127and gas ports128may be present in alternating baffles122. Additionally, adjacent baffles can be rotated 180° or some other amount to promote a sinuous and/or swirling gas flow. Numerous variations and embodiments will be apparent in light of the present disclosure. FIG.10Aillustrates a top sectional view of the suppressor100ofFIG.8andFIG.10Bshows a front perspective view of the section shown inFIG.10A, where the section is taken 90° to that ofFIGS.8-9, in accordance with an embodiment of the present disclosure. Gases flowing through the inner chamber108can pass through gas ports128to the outer chamber109. In this example each gas port128is positioned in the corner between a baffle structure126and baffle stack wall125. Such placement avoids or reduces stalled gas flow in these areas. To facilitate gas flow from the inner chamber108to the outer chamber109, rather than in the reverse direction, gas ports128can be positioned between diverging vanes130′ which create a localized region of low pressure. First through openings142and second through openings146in baffle structures126are also shown along with conduits144. In this example, gases in the inner chamber108can exit the suppressor through additional second outer volumes236of the flash hider200, and gases in the outer chamber109can exit the suppressor through radially outer volumes222. 
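The low-backpressure contribution of the outer chamber109described above can be approximated, as a rough first-order analogy not found in the disclosure, by treating the inner and outer flow paths as lumped flow resistances in parallel:

\[ \frac{1}{R_{\text{eff}}} = \frac{1}{R_{\text{inner}}} + \frac{1}{R_{\text{outer}}}, \qquad R_{\text{eff}} = \frac{R_{\text{inner}}\,R_{\text{outer}}}{R_{\text{inner}} + R_{\text{outer}}} < \min\left(R_{\text{inner}}, R_{\text{outer}}\right). \]

Under this simplification, if the tortuous inner chamber108presents, say, three times the resistance of the outer chamber109, then \(R_{\text{eff}} = 0.75\,R_{\text{outer}}\), and the same vented mass flow is sustained at one quarter of the driving pressure that the inner path alone would require. Real suppressor flow is compressible and unsteady, so this analogy only indicates the direction of the effect, not its magnitude.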
Referring now toFIGS.11A-11D, a flash hider200is shown in a top and front perspective view, a front view, a rear perspective view, and a side cross-sectional view as viewed along line D-D ofFIG.11B, respectively, in accordance with an embodiment of the present disclosure. InFIG.11D, part of the outer housing102is shown. The flash hider200extends along the bore axis10from a proximal end202to a distal end203. An outer wall224extends between and connects the proximal end202and distal end203. The proximal end202defines a central opening208for passage of a projectile and gases. Ports230in the outer wall224adjacent the proximal end202provide an alternate entry point for gases to enter the flash hider200. In this example, the flash hider200includes a flange or distal wall204extending radially outward from the distal end203of the outer wall224, in effect providing an endcap as part of the flash hider200. In some embodiments, the rim206of the endcap or distal wall204can be connected to the outer housing102, such as by welding, a frictional fit, or a threaded connection. The outer wall224defines an expanding volume as it extends distally. The outer wall224directs propellant gases away from the bore axis10and limits the expansion of the propellant gases. In some embodiments, the outer wall224has a frustoconical shape that defines an outer wall angle A with respect to the bore axis10. Examples of acceptable values for the outer wall angle A include 10-45°, including 15-20° and 16-18°. In other embodiments, the outer wall224can have other cross-sectional shapes, such as a square, rectangle, hexagon, or other polygonal or elliptical shape. The outer wall224(or portions thereof) can have a linear or non-linear taper from the distal end203to the proximal end202. Examples of a non-linear taper include a curved (e.g., elliptical or parabolic) profile or a stepped profile. The volume of the flash hider200within the outer wall224includes a first flash hider portion216and a second flash hider portion220. The first flash hider portion216vents a first portion of gases that enter the flash hider200through the central opening208. For example, the first flash hider portion216vents gases flowing through the inner chamber108along the bore axis10. The second flash hider portion220vents a second portion of gases that enter the flash hider200through one or more ports230in the outer wall224of the flash hider200. For example, the second flash hider portion220vents gases from the outer chamber109and/or gases in the radially outer portion of the inner chamber108. In this example, the second flash hider portion220vents gases from both the outer chamber109and the inner chamber108via radially outer volumes222. In some embodiments, the first flash hider portion216includes an inner volume216awith a conical shape that expands distally from the central opening208. As shown inFIG.11B, for example, the inner volume216aincludes the frustoconical volume circumscribed by and defined in part by the radially inner faces242of the flow partitions240. The first flash hider portion216also includes first outer volumes216bpositioned radially outside of and continuous with the inner volume216a. In this example, each first outer volume216bis positioned radially between the inner volume216aand the circumferential wall244, where each first outer volume216bis also located circumferentially between adjacent flow partitions240of the second flash hider portion220.
The first portion of gases entering through the central opening208can expand along the inner volume216aand can further expand into the first outer volumes216b. In one example, the inner volume216ahas a frustoconical geometry extending along the bore axis10. In some such embodiments, the inner faces242of the flow partitions240have an inner wall angle B (shown inFIG.11D) with the bore axis10from 4-15°, including 5-8°, or 6-7°, for example. Such a value for the inner wall angle B has been found to slow down propellant gases exiting to the environment as well as to reduce the amount of hot propellant gases that mix with ambient air/oxygen. Accordingly, and without being constrained to any particular theory, it is believed that such an inner wall angle B permits adequate gas expansion yet also desirably reduces the size of a “Mach disk” or “flow diamond”—appearing as an orange or red flash—as propellant gases transition from supersonic to subsonic flow. The second flash hider portion220includes a plurality of radially outer volumes222that are interspersed circumferentially with the first outer volumes216bof the first flash hider portion216. The radially outer volumes222are defined within flow partitions240connected to the outer wall224. In this example, each flow partition240connects to the proximal end202of the flash hider200adjacent the central opening208and extends forward to the distal end203. Accordingly, each flow partition240isolates one of the radially outer volumes222from the first flash hider portion216and in part defines the inner volume216aof the first flash hider portion216. In this example, three radially outer volumes222generally resemble sectors of an annular region located between the frustoconical inner volume216aand the outer wall224. The second flash hider portion220can have other numbers of radially outer volumes222, such as two, four, or some other number. In one example, each flow partition240generally has a U shape as viewed from the distal end203. The flow partitions240can be rectangular, rounded, or have some other geometry. The radially outer volumes222are distributed and spaced circumferentially about the bore axis10and are located radially outside of the inner volume216aof the first flash hider portion216. In some embodiments, all flow partitions240have the same dimensions and are evenly distributed about the bore axis10, although this is not required. The second flash hider portion220optionally also includes additional second outer volumes236that are positioned laterally between adjacent flow partitions240and radially between the outer wall224and a circumferential wall244between adjacent flow partitions240. In this example, each additional second outer volume236is located radially outside of the first outer volume216bof the first flash hider portion216, so that a first outer volume216band an additional second outer volume236share a region between adjacent flow partitions240and are separated by the circumferential wall244. The additional second outer volumes236are shown as having a reduced cross-sectional area compared to the radially outer volumes222, but this is not required. For example, each additional second outer volume236can have a reduced radial dimension, but a greater circumferential dimension compared to these dimensions of the radially outer volumes222, resulting in a cross-sectional area that is about equal to or even greater than that of the radially outer volume222. 
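The flash-reduction rationale for the inner wall angle B discussed above can be related to a commonly cited empirical correlation for underexpanded jets, often attributed to Ashkenas and Sherman. The correlation is not part of this disclosure and is offered only as background: the Mach disk forms at a distance

\[ \frac{x_{M}}{d_{e}} \approx 0.67\sqrt{\frac{p_{0}}{p_{\infty}}} \]

from the exit, where \(x_{M}\) is the axial distance to the Mach disk, \(d_{e}\) the exit diameter, \(p_{0}\) the stagnation pressure of the exiting gases, and \(p_{\infty}\) the ambient pressure. Because \(x_{M}\) scales with the square root of \(p_{0}\), any expansion that lowers the pressure of the gases before they reach ambient air, such as the gradual expansion along the inner faces242at angle B, tends to shrink the shock-cell structure and the luminous flash associated with it.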
Gases can enter the radially outer volumes222of the second flash hider portion220from the inner chamber108via ports230in the proximal portion of the outer wall224, in some embodiments. When the flash hider200is part of a suppressor assembly, some or all of the gases flowing through the suppressor along a radially outer flow path can enter the second flash hider portion220through ports230. Absent any openings through the flow partition240, and absent any gases entering the second flash hider portion220through the distal end203, gases entering the central opening208are isolated from and cannot flow through the radially outer volumes222of the second flash hider portion220. One advantage of venting the radially outer volumes, or off-axis flow, of the suppressor100is to reduce the pressure of the gases flowing along the bore axis10. In doing so, flash can be reduced. Venting through the second flash hider portion220also can reduce the pressure in the suppressor100and therefore reduce the back flow of gases into the firearm's chamber, such as when the suppressor100is used with semi-automatic or automatic rifles. Further, isolating the gas flow through the second flash hider portion220from the first flash hider portion216can inhibit mixing and turbulence of gases exiting the flash hider200, and therefore reduce the visible signature of the firearm, as will be appreciated. In some embodiments, ports230into radially outer volumes222are oriented generally parallel to the bore axis10so as to prevent a line-of-sight into the suppressor100through radially outer volumes222. In one such embodiment, the proximal end portion of the outer wall224protrudes radially outward at these ports230so as to preclude a line of sight into the suppressor100. As shown inFIGS.11B-11C, for example, these ports230are generally oriented parallel to the bore axis10due to a radial expansion of the outer wall224.

FIGS.12A-12Cillustrate a suppressor100with a flash hider200in accordance with another embodiment of the present disclosure.FIG.12Ais a front perspective view of a distal end portion of a suppressor100.FIG.12Bis a side view of a section as viewed along line BC-BC ofFIG.12A, andFIG.12Cis a front perspective view of the section as viewed along line BC-BC. In this embodiment, the flash hider200includes a first flash hider portion216that includes the inner volume216aand first outer volumes216b, similar to the embodiment discussed above with reference toFIGS.11A-11D. A second flash hider portion220includes radially outer volumes222and additional second outer volumes236, similar to those discussed above. Compared to the embodiment ofFIGS.11A-11D, this embodiment also includes a third flash hider portion246with vents248positioned radially outside of each additional second outer volume236. Gases in the outer chamber109can exit the flash hider200directly through vents248. Gases in the outer chamber109can also exit the suppressor directly through radially outer volumes222. Unlike the embodiment ofFIGS.11A-11D, the cross-sectional view ofFIG.12Cillustrates gas flow paths for gases in the outer chamber109to exit the suppressor100via vents248and radially outer volumes222, where the first flash hider portion216and additional second outer volumes236vent gases from the inner chamber108. Gases from the inner chamber108can also exit the suppressor100via radially outer volumes222, such as shown inFIGS.12B-12C.
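How the vented flow apportions among the central opening208, ports230, and vents248can be reasoned about with the standard choked-orifice relation from compressible-flow theory; this relation is textbook material rather than a formula from the disclosure. At the high pressure ratios present early in the firing cycle, each opening is approximately choked and passes a mass flow of

\[ \dot m = C_{d}\,A\,p_{0}\sqrt{\frac{\gamma}{R\,T_{0}}}\left(\frac{2}{\gamma+1}\right)^{\frac{\gamma+1}{2(\gamma-1)}}, \]

where \(C_{d}\) is a discharge coefficient, \(A\) the effective port area, \(p_{0}\) and \(T_{0}\) the upstream stagnation pressure and temperature, \(\gamma\) the ratio of specific heats, and \(R\) the specific gas constant. Since \(\dot m\) is proportional to \(A\) at fixed upstream conditions, the split between on-axis and off-axis venting scales roughly with the relative areas of the central opening208and the ports230, all else being equal.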
FIGS.13A and13Billustrate part of a suppressor100with a flash hider200, in accordance with another embodiment of the present disclosure.FIG.13Ais a front perspective view showing a distal end portion of a suppressor100with the flash hider200.FIG.13Bis a side view showing a section as viewed along line B-B ofFIG.13A. In this embodiment, the flash hider200includes a first flash hider portion216that includes the inner volume216aand first outer volumes216b, similar to the embodiment discussed above with reference toFIGS.11A-11D. A second flash hider portion220is radially outside of the first flash hider portion216and includes radially outer volumes222positioned radially outside of the inner volume216a, and additional second outer volumes236positioned radially outside of first outer volumes216bof the first flash hider portion216. This embodiment further includes a third flash hider portion246radially outside of the second flash hider portion220. The third flash hider portion246includes vents248positioned radially outside of some or all of the additional second outer volumes236and radially outside of some or all of the radially outer volumes222. In this example, the third flash hider portion246includes six vents248distributed circumferentially. The first flash hider portion216vents gases that flow from the inner chamber108and enter the flash hider200through the central opening208. In this example, the second flash hider portion220vents gases flowing directly from the inner chamber108and directly from the outer chamber109. Gases in the inner chamber108can exit the flash hider200through the additional second outer volumes236or through radially outer volumes222via ports230. Gases in the outer chamber109can exit the flash hider200directly through radially outer volumes222. Thus, the second flash hider portion220vents gases from both the inner chamber108and outer chamber109in this example. In other embodiments, the second flash hider portion220can be configured to directly communicate only with the inner chamber108or only with the outer chamber109. For example, radially outer volumes222can communicate directly with the inner chamber108via ports230, such as shown inFIGS.12B-12C, so that the second flash hider portion220vents gases directly from the inner chamber108and vents248vent gases directly from the outer chamber109.

As will be appreciated in light of the present disclosure, a suppressor assembly100provides multiple gas flow paths that can be configured to reduce the audible and visible signature of the firearm. As discussed above, combustion gases can be divided into two volumes of gas that are largely separated from each other to more evenly and more completely fill the entire volume of the suppressor100. These gas volumes pass through the corresponding inner and outer chambers (with some mixing therebetween) before exiting the suppressor100through a flash hider200. Flow of part of the gases through the outer chamber can significantly reduce the back flow of pressurized gases into the firearm.
This mixing of gases between the inner chamber108and outer chamber109allows for better filling of the chambers by the combustion gases, longer flow paths, increased gas turbulence, better cooling, and a faster reduction in total energy of the gases. These, in turn, can produce the benefits described above. It will be appreciated that the gases flowing through the inner chamber108are slowed and/or cooled by the operation of the baffles122, which additionally induce localized turbulence and energy dissipation, thus reducing (or "suppressing") the sound and/or flash of expanding gases. For example, as the gases collide with baffles122and other surfaces in the suppressor, the gases converge and then expand again in a different direction. The various collisions and changes in velocity (direction and/or speed) result in localized turbulence, an elongated flow path, and heat and energy losses from the gases, thereby reducing the audible and visual signature of the rifle.

Further Example Embodiments

The following examples pertain to further embodiments, from which numerous permutations and configurations will be apparent.

Example 1 is a suppressor comprising a hollow tubular housing extending along a bore axis from a proximal end to a distal end. A baffle stack within the hollow tubular housing extends along the bore axis from a proximal baffle stack end to a distal baffle stack end. The baffle stack has a tubular baffle wall with a plurality of cone-like baffle structures connected to an inside of the baffle wall and tapering in a rearward direction to a central opening on the bore axis. The suppressor defines an inner volume inside of the tubular baffle wall and an outer volume between the tubular baffle wall and the hollow tubular housing. Flow-directing structures in the outer volume include pairs of diverging vanes and pairs of converging vanes with respect to gases flowing distally through the suppressor. A conduit wall extends between and connects adjacent baffle structures, wherein the conduit wall defines a gas flow pathway in a radially outer portion of the inner volume. The gas flow pathway passes around a proximal one of the adjacent baffle structures and through an opening defined in a distal one of the adjacent baffle structures.

Example 2 includes the subject matter of Example 1, where the tubular baffle wall defines an initial gas port adjacent the proximal baffle stack end.

Example 3 includes the subject matter of Example 2, where the initial gas port is positioned between a pair of converging vanes.

Example 4 includes the subject matter of any one of Examples 1-3 and further comprises a diffusor cone in a proximal end portion of the suppressor, the diffusor cone tapering in a distal direction from the tubular outer housing to the baffle stack and defining a plurality of openings, where the initial gas port is positioned adjacent a distal end portion of the diffusor cone.

Example 5 includes the subject matter of any one of Examples 2-4, where the initial gas port is configured to direct gases from the outer volume to the inner volume and through a vent opening in a baffle structure in a proximal end portion of the baffle stack.

Example 6 includes the subject matter of Example 5, where gases passing through the vent opening intersect with gases flowing along the bore axis.
Example 7 includes the subject matter of any one of Examples 1-6, where the suppressor includes two or more conduits with at least one conduit on a first side of the bore axis and at least one conduit on an opposite second side of the bore axis, and where the two or more conduits alternate sequentially between the first side of the bore axis and the second side of the bore axis.

Example 8 includes the subject matter of Example 7, wherein the suppressor includes at least three conduits.

Example 9 includes the subject matter of Examples 7 or 8, wherein the two or more conduits define a sinuous gas flow path through the inner volume.

Example 10 includes the subject matter of Example 9, wherein the sinuous gas flow path crosses the bore axis.

Example 11 includes the subject matter of any one of Examples 1-10, where the central opening of at least some of the baffle structures has a first portion and a second portion, where the first portion is axially offset relative to the second portion.

Example 12 includes the subject matter of Example 11, wherein the first portion is semicircular and the second portion is semicircular, so that as viewed along the central axis the first and second portions in combination define a circular central opening.

Example 13 includes the subject matter of any one of Examples 1-12, where the pairs of converging vanes and the pairs of diverging vanes generally define a zig-zag pattern around an outside of the tubular baffle wall, the pattern including circumferential rows of vanes and axial columns of vanes, wherein adjacent vanes in the circumferential rows have an alternating orientation with respect to the bore axis.

Example 14 includes the subject matter of Example 13, where individual vanes of the pairs of converging vanes and pairs of diverging vanes have a helical shape.

Example 15 includes the subject matter of Example 13 or 14, wherein vertices of pairs of converging vanes are aligned along first axes generally parallel to the bore axis, and wherein vertices of pairs of diverging vanes are aligned along second axes generally parallel to the bore axis, the first axes interspersed with the second axes around the baffle stack.

Example 16 includes the subject matter of any one of Examples 1-15 and further comprises a flash hider in fluid communication with the baffle stack and connected to the distal end of the hollow tubular housing.

Example 17 includes the subject matter of Example 16, wherein the flash hider includes a first flash hider portion configured to vent a first portion of gases from the inner volume and a second flash hider portion configured to vent a second portion of gases from the outer volume and from the inner volume.

Example 18 includes the subject matter of Example 17, wherein the flash hider further defines a third flash hider portion configured to vent gases directly from the outer volume.

Example 19 includes the subject matter of Example 18, wherein the second flash hider portion is radially outside of the first flash hider portion and the third flash hider portion is radially outside of the second flash hider portion.

Example 20 is a suppressor that includes a baffle stack with a cylindrical wall around an inner volume and extending along a central axis. The baffle stack includes a plurality of cone-like baffle structures each of which is connected to the cylindrical wall and tapers rearwardly to a central opening, where at least some of the baffle structures define a vent opening between the central opening and the baffle stack wall.
An outer housing around the baffle stack has an inner surface spaced from and confronting the cylindrical wall, where the suppressor defines an outer volume between the cylindrical wall of the baffle stack and the outer housing. Flow-directing features are in the outer volume. A diffusor cone is in a proximal end portion of the suppressor, the diffusor cone tapering in a distal direction between the outer housing and the baffle stack and defining a plurality of openings. A conduit wall extends between and connects adjacent baffle structures of the baffle stack, wherein the conduit wall defines a gas flow pathway in a radially outer portion of the inner volume. The gas flow pathway passes around a proximal cone of the adjacent baffle structures and through the vent opening defined in a distal cone of the adjacent baffle structures. An end cap is connected to a distal end of the outer housing, the end cap defining a central opening aligned with the central axis.

Example 21 includes the subject matter of Example 20, where a proximal end portion of the cylindrical wall of the baffle stack defines an initial gas port between a pair of converging vanes, the initial gas port in direct fluid communication with a vent opening in one of the plurality of baffle structures.

Example 22 includes the subject matter of Example 20 or 21, where the central opening of at least some baffle structures of the plurality of baffle structures defines a step as viewed from a side of the suppressor, such that a first portion of the central opening is spaced distally along the central axis from a second portion of the central opening.

Example 23 includes the subject matter of any of Examples 20-22, wherein the end cap is configured as a flash hider, the flash hider including a first flash hider portion configured to vent a first portion of gases directly from the inner volume and a second flash hider portion configured to vent gases from both the inner volume and the outer volume.

Example 24 includes the subject matter of Example 23, wherein the flash hider further defines a third flash hider portion configured to vent gases directly from the outer volume.

Example 25 is a suppressor baffle comprising a tubular baffle wall extending axially along a bore axis from a first end to a second end; a baffle structure connected to the tubular baffle wall and extending along the bore axis away from the tubular baffle wall and defining a central opening aligned with the bore axis. Flow-directing structures are on an outside of the tubular baffle wall and include vanes oriented transversely to the bore axis. The vanes include converging vanes and diverging vanes, wherein each pair of converging vanes and pair of diverging vanes generally defines a vertex and an open mouth opposite the vertex. The baffle structure defines a through-opening between the central opening and the tubular baffle wall. A conduit around the through-opening extends rearwardly and is configured to engage a baffle structure of a proximally located suppressor baffle. When two or more suppressor baffles are assembled together, the conduit defines a gas flow path around a rearward baffle structure and through the through-opening of the forward baffle structure.

Example 26 includes the subject matter of Example 25, wherein the vertex is an open vertex permitting gas flow through the vertex.
Example 27 includes the subject matter of Example 25 or 26, wherein individual pairs of diverging vanes and individual pairs of converging vanes direct gases along a helical gas flow path.

Example 28 includes the subject matter of any of Examples 25-27, wherein the tubular baffle wall is cylindrical.

Example 29 includes the subject matter of any of Examples 25-28, wherein the tubular baffle wall defines one or more openings adjacent an intersection between the baffle structure and the tubular baffle wall.

Example 30 includes the subject matter of Example 29, wherein each of the one or more openings is positioned between a pair of diverging vanes on the outside of the tubular baffle wall.

Example 31 includes the subject matter of any one of Examples 25-30, wherein the central opening has a stepped shape as viewed from the side, the stepped shape defining a first portion of the central opening that is axially offset from a second portion of the central opening.

Example 32 includes the subject matter of Example 31, wherein the first portion and second portion of the central opening together define a circular shape as viewed along the central axis.

Example 33 includes the subject matter of any of Examples 25-32, wherein the vanes are arranged in a zig-zag pattern around a circumference of the tubular baffle wall.

Example 34 includes the subject matter of any of Examples 25-33, wherein each of the vanes follows a helical path.

Example 35 is a suppressor baffle stack including a plurality of suppressor baffles as disclosed in Examples 25-34.

Example 36 includes the subject matter of Example 35, wherein the baffle stack includes at least three suppressor baffles.

Example 37 is a suppressor comprising the baffle stack of Example 36.

The foregoing description of example embodiments has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed. Many modifications and variations are possible in light of this disclosure. It is intended that the scope of the present disclosure be limited not by this detailed description, but rather by the claims appended hereto. Future-filed applications claiming priority to this application may claim the disclosed subject matter in a different manner and generally may include any set of one or more limitations as variously disclosed or otherwise demonstrated herein.
11859933 | DETAILED DESCRIPTION

Exemplary embodiments provide for a gunner stand that comprises an easily replaceable bearing system for use in the field that also reduces vibration from the vehicle on the bearings, decreasing wear and tear, and reducing audible rattling. Gunner stands, present in many armored vehicles, raise and lower from within the vehicle to allow a gunner to shoot from a turret at the top of the vehicle. In order to raise and lower the stand, the top platform that the gunner stands on may be attached to a scissor lift. The scissor lift includes bearings that slide along rails, allowing the platform to raise and lower. Over time, the bearings can wear out from use and vibrations of the vehicle. The bearings need to be replaced in the field, as sending the armored vehicle to a shop is not feasible during military operations. Therefore, the bearing system must be easily replaceable in the field and also withstand vibrations from the vehicle.

In general, a gunner stand comprises a top platform and a bottom platform. The bottom platform may be secured to the floor of an interior of a vehicle. The top platform is connected to the bottom platform by a scissor lift comprising a replaceable bearing system. The bearing system comprises a sliding shuttle having a backplate and a protruding area. The protruding area may have a hexagonal geometry, but is not limited to a hexagonal geometry. The bearing system also comprises biased members, including but not limited to leaf springs made of bearing material, the geometry of which complements the geometry of the shuttle. The geometry of the shuttle may be adjustable to allow for adjustment of the tension in the biased members. The biased members are removably attached to the shuttle. In some embodiments, the biased members are attached using screws and/or rivets. This allows for easy removal of the bearing system from the gunner stand for easy replacement in the field.

An exemplary gunner stand is depicted inFIG.1. An embodiment of the gunner stand includes two platforms: a bottom platform (100) attached to the floor of a vehicle, and a top platform (101) for a gunner to stand on. The two platforms (100,101) are connected by a scissor lift comprising four scissor lift legs (102a-102d), wherein the scissor lift is capable of raising and lowering the top platform (101). The scissor lift comprises four rails (103a-103d), two on each platform (100,101) parallel to each other. The scissor lift may comprise four legs (102a-102d), wherein each leg is attached to a bearing system (104) in the rails (103a-103d) on one end and attached to the platforms (100,101) at the other end by a hinge (105a-105d). The ends of the legs (102a-102d) attached to the rails (103a-103d) may slide back and forth by way of the bearing system (104), allowing the top platform (101) to raise and lower. In some embodiments, the rails (103a-103d) may be made of aluminum, but may be made of steel, bronze, or other metals and alloys. The scissor lift legs (102a-102d) may also include supports (106) to stabilize the legs (102a-102d) in place.

A front view of an embodiment of a bearing system (104) is depicted inFIG.2. The bearing system (104) depicted inFIG.2is loaded into a top rail (103), with a leg of the scissor lift (102) extending downward. As seen inFIG.2, a leg (102) of the scissor lift is attached to the front of the sliding shuttle (107).
The sliding shuttle (107) is attached to biased members (108a,108b), including but not limited to leaf springs, attached to the top and bottom of the shuttle (107). The bearing system (104), including the shuttle (107) and biased members (108a,108b), is loaded into the rail (103) by sliding the bearing system (104) in at one end of the rail (103). The bearing system (104) is capable of sliding back and forth along the rail (103), allowing the scissor lift to raise and lower the top platform (101). As can be seen inFIG.2, the rail (103) may have a notch (110) which allows for securement of the shuttle's (107) backplate (111) in the rail (103) as it slides back and forth along the rail (103). The biased members (108a,108b) and scissor lift leg (102) may be attached to the shuttle (107) using screws, for example Chicago screws (109), which allow for easy removal of the bearing system (104) components in the field.

FIG.3depicts a side view of an exemplary gunner stand, including cutaway views of the top bearing system (112) and bottom bearing system (104). In an embodiment, a gunner stand comprises a top platform (101) and a bottom platform (100) connected by a scissor lift. As seen inFIG.3, the legs (102a,102b) of the scissor lift are connected to the platforms (100,101) by a hinge (105b) on one side, and to a bearing system (104) in the rails (103a,103b) on the other side. In some embodiments, the top bearing system is a double bearing system (112), wherein the shuttle (107) has two protruding sections (113) and four biased members (108a,108b). Each protruding section (113) has a biased member (108a) on the top portion and a biased member (108b) on the bottom portion, each complementing the geometry of the protruding section (113). The top bearing may be a double bearing system (112) in order to better withstand vibrations and stress on the platform (101) from the weight of a gunner, as well as have holes or slots for locking pin positions to lock the exemplary gunner stand in upper and lower positions. In some embodiments, the bottom platform (100) only needs a single bearing system (104). The bearings (104) are loaded into the rails (103a-103d) such that the biased members (108a,108b) are under tension in the rails (103a-103d) and press against the top and bottom of the rails (103a-103d). The tension keeps the bearings (104) within the rails (103a-103d) as they slide back and forth as the top platform (101) is raised and lowered. The biased members (108a,108b) may be attached to the shuttle (107) with screws (109), for example Chicago screws or rivets in some embodiments, which allows for easy replacement in the field.

A front view of an exemplary gunner stand is depicted inFIG.4. The top platform (101) and bottom platform (100) are connected by scissor lift legs (102). The scissor lift legs (102) may comprise support struts (106) connecting adjacent legs (102) for further stability and structural support of the gunner stand. The bottom platform (100) may be attached to the floor of a vehicle while the top platform (101) may be raised and lowered to accommodate a gunner. The scissor lift legs (102) are attached to the platforms (100,101) by hinges (105) on one end, and to rails (103) on the other. An exemplary gunner stand has four rails (103a-103d) and four scissor lift legs (102a-102d). The scissor lift legs (102) are attached to the rails (103) by a bearing system (104). The bearing system (104) may comprise a shuttle (107) having a backplate (111) and a protruding section (113).
The backplate (111) of the shuttle (107) may fit into a notch (110) on the rail (103) in order to secure it within the rail (103). The top and bottom of the protruding section (113) of the shuttle (107) may have leaf springs or other biased members (108a,108b) attached. The biased members (108a,108b) are attached to the shuttle (107) under tension, and when loaded into the rails (103) exert force on the rails (103), keeping the bearing system (104) within the rails (103). The biased members (108a,108b) and scissor lift legs (103a-103d) may be attached to the shuttle (107) with screws (109), allowing for easy removal in the field in the event of damage to the bearing system. An exploded view of an exemplary replaceable bearing system (104) is depicted inFIG.5. The bearing system (104) of the present invention is easily replaceable in the event of wear and tear or other damage in the field. A shuttle (107) comprises a backplate (111) and protruding geometry (113) in the z plane. The protruding geometry (113) may be hexagonal in shape, or a polygonal shape. The geometry of the protrusion (113) in the x-y plane may include angles greater than 90° to create an oblong polygonal shape. In some embodiments, the protruding geometry (113) has a smaller area in the x-y plane than the area of the backplate (111). In some embodiments, the shuttle (107) is made of bronze that may be impregnated with oil so that no additional lubricant is needed for the shuttle (107) to slide in the rails (103). A top biased member (108a) and a bottom biased member (108b) are attached to the protruding geometry (113) of the shuttle (107) under tension. In some embodiments, the biased members (108a,108b) are leaf springs. The biased members (108a,108b) may be made of a high tensile strength material. In some embodiments, the biased members (108a,108b) are attached using screws and/or rivets. The biased members (108a,108b) may be attached to the shuttle using screws (109), such as Chicago screws or rivets, for easy removal. The use of screws allows for easy disassembly and reassembly of the bearing system in the field in the event of damage. An exploded view of an exemplary double bearing system (112) is depicted inFIG.6. A double bearing system (112) could be used in the upper rails (103b,103c) in order to provide increased structural support to withstand vibrations and the load of a gunner. A double bearing system (112) may also have holes or slots for locking pins to lock the position of the exemplary gunner stand in several raised or lowered positions. A double bearing system (112) may also be used in the lower rails (103a,103d). In some embodiments, a double bearing system (112) comprises a backplate (111) capable of being secured in a rail (103). The backplate (111) may have one protruding area (113) comprising two hexagonal areas. The backplate (111) may also have two protruding areas. The geometry of the protruding area (113) may be any polygonal shape. Biased members (108a,108b) may be attached to the top and bottom of the protruding sections (113). The shape of the biased members (108a,108b) may be complementary to the geometry of the protruding sections (113). The biased members (108a,108b) may be leaf springs in some embodiments. The biased members (108a,108b) may be attached to the shuttle (107) under tension so as to exert a force on the rails (103). In some embodiments, the biased members (108a,108b) are attached to the shuttle with Chicago screws (109). 
Note that in some embodiments a triple or greater bearing system (104) is possible.

A biased member (108) as machined and as formed is depicted inFIG.7. As machined, a biased member (108) or leaf spring may be a rectangle and may be made of a high tensile strength material. Other polygonal shapes are possible for the biased member (108). The biased member (108) may have rounded corners and/or sharp corners, and linear edges and/or curved edges. As machined, the biased member (108) may be flat. In order to be attached to the shuttle under tension, the biased member (108) may be formed into a biased or sloped configuration. The biased member (108) may have one or more linear slopes and/or a continuous curve. The biased member (108) may include holes (114) for attaching the biased member (108) to the shuttle (107) with screws or other fasteners. The biased member (108) may be coated in ultra high molecular weight polyethylene, or other low friction polymer coating, in order to facilitate sliding in the rails (103). The biased member may also be made of ultra high molecular weight polyethylene.

An exemplary shuttle (107) is depicted inFIG.8. In some embodiments, the shuttle (107) is made of bronze, though the shuttle (107) may be made of any material. The shuttle (107) may comprise a backplate (111) capable of sliding into a notch (110) on the rails (103) in order to secure the shuttle (107) on the rails (103). The shuttle (107) may have a protruding geometry (113) in a shape that complements that of the biased members (108a,108b). The shuttle (107) may be hexagonal in shape or another polygonal shape. The shuttle (107) may comprise holes (116) on each end extending through the top to the bottom for securing the biased members (108a,108b). The shuttle (107) may comprise a hole (115) on its face for securing a scissor lift leg (102). The shuttle (107) is easily removable from the scissor lift leg (102) to allow for easy disassembly and reassembly in the field.

An example of the biased member (108a) attached to the shuttle (107) is depicted inFIG.9. The biased members (108a,108b) create tension on the rails (103), keeping the bearing system (104) in place within the rails (103). The biased member (108a) is attached to the shuttle (107) at two or more ends of the shuttle (107). In some embodiments, the biased member (108a) is attached to the shuttle (107) with screws (109), for example Chicago screws, or rivets or the like. The protruding geometry (113) of the shuttle (107) is complementary to that of the biased member (108a) such that the geometry of the shuttle (107) forces the biased member (108a) to bow out, creating tension on the rails (103). In some embodiments, the biased members (108a,108b) are leaf springs, but may be any high tensile strength material. The protruding geometry (113) of the shuttle (107) may be adjusted in order to adjust the tension in the biased members.

A cutaway side view of a bearing system (104) and rail (103) is depicted inFIG.10. A shuttle (107) may be attached to two biased members (108a,108b) and loaded into a rail (103) attached to one of the platforms (100,101) of the gunner stand. The biased members (108a,108b) are attached under tension so as to bow out away from the protruding geometry (113) of the shuttle (107) and exert force on the rails (103), keeping the bearing (104) in place as it slides back and forth on the rail (103).
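The preload that a formed biased member (108) applies to a rail (103) can be estimated, as a first-order sketch with purely hypothetical dimensions not taken from the disclosure, by modeling the leaf spring as a simply supported beam deflected at mid-span:

\[ F = \frac{48\,E\,I\,\delta}{L^{3}}, \qquad I = \frac{b\,t^{3}}{12}, \]

where \(E\) is the elastic modulus, \(L\) the free span between the attachment screws (109), \(b\) and \(t\) the spring width and thickness, and \(\delta\) the deflection imposed when the bearing system (104) is slid into the rail (103). For illustration, a steel spring (\(E \approx 200\ \text{GPa}\)) with \(b = 15\ \text{mm}\), \(t = 0.5\ \text{mm}\), \(L = 50\ \text{mm}\), and \(\delta = 1\ \text{mm}\) gives \(I \approx 0.156\ \text{mm}^{4}\) and \(F \approx 12\ \text{N}\) per spring. Because \(F\) scales with \(t^{3}\), small changes in spring gauge strongly affect both the retention force and the sliding friction, which is consistent with making the shuttle geometry adjustable to tune the tension.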
In some embodiments, the biased members (108a,108b) are attached using Chicago screws (109) in order to be easily removed and replaced.

INCORPORATION BY REFERENCE

References and citations to other documents, such as patents, patent applications, patent publications, journals, books, papers, and web contents, have been made throughout this disclosure. All such documents are hereby incorporated by reference in their entirety for all purposes.

EQUIVALENTS

The invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The foregoing embodiments are therefore to be considered in all respects illustrative rather than limiting on the invention described herein. Scope of the invention is thus indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
11859934 | DETAILED DESCRIPTION The present disclosure relates to an ergonomic handle of a firearm cleaning apparatus that is used to clean the interior barrel (i.e., the bore) of a gun. Various embodiments of the ergonomic handle of a firearm cleaning apparatus will be described in detail with reference to the drawings, wherein reference numerals represent like parts and assemblies throughout the several views. Reference to various embodiments does not limit the scope of the firearm cleaning apparatus disclosed herein. Additionally, any examples set forth in this specification are not intended to be limiting and merely set forth some of the many possible embodiments for the firearm cleaning apparatus. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but these are intended to cover applications or embodiments without departing from the spirit or scope of the disclosure. Also, it is to be understood that the phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. Existing gun bore cleaning devices rely on handles that are based on screwdriver handle technology where the cleaning rod is coaxial to the handle itself. This arrangement is sufficient to clean a gun barrel; however, that arrangement leads to extra stress on a user's wrist when the rod is pushed and pulled within a gun barrel. Further, when a cleaning rod is equipped with a cleaning accessory on one end, friction can be significantly increased when the cleaning rod is inserted into a firearm bore due to the nature of the accessories and the increased drag they cause inside a firearm bore. One of the improvements of this disclosure is the reorientation of the user's hand on a more ergonomic handle. The vertical orientation of the handle grip allows the user to have more leverage when pushing and pulling a cleaning rod within a gun barrel while reducing the stress on the user's wrist. Additionally, many different accessories may be attached to the accessory end of the cleaning rod, such as a jag, a bore brush, a star chamber brush, a star chamber mop, a star chamber pad, and a slotted tip. The accessory connection of the present disclosure is not limited to the previous list; any barrel cleaning accessory may be structured and configured in a way to allow attachment to the accessory connection. A cleaning rod with a jag attachment and solvent patch takes considerable force to move within a gun barrel; by improving the handle of the gun rod to be in a vertical orientation, stress may be reduced on the user, and cleaning may be enhanced. Adding a rolling bearing to the connection between the cleaning rod and the ergonomic handle adds additional stress reduction on both the user's wrist and the firearm itself; a rotatable accessory will have less chance of damaging the internal surface of the firearm barrel. The cleaning rod itself may be constructed of carbon fibers, stainless steel, brass, aluminum, coated steel, or fiberglass. There is no limit to the material that may be chosen for the construction of the cleaning rod; the only limitation is on materials that would harm the interior of a gun barrel. An example of one embodiment may be seen inFIG.1, where an ergonomic handle, connector, and cleaning rod are fully displayed. The ergonomic handle can have a core and a bolster, wherein the core can be comprised of a top end, a grip, and a base opposite the top end. 
In some embodiments, the core and the bolster of the ergonomic handle can be hollow, as illustrated inFIG.1A. In other embodiments, one or both of the core and the bolster may be solid while the remaining component(s) are hollow. The view ofFIG.1shows the device, where the ergonomic handle can be comprised of a bolster115disposed on the top end110of the core near the rear and top of the device. Directly below the top end110can be the grip130that can be covered by an overlay material130M; just below the bolster115may be a palm-engaging surface130B. The bolster115can provide an additional leverage point and can help to prevent a user's hand from slipping off of the ergonomic handle100; this is a vast improvement over screwdriver type cleaning rod handles where a user would need more grip strength to prevent slippage. The bolster115overhangs the palm-engaging surface130B by the inclusion of an inward curvature disposed on the palm-engaging surface130B just below the bolster115. On the opposite side of the grip130with respect to the palm-engaging surface130B may be a finger-engaging surface130A. As illustrated inFIG.1, the finger-engaging surface130A can contain two indentations130D to provide engagement for three fingers. An overlay material130M for the grip may be textured in some embodiments with a non-slip surface to provide users more purchase when gripping. For example, overlay material130M can be manufactured from a non-slip material providing a high friction coefficient such as, but not limited to, a natural or synthetic rubber or similar material. The overlay material130M may be overmolded around the grip130, and the grip130and overlay material130M can be structured and configured such that the grip130and overlay material130M substantially cannot be non-destructively separated. However, other embodiments may have overlay material130M and grip130manufactured separately, and structured and configured to be fit together after manufacture. As described above and illustrated inFIG.1, the core can further include a base120disposed on the bottom of the ergonomic handle100; connected to the base may be a connector125for the cleaning rod140, positioned below the finger-engaging surface. The connector125can be comprised of a connection for a cleaning rod140and a housing described further herein and illustrated inFIG.1A, to allow the cleaning rod140to rotate when engaged within a firearm barrel. The bearings can reduce the chance of marring the inner surface of a firearm barrel by allowing the cleaning rod to move with any rifling on the inner surface of a firearm barrel; this occurs primarily when the cleaning rod140has an accessory (not shown) attached to an accessory end145of the cleaning rod140. FIG.1Ais the cross-sectional view taken from the line A-A of the ergonomic handle100shown inFIG.1. InFIG.1A, connector125is comprised of a housing125H that contains two bearings125B, a connection end125C, and a rod125R, along with two washer/spacers125W. Connection end125C can be attached to both a cleaning rod140and a rod125R, wherein the cleaning rod140and the rod125R are attached with an adhesive to generate a non-reversible connection to the connection end125C. In some embodiments, the two bearings125B can be disposed on the opposite ends of the rod125R, wherein the two bearings125B are secured to the housing125H, allowing the free rotation of the cleaning rod140, the connection end125C and the rod125R. In some embodiments, the bearings125B are rolling bearings.
In some embodiments, the rod125R can be comprised of steel; however, other embodiments may use other construction materials for the rod. On the outside surface of each of the two bearings125B is a washer/spacer125W, shown clearly inFIG.1A. A first washer/spacer125W is disposed between a bearing125B and the connection end125C, and a second washer/spacer125W is disposed on the outside of the bearing125B; both washer/spacers125W have the rod125R within their central opening and are coaxial with the rod125R. The housing125H can be disposed within the base120and can provide stability for the cleaning rod140that allows a user to apply more force when cleaning a firearm barrel. The connection end125C in this embodiment can be made of brass; other materials are contemplated for use in the connector. In this embodiment, the housing125H can be comprised of metal; however, other embodiments may include a housing125H that is comprised of plastic or any other suitable material. A locking nut (not shown) may be connected to the end of the rod125R to lock the two bearings125B securely in place. Further, in the cross-section ofFIG.1A, the threading145A is shown for an accessory end145of a cleaning rod140. Any firearm barrel accessory provided with a threaded attachment may be connected to the accessory end145with the use of the threading145A. Examples of accessories that may be attached are a jag, a bore brush, a star chamber brush, a star chamber mop, a star chamber pad, or a slotted tip. These accessories may also include their own accessories, such as patches or swabs, which may subsequently be treated with firearm cleaning solutions and solvents. In the example ofFIG.2, an embodiment of an ergonomic handle for cleaning a firearm is shown in a front elevation. As described above, finger-engaging section130A of the grip130can be covered in an overlay material130M and can be partitioned into three finger sections130AS separated by two indentations130D. The connector125and the cleaning rod140are demonstrated as being coaxial in this view ofFIG.2; the connection end125C can be disposed within the housing125H, which in turn can be disposed within the base120. The coaxial nature of the cleaning rod140and the connector125is also clearly shown inFIG.2; all of the elements within the housing125H may be coaxial. In the example ofFIG.3, an embodiment of an ergonomic handle for cleaning a firearm is shown in a right-side view. As illustrated, bolster115can protrude into the plane defined by the palm-engaging surface. In other embodiments, the bolster115may extend further beyond the plane defined by the palm-engaging surface as illustrated in the protrusion ofFIG.3; embodiments with such extensions of bolster115can also include an inward curvature disposed on the palm-engaging surface130B just below the bolster115to ensure that a user's hand does not slip from the ergonomic handle. Furthermore, the angle of deflection of the bolster115may be more acute or oblique with respect to the plane defined by the top end110. The bolster115enhances the stability of a user's grip on the ergonomic handle100by providing a point of purchase for a user's hand. In traditional screwdriver handle configurations for firearm bore cleaning devices, such purchase is lacking; a user's hand could easily slide forward when applying pressure, leading to their hand slipping off of the screwdriver handle, which may lead to harm from the user's hand impacting the firearm.
In this embodiment of an ergonomic handle100, both the right and left sides are symmetrical, as demonstrated in the left side view ofFIG.4and the right side view ofFIG.3; other embodiments may include changes in the overlay material130M of the grip130that favor a right-handed user or a left-handed user. Such handed configurations will see changes to both the palm-engaging section130B and the finger-engaging section130A. Further, as illustrated inFIGS.3and4, the cleaning rod140, when attached to the connector, and the ergonomic handle can define an acute angle. For example, the angle defined by cleaning rod140and the ergonomic handle can be between 45 and 89 degrees, as illustrated inFIGS.3and4. In the example ofFIG.5, the rear elevation view shows a bolster115that has the same width as the ergonomic handle100and protrudes directly from the top end110. In some embodiments, the grip of the core may have a shorter width than the ergonomic handle100. Therefore, the overlay material130M can surround a perimeter of at least a portion of the grip130to make the width of the ergonomic handle100uniform. In the example shown inFIG.6, the top side view of an embodiment of the present disclosure demonstrates the alignment of the ergonomic handle100and the cleaning rod140; all visible parts of this embodiment can be centered on the plane that runs from the palm-engaging section130B to the accessory connection145. Again, in this embodiment of the ergonomic handle100, both the right and left sides of the ergonomic handle100are symmetrical; as described earlier, other embodiments may include differential configurations for right-handed and left-handed users. Cleaning rod140can be maintained in its central location to provide proper distribution of user-applied force, whether by a right-handed or left-handed user when using the device to clean a firearm barrel. In the example of the ergonomic handle100shown inFIG.7, the bottom view of an embodiment of the present disclosure demonstrates the alignment of the ergonomic handle100and the cleaning rod140; all visible parts of this embodiment can be centered on the plane that runs from the palm-engaging section130B to the accessory connection145. In this embodiment, again, both the right and left sides of the ergonomic handle100are symmetrical. An ergonomic handle100can be used to clean a firearm by having a user grip the ergonomic handle100and then connect an accessory, wherein the accessory is equipped with a threaded end, to an accessory end145by attaching the accessory's threaded end to the threading145A of the accessory end145; any such threaded accessory such as a jag, a bore brush, a star chamber brush, a star chamber mop, a star chamber pad, or a slotted tip may be attached to the accessory end145. Once attached, the combined cleaning rod140and accessory can be inserted into a firearm barrel; the user, while still gripping the ergonomic handle100, can begin pushing and pulling the ergonomic handle100a plurality of times so the accessory can remove the unwanted residue within the firearm barrel. Once the desired amount of cleaning has been achieved by the user, the combined cleaning rod140and accessory can be removed from the firearm barrel. The cleaning rod140used within the barrel can be comprised of a carbon fiber material; such a material will reduce the possibility of marring the inner surface of a firearm barrel when the combined cleaning rod140and accessory are pushed and pulled within the barrel.
Additionally, the combined cleaning rod140and accessory may spin freely within the barrel with the assistance of the rolling bearings125B connected to the opposite end of the cleaning rod140within the housing125H. Some accessories that can be attached to the accessory end145may include their own attachable accessories; for instance, an accessory can be combined with a cleaning patch that may be saturated with a solvent to assist in the removal of residue within a firearm barrel. The various embodiments described above are provided by way of illustration only and should not be construed to limit the claims attached hereto. Those skilled in the art will readily recognize various modifications and changes that may be made without following the example embodiments and applications illustrated and described herein and without departing from the true spirit and scope of the following claims. | 14,923
11859935 | DETAILED DESCRIPTION The headings provided herein, if any, are for convenience only and do not necessarily affect the scope or meaning of the claimed invention. For the purpose of the present disclosure, the terms mobile devices and wireless devices are used interchangeably. INTRODUCTION In a number of situations, it can be beneficial to maintain an accurate count of available ammunition within a magazine of a weapon. For example, a soldier will often want to know how much ammunition is available to him or her. Although the soldier may be aware of the amount of ordnance assigned to the soldier, during a firefight, the soldier may lose track of how many shots have been taken and how much ammunition remains. Certain embodiments described herein provide an improved ammunition tracking system that determines the amount of ammunition within a magazine and outputs the information for display to a user. The system includes a magnet and a set of Hall effect sensors that determine a location of the magnet within the magazine. Based at least in part on the determination of the location of the magnet, the system determines the amount of ammunition within the magazine. Further, in certain embodiments disclosed herein, the system can track the total amount of ammunition available to a user (e.g., a soldier, policeman, or hunter). The system can register one or more magazines to a user or a particular weapon. The system can then maintain a count of ammunition available to the user or the weapon based on the ammunition within the one or more magazines. Advantageously, in certain embodiments, by monitoring ammunition within each magazine as well as the number of magazines available, a user can be presented with an accurate count of the total ammunition available to the user without the user needing to track the amount of ammunition fired or used. In addition, in certain embodiments, a magazine installed or inserted within the weapon can communicate a count of the ammunition within the magazine to the weapon and/or to one or more accessories installed on the weapon or within proximity of the weapon. This communication can be accomplished using an optical transceiver and/or radio included in the magazine. Advantageously, in certain embodiments, using the optical transceiver eliminates cables that can be susceptible to damage during manufacture and/or use. Additional features and embodiments of the magazine and weapon system are described below with respect to the figures. Example Magazine FIG.1illustrates a cross-section of an embodiment of the inside of a magazine100that may be used with certain embodiments disclosed herein. Further,FIG.2illustrates a line drawing of the embodiment of the cross-section of the magazine100illustrated inFIG.1. With respect toFIG.1, the magazine100can include a housing102that includes a chamber104for holding one or more bullets or cartridges106of ammunition. It should be understood that the features disclosed herein can be used with any type of magazine capable of holding any type of ammunition. As illustrated, the magazine includes a follower108that is configured to push the cartridges106towards an egress point of the magazine100. The follower108includes a magnet110. As cartridges106are removed from the magazine (e.g., when a cartridge106is fired), the follower108moves towards the egress point of the magazine100, shrinking the size of the chamber104that holds cartridges106within the magazine100.
Consequently, as the follower108moves, so does the magnet110, thereby altering the location of the magnet110within the magazine100. The location of the magnet110can be used to determine the number of cartridges106within the housing102of the magazine100. To determine the magnet's110location, the magazine100includes a number of Hall effect sensors112. The Hall effect sensors112can detect the location of the magnet110using the Hall effect. The Hall effect relates to the production of a voltage difference across an electrical conductor when a magnetic field is perpendicular to a current in the conductor. Thus, as the magnet110approaches a Hall effect sensor112within the magazine100, a voltage will be produced that can be measured by electronic circuitry within, and/or external to, the magazine100. Based on the detected voltage, the location of the magnet110can be determined and, consequently, the location of the follower108. This information can be used to determine the quantity of ammunition within the chamber104. In certain embodiments, the Hall effect sensors112may be replaced by any type of transducer, electrical, magnetic, or electromagnetic sensor that produces or modifies an electrical signal or measurable electrical property (e.g., a voltage, a current, or a resistance) based at least in part on a detected magnetic field from a magnet. For example, in some embodiments, an anisotropic magnetoresistive or anisotropic magnetoresistance (AMR) sensor may replace or complement the Hall effect sensors112. In some embodiments, a microelectromechanical systems (MEMS) magnetic field sensor may be used instead of, or in addition to, the Hall effect sensors112. Other examples of sensors that may be used in place of or in addition to the Hall effect sensors112may include sensors that measure or detect negative magnetoresistance, giant magnetoresistance, tunnel magnetoresistance, or extraordinary magnetoresistance. In some embodiments, the change in the type of sensor112may be paired with a change of the type of magnet110. The combination of the Hall effect sensors112and the magnet110may create a linear encoder. The set of sensors112may be used to encode or identify particular locations within the magazine100. Based on the location of the magnet110with respect to the set of sensors112, the number of cartridges loaded within the magazine100can be determined. The position or location of the magnet110may change as the number of cartridges within the magazine100changes. Based on the change in location of the magnet110, as determined by the one or more Hall effect sensors112that generate electrical signals, the number of cartridges within the magazine100can be determined. In some embodiments, each, or all, of the sensors112may generate an electrical signal based on the sensor's location relative to the magnet110. In some such cases, the location of the magnet110may be determined based on the respective signals generated by the set of sensors112. In some embodiments, the one or more electrical signals generated by the one or more sensors112may be compared to a table that is stored in the circuitry or memory of the magazine100. Based on the comparison between the generated electrical signals and the data stored in the table, the location of the magnet110may be determined. In some embodiments, the signals generated by the sensors112are compared to data stored within the table.
In other embodiments, values that are generated or determined based on the signals generated by the sensors112are compared to data stored within the table. Although the linear encoder is primarily described herein as being formed from a magnet and a set of magnetic sensors, the linear encoder is not limited as such. For example, the linear encoder may use an optical, capacitive, inductive, or resistive system. For instance, with an optical system, the magnet may be replaced with a light source, such as an LED. The magnetic sensors may be replaced with light sensors, with the linear encoder position determined based on the one or more light sensors that detect light from the light source. A capacitive system may similarly be implemented to detect a capacitance between an element attached to the follower and an element aligned with the cartridges in the magazine. Further, the linear encoder may be an absolute encoder or an incremental encoder. In some embodiments, the number of sensors112included within the magazine corresponds to the maximum number of cartridges that can be loaded into the magazine100. Thus, if the bottommost sensor that is closest to the butt of the magazine100generates an electrical signal, it can be determined that the magazine includes the maximum number of cartridges. In another example, if the sensor112closest to the feed point or ingress/egress point of the magazine100generates an electrical signal, it can be determined that the magazine100is empty. It should be understood that multiple sensors112may generate a signal. The number of cartridges within the magazine100may be determined by interpolating or averaging the signals, or by selecting the sensor112that generates the strongest electrical signal. However, with some magazines, having a one-to-one correspondence between the number of sensors112and the maximum number of cartridges that the magazine100can hold is insufficient to accurately determine the number of cartridges within the magazine100. There are several reasons why having a one-to-one correspondence between the number of sensors112included in the magazine100and the number of cartridges the magazine100can hold when fully loaded is insufficient. For example, one reason is that, even for a particular cartridge size, there may be variation in cartridge size between manufacturers such that the alignment between the cartridges within the magazine100and the sensors112may vary for different manufacturers of the same ammunition type. As another example, some magazines are capable of holding different types of cartridges, which may vary in size and may result in a different position of the magnet110for the same number of cartridges. For instance, the magazine100may be capable of holding both .458 SOCOM rounds and .223 rounds, which differ in size, thereby causing a location of the magnet110with respect to the sensors112to differ for the same number of cartridges. In some cases, even if the same type of ammunition from the same manufacturer is consistently used with the magazine100, wear and tear of the magazine100over time may cause the alignment of the magnet110with respect to the sensors112to change for the same number of cartridges. For example, the rigidity of the plastic or the strength of the spring in the follower may change over time.
In some cases, environmental conditions, such as temperature, may cause a change in the magazine100that affects the alignment of the magnet110with the sensors112for a particular number of cartridges inserted into the magazine100. Further, with many weapons, inserting the magazine100into the weapon may cause pressure to be applied to the cartridges within the magazine100. This pressure may shift the cartridges compared to when the magazine100is not inserted into the weapon, thereby modifying the alignment of the magnet110with the sensors112. As such, in certain embodiments, the determination of the number of cartridges within the magazine100may change based on whether the magazine100is inserted into the weapon, resulting in an inaccurate count of the cartridges either when the magazine100is inserted into the weapon or when the magazine100is not inserted into the weapon. For at least the above reasons, in some cases having a one-to-one correspondence between the number of sensors included in the magazine100and the maximum number of cartridges that the magazine100can support may be insufficient. Thus, in certain embodiments, the magazine100may include more sensors112than the total number of cartridges that the magazine100can hold or is designed to hold. Advantageously, in certain embodiments, by including a greater number of sensors in the magazine100than the number of cartridges the magazine100can hold when at capacity, the accuracy of the measurements made by the linear encoder of the magazine100may be increased, thereby reducing or eliminating the problems associated with having a one-to-one correspondence between the sensors and the number of cartridges that the magazine100can support. To determine the number of cartridges within the magazine100, the linear encoder may further include a hardware processor that can compare the signals generated by the sensors112with the calibration table stored in a memory to determine the number of cartridges within the magazine100. The hardware processor may generate a value, such as a coordinate value indicating a relative location of the magnet110with respect to a coordinate system associated with the sensors112. The hardware processor may compare the value to values within the calibration table to identify a number of cartridges corresponding to the value. In certain embodiments, the hardware processor may interpolate signal values from the sensors112to more accurately identify the location of the magnet110, and consequently the number of cartridges loaded within the magazine100. In certain embodiments, the number of sensors within the magazine may match the number of cartridges the magazine is capable of holding. In some such embodiments, a linear encoder may be used to determine the number of cartridges in the magazine. In certain embodiments, the linear encoder may eliminate the problems described above when the granularity of sensors matches the maximum ammunition capacity of a magazine. The location of the magnet in the magazine with respect to the sensors in the magazine may be determined based on the electrical signals received from two or more sensors. This location may be mapped to a calibration table to determine the number of cartridges within the magazine.
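By way of illustration only, and not as part of the disclosed embodiments, the interpolation and table mapping described above might be sketched in firmware as follows. The sensor count, the fixed-point scaling, the table layout, and all names in this sketch are assumptions made for illustration rather than details taken from the disclosure.

/* Illustrative sketch only: estimate the magnet position from the Hall
 * sensor signals and map it to a cartridge count through a calibration
 * table. NUM_SENSORS, the scaling, and all names are assumptions. */
#include <stdint.h>

#define NUM_SENSORS 40 /* assumed: more sensors than maximum cartridges */

typedef struct {
    uint16_t min_pos; /* lower bound of magnet position (sensor index x 100) */
    uint16_t max_pos; /* upper bound of magnet position */
    uint8_t count;    /* cartridges corresponding to this range of positions */
} cal_entry_t;

/* Interpolate the magnet position as a signal-weighted centroid of the
 * sensor indices, scaled by 100 to keep fixed-point precision. */
static uint16_t magnet_position(const uint16_t sig[NUM_SENSORS])
{
    uint64_t num = 0;
    uint32_t den = 0;
    for (int i = 0; i < NUM_SENSORS; i++) {
        num += (uint64_t)sig[i] * (uint64_t)i;
        den += sig[i];
    }
    return den ? (uint16_t)((num * 100u) / den) : 0;
}

/* Scan the calibration table for the range containing the position and
 * return the associated cartridge count; 0xFF signals a position outside
 * every calibrated range. */
uint8_t cartridge_count(const uint16_t sig[NUM_SENSORS],
                        const cal_entry_t *table, int entries)
{
    uint16_t pos = magnet_position(sig);
    for (int i = 0; i < entries; i++) {
        if (pos >= table[i].min_pos && pos <= table[i].max_pos)
            return table[i].count;
    }
    return 0xFF;
}

Because each table entry in such a sketch spans a range of positions rather than a single sensor index, moderate shifts from spring wear or cartridge-size variation move the measured position within a range without changing the reported count, and recalibration simply rewrites the range boundaries.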
By mapping the location to the calibration table, instead of using the exact position of the magnet with respect to the sensors to determine an ammunition count, a range of positions may be used to determine the ammunition count, enabling the linear encoder to account for wear and tear of the magazine and differences in cartridge sizes. Thus, instead of determining that a magazine includes X cartridges because sensor X detects the magnet, it can be determined that because the magnet is between sensors X and Y, the magazine includes X cartridges. With magazines that require a one-to-one correspondence between the sensor and the ammunition count, a change in the position of the magnet due to, for example, wear and tear may cause an error in determining the ammunition count in the magazine. However, with a magazine that includes a linear encoder, the amount of ammunition in the magazine can be determined even when the magnet does not align with a particular sensor. As described above, the number of cartridges106in the magazine100can be determined based on the location of the magnet110with respect to the one or more Hall effect sensors112. Thus, to determine the number of cartridges106, it is desirable that the Hall effect sensors112maintain particular positions within the magazine100. However, during manufacture and/or over time, the position of the Hall effect sensors112may shift. For example, vibrations caused by the firing of the weapon within which the magazine100is loaded may result in movement of the Hall effect sensors112. To maintain the position of the Hall effect sensors112, one or more alignment pins114can be used to maintain the position of the circuit board that includes the Hall effect sensors112. In some embodiments, the use of an alignment pin is unnecessary, and the alignment pin may be omitted. For example, the Hall effect sensors may be secured or built into the housing of the magazine100such that the sensors112do not shift over time. Alternatively, or in addition, the calibration table may be updated or modified over time using a calibration or recalibration process. The calibration process may be used to determine the location of the magnet110with respect to the sensors112. By calibrating or recalibrating the magazine100, the table that identifies the number of cartridges loaded within the magazine100based on a determination of the location of the magnet110with respect to the sensors112may be updated. Thus, changes in location of the magnet110with respect to the sensors112for a particular number of cartridges loaded within the magazine100may be captured by the calibration process, thereby reducing or eliminating inaccuracies in the measurement of the number of cartridges loaded into the magazine that may be introduced by wear and tear of the magazine or variation in ammunition loaded into the magazine. This calibration process is described in more detail below with respect toFIG.32. As is described further herein, the number of cartridges within the magazine can be presented to a user via a display included in the magazine100. However, in some cases, it may be desirable to transmit information associated with the magazine, such as the number of cartridges within the magazine or the state of a battery used to power the circuitry included in the magazine, to the weapon or another system. In some embodiments, the magazine100can include an optical transceiver116or radio to transmit the information to the weapon or another device.
Further, the magazine100may include a digital to optical signal adapter enabling the conversion of a digital signal created by the electronic circuitry122to an optical signal for transmission by the optical transceiver116. Moreover, the magazine100may include an optical to digital signal adapter enabling the conversion of an optical signal received at the optical transceiver116into a digital signal capable of being processed by the electronic circuitry122. The enlarged portion120of the bottom of the magazine100illustrates additional details of the magazine100. As illustrated within the enlarged portion120, the magazine includes electronic circuitry122that receives one or more electrical signals from the one or more Hall effect sensors112. Based on the one or more electrical signals, the electronic circuitry122determines the position of the magnet110and/or follower108within the magazine100and consequently, the number of cartridges106within the magazine100. The electronic circuitry122may include a hardware processor or programmable hardware, such as a field programmable gate array (FPGA), which may be configured to interpret the signals received from the sensors112and determine the location of the magnet110and consequently the number of cartridges within the magazine. Further, the hardware processor or programmable hardware of the electronic circuitry122may be configured to perform one or more of the processes described herein. Further, the magazine100includes a display124, which, as illustrated, may be a 7 segment light emitting diode (LED) display. However, it should be understood that the display124is not limited to a 7 segment LED display and can include any type of display that can output, among other things, the number of cartridges within the magazine100to a user. The electronic circuitry122and/or the Hall effect sensors112can be powered by a battery126that can be included within the magazine100. In some embodiments, the battery126is rechargeable using, for example, wireless charging. Alternatively, and/or in addition, the battery may be replaced by opening a cap128. The cap128can be a sealing cap that is configured to prevent any moisture or detritus from entering the magazine100. Further, the magazine100may include a control interface130(such as, for example, a control button) that can be used to activate or deactivate the system for measuring the cartridges in the magazine100. Moreover, the control interface130may be used to modify the brightness of the display124. In some embodiments, the control interface130can be used to show or hide the display of the number of cartridges within the magazine100. In some cases, a user may determine that light from the display124is undesirable. For example, in some combat situations, it may be desirable to reduce as much light as possible emanating from the soldier. Advantageously, in certain embodiments, by being able to control the display of the ammunition within the magazine100, or other information, a user can determine whether or not the display124is active. Moreover, in some cases, the control interface130can be used to control whether ammunition information is displayed on an alternative display, such as via a scope attached to the weapon or a helmet worn by the user. In some embodiments, the control interface130is disposed on a portion of the magazine100that is inserted into the firearm. In such embodiments, the control interface130can only be accessed by the user when the magazine100is not in the firearm.
This configuration can help reduce or eliminate false readings that may occur if the control interface130is operated while the magazine is inserted in the firearm. In certain embodiments, the control interface130comprises one or more of a control panel, a control button, a wired control interface, a wireless control interface, or a combination of interfaces. For example, the control interface130may include a cell phone, laptop, or other wireless device. A user may interact with the control interface130to request a count of ammunition within one or more magazines. Further, a user may interact with the control interface130to trigger calibration or recalibration of one or more magazines to update a calibration or linear encoder table used to determine a number of cartridges within a magazine. FIG.3illustrates an embodiment of the Hall effect sensors112within an embodiment of the magazine100. As illustrated, the Hall effect sensors112may include a plurality of sensors112that can each provide a signal to the electronic circuitry122(seeFIG.1) of the magazine100. In some cases, each Hall effect sensor112provides a signal to the electronic circuitry122. In other cases, a subset of the Hall effect sensors112, such as those within a threshold distance of the magnet110, provide a signal to the electronic circuitry122. The Hall effect sensors112may be included as part of a circuit board302. The circuit board302may be designed to precisely place, within a threshold degree of tolerance, the Hall effect sensors112with respect to the location of the magnet110when the magazine100is loaded with a particular amount and type (e.g., size) of ammunition. Alternatively, or in addition, the electronic circuitry122may be calibrated based on the position of the Hall effect sensors112and the magnet110. In some embodiments, the circuit board302may be a flexible circuit board. Advantageously, in certain embodiments, by using a flexible circuit board, the circuit board302can be shaped to match the curvature of the magazine100and to maintain the Hall effect sensors112at a particular distance from the magnet110when the magnet is positioned perpendicular to a particular Hall effect sensor112. As previously mentioned, one or more alignment pins114may be used to position the circuit board302within the magazine100.FIG.4Aillustrates an embodiment of the alignment pin114within an embodiment of the magazine100. Although the alignment pin114is illustrated as a rod, it should be understood that other shapes may be used. For example, the alignment pin114may include fingers or other protrusions from the pin114that can be used to grip or further hold the circuit board302in place. Further, in the embodiment ofFIG.4A, both the Hall effect sensors112and the magnet110are positioned at the rear of the magazine100. In some embodiments, the alignment pin114is omitted.FIG.4Billustrates another embodiment of the Hall effect sensors112within an embodiment of the magazine without an alignment pin. As illustrated inFIG.4B, in some implementations, the circuit board302, and associated Hall effect sensors112, may be positioned on a side of the housing of the magazine as opposed to the back of the magazine behind the cartridges. In some embodiments, the Hall effect sensors112may be positioned near the front402of the magazine, where the front is defined as the direction in which the cartridges are ejected when fired from a firearm.
As described in more detail below, by positioning the Hall effect sensors112near the front of the magazine, another Hall effect sensor may be positioned at the rear of the magazine for determining when the magazine is inserted into a weapon or firearm without the additional Hall effect sensor detecting the magnet110positioned near or adjacent to the front edge402(e.g., the edge toward which the cartridges are pointing when loaded within the magazine100) of the magazine100. Further, the Hall effect sensors112may be positioned along the vertical or y-axis of the magazine to form a vertical column of sensors. Many magazines are not completely rectangular in shape, but are instead curved to some degree. Accordingly, in certain embodiments, the Hall effect sensors may be positioned substantially vertically along an axis that matches the curvature of the magazine. In certain embodiments, a magnet attached to a follower108may align in the horizontal or x-axis of the magazine with one or more of the Hall effect sensors112when one or more cartridges are loaded in the magazine. In certain implementations, one or more of the Hall effect sensors112may align with the magnet attached to the follower108when no cartridges are loaded in the magazine. FIG.5illustrates an embodiment of the follower108within an embodiment of the magazine100. Often, followers include a solid protrusion from the follower body that pushes cartridges within the magazine towards the egress of the magazine. In certain embodiments of the follower108, the solid protrusion is replaced by a spring-loaded plunger502. Advantageously, in certain embodiments, replacing the solid protrusion with the spring-loaded plunger502creates a space for inclusion of the electronic circuitry122. Thus, in certain embodiments, the magazine100can include the electronic circuitry122for automatically and electronically monitoring the number of cartridges within the magazine with minimal or no increase in the size of the magazine100compared to magazines that are not capable of electronically monitoring the number of cartridges within the magazine.FIG.6illustrates a line drawing of the embodiment of the follower illustrated inFIG.5. Example Magazine Housing FIG.7illustrates an embodiment of the outside of a magazine100.FIG.8Aillustrates a line drawing of another perspective of the outside of the magazine illustrated inFIG.7. As previously described, the magazine100may include a display124and an optical transceiver116. One or more of the display124and the optical transceiver116may be part of the housing102. Alternatively, the display124and/or the optical transceiver116may be inserted into gaps or spaces within the housing102. In some embodiments, the magazine100may include a machine-readable code that includes a unique identifier for the magazine. This machine-readable code may be a bar code, a matrix code, a quick response (QR) code, or any other type of machine-readable code. Alternatively, or in addition, the magazine100may include a radio frequency identification (RFID) tag. Advantageously, in certain embodiments, by including a machine-readable code and/or tag on the magazine100, the magazine100can be registered with a weapon, enabling a user, such as a soldier, to track the total ammunition available to the user. In some embodiments, the magazine100may be registered with the weapon upon insertion of the magazine100within the weapon.
When the magazine is loaded into the weapon, circuitry within the weapon may communicate with circuitry within the magazine to register the magazine with the weapon. Tracking the total ammunition enables a user to monitor ammunition while in a particular environment. For example, a soldier in the field can monitor ammunition without needing to remember how many magazines are in the soldier's gear or how many magazines or cartridges the soldier has used. In some embodiments, the machine-readable code and/or RFID tag can be modified to reflect the amount of ammunition remaining in the magazine. Alternatively, or in addition, an indicator may provide a status of the magazine. For example, when the magazine is empty or has less than a threshold number of cartridges, the indicator may turn red, informing the user that the magazine is empty or has below a threshold number of cartridges. Thus, for example, when a soldier is in the field, he or she can easily determine whether a magazine is a loaded magazine or a spent magazine that is now empty or is close to empty. In some use cases, a user may register a magazine that is currently registered with one weapon with another weapon. For example, a first soldier may take a magazine registered to the soldier's weapon and give it to a second soldier, who may then register the magazine with his or her weapon. In certain embodiments, when the second soldier registers the magazine with his or her weapon, the magazine may be deregistered from the first soldier's weapon. In some embodiments, the magazine may send a signal to the first soldier's weapon to indicate that the magazine should be deregistered from the first soldier's weapon. In other embodiments, the weapon of the first soldier may deregister the magazine from the weapon and the weapon is no longer able to communicate with the magazine. In some cases, the weapon may no longer be able to communicate with the magazine because the magazine moves outside of communication range of the weapon. Alternatively, or in addition, the weapon may no longer be able to communicate with the magazine because when the magazine is registered with the second weapon, the second weapon changes an identifier of the magazine. For example, each weapon may provide an identifier to the magazine that is based at least in part on the weapon's identifier. Thus, when a magazine is registered to a weapon, the magazine's identifier may change and the magazine may no longer respond to attempts to communicate with the magazine from another weapon with which the magazine was previously registered. In some embodiments, the magazine100may include a carbon fiber casing. The use of a carbon fiber casing can reduce the weight of the magazine100, offsetting any added weight from the additional sensors and electronics. Further, in some embodiments, the magazine100may include a rubber sleeve. The rubber sleeve can improve the strength and durability of the magazine100as well as provide additional protection for the added electronics to help prevent damage during impacts (e.g., if the magazine is dropped). As illustrated inFIG.8A, the control interface130may be a button or other interface element positioned near the base of the magazine. A user may interact with the control interface130to cause a cartridge count to be displayed on a display of the magazine100, on a display of a weapon, or any other display that can present the cartridge count.
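By way of illustration only, the registration bookkeeping and total-ammunition count described above might be sketched as follows. The slot limit, the structure layout, and the XOR identifier derivation are assumptions made for illustration; the disclosure does not specify these details.

/* Illustrative sketch only: a weapon-side table of registered magazines
 * and helpers for registration and for totaling reported counts. */
#include <stdbool.h>
#include <stdint.h>

#define MAX_REGISTERED 8 /* assumed limit on concurrently registered magazines */

typedef struct {
    uint32_t magazine_id; /* identifier assigned at registration */
    uint8_t cartridges;   /* last count reported over the magazine link */
    bool active;          /* cleared when the magazine is deregistered */
} registered_mag_t;

/* Register a magazine, deriving its identifier from the weapon's own
 * identifier as suggested above; the XOR mixing is purely illustrative. */
bool register_magazine(registered_mag_t mags[MAX_REGISTERED],
                       uint32_t weapon_id, uint32_t serial, uint8_t count)
{
    for (int i = 0; i < MAX_REGISTERED; i++) {
        if (!mags[i].active) {
            mags[i].magazine_id = weapon_id ^ serial; /* assumed scheme */
            mags[i].cartridges = count;
            mags[i].active = true;
            return true;
        }
    }
    return false; /* no free registration slot */
}

/* Total the ammunition available to the user across all actively
 * registered magazines, for presentation on a display. */
uint16_t total_ammunition(const registered_mag_t mags[MAX_REGISTERED])
{
    uint16_t total = 0;
    for (int i = 0; i < MAX_REGISTERED; i++) {
        if (mags[i].active)
            total += mags[i].cartridges;
    }
    return total;
}

Under this sketch, deregistration is simply the clearing of the active flag, after which the weapon would no longer count, or answer for, that magazine.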
In some cases, the magazine may be inserted into a traditional weapon, or a weapon that does not support one or more of the embodiments described herein. In some such cases, the magazine100may present an inaccurate count of the cartridges within the magazine100when the user interacts with the control interface130because of, for example, the pressure applied to the cartridges in the magazine by the weapon or the inability to determine when a cartridge is within a chamber of the weapon. In some cases, the weapon may support embodiments disclosed herein, but the battery may be drained or the circuitry of the weapon may be off. In such cases, an inaccurate cartridge count may also be displayed because the status of the bolt and/or chamber may be unknown. FIG.8Billustrates an embodiment of a magazine with a control interface130positioned near the egress of the magazine in accordance with certain embodiments. In particular, the control interface130is positioned at a location on the magazine100that is inaccessible when the magazine100is inserted into a weapon. Advantageously, by moving the control interface130to a location on the magazine that is inaccessible when the magazine100is inserted into the weapon, a user cannot interact with the control interface130and, therefore, cannot obtain a cartridge count without removing the magazine100from the weapon. Thus, a magazine is prevented from displaying an inaccurate cartridge count in cases where the magazine is inserted into a weapon that does not support embodiments disclosed herein, or in cases where the weapon does support the embodiments disclosed herein, but in which the battery of the weapon is drained or dead, or in which the circuitry of the weapon is inactive or turned off. Further, the control interface130may be positioned within a recess820, preventing the weapon from interacting with the control interface130when the magazine is inserted into the weapon. FIG.8Cillustrates the magazine ofFIG.8Binserted into a weapon. As illustrated, the control interface130is not accessible when the magazine100is inserted into the weapon. Accordingly, a user cannot interact with the control interface130to obtain a cartridge count within the magazine. It should be understood that the cartridge or ammunition count may be obtained by either removing the magazine, or interacting with a weapon that includes the embodiments disclosed herein for obtaining the ammunition count from the magazine and for determining the status of the chamber and/or bolt of the weapon. Example Non-Contact Connector In some embodiments, the magazine100can communicate with a weapon using a non-contact connector, such as an optical connector. In some embodiments, wireless radio frequency communication or electrical communication may be used between the magazine and the weapon. However, in some embodiments, the weapon may be fired at a faster rate than the speed of the RF communication, leading to an inaccurate ammunition count being displayed to the user. Further, electrical communication connections may become damaged by the environment. Advantageously, in certain embodiments, using an optical connector may enable faster communication compared to radio communication. Further, optical connectors can be cleaned using water and are less susceptible to damage by the environment or rough handling compared to electrical connectors. FIG.9Aillustrates an embodiment of a non-contact optical connector116included as part of the magazine.
The optical transceiver116may be positioned at a location that enables the optical transceiver116to line up or mate with an optical transceiver included in a weapon that is designed to be used with the magazine100, or for which the magazine100is designed. The optical transceiver116is configured to communicate using light and, thus, can mate with another optical transceiver without making physical contact. However, in some cases, the optical transceiver116may be in physical contact with another optical transceiver to minimize or eliminate potential interference from ambient light. Further, while there are advantages to using an optical transceiver116, such as the elimination of breaks in a wire-based communication system, in some embodiments the optical transceiver116may be supplemented by or replaced by other types of transceivers, such as a wireless transceiver, or a wire-line transceiver configured to communicate with the weapon when metal contacts included in the transceiver of the magazine contact corresponding metal contacts in a transceiver of the weapon. In some embodiments, the magazine100may communicate with a weapon using inductive, electromagnetic, or electrical communication. For example, an alignment of inductive elements may enable a data and/or power transfer between the weapon and the magazine100. As another example, when the magazine is inserted into the weapon, pins or other contact points of a transceiver on the magazine may align with corresponding pins or contact points on the weapon, enabling electrical communication. In some cases, the transceiver116may use near field communications (NFC) and/or radio frequency communication. Alternatively, or in addition, the transceiver116may use ultra-wide band communication to reduce the impact of interference on the communications and/or to reduce the chance of eavesdropping. The ultra-wide band communication may communicate over a larger bandwidth than conventional narrowband communication or carrier wave transmission. In some embodiments, the ultra-wide band communication may occur over a bandwidth exceeding 500 MHz. In some embodiments, the transceiver116may use spread spectrum communication. Further, in some embodiments, the transceiver116may use encrypted communications or a secure channel during communication. In some embodiments, the transceiver116may use Bluetooth®, Wi-Fi®, the RADIUS protocol, or any other type of narrow band or wide band communication protocol. FIG.9Billustrates an embodiment of a signal converter900within a magazine and a signal converter902within a weapon that enables communication between the optical transceiver116of the magazine and an optical transceiver of a weapon, such as the optical transceiver1602described with respect toFIG.16below. In certain embodiments, the signal converter900may be or may be included as part of the optical transceiver116. The signal converter900can be configured to convert between electrical digital signals received from a magazine processor906and optical digital signals, which can be transmitted optically to a signal converter902of a weapon. It should be understood that the reverse process is possible as well. In other words, the signal converter902of the weapon may communicate optically with the signal converter900of the magazine. An electrical signal can be passed from a magazine processor906to an optical transmitter910. This electrical signal may be a digital signal used to provide information about the magazine100to a weapon.
For example, the magazine may communicate a cartridge count to the weapon. The optical transmitter910can be configured to convert an electrical digital signal into a corresponding optical digital signal. The optical transmitter910can convert electrical signals to optical signals using appropriate techniques, such as, for example, by outputting an optical signal proportional to the input electrical current. The optical transmitter910can be any suitable component for converting electrical digital signals to optical digital signals, such as, for example, HXT4101A-DNT manufactured by GigOptix, Inc. of San Jose, CA. The output of the optical transmitter910is an optical digital signal that can be coupled to a collimating lens920. In some embodiments, an input optical digital signal passes through a focusing lens921configured to substantially focus a collimated optical signal onto an optical receiver912. The focused optical digital signal can be substantially directed and focused onto the optical receiver912configured to convert an optical digital signal into a corresponding electrical digital signal. The corresponding electrical digital signal can be provided to the magazine processor906. This electrical digital signal may be an acknowledgement of the ammunition count, a command to obtain a magazine count, an updated calibration table for a linear encoder of the magazine, or any other command or data that the weapon may supply to a magazine. The optical receiver912can convert optical signals to electrical signals using any appropriate technique such as, for example, outputting an electrical current that is proportional to the input power of the optical signal. The optical receiver can be any suitable component for converting optical digital signals to electrical digital signals, such as, for example, HXR-4101A-DNT-T manufactured by GigOptix, Inc. of San Jose, CA. The optical digital signal output by the optical transmitter910can be collimated by collimating lens920. The collimated optical signal may pass through an output gap922and then an input gap923before being focused by focusing lens921. In some embodiments, the output and input gaps922and923can be about 1 mm between lens elements. In some embodiments, the gaps922and923may be greater than or equal to about 2 mm, less than or equal to about 1 mm, about 0.5 mm, or about 1.5 mm. In some embodiments, the gaps922and923can have differing distances between lens elements, such as, for example, there can be about 1.5 mm between lens elements920and926in output gap922and there can be about 0.8 mm between lens elements921and927in input gap923. In some embodiments, the non-contact optical connection can include transparent windows928and929of the signal converter900that are configured to have an exterior surface that is substantially aligned with an exterior surface of the corresponding transparent windows928and929of the signal converter902. The transparent windows928and929can be configured to be substantially transmissive for wavelengths that correspond to wavelengths of light used in the optical transceiver918of the magazine and the weapon. The transparent windows928and929can be treated with coatings to make them more durable, scratch resistant, hydrophobic, polarized, filtered, and the like. The transparent windows928and929can provide a protective surface for the lens elements920,921,926and927.
The transparent windows928and929can provide a surface that is cleaned with relative ease to maintain optical coupling between components of the magazine and the weapon. In some embodiments, the signal converters900and902may include power connectors and power transmission lines that can optionally be used to supply power to the magazine and/or to the weapon. In some such embodiments, the power may be used to power a processor in the magazine or the weapon. Alternatively, or in addition, the power may be used to charge a battery in the magazine or the weapon. In certain embodiments, the magazine processor906of the magazine may be the electronic circuitry122or may be included in the electronic circuitry122of the magazine100. The magazine processor906may be an FPGA, a microprocessor, or a custom processor configured to at least determine the ammunition count for cartridges inserted into the magazine100. Further, the magazine processor906may control a display of the magazine100. The firearm processor908of the weapon may be or may be included in the electronic circuitry2002described in more detail inFIG.20. The firearm processor908may be an FPGA, a microprocessor, or a custom processor configured to perform one or more of the embodiments described herein. Further, the firearm processor908may determine a number of cartridges registered to the firearm or weapon via the inclusion of the cartridges in one or more magazines registered with the firearm or weapon. In some embodiments, the firearm processor908may determine whether a cartridge is loaded within a chamber of the firearm. In addition, the firearm processor908may determine whether the firearm is jammed. Example Magazine Display FIG.10illustrates an embodiment of a display124included as part of the magazine. The display124is positioned at the bottom of the side of the magazine100that faces the user so that the user can view the display124while creating minimal interference with the chamber that includes the cartridges and preventing blockage by the weapon when the magazine100is inserted into the weapon. However, the location of the display124is not limited as such and may be positioned elsewhere on the magazine100. For example, the display124may be positioned closer to the optical transceiver116(seeFIG.7) so that the display124is closer to eye level with a user when the user is using the weapon. In some cases, the display124may extend outwards from the housing102, providing room for installation of the display124without interfering with the placement of cartridges within the magazine100. As illustrated inFIG.10, the display124may be inset. Advantageously, in certain embodiments, the inset display124can improve visibility to the user while reducing visibility of the display124by other observers, such as enemy combatants. Further, the display124may comprise an LED display, an LCD display, an OLED display, a touchscreen display, or any other type of display. Moreover, the display124may display one or more data items to a user. For example, the display124may display a number of cartridges within the magazine, a number of cartridges fired from the magazine, a number of shots fired by the weapon, a number of magazines available to the user, a jam state of the weapon, whether a cartridge is within a chamber of the weapon, and any other information relating to the status of the magazine loaded in the weapon, magazines available to the user, and/or the weapon itself.
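By way of illustration only, data items such as those enumerated above might be carried from the magazine processor906to the firearm processor908over the link between the signal converters900and902in a compact frame. The frame layout, the sync byte, the field sizes, and the checksum below are assumptions made for illustration, not the disclosure's actual protocol.

/* Illustrative sketch only: serialize a magazine status report into a
 * byte buffer to hand to the optical transmitter. */
#include <stdint.h>

/* Frame: sync, type, 32-bit magazine id (big-endian), cartridge count,
 * battery percentage, XOR checksum -- nine bytes total (assumed layout). */
int encode_status(uint8_t buf[9], uint32_t magazine_id,
                  uint8_t cartridges, uint8_t battery_pct)
{
    buf[0] = 0xA5; /* sync byte marking the start of a frame (assumed) */
    buf[1] = 0x01; /* message type: cartridge count report (assumed) */
    buf[2] = (uint8_t)(magazine_id >> 24);
    buf[3] = (uint8_t)(magazine_id >> 16);
    buf[4] = (uint8_t)(magazine_id >> 8);
    buf[5] = (uint8_t)magazine_id;
    buf[6] = cartridges;
    buf[7] = battery_pct;
    buf[8] = 0;
    for (int i = 0; i < 8; i++)
        buf[8] ^= buf[i]; /* XOR checksum over the preceding bytes */
    return 9; /* number of bytes to transmit */
}

A checksum of some form is a reasonable design choice for a link whose transparent windows are exposed at the magazine well, since a partially obscured window could corrupt individual bits; a receiver under this sketch would discard a frame whose checksum fails and rely on the next report.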
In some embodiments, the display124may be optional and/or may supplement an additional display associated with the weapon and/or other gear (e.g., a helmet or goggles) of the user. In certain embodiments, the output of the number of cartridges within a magazine, or a number of cartridges available to a user across magazines, may be an auditory output that is received via a user's radio or headset. In some cases, the user may receive both a visual and audio indicator of the number of cartridges available. Example Magazine Circuitry FIG.11Aillustrates an embodiment of the electronic circuitry122and display124included as part of the magazine. As illustrated, the electronic circuitry122may be fit into the base or cap128of the magazine.FIG.11Billustrates an alternative view of the embodiment of the electronic circuitry122and display ofFIG.11A. As previously illustrated with respect toFIG.1, the battery126may be or may be shaped as an AA or AAA battery. Alternatively, the battery may be a discus or circular shaped battery, similar to a watch battery in shape. The battery may be a lithium-ion battery, an alkaline battery, a nickel cadmium battery, a nickel metal hydride battery, or any other type of battery. In some cases, the battery may be a rechargeable battery. In embodiments where the battery is a rechargeable battery, the battery may be recharged by placing the magazine on a recharger or a charging pad. In some embodiments, the battery may be removed from the magazine for replacement or charging purposes. In other embodiments, the battery may not be removed from the magazine. For example, in some cases, the electronics of the magazine may be housed in a waterproof housing, and the battery may be sealed within the housing. FIG.12illustrates an embodiment of a display circuit1200for a display. The display may be part of a magazine, a weapon, a scope, or other component of a weapon or weapon system. The display circuit1200may be included as part of the display or may be in communication with the display. In some embodiments, the display circuit1200may be included as part of the electronic circuitry of the weapon, such as the electronic circuitry2002described below with respect toFIG.20. The display circuit1200may include a plurality of light emitting diodes (LEDs)1202. Alternatively, or in addition, these light emitting diodes1202may be included as part of the display and may be in electrical communication with the display circuit1200. The LEDs1202may include ultra-bright LEDs that are visible when a user is in full daylight. The ultra-bright LEDs1202may have a very bright output, enabling them to be viewable during the daytime. For example, ultra-bright LEDs1202may have a luminosity of 100 millicandelas (mcd), 200 mcd, 300 mcd, 500 mcd, or more, or any range between the foregoing. Further, the ultra-bright LEDs1202may be dimmed to a very low level, enabling them to be viewed through night vision goggles without interfering with the user's ability to use the night vision goggles. For example, the ultra-bright LEDs1202may be configured to consume milliwatts of power during the daytime and may be configured to consume microwatts of power when a user is wearing night vision goggles. Further, the ultra-bright LEDs1202may be configured to output light at varying degrees of magnitude. Thus, there may be one brightness level during a sunny day, another brightness level during a cloudy day, another brightness level at night, and another brightness level when a user is wearing night vision goggles.
In some embodiments, the LEDs1202may be dual-mode or multi-mode LEDs that are capable of functioning at different levels of brightness based at least in part on an amount of power received and/or on a control signal. In other words, the LEDs1202may provide different brightness outputs based at least in part on the amount of power received. In certain embodiments, the LEDs1202are configured to provide a seven-segment display. Further, the LEDs1202may display multiple numbers. In some embodiments, the LEDs1202may be configured to display characters and/or symbols instead of or in addition to the numbers. The display circuit1200may control the LEDs1202to make them viewable during the daytime, at nighttime, or, as previously described, when a user is wearing night vision goggles. To enable the different viewable modes, the display circuit1200includes multiple resistor paths or other current driving devices that are in communication with a battery1210. For example, the display circuit1200may include two resistor paths: a first path that includes the resistor1204and a second path that includes the resistor1206. The resistor1204may be a relatively small resistor (e.g., a 10, 20, 50, or 100 ohm (Ω) resistor, or any value between the foregoing resistances) compared to the resistor1206, which may be a relatively large resistor (e.g., a 1, 2, 10, or 50 mega-ohm (MΩ) resistor, or any value between the foregoing resistances). Further, the display circuit1200may include a switch1208that may connect the battery1210to the light emitting diodes1202via the path that includes the resistor1206or the path that includes the resistor1204. In certain embodiments, when a user is not wearing night vision goggles, the LEDs1202may be connected to the battery1210via the path that includes the smaller resistor1204, resulting in a brighter output compared to when the LEDs1202are connected to the battery1210via the path that includes the larger resistor1206. In contrast, when it is determined that the user is wearing night vision goggles, the LEDs1202may be connected to the battery1210via the path that includes the larger resistor1206, resulting in a dimmer output compared to when the LEDs1202are connected to the battery1210via the path that includes the smaller resistor1204. In some embodiments, the display circuit1200may include additional resistor paths or other driver circuitry that connect the battery1210to the light emitting diodes1202depending on the brightness of the ambient light and/or whether the user is wearing night vision goggles. The switch1208may select the resistor paths to connect the battery1210to the light emitting diodes1202based on a control signal received from a controller1212. The controller1212may configure the switch1208based on input from a user interface switch1214. The user interface switch1214may enable a user to turn off the light emitting diodes1202, identify that the user is wearing night vision goggles, or select a brightness auto adjustment mode that automatically selects a resistor path so as to adjust the brightness of the LEDs1202based on an ambient light detected by the light sensor1216.
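By way of a non-limiting sketch, the path-selection logic described above may be organized in software as follows. The Python rendering, the names (Mode, select_path, AMBIENT_DARK_LUX), and the lux threshold are illustrative assumptions rather than the particular firmware of the controller1212.

from enum import Enum

class Mode(Enum):
    OFF = 0           # user turned the display off via the switch 1214
    NIGHT_VISION = 1  # user indicated night vision goggles are worn
    AUTO = 2          # brightness follows the ambient light sensor 1216

AMBIENT_DARK_LUX = 10.0  # assumed lux threshold for choosing the dim path

def select_path(mode: Mode, ambient_lux: float) -> str:
    # 'bright' stands for the small-resistor path (e.g., resistor 1204),
    # 'dim' for the large-resistor path (e.g., resistor 1206), and 'open'
    # leaves the switch 1208 open so the LEDs are unpowered.
    if mode is Mode.OFF:
        return "open"
    if mode is Mode.NIGHT_VISION:
        return "dim"
    return "bright" if ambient_lux >= AMBIENT_DARK_LUX else "dim"

assert select_path(Mode.NIGHT_VISION, 1000.0) == "dim"
assert select_path(Mode.AUTO, 0.5) == "dim"
assert select_path(Mode.AUTO, 500.0) == "bright"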
In some embodiments, the display circuit1200may support pulse width modulation (PWM). In some such embodiments, the controller1212may include a pulse width modulation (PWM) controller1218. PWM may be used to set or adjust a duty cycle for the power supplied to the LEDs1202. In other words, instead of supplying power from the battery1210to the LEDs1202at a constant level, the PWM controller1218can alternate between supplying power and not supplying power to the LEDs1202. For example, when the display controlled by the display circuit1200is activated or turned on, the PWM controller1218can alternate equally between supplying and not supplying power to the LEDs1202(e.g., 50% duty cycle) for a repeating set of time periods (e.g., a set of clock cycles). As another example, the PWM controller1218may supply power 75% of the time (e.g., 75% duty cycle) or 25% of the time (e.g., 25% duty cycle) for a repeating set of time periods (e.g., a set of clock cycles). Advantageously, in certain embodiments, by using PWM, greater control over the brightness of the LEDs1202can be achieved. Further, a greater variety of brightness levels can be achieved. Thus, the LEDs1202can be made bright or dim based on the amount of sunlight. Further, the LEDs1202can be dimmed to very low levels when the user is wearing night vision goggles. In some embodiments, the LEDs1202may be dimmed to a level not visible to the human eye, but that is visible to a user wearing night vision goggles. In certain embodiments, the PWM controller1218may implement PWM by adjusting the switch1208between an open position (as illustrated inFIG.12) and a closed position with the path that includes the larger resistor1206(when in night vision mode) or the smaller resistor1204(when not in night vision mode). In some embodiments, the display circuit1200may include a controller for controlling an augmented reality display. In some embodiments, the output of the display may be projected onto a scope generating an augmented reality display interface. In other embodiments, the augmented reality display may be output via another display of the user, such as via the user's goggles or night vision helmet. In certain embodiments, the magazine may include an accelerometer, a gyroscope, and/or other types of motion sensors that can detect when a user has picked up the magazine or when the magazine is in motion. When it is determined that the magazine is in motion, the magazine circuitry can be transitioned from an off-state to an on-state or from a sleep-mode to an on-state. Transitioning to an on-state may include turning on a processor of the magazine to enable a determination of cartridge count within the magazine. However, in certain embodiments, the display124of LEDs1202may remain unpowered to prevent the emission of light at an undesired time.
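A minimal sketch of this motion-based wake behavior follows. The thresholds and the names used (MagazineCircuitry, WAKE_THRESHOLD_G, SLEEP_AFTER_S) are assumptions for illustration, not parameters disclosed herein; the point of the sketch is that motion powers the processor while the display stays unpowered.

WAKE_THRESHOLD_G = 0.15  # assumed acceleration change treated as motion
SLEEP_AFTER_S = 30.0     # assumed idle time before returning to sleep

class MagazineCircuitry:
    def __init__(self) -> None:
        self.state = "sleep"
        self.last_motion_s = 0.0
        self.display_powered = False  # display remains off on wake

    def on_accel_sample(self, delta_g: float, now_s: float) -> None:
        # Wake the processor on motion so a cartridge count can be taken,
        # while leaving the LEDs unpowered to avoid emitting light.
        if abs(delta_g) >= WAKE_THRESHOLD_G:
            self.last_motion_s = now_s
            self.state = "on"

    def tick(self, now_s: float) -> None:
        # Return to sleep after a period without motion.
        if self.state == "on" and now_s - self.last_motion_s > SLEEP_AFTER_S:
            self.state = "sleep"

mag = MagazineCircuitry()
mag.on_accel_sample(0.4, now_s=1.0)
assert mag.state == "on" and not mag.display_powered
mag.tick(now_s=40.0)
assert mag.state == "sleep"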
Example Weapon System FIG.13illustrates an embodiment of a weapon1300with the magazine100inserted.FIG.14illustrates a modified view of the embodiment of the weapon ofFIG.13. Referring toFIG.13, the weapon may optionally include a scope1302and/or a rail mounted display1304. In certain embodiments, the scope1302and/or display1304may present a user with some or all of the information that can be displayed on the display124. Further, in some cases, the scope1302and/or display1304may display additional and/or complementary information from what is displayed on the display124and/or another display. For example, the display124may display the amount of cartridges within the magazine100, the scope1302may display the amount of cartridges within the loaded magazine and the number of magazines available to the user, and the display1304may display the number of shots fired. Electronics for controlling the additional displays, such as the scope1302and the display1304, may be located within the handle of the weapon1300. The handle may include a control1306for activating or deactivating electronics included in the weapon1300. Further, the control1306may be used to cycle between different display options including the data to display and the display sources (e.g., the scope1302or the display1304). In some cases, the control1306may also control the display124of the magazine100. To display data and to control the various display sources, the weapon1300may include one or more optical transceivers, similar to the optical transceiver116of the magazine100. Cables or wires may be used to communicate between the electronics within the handle of the weapon1300and various connection points in the weapon1300that enable accessories connected to the weapon1300to serve as display devices. For example, a cable may connect the electronics to an optical transceiver that communicates with a transceiver on the scope1302. FIG.15illustrates a line drawing of the embodiment of the weapon ofFIG.13without the inserted magazine.FIG.16illustrates a line drawing of the embodiment of the weapon ofFIG.13and an un-inserted magazine. As can be seen with respect toFIG.16, the weapon includes a lip1602that extends from the insertion port1604where the magazine100is inserted into the weapon1300. The lip1602and the magazine100may be designed to align the optical transceiver116of the magazine100and an optical transceiver that may be included in the lip1602of the weapon1300. The mating of the optical transceiver116and the transceiver in the lip1602of the weapon1300can be seen inFIG.17, which illustrates an embodiment of the weapon handle of the weapon1300and the magazine100. In some embodiments, the optical transceiver116may be configured to communicate using infrared communications. Alternatively, or in addition, the optical transceiver116may be configured to communicate using other optical or frequency communication bands. In certain embodiments, as previously described with respect to the optical transceiver116, the transceiver included in the lip1602may be a contactless transceiver. Further, in some implementations of the weapon1300, the transceiver included in the lip1602may be a non-optical transceiver, such as a near field communications (NFC) reader. In some embodiments, the weapon1300may further include an optical scanner that can scan or otherwise access a machine-readable code included on the magazine100. The weapon1300can register one or more magazines by scanning the machine-readable code included on each magazine. Alternatively, the weapon1300can access an RFID tag on the magazine to register the magazine. Once the magazine has been registered with the weapon1300, the weapon1300can monitor the status of the magazine, such as whether the magazine has been inserted into the weapon, whether the magazine is empty, or the number of cartridges included in the magazine. In some cases, the weapon1300may use near-field communication, Bluetooth™, Wi-Fi™, or any other type of communication to communicate with one or more magazines within a particular distance of the weapon1300. In some embodiments, the weapon1300can receive one or more status signals from one or more magazines registered with the weapon. Advantageously, in certain embodiments, the weapon1300can aggregate the information received in the status signals to determine a status of a user's total ammunition.
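One non-limiting sketch of such aggregation is shown below. The record layout and the names (MagazineStatus, aggregate) are assumptions for illustration, not a disclosed message format; the sketch simply combines per-magazine status signals into a user-level summary.

from dataclasses import dataclass

@dataclass
class MagazineStatus:
    magazine_id: str
    cartridges: int
    inserted: bool

def aggregate(statuses: list[MagazineStatus]) -> dict:
    # Combine the status signals of all registered magazines.
    return {
        "total_cartridges": sum(s.cartridges for s in statuses),
        "total_magazines": len(statuses),
        "loaded_magazines": sum(1 for s in statuses if s.cartridges > 0),
        "empty_magazines": sum(1 for s in statuses if s.cartridges == 0),
    }

fleet = [
    MagazineStatus("MAG-1", 30, inserted=True),
    MagazineStatus("MAG-2", 25, inserted=False),
    MagazineStatus("MAG-3", 0, inserted=False),
]
assert aggregate(fleet)["total_cartridges"] == 55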
For example, the weapon1300can determine and present to the user a total quantity of ammunition available to the user, the total number of magazines available to the user, the number of loaded and/or empty magazines in the user's gear, etc. FIG.18illustrates an embodiment of the ammunition status display1304attached to a rail1802of the weapon1300.FIG.19illustrates an embodiment of the ammunition status display1304separate from the weapon1300. It should be understood that the display1304may be attached to other locations of the weapon1300. The display1304may include a number of controls1804and1806for configuring the data that is presented on the display1304. For example, the display1304may present a count of shots fired, a count of ammunition available, and/or a count of ammunition within a loaded magazine. Further, the display1304may include circuitry for detecting when shots have been fired. For example, the display1304may include circuitry for detecting vibrations within the barrel1808of the weapon1300and/or audio from a bullet being fired. Using the detected vibrations and/or audio, the display1304can count shots fired. Example Weapon Internals FIG.20illustrates a cross-section of an embodiment of the weapon1300with the magazine100installed.FIG.21illustrates a line drawing of the cross-section of the embodiment of the weapon ofFIG.20with the magazine installed. Referring toFIG.20, as illustrated, the handle2000may include electronic circuitry2002that can process data received from the magazine100. This electronic circuitry2002may be powered by a battery2004that can be installed in the handle. In certain embodiments, the battery2004may also power the electronic circuitry included in the magazine100. In some such embodiments, the battery126of the magazine100may be optional or omitted. In other embodiments, the battery2004may be used to charge the battery126and/or vice versa. The battery within the weapon used to power the electronic circuitry2002may be of the same type as the battery in the magazine100. In other cases, the battery used for the weapon may differ from the battery used for the magazine100. In addition to processing data relating to the amount of cartridges within the magazine100, the electronic circuitry2002can track a number of magazines available to a user; the status of the magazines, such as cartridges per magazine, empty or full status, etc.; the number of rounds fired since a particular time, such as per day, during a particular mission, since manufactured, etc.; and the position of a bolt2010within a buffer tube2012. In some embodiments, the electronic circuitry2002may include a transmitter for transmitting some or all of the data to a display that is separate from the weapon1300. For example, the data may be transmitted to a heads up display (HUD) within a helmet or goggles of a user. In some embodiments, data may be transmitted to a command center enabling a commanding officer to monitor ammunition count of soldiers, or other users, in the field or during a mission. In certain embodiments, the electronic circuitry2002of the weapon may be configured to determine a number of shots fired by the weapon. The electronic circuitry2002may determine the number of shots fired by tracking a change in the cartridges available to the user.
For example, if it is determined that a magazine has 12 cartridges at a first point in time and at a second point in time it is determined that the magazine has 5 cartridges, the electronic circuitry2002may determine that seven cartridges have been fired by the weapon. However, if the electronic circuitry2002determines a change in the cartridges available in a magazine that is not inserted into the weapon, the electronic circuitry2002may determine that a user has added or removed cartridges without firing the cartridges from the weapon. Similarly, if the electronic circuitry2002determines that the magazine is no longer registered to the weapon, the total count of cartridges available to the user may be reduced without increasing a shot count for the weapon. In some embodiments, the electronic circuitry2002may determine the number of shots fired based at least in part on the number of measured movements of a bolt in the weapon. The bolt may include a cylinder, a rod, or other movable portion of the weapon located within a buffer tube. In certain embodiments, the movement of the bolt can be used to open or close the chamber of the weapon. Further, movement of the bolt may facilitate loading a cartridge into a chamber and/or expelling a spent cartridge from the chamber or the weapon. If the bolt is moved, but a trigger is not pulled, the electronic circuitry2002may determine that a shot was not fired despite movement of the bolt. Similarly, if the bolt is moved, but a magazine has not been inserted into the weapon or a magazine inserted into the weapon has no cartridges, the electronic circuitry2002may determine that a shot was not fired despite movement of the bolt. In some embodiments, the electronic circuitry2002may be configured to communicate with different magazines to obtain a total count of magazines registered with the weapon. The electronic circuitry2002may include a wireless transceiver to communicate with the magazines registered with the weapon. This wireless transceiver may use ultra-wideband communications to reduce the possibility of interference by other systems or users and/or to reduce the possibility that the communications are captured by other systems or users. In some embodiments, the communication may be encrypted or use a secure channel. Further, communication between the weapon and systems of another user or a command center may be encrypted. Thus, in some instances, the electronic circuitry2002of the weapon of one user may use encrypted ultra-wideband communication to communicate with a system of another user or a computing system at a command center. In some implementations of the magazine100, data may be transmitted from the magazine100to a display that is separate from the magazine100and the weapon1300. For example, as described above with respect to the handle electronics2002, data from the magazine100may be transmitted to a HUD in a helmet or eyewear. As previously described, the magazine100and/or weapon1300can track ammunition within the magazine100. Further, the electronic circuitry2002can determine whether a cartridge or bullet is loaded into a chamber2014of the weapon1300. The weapon1300may include a magnet2016positioned within the buffer tube2012. Further, the weapon1300may include one or more Hall effect sensors2018within the buffer tube2012. The Hall effect sensors2018can be used to determine a location of the magnet2016within the buffer tube2012.
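The following non-limiting sketch illustrates one way the location of a magnet may be estimated from such an array of sensor readings. The weighted-centroid method and the names used (magnet_position_mm, SENSOR_SPACING_MM) are illustrative assumptions, not the particular encoder design disclosed herein.

SENSOR_SPACING_MM = 5.0  # assumed distance between adjacent sensors

def magnet_position_mm(readings: list[float]) -> float:
    # readings[i] is the field magnitude at sensor i; the magnet is
    # assumed to lie near the sensor with the largest reading, refined
    # by a weighted centroid over the immediate neighbors.
    peak = max(range(len(readings)), key=lambda i: readings[i])
    lo = max(peak - 1, 0)
    hi = min(peak + 1, len(readings) - 1)
    total = sum(readings[lo:hi + 1])
    if total == 0:
        return peak * SENSOR_SPACING_MM
    centroid = sum(i * readings[i] for i in range(lo, hi + 1)) / total
    return centroid * SENSOR_SPACING_MM

# Magnet roughly between sensors 2 and 3, closer to sensor 2.
pos = magnet_position_mm([0.1, 0.4, 1.0, 0.7, 0.2])
assert 10.0 < pos < 15.0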
Using the location of the magnet2016, the electronic circuitry2002can determine whether a cartridge has been loaded into the chamber2014. Similar to the electronic circuitry122, the electronic circuitry2002may include a hardware processor, or programmable hardware, and may be configured to perform one or more of the processes described herein. In some cases, a user may insert a magazine100that includes the features disclosed herein for determining a cartridge count into a traditional weapon that does not include the features disclosed herein. Although the magazine100can still determine the cartridge count in the magazine, the weapon may be unable to communicate whether a cartridge is loaded into the buffer tube. To prevent a user from obtaining a false reading of the cartridge count within the weapon (e.g., the cartridges in the magazine combined with the cartridge, if any, in the buffer tube), a button or other deactivation trigger may be positioned on the magazine100. When the magazine100is inserted into a weapon that does not include the capability of detecting a cartridge in the buffer tube, the button or other deactivation trigger may be activated causing the electronics of the magazine100to be deactivated. This button may be positioned such that when the magazine100is inserted into the weapon, a portion of the weapon presses against the button. For example, the button may be positioned near the top of the magazine such that the button is inserted within the weapon along with the magazine and is pressed against an inner surface of the insertion port of the weapon. Accordingly, a user will not obtain a cartridge count of cartridges within the weapon. To determine the number of cartridges within the magazine, a user can eject the magazine100and reactivate the circuitry to determine the cartridge count within the magazine100. The circuitry may automatically be reactivated when the magazine is ejected as the button, or other deactivation trigger, will no longer be in contact with the weapon. A weapon configured to detect a cartridge in a chamber may be configured to not press against the button of the magazine, thereby preventing the cartridge count circuitry from being deactivated. For example, the insertion port may include a gap or notch that prevents the button on the magazine from being depressed. As an alternative, the circuitry of the weapon may communicate its functionality to the circuitry of the magazine. Upon receipt of a message from the weapon that it is capable of detecting a cartridge in the chamber, the magazine may activate its cartridge counting capabilities. In some embodiments, the control interface of the magazine can be positioned on a portion of the magazine that is inaccessible to the user when the magazine is inserted into the firearm. In some embodiments, the electronic circuitry2002may track a number of times a weapon has been fired. Accordingly, the electronic circuitry2002may provide maintenance information to a user or maintenance warnings alerting the user when one or more portions of the weapon should be serviced. In some cases, the electronic circuitry2002may track a number of times a weapon is fired since the weapon, or a portion thereof, was last serviced. The electronic circuitry2002may be capable of processing auxiliary data received from auxiliary sources and/or output to auxiliary systems. For example, the weapon1300may include a receiver or transceiver that can receive information from a range finder or a global positioning system (e.g., GPS).
This information may be provided on a display, such as one integrated with a scope of the weapon. Thus, a user can view the cartridges available to the user in one or more magazines, a position of the user within a geographic area, and a distance to a target. Advantageously, by displaying the auxiliary information on a scope of the weapon, a user can obtain or view the information without removing his or her focus from a potential target. In another example, the weapon1300may communicate information obtained by the weapon1300or determined by the electronic circuitry2002to a radio of the user for transmission to another user, such as a unit leader or commander. For instance, an ammunition count may be communicated over the radio to a user. FIG.22illustrates a line drawing of a cross-section of an embodiment of the handle2000and magazine100inserted into the weapon1300.FIG.23illustrates a line drawing of another perspective of the cross-section of the embodiment of the handle and magazine inserted into the weapon ofFIG.22. With reference toFIG.22, the handle2000can further include a cap2202that can be opened to replace the batteries2004that power the electronic circuitry2002. Further, the handle2000may include optical connections2204that communicate data to one or more optical transceivers that provide the data to one or more displays (e.g., the display1304or scope1302) for display to a user. FIG.23further illustrates the alignment of the optical transceiver116of the magazine100and the optical transceiver2302located in the lip1602of the weapon1300. As illustrated by the line drawing, when inserted into the weapon1300, the magazine100is aligned such that the optical transceiver116aligns with the optical transceiver2302. Further, the spacing between the magazine100and the weapon1300may be sufficiently small to prevent ambient light from interfering with the optical connection. FIG.24Aillustrates an embodiment of a non-contact optical connector2302included in the lip1602of the insertion port for inserting a magazine100into the weapon1300. This non-contact optical connector2302can function as a transceiver for receiving data from a magazine100when the magazine100is inserted into the weapon1300. Although typically the optical transceiver116transmits data and the optical transceiver2302receives data, in some embodiments, the optical transceiver116may also receive data and/or the optical transceiver2302may also transmit data. Advantageously, in certain embodiments, by enabling the weapon1300to transmit data to the magazine100, the display124of the magazine100can display data gathered by the weapon1300, such as shot count or total magazines available. In certain embodiments, it is desirable to confirm whether the magazine100is inserted into the weapon1300. For example, a different calibration table for a linear encoder may be used when the magazine is inserted into the weapon1300. In some cases, the magazine100can determine it is positioned within the weapon1300by communicating via the optical transceiver. But in some cases, the magazine100may not be able to communicate using the optical transceiver because, for example, the weapon is deactivated or the battery within the weapon1300is uncharged or dead.
One method of determining whether the magazine is inserted into the weapon1300is to use a magnetic sensor, such as a Hall effect sensor. The magnetic sensor can detect when it is within a threshold distance of a magnet positioned within the insertion port of the weapon1300, or at a location that is within a threshold distance of the magazine100when the magazine100is inserted into the insertion port of the weapon1300. FIG.24Billustrates a portion of the magazine100with a transceiver116and a magnetic sensor2410. The magnetic sensor2410may be a Hall effect sensor. Because the magnetic sensor2410is collocated with the transceiver116, the magnetic sensor2410may be aligned with a corresponding transceiver2302when the magazine100is inserted into a weapon1300. FIG.24Cillustrates a portion of the weapon1300with a transceiver2302and a magnet2420. Because the magnet2420is collocated with the transceiver2302, the magnet2420may be aligned with the magnetic sensor2410and the transceiver116when the magazine100is inserted into the weapon1300. Thus, the magazine100can determine from a signal generated by the magnetic sensor2410when the magazine is inserted into the weapon1300. The magazine100can then determine whether to load a calibration table associated with an inserted magazine or a calibration table associated with an uninserted magazine when determining an ammunition count. Further, as the magnetic sensor2410is located at the rear2425of the magazine100, while the magnetic sensors112of the linear encoder and magnet110are located near the front edge402of the magazine, the magnetic field generated by the magnet110will not be detected, or will not generate a strong enough signal in the magnetic sensor2410to cause an incorrect determination of insertion status. Similarly, as the magnet2420in the weapon1300is located adjacent to the rear of the magazine when the magazine is inserted into the weapon, the magnetic field generated by the magnet2420will not be detected by the sensors112or will not generate a strong enough signal in the magnetic sensors112to cause an incorrect determination of ammunition count. FIG.25illustrates an embodiment of a buffer tube2012within a weapon1300. As previously described, the buffer tube may have a set or series of Hall effect sensors2018. In certain embodiments, the Hall effect sensors2018may be supplemented and/or replaced by other types of sensors, such as optical sensors. Using the Hall effect, the sensors2018can detect the motion or action of a bolt2010based on the movement of a magnet2016attached to the bolt2010. Based on the position of the bolt2010, a hardware processor included, for example, in the handle of the weapon, can determine whether a cartridge has been loaded into a chamber of the weapon. Thus, advantageously, the weapon can determine the total ammunition count within the weapon based on a summation of the ammunition in a magazine loaded into the weapon and the amount of cartridges within the chamber of the weapon. Although the number of cartridges within a chamber of the weapon is typically 0 or 1, embodiments disclosed herein can be adapted for use with weapons that may load multiple cartridges into a chamber or multiple chambers of the weapon, such as with a combination gun that has chambers configured to load different types of ammunition (e.g., a combination shotgun and rifle). In some embodiments, the movement of the bolt2010can be detected based on principles other than the Hall effect.
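A minimal sketch of the insertion check and the resulting calibration table choice described above follows. The field threshold and the names (is_inserted, select_calibration) are assumptions for illustration only.

INSERTION_FIELD_THRESHOLD = 0.5  # assumed normalized field strength

def is_inserted(rear_sensor_field: float) -> bool:
    # True when the rear sensor (such as the sensor 2410) detects the
    # weapon's magnet (such as the magnet 2420) nearby.
    return rear_sensor_field >= INSERTION_FIELD_THRESHOLD

def select_calibration(tables: dict, rear_sensor_field: float):
    # Pick the inserted or uninserted calibration table accordingly.
    key = "inserted" if is_inserted(rear_sensor_field) else "uninserted"
    return tables[key]

tables = {"inserted": "table-A", "uninserted": "table-B"}
assert select_calibration(tables, 0.9) == "table-A"
assert select_calibration(tables, 0.1) == "table-B"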
If the sensors2018are light sensors, for example, the movement of the bolt2010can be detected based on the change of light around the sensors2018. This light may filter in via the barrel of the weapon and/or the chamber. Alternatively, or in addition, the light may be based on a light source (e.g., an LED) attached to the bolt2010. FIG.26illustrates a line drawing of an embodiment of a buffer tube2012and stock2602of a weapon1300. As illustrated, the stock2602may be at least partially hollow allowing for the buffer tube2012to extend into the stock2602and providing room for the bolt2010to move into the stock2602. In some embodiments, the bolt2010may be extended compared to other bolts to enable the bolt to travel through the extended buffer tube. Advantageously, by extending the buffer tube2012into the stock2602, additional space may exist in the weapon1300compared to weapons that use a solid stock. This additional room may be used to insert the sensors2018. Further, using a hollow stock enables the weapon1300to maintain its weight despite the addition of the magnet, sensors, electronic circuitry, and/or batteries. It should be understood that in certain embodiments, the weapon1300can implement features of the present disclosure without modification to the stock of the weapon. For example, the sensors may be positioned between the chamber and the stock. Example Use Case FIG.27illustrates an example use case of certain embodiments described herein. In this non-limiting example use case, a soldier is approaching enemy combatants on a bridge. The soldier may consider engaging the enemy combatants, but does not wish to engage without a minimum quantity of ammunition loaded in the soldier's weapon. To ensure a minimum ammunition quantity in the weapon, the soldier could replace the magazine with a new magazine. However, this could result in the soldier carrying a number of partially filled magazines without knowing how much ammunition is available in total to the soldier. Alternatively, using embodiments disclosed herein, the user can look at a display, such as via the scope, and see the total ammunition count within the weapon including the magazine and chamber of the weapon (e.g., 30 cartridges in the illustrated example). Further, the soldier can be presented with a total ammunition count available to the soldier via additional magazines the soldier is carrying (e.g., 175 cartridges in the illustrated example). Alternatively, the numbers illustrated via the display in the scope may represent the cartridges available in the weapon out of the total capacity of the magazine. As illustrated inFIG.27, the information presented to the soldier is presented as part of a HUD and/or augmented reality display observable through a scope on the weapon. Thus, when the soldier looks through the scope, the soldier can see the targets in front of the weapon, and also see information provided by the weapon to the scope regarding the cartridges available to the user in, for example, the loaded magazine. The use case presented inFIG.27is one non-limiting example use case. Other use cases are possible. For example, a police force can use embodiments disclosed herein to monitor the ammunition available to its officers. Further, a hunter could use embodiments disclosed herein to monitor the ammunition available during a hunt.
Moreover, in certain embodiments, the weapon can include a transmitter for transmitting ammunition information to a command post or other location monitored by users (e.g., commanders) associated with the carrier of the weapon. Alternatively, or in addition, the weapon may communicate with another system of the user, such as the user's helmet, which may communicate the ammunition information to the command post. Advantageously, the ability to monitor ammunition at a command post or other location enables a user, such as a commander or police captain, to monitor the ammunition of a user (e.g., a soldier or policeman) to determine whether reinforcements are needed, additional ammunition is needed, or if an unexpected firefight is occurring. For instance, a police captain can determine if a traffic officer has fired his weapon by receiving an alert that an amount of ammunition carried by an officer has changed. As another example, a field commander can determine that soldiers have been ambushed while guarding supplies based on an unexpected change in ammunition. Example Ammunition Count Determination Process FIG.28presents a flowchart of an embodiment of an ammunition count determination process2800. The process2800can be implemented by any system that can determine a count of the number of cartridges within a magazine. For example, the process2800, in whole or in part, can be implemented by electronic circuitry included in the magazine, such as the electronic circuitry122included in the magazine100. This electronic circuitry122can include hardware, such as a hardware processor, that can perform the process2800. In some embodiments, the electronic circuitry122includes application-specific hardware configured to perform the process2800. In other embodiments, the hardware may include a computer processor programmed with special instructions configured to perform the process2800. Further, the electronic circuitry122may include control circuitry for controlling one or more of the magnet110and/or the sensors112. In some embodiments, some or all of the process2800may be performed by electronic circuitry in the weapon, such as the electronic circuitry2002. The electronic circuitry2002may include some or all of the embodiments of the electronic circuitry122. To simplify discussion and not to limit the disclosure, portions of the process2800will be described with respect to particular systems, such as the electronic circuitry122or2002. However, it should be understood that operations of the process2800may be performed by other systems. For example, operations described as being performed by the electronic circuitry122may alternatively be performed by the electronic circuitry2002. The process2800begins at block2802wherein, for example, the electronic circuitry122generates a magnetic field within a magazine100using a first magnet110. In certain embodiments, generating the magnetic field may include producing an electric current in a wire that surrounds the magnet, such as with an electromagnet. In other embodiments, the magnetic field is generated by the magnet110. In some such cases, the electronic circuitry122may not be involved in generating the magnetic field. However, the electronic circuitry122may be involved in determining the strength of the magnetic field. Using a first set of sensors112within the magazine100, the electronic circuitry122at block2804detects a location of the magnet110based at least in part on the magnetic field measured by the set of sensors112.
In certain embodiments, the location of the magnet110is determined based on the particular sensor from the set of sensors that detects the magnetic field of the magnet110. In other embodiments, the location of the magnet110is determined based on the strength of the magnetic field as detected by a plurality of the sensors112. At block2806, the electronic circuitry122determines a number of cartridges within the magazine100based at least in part on the location of the first magnet as determined at the block2804. For example, if it is determined that the location of the first magnet110is at a particular location, the electronic circuitry122can determine that there are a particular number of cartridges within the magazine100based on a correspondence between the locations of the magnet110within the magazine100and the quantity of cartridges remaining within the magazine100. In some embodiments, the electronic circuitry122may access a number of tables stored in a memory of the electronic circuitry122that identify a correspondence between one or more signals generated by sensors included in the magazine100and the number of cartridges loaded in the magazine100. In some embodiments, the electronic circuitry122may perform an interpolation process to determine a position of a magnet within the magazine100based on a plurality of signals received from sensors of the magazine100. At block2808, the electronic circuitry2002generates a magnetic field within a buffer tube2012of a weapon1300using a second magnet2016. In certain embodiments, the block2808may include one or more of the embodiments described with respect to the block2802. Using a second set of sensors2018within the buffer tube2012, the electronic circuitry2002at block2810detects a location of the second magnet2016based at least in part on the magnetic field generated at block2808. In certain embodiments, the block2810may include one or more of the embodiments described with respect to the block2804. At block2812, the electronic circuitry2002determines whether a cartridge is within a chamber2014of the weapon1300based at least in part on the location of the second magnet2016. Determining whether a cartridge is within the chamber2014may include determining whether a bolt2010has fully cycled or is within a particular position as determined based at least in part on the position of the magnet2016within the buffer tube2012and/or the stock2602. In certain embodiments, the block2812may include one or more of the embodiments described with respect to the block2806. In certain embodiments, the number of cartridges within the magazine and/or whether a cartridge exists within a chamber of the weapon may be output on a display on the magazine, on the weapon, on another device of a user of the weapon, or on a device in another location, such as a command station. In some embodiments, the count of ammunition or cartridges in the weapon may be displayed using the example process2900described below with respect toFIG.29. In certain embodiments, one or more of the blocks2808through2812may be optional or omitted. For example, in some embodiments, the number of cartridges within the magazine may be determined without the magazine being inserted into a weapon. Example Ammunition Count Display Process FIG.29presents a flowchart of an embodiment of an ammunition count display process2900. The process2900can be implemented by any system that can display a count of the number of cartridges within a weapon including a chamber of the weapon and a magazine inserted into the weapon.
For example, the process2900, in whole or in part, can be implemented by electronic circuitry included in the magazine100, such as the electronic circuitry122, and/or electronic circuitry in the weapon1300, such as the electronic circuitry2002. To simplify discussion and not to limit the disclosure, portions of the process2900will be described with respect to particular systems, such as the electronic circuitry122or2002. However, it should be understood that operations of the process2900may be performed by other systems. For example, operations described as being performed by the electronic circuitry122may alternatively be performed by the electronic circuitry2002and vice versa. The process2900begins at block2902wherein, for example, the electronic circuitry2002determines a magazine insertion status. Determining the magazine insertion status may include detecting whether a signal is received at the optical transceiver2302of the weapon1300. At block2904, a display of the weapon1300, such as the display1304, displays the magazine insertion status. At decision block2906, the electronic circuitry2002determines whether the bolt2010is open. Determining whether the bolt2010is open may include determining the position of the magnet2016as described with respect to the process2800. If it is determined at decision block2906that the bolt2010is open, the electronic circuitry2002determines that the chamber2014is empty and may set a chamber counter corresponding to whether a cartridge exists within the chamber2014to zero. At block2910, the electronic circuitry2002determines a weapon count, or a count of the cartridges within the weapon1300. In some cases, determining a count of the cartridges within the weapon1300includes determining a number of cartridges within a magazine inserted into the weapon1300. Thus, the count of cartridges within the weapon, or the weapon count, may include the number of cartridges within the chamber2014and the number of cartridges within a magazine100, when inserted into the weapon1300. In cases where no magazine is inserted into the weapon, the weapon count will be equal to the number of cartridges, if any, within the chamber2014. At block2912, a display, such as a HUD display of a scope1302, displays the total count of cartridges within the weapon1300. The count of cartridges within the weapon1300may be displayed for a particular period of time or until there is a change in a count of cartridges, such as by a removal or insertion of a magazine or the firing of the weapon. Alternatively, or in addition, the count of cartridges may be displayed in response to an interaction by a user with a user interface, such as a user pressing a button on the weapon or on the display device. In some cases, the count of cartridges may be displayed on multiple displays, such as on a display attached to the rail of the weapon, a display integrated into the magazine, and/or a display generated in the scope1302. Upon completion of, or in parallel with, operations associated with the block2912, the process2900may return to the block2902. In some embodiments, the process2900advances from the block2912to the block2902in response to detecting a change in the status of the weapon, such as the removal or addition of a magazine, a firing of a cartridge, or a detection of a change in the position of the bolt2010. If it is determined at the decision block2906that the bolt2010is not open, the electronic circuitry2002determines at the decision block2914whether the bolt2010has completely cycled.
In some cases, when the bolt is completely cycled, a cartridge is inserted into the chamber2014. In certain embodiments, the decision block2914may also include determining whether a magazine is inserted into the weapon. Determining whether the bolt has completely cycled includes determining a position of the magnet2016within a buffer tube2012as previously described with respect to the process2800. If it is determined at the decision block2914that the bolt is not completely cycled, a display of the weapon1300, such as the scope1302, displays a message or indicator corresponding to an unknown state at block2916. At block2918, the electronic circuitry2002waits until it detects that the bolt is charged, or completely cycled, indicating that a cartridge may have been loaded into the chamber2014. In some embodiments, if the bolt does not completely cycle for a threshold period of time, it may be determined that the weapon is jammed. If it is determined that the weapon is jammed, a display of the weapon, or other display in communication with the weapon, such as an augmented reality display of a user's helmet, may output a weapon jammed indicator. At decision block2920, the electronic circuitry2002determines whether the magazine count has been reduced by 1. The magazine count corresponds to the number of cartridges within the magazine100. In some embodiments, the decision block2920may include determining whether the magazine count has been reduced by some value other than one. If it is determined that the magazine count has not been reduced by one, the process2900returns to the decision block2906. If it is determined at the decision block2920that the magazine count has been reduced by one or if it is determined at the decision block2914that the bolt has completely cycled, the process2900proceeds to the decision block2922. At decision block2922, the electronic circuitry2002determines whether the magazine count is less than zero. If it is determined that the magazine count is less than zero, indicating negative cartridges within the magazine, then the process2900proceeds to the block2916where a message or indicator corresponding to an unknown state is displayed. In some embodiments, an unknown state indicator may be triggered in other circumstances. For example, when an error occurs in the electronic circuitry2002or if a conventional magazine is inserted into the weapon that does not include the features disclosed herein, an unknown state flag may be set and displayed to the user. In some embodiments, if a conventional magazine is inserted into the weapon, a display of the weapon may indicate that the magazine does not support the cartridge counting features, or other features, disclosed herein. If it is determined at the decision block2922that the magazine count is not less than zero, the process2900proceeds to the block2924where the chamber count is set to one. At block2926, the electronic circuitry2002determines the weapon count indicating the number of cartridges within the weapon1300. In certain embodiments, the block2926can include one or more of the embodiments of the block2910. At block2928, a display of the weapon1300displays the weapon count or the number of cartridges within the weapon1300inclusive of both the chamber and a magazine, if any, installed within the weapon1300. In certain embodiments, the block2928can include one or more embodiments of the block2912.
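The chamber and weapon counts of the process2900can be summarized in the following non-limiting sketch. The function names and the use of None to stand for the unknown state are assumptions for illustration, not the particular implementation of the electronic circuitry2002.

from typing import Optional

def chamber_count(bolt_open: bool, bolt_fully_cycled: bool) -> Optional[int]:
    # 0 for an open bolt, 1 for a fully cycled bolt, and None when the
    # bolt state does not allow a determination (unknown state).
    if bolt_open:
        return 0
    if bolt_fully_cycled:
        return 1
    return None

def weapon_count(bolt_open: bool, bolt_fully_cycled: bool,
                 magazine_count: int) -> Optional[int]:
    # magazine_count is the cartridges in the inserted magazine, or 0
    # when no magazine is inserted; a negative count signals an error.
    chamber = chamber_count(bolt_open, bolt_fully_cycled)
    if chamber is None or magazine_count < 0:
        return None  # display an unknown-state indicator
    return chamber + magazine_count

assert weapon_count(True, False, 12) == 12   # bolt open: chamber empty
assert weapon_count(False, True, 12) == 13   # cycled bolt: chamber loaded
assert weapon_count(False, False, 12) is None  # unknown state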
Example Magazine Calibration Process FIG.30presents a flowchart of an embodiment of a magazine calibration process3000. The process3000can be implemented by any system that can calibrate an ammunition count system of a magazine. For example, the process3000, in whole or in part, can be implemented by electronic circuitry included in the magazine100, such as the electronic circuitry122, and/or electronic circuitry in the weapon1300, such as the electronic circuitry2002. To simplify discussion and not to limit the disclosure, portions of the process3000will be described with respect to particular systems, such as the electronic circuitry122or2002. However, it should be understood that operations of the process3000may be performed by other systems. For example, one or more operations of the process3000may be performed by a computing device that is configured to communicate with the magazine100. In some embodiments, the process3000is performed by a manufacturer of the magazine100. The process3000may be an automated process performed as part of manufacturing the magazine100. Alternatively, the process3000may be performed in whole or in part by a user calibrating the magazine100prior to sale or distribution, or by a user that has purchased or otherwise obtained the magazine100. For example, a user can instruct the magazine100to enter a calibration mode before performing a calibration process3000. In some embodiments, the calibration process3000can be repeated for a magazine. Repeating the process3000can improve the calibration if the magazine100has become worn and/or if a different type of ammunition is loaded into the magazine. Thus, in some embodiments, some or all of the operations of the calibration process3000can be used as a recalibration process and/or be used in place of some or all of the operations of the process3200described with reference toFIG.32. The process3000begins at block3002where a maximum number of cartridges is inserted into a magazine. The maximum number of cartridges may be the maximum number of cartridges that the magazine is designed or configured to hold. In some embodiments, the maximum number of cartridges may differ based on the manufacturer of the cartridges or the type of cartridges. The insertion of the cartridges into the magazine may be performed by an automatic magazine loader or other automated machine. Further, the loading of the cartridges into the magazine may be performed as part of the manufacturing process for the magazine. In some embodiments, the cartridges may be manually loaded into the magazine. At block3004, a linear encoder position corresponding to the number of cartridges in the magazine is recorded in a calibration table. The linear encoder position may be determined from one or more signals received from one or more magnetic sensors, such as Hall effect sensors112, by a hardware processor. This hardware processor may be included within the electronic circuitry of the magazine, such as the circuitry122. Determining the linear encoder position may include determining a position of a magnet, such as a magnet attached to a follower of the magazine. The detected position of the magnet, as determined by the signals received from the one or more Hall effect sensors112, is associated with the number of cartridges inserted into the magazine. The relationship between the linear encoder position, or the detected position of the magnet, and the number of cartridges in the magazine is stored in the calibration table.
Thus, after loading the magazine with a maximum number of cartridges, the linear encoder position recorded at the block3004is associated with the maximum number of cartridges that can be loaded into the magazine. In some embodiments, the operations of the block3004may be performed one more time after determining that the magazine is empty in order to store a linear encoder position for when the magazine is empty. At block3006, a cartridge is removed from the magazine. The cartridge may be removed from the magazine by an automated machine. For example, a machine used during the manufacturing process of the magazine may be used to add and/or remove cartridges from the magazine during the magazine calibration process3000. Alternatively, in certain embodiments, the cartridge may manually be removed from the magazine. At decision block3008, it is determined whether the magazine is empty. In certain embodiments, a machine used during manufacturing of the magazine determines whether the magazine is empty. Alternatively, or in addition, electronic circuitry of the magazine determines whether the magazine is empty based at least in part on a number of cartridges that have been ejected from the magazine or a location of the linear encoder. In some embodiments, a user indicates that the magazine is empty. If the magazine is not empty, the process returns to the block3004where an updated linear encoder position corresponding to the updated number of cartridges in the magazine is recorded in the calibration table. The operations associated with the blocks3004,3006, and3008may be repeated until it is determined that the magazine is empty. If it is determined at the decision block3008that the magazine is empty, a maximum number of cartridges is inserted into the magazine at the block3010. The block3010may include one or more of the embodiments described with respect to the block3002. At block3012, the magazine is inserted into a weapon. The magazine may be inserted into a weapon as part of an automated manufacturing process for the magazine. Alternatively, the magazine may be inserted into the weapon by a user. At block3014, a linear encoder position corresponding to the number of cartridges in the magazine is recorded in the calibration table. The block3014may include one or more of the embodiments described with respect to the block3004. In some embodiments, the block3014may include recording the linear encoder position in a different calibration table than the calibration table of the block3004. Alternatively, the linear encoder positions recorded at the block3014may be recorded in a different portion of the calibration table. In some embodiments, the linear encoder positions recorded at the blocks3004and3014may be marked or otherwise tagged to identify that the linear encoder positions are associated, respectively, with the magazine not being inserted into the weapon and with the magazine being inserted into the weapon. At block3018, a cartridge is removed from the magazine. Removing a cartridge from the magazine may include cycling a bolt in the weapon to load a cartridge from the magazine into a chamber of the weapon and/or to expel a cartridge from a chamber of the weapon. In some embodiments, the magazine may be removed from the weapon, a cartridge may then be removed from the magazine, and the magazine may be re-inserted into the weapon. In some such embodiments, the block3018may include one or more of the embodiments previously described with respect to the block3006.
At decision block3020, it is determined whether the magazine is empty. The decision block3020may include one or more of the embodiments previously described with respect to the decision block3008. If the magazine is not empty, the process returns to the block3014where a new linear encoder position is recorded in the calibration table for the current number of cartridges within the magazine. The operations associated with the blocks3014,3018, and3020may be repeated until it is determined that the magazine is empty. If it is determined at the decision block3020that the magazine is empty, the calibration table is stored in a storage of the magazine at block3022. In some embodiments, the operations of the block3014may be performed one more time after determining that the magazine is empty in order to store a linear encoder position for when the magazine is empty. The storage of the magazine may be a non-volatile storage. The calibration table may be identified as a default or manufacturer default calibration table in the non-volatile storage. In some embodiments, the calibration table may be stored in a storage that is external or independent of the magazine. For example, copies of the calibration table may be stored in a manufacturer database. Advantageously, in certain embodiments, by recording linear encoder positions for each load state of the magazine corresponding to the quantity of cartridges in the magazine, it is possible during use of the magazine to determine the number of cartridges within the magazine. Thus, a user can determine the number of cartridges within the magazine without manually counting the cartridges or keeping track of the number of shots fired. Further, by generating calibration tables for when the magazine is inserted into a weapon and when the magazine is not inserted into a weapon, it is possible to more accurately track a number of cartridges loaded within the magazine. Typically, different pressures are applied to cartridges within the magazine when the magazine is loaded into the weapon compared to when the magazine is not loaded into the weapon. These different pressures may cause the cartridges within the magazine to move. Thus, using the same calibration table or linear encoder positions stored within the calibration table for when the magazine is both inserted and not inserted may lead to an inaccurate count of cartridges loaded within the magazine. By separately recording the linear encoder positions for when the magazine is loaded into the weapon and when the magazine is not loaded into the weapon, a more accurate ammunition count can be determined for the magazine. In some embodiments, the pressure on the cartridges in the magazine varies based on a position of the bolt of the weapon. For example, when the bolt is drawn back into the buffer tube of the stock and the chamber is open, the pressure applied to the cartridges in the magazine inserted into the weapon may differ from the pressure applied to the cartridges when the bolt is pushed forward towards the barrel of the weapon and slides over the top cartridge, or cartridge closest to the feed point, of the magazine. In some embodiments, the pressure applied to the cartridges in the magazine when the magazine is not inserted into the weapon may be the same as the pressure applied to the cartridges when the magazine is inserted into the weapon, but the bolt is slid back, or the chamber is open.
The movement of the cartridges because of the additional pressure that may be applied to the cartridges by the bolt when it is slid forward over the cartridges in the magazine may be 1, 2, or 3 millimeters, or any range of values between the foregoing distances. In some embodiments, the movement of the cartridges may be more or less and may depend, for example, on how large the bolt is relative to the buffer tube and/or the strength of the spring within the magazine. In certain embodiments, the process3000may include creating a first calibration table for when the magazine is inserted into the weapon and the bolt is closed and a second calibration table for when the magazine is inserted into the weapon and the bolt is open. The number of cartridges within the magazine may then be determined with reference to the first calibration table when the bolt is closed and with reference to the second calibration table when the bolt is open. In some cases, the second calibration table may also be used when the magazine is not inserted into the weapon. Alternatively, a third calibration table may be used for determining a number of cartridges in the magazine for when the magazine is not inserted into the weapon. In certain embodiments, the block3012may include identifying the weapon that the magazine is loaded into. In some cases, different weapons may have different impacts on the magazine and the relative position of the linear encoder for a particular number of cartridges loaded within the magazine. Thus, in certain embodiments, different calibration tables may be generated for the magazine for different weapons. In such cases, the process3000, or parts thereof, may be repeated for different weapons and each calibration table generated for each weapon may be stored separately along with an identifier of the type of weapon in the storage or non-volatile memory of the magazine. Further, in certain embodiments, a particular magazine may be capable of supporting different types of ammunition or the same type of ammunition manufactured by different manufacturers. In some such cases, the different types of ammunition, or ammunition manufactured by different manufacturers, may differ in the impact on the position of the linear encoder when a particular number of cartridges are loaded in the magazine. In some such cases, the process3000may be repeated for each type of ammunition that is capable of being loaded into the magazine and/or for each manufacturer's version of a particular type of ammunition. For example, a particular magazine may be capable of holding .458 SOCOM, .223, 5.56, .50 Beowulf, or 6.5 Grendel ammunition. Because the size of the different types of ammunition may vary, the position of the linear encoder for a particular number of cartridges in the magazine may vary. Thus, in certain embodiments, the process3000may be repeated for each type of ammunition capable of being loaded into a particular magazine to create a calibration table, or other data structure, for each type of supported ammunition type. The identity of the ammunition or manufacturer may be stored with the calibration table corresponding to the ammunition type or manufacturer.
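The recording loop of the process3000may be sketched, in a non-limiting way, as follows. The fixture callbacks (read_encoder_position, remove_cartridge) and the simulated follower travel are assumptions for illustration; in practice the loading fixture or test harness would supply them.

def build_calibration(read_encoder_position, max_cartridges: int,
                      remove_cartridge) -> dict[int, float]:
    # Record the linear encoder position for each cartridge count, from
    # the maximum load down to empty, and return the calibration table.
    table: dict[int, float] = {}
    for count in range(max_cartridges, -1, -1):
        table[count] = read_encoder_position()
        if count > 0:
            remove_cartridge()
    return table

# Simulated fixture: the follower rises ~2.5 units per removed cartridge.
state = {"count": 5}
read = lambda: (5 - state["count"]) * 2.5
remove = lambda: state.update(count=state["count"] - 1)
table = build_calibration(read, 5, remove)
assert table[5] == 0.0 and table[0] == 12.5

The same loop would be run once per condition of interest, producing, for example, separate tables for the uninserted magazine, the inserted magazine with the bolt open, and the inserted magazine with the bolt closed.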
For example, the process 3100, in whole or in part, can be implemented by electronic circuitry included in the magazine 100, such as the electronic circuitry 122, and/or electronic circuitry in the weapon 1300, such as the electronic circuitry 2002. To simplify discussion and not to limit the disclosure, portions of the process 3100 will be described with respect to particular systems, such as the electronic circuitry 122 or 2002. However, it should be understood that operations of the process 3100 may be performed by other systems. For example, one or more operations of the process 3100 may be performed by a computing device that is configured to communicate with the magazine 100. The process 3100 begins at block 3102 where, for example, a hardware processor included in the electronic circuitry 122 determines whether a magazine 100 is inserted into a weapon 1300. This hardware processor may be a field programmable gate array (FPGA) processor, a general-purpose processor, an application specific integrated circuit (ASIC), a microcontroller, a single board computer, or any other type of processor or computing device that may be used to determine the amount of ammunition in the magazine and/or in the weapon using a calibration table. The hardware processor may determine that the magazine is inserted into the weapon by communicating with electronic circuitry 2002 within the weapon. Alternatively, or in addition, the hardware processor of the electronic circuitry 122 may determine that the magazine is inserted into the weapon 1300 based at least in part on pressure detected on a pressure sensor of the magazine 100. In some embodiments, the hardware processor of the electronic circuitry 122 may determine that the magazine is inserted into the weapon 1300 based at least in part on one or more signals received from one or more magnetic sensors within the magazine 100. In some implementations, the magazine determines whether it is inserted into the weapon 1300 based on input from a user. At block 3104, the hardware processor determines an ammunition type for cartridges or ammunition loaded into the magazine. Alternatively, or in addition, a user may provide input that identifies an ammunition type for cartridges or ammunition loaded into the magazine. Determining the ammunition type may include determining a type of the ammunition loaded into the magazine and/or the manufacturer of the ammunition loaded into the magazine. In some cases, determining the ammunition type for cartridges loaded in the magazine may include determining the ammunition type for cartridges that are capable of being loaded into the magazine, but which may not currently be loaded into the magazine because, for example, the magazine has yet to be loaded with ammunition or all of the ammunition that was loaded into the magazine has been fired. In some cases, the ammunition type is determined automatically by scanning a code, such as a quick response (QR) code or other type of machine-readable code on a cartridge or on a box that included the cartridge. In some embodiments, the block 3104 is optional or omitted. For example, in some embodiments, the magazine 100 is capable of being loaded with only one type of ammunition, or the variation between manufacturers of a particular type of ammunition is sufficiently small that it does not impact the ability of the hardware processor to count the number of cartridges within the magazine.
At block 3106, the hardware processor selects a calibration table based at least in part on the ammunition type and/or whether the magazine is inserted into the weapon. In some embodiments, the calibration table may be selected based at least in part on whether the bolt of the weapon is open or closed. The calibration table may be loaded from a non-volatile memory within the magazine 100 and/or within the weapon 1300. The calibration table may be selected from one or more calibration tables generated for the magazine. In some embodiments, the magazine 100 includes a single calibration table because, for example, the magazine 100 supports being loaded with only a single type of ammunition or because calibration tables were only generated for a single type of ammunition. In cases where the magazine 100 can only be loaded with a single type of ammunition, the calibration table may be selected at the block 3106 based on whether the magazine is inserted into the weapon. In some embodiments, the portion of the calibration table accessed may be based on whether the magazine 100 is inserted into the weapon. At block 3108, the hardware processor determines a linear encoder position for a linear encoder of the magazine 100. Determining the linear encoder position may include determining a position of a magnet attached to a follower of the magazine. In some embodiments, the magnet is not attached to the follower, but moves as the follower moves. The follower may move towards an entry or egress point of the magazine as cartridges are expelled from the magazine. Conversely, the follower may move towards the base of the magazine as cartridges are loaded into the magazine. Thus, as cartridges are loaded into or expelled from the magazine, a magnet or linear encoder position may change. In certain embodiments, the hardware processor determines the linear encoder position by mapping one or more signals received from one or more magnetic sensors to data stored in the calibration table. As the signals produced by the one or more magnetic sensors vary based on a location of a magnet with respect to the magnetic sensors, the linear encoder position can be determined based on the signals received from the one or more magnetic sensors. At block 3110, the hardware processor uses the calibration table to determine a count of the number of cartridges in the magazine based at least in part on the linear encoder position. Determining the count of the number of cartridges in the magazine may include comparing the linear encoder position to the calibration table to determine a number of cartridges corresponding to the linear encoder position. In some embodiments, the hardware processor may use the linear encoder position as an index for accessing the calibration table to determine a number of cartridges associated with the linear encoder position. In some embodiments, the hardware processor may determine a range of values stored in the calibration table that includes the linear encoder position. A number assigned to the range of values indicates the number of cartridges associated with the linear encoder position. In some embodiments, the linear encoder position correlates to the number of cartridges loaded in the magazine. For example, a linear encoder position identified as five indicates that there are five cartridges loaded within the magazine.
In other embodiments, the linear encoder position does not directly correlate to the number of cartridges within the magazine, but is instead mapped within the calibration table to the corresponding number of cartridges within the magazine. For example, a linear encoder position between 5.1 and 5.3 may be mapped to a count of seven cartridges within the magazine, and a linear encoder position between 5.3 and 5.7 may be mapped to a different count of cartridges within the magazine. At decision block 3112, the hardware processor determines whether there is a cartridge in a chamber of the weapon. Determining whether there is a cartridge in the chamber of the weapon may include determining whether an electrical signal is received from one or more magnetic sensors within the buffer tube of the weapon. In certain embodiments, the hardware processor of the magazine receives the electrical signal from the one or more magnetic sensors of the buffer tube to determine a position of the bolt and, consequently, whether there is a cartridge within a chamber of the weapon. In other embodiments, a hardware processor of the weapon itself, such as a hardware processor within a handle of the weapon, determines whether there is a cartridge in the chamber of the weapon. In certain embodiments, the decision block 3112 is optional or omitted. For example, if it is determined that the magazine is not inserted into the weapon, the decision block 3112 may be omitted. In other embodiments, the decision block 3112 may be performed by the weapon regardless of whether a magazine is inserted into the weapon. It can typically be determined whether a cartridge is in the chamber of the weapon based on which magnetic sensor from the plurality of magnetic sensors provides a signal to the hardware processor. Thus, it is generally unnecessary to have a calibration table for the weapon in order to determine whether a cartridge is in the chamber. However, in certain embodiments, a calibration table may be generated for the weapon to facilitate determining whether a cartridge is in the chamber. If it is determined at the decision block 3112 that there is not a cartridge in the chamber of the weapon, the hardware processor outputs for display, at the block 3114, a count of the cartridges included in the magazine. The block 3114 may include providing a count of the cartridges to a display or a display controller for output to a user. For example, the count of the cartridges may be provided to the controller 1212, which may cause the cartridge count to be displayed by the light emitting diodes 1202. If it is determined at the decision block 3112 that there is a cartridge in the chamber of the weapon, the hardware processor increments the count by one at the block 3116. In certain embodiments, the count may be incremented by more than one. For example, if the weapon is a multi-chambered weapon or is a weapon that loads multiple cartridges at a time into a corresponding number of chambers, the count may be incremented by one for each chamber that is determined to be loaded. The incremented count of cartridges may then be output for display at the block 3114. Further, in some embodiments, the block 3114 may output, separately or together, a count of the number of cartridges in the magazine and a count of the number of cartridges within a chamber of the weapon. In some embodiments, the number of cartridges within the magazine and/or chamber is stored within a memory at the block 3114 instead of, or in addition to, being displayed.
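As a non-limiting illustration of blocks 3106 through 3116, the range-based mapping described above might be sketched in C as follows; the range table layout, integer encoder units, and function names are assumptions of this example only:

#include <stdint.h>

/* One range entry: encoder positions in [lo, hi) map to the given count.
 * Positions here are in integer encoder units; e.g., the 5.1-5.3 range
 * above could be stored as 510-530 under an assumed x100 scaling. */
typedef struct {
    uint16_t lo;
    uint16_t hi;
    uint8_t  count;
} range_entry_t;

/* Blocks 3108-3110: scan the selected calibration table for the range
 * containing the measured position; -1 signals an unmatched position. */
static int count_from_position(const range_entry_t *table, int n, uint16_t pos)
{
    for (int i = 0; i < n; i++) {
        if (pos >= table[i].lo && pos < table[i].hi)
            return table[i].count;
    }
    return -1;
}

/* Blocks 3112-3116: add one per loaded chamber before output at block 3114. */
static int displayed_count(int magazine_count, int chambers_loaded)
{
    return (magazine_count < 0) ? -1 : magazine_count + chambers_loaded;
}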
As one example, in some embodiments, the count of cartridges is stored within a memory and is only displayed upon request by a user or in response to an event, such as the firing of a cartridge, a loading of the magazine, or the clearing of a jam in the weapon.

Example Magazine Recalibration Process

FIG. 32 presents a flowchart of an embodiment of a magazine recalibration process. The process 3200 can be implemented by any system that can recalibrate a magazine or an ammunition count system of the magazine. For example, the process 3200, in whole or in part, can be implemented by electronic circuitry included in the magazine 100, such as the electronic circuitry 122, and/or electronic circuitry in the weapon 1300, such as the electronic circuitry 2002. To simplify discussion and not to limit the disclosure, portions of the process 3200 will be described with respect to particular systems, such as the electronic circuitry 122 or 2002. However, it should be understood that operations of the process 3200 may be performed by other systems. For example, one or more operations of the process 3200 may be performed by a computing device that is configured to communicate with the magazine 100. The process 3200 begins at block 3202 where, for example, a hardware processor included in the electronic circuitry 122 detects a trigger to recalibrate the magazine 100. The trigger may include interaction by a user with a user interface, such as pressing a button or powering on circuitry within the magazine. Alternatively, or in addition, the trigger may be related to a passage of time. For example, the magazine may be recalibrated once a day, once a week, once a month, or at some other determined time. In some cases, the trigger may be the loading or unloading of the magazine a particular number of times. The amount of force applied to cartridges within the magazine may differ for each weapon. In some cases, even weapons of the same manufacturer may apply a different amount of force to the cartridges within the magazine because, for example, of manufacturing tolerances or wear and tear of the weapon. Thus, in some embodiments, the process 3200 may be triggered or initiated each time a user uses a different weapon with the magazine. In some cases, when the magazine 100 is first obtained by a user, or at any other time the user selects, the user may initiate the process 3200. In some embodiments, the trigger may be a command received in response to a user interacting with a user interface on a magazine, on a weapon, or on any type of wireless device that can communicate with the magazine. For example, a user may access a user interface on a smartphone or a laptop to request recalibration of a set of magazines. The wireless device may communicate wirelessly with each magazine registered to the user, or a subset of identified magazines, to trigger the recalibration process for the magazines. At block 3204, the hardware processor determines an insertion status for the magazine corresponding to whether the magazine is inserted into a weapon. The insertion status may be determined based at least in part on the ability of the hardware processor in the magazine to communicate with electronic circuitry of the weapon. In some cases, a mechanical switch in the magazine may be triggered based on the insertion of the magazine into the weapon, thereby indicating whether or not the magazine is inserted into the weapon.
In other cases, an optical or electrical switch is triggered based on the insertion of the magazine into the weapon, thereby indicating whether or not the magazine is inserted into the weapon. In some cases, a user may indicate the insertion status of the magazine by interacting with a user interface on the magazine or on a system in communication with the magazine. At block 3206, the hardware processor accesses a calibration table of the magazine based at least in part on the insertion status. The calibration table may be accessed from a memory of the magazine. In some embodiments, the calibration table may be accessed from a memory of the weapon. In some embodiments, the calibration table may be selected from a plurality of calibration tables based at least in part on the insertion status of the magazine, an identity of a type of cartridges loaded into the magazine, or a manufacturer of the cartridges loaded into the magazine. At block 3208, the hardware processor determines a count of a number of cartridges loaded in the magazine. The count of the number of cartridges loaded into the magazine may be identified by a user interacting with a user interface of the magazine. Alternatively, in certain embodiments, the number of cartridges loaded into the magazine is assumed to be a particular number of cartridges required for recalibrating the magazine. For example, a user may be instructed to load the magazine with a particular number of cartridges corresponding to the maximum number of cartridges that may be loaded into the magazine. In some embodiments, the process 3200, or certain operations thereof, may be repeated using different numbers of cartridges within the magazine. For example, the process may be repeated using zero cartridges, the maximum number of cartridges, the number of cartridges corresponding to the magazine being half-full, or a number of cartridges corresponding to the magazine being one third full. In some embodiments, a user may be instructed, such as via a user interface of the magazine, to load a particular number of cartridges into the magazine. The user may be instructed to load different numbers of cartridges into the magazine across multiple performances of portions of the process 3200 to generate the modified calibration table. At block 3210, the hardware processor determines an expected linear encoder position based at least in part on the calibration table and the count of the number of cartridges. Determining the expected linear encoder position may include determining, from the calibration table, where the linear encoder is expected to be within the magazine for the identified number of cartridges loaded into the magazine. In some cases, the hardware processor may use the number of cartridges as an index value to access the calibration table to determine the expected or anticipated linear encoder position for the linear encoder of the magazine. At block 3212, the hardware processor determines a linear encoder position. In some embodiments, the block 3212 may include one or more of the embodiments previously described with respect to the block 3108. At block 3214, the hardware processor determines a difference between the current linear encoder position and the expected linear encoder position to obtain an adjustment factor. In certain embodiments, the difference between the current linear encoder position and the expected linear encoder position is zero, and consequently the adjustment factor is zero.
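As a non-limiting illustration of blocks 3210 through 3216, the adjustment factor might be computed and applied as in the following C sketch; the array layout and signed-integer representation are assumptions of this example, and the uniform application shown corresponds to the whole-table case described below:

#include <stdint.h>

/* Blocks 3210-3214: look up the expected position for the known count
 * and subtract it from the measured position to form the factor. */
static int16_t adjustment_factor(const uint16_t *expected_positions,
                                 uint8_t count, uint16_t measured_position)
{
    return (int16_t)(measured_position - expected_positions[count]);
}

/* Block 3216 (uniform case): shift every stored position by the factor. */
static void apply_adjustment(uint16_t *positions, int n, int16_t factor)
{
    for (int i = 0; i < n; i++)
        positions[i] = (uint16_t)(positions[i] + factor);
}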
However, in other embodiments, the difference between the current linear encoder position and the expected linear encoder position is nonzero. The difference between the two encoder positions may be nonzero because, for example, the ammunition used to calibrate the magazine initially and the ammunition used during the recalibration process 3200 may differ in type or manufacturer. In some cases, the difference may be attributable to tolerances for the ammunition manufactured by a manufacturer. In some cases, the magazine is loaded with a default calibration table created using a model magazine, or some magazine other than the current magazine being recalibrated. In some such cases, variations may occur during the manufacturing process, resulting in the linear encoder position for a particular number of cartridges loaded in the magazine differing from that of the default calibration table. Moreover, in some cases, the structure of the magazine may change over time due to wear and tear of the magazine. For example, the stiffness of the magazine housing or the springiness of the spring may change over time as the magazine is used or is exposed to different environmental factors. In some such cases, the position of the linear encoder for a particular number of cartridges loaded in the magazine may change over time as the condition of the magazine evolves. At block 3216, the hardware processor modifies the calibration table based at least in part on the adjustment factor. In some embodiments, the hardware processor may modify the calibration table only if the adjustment factor exceeds a threshold. In other embodiments, the hardware processor modifies the calibration table based on any nonzero adjustment factor. In some cases, the adjustment factor is the same for the entire calibration table. In other words, the linear encoder position may be modified based on the adjustment factor for each entry associated with each number of cartridges loaded within the magazine. Alternatively, in certain embodiments, the adjustment factor may vary for different amounts of cartridges loaded within the magazine. For example, the adjustment factor for the linear encoder may be smaller when the magazine is empty or has a fewer number of cartridges loaded compared to when the magazine has a greater number of cartridges loaded. For instance, the adjustment factor for the linear encoder may be zero when the magazine is empty, 1 μm when one cartridge is loaded in the magazine, 2 μm when two cartridges are loaded in the magazine, 4 μm when three cartridges are loaded in the magazine, and 1 mm when four cartridges are loaded in the magazine. In some embodiments, the hardware processor may modify both the calibration table associated with the magazine being inserted into the weapon and the calibration table associated with the magazine not being inserted into the weapon based on the adjustment factor. In other embodiments, the process 3200 is repeated once for the magazine being inserted into the weapon and once for the magazine not being inserted into the weapon. At block 3218, the hardware processor stores the modified calibration table at the magazine. The modified calibration table may replace the existing calibration table or may be stored as an additional calibration table. The calibration table may be stored in a memory of the magazine.

Example Process of Obtaining a Total Ammunition Count

FIG. 33 presents a flowchart of an embodiment of a total ammunition count process 3300.
The process 3300 can be implemented by any system that can obtain a total ammunition count available to a user across one or more magazines registered to a weapon of the user. For example, the process 3300, in whole or in part, can be implemented by electronic circuitry included in the magazine 100, such as the electronic circuitry 122, and/or electronic circuitry in the weapon 1300, such as the electronic circuitry 2002. To simplify discussion and not to limit the disclosure, portions of the process 3300 will be described with respect to particular systems, such as the electronic circuitry 122 or 2002. However, it should be understood that operations of the process 3300 may be performed by other systems. For example, one or more operations of the process 3300 may be performed by a computing device that is configured to communicate with a weapon 1300 or one or more magazines. Performance of the process 3300 may occur in response to a trigger. This trigger may be a request from a user, a passage of time, the addition or removal of a magazine from a set of magazines registered to a weapon or to a user, or the insertion or removal of a magazine from an insertion port of the weapon. The process 3300 begins at block 3302 where, for example, a hardware processor included in the electronic circuitry 2002 accesses the identity of one or more magazines registered to a weapon. The identity of the one or more magazines may be obtained from a memory, such as a non-volatile memory, located within the weapon. This memory may be located within a handle of the weapon or a stock of the weapon. In some embodiments, the memory may be located within hardware other than the weapon carried by the user, such as a helmet or personal communication device included in the user's kit. The identity of the one or more magazines may include a network address for accessing the one or more magazines via an ad hoc network. The network address may be formed from a combination of an identifier of the weapon and an identifier of the magazine. In some such embodiments, the weapon may serve as a router or hub that can communicate with the one or more magazines. This communication may be performed using ultra-wide band (UWB) communication. Further, the range of communication may be limited to a particular range of the user carrying the weapon, creating a small personal network for the user. For example, the range of communication may be limited to 100 meters, 50 meters, 25 meters, 10 meters, or 3 meters or less. In some embodiments, although the transceiver in the weapon may be capable of communicating over a larger distance, the power supplied may be sufficiently small to limit communication to 1 to 2 meters to prevent eavesdropping or interference. Communication range may be limited based on the amount of power supplied to the networking equipment of the weapon. In some embodiments, the router or hub may be included in other equipment carried by the user, such as a helmet of the user. In some embodiments, the weapon may implement a low probability of intercept/low probability of detect (LPI/LPD) protocol to reduce eavesdropping or interference by malicious users. At block 3304, the hardware processor, using a transceiver, attempts to establish a communication connection with a magazine from the one or more magazines identified at the block 3302. In some embodiments, the operations associated with the block 3304 may be performed in parallel or substantially in parallel for a plurality of magazines of the one or more magazines.
In some cases, a magazine inserted into the weapon may be omitted from the operations of the block 3304 because, for example, a count of cartridges loaded in the magazine inserted into the weapon may be obtained via alternative communication channels between the magazine and the weapon and/or alternative processes available for the magazine inserted into the weapon. For example, an optical communication connection may be established between the weapon and the magazine inserted into the weapon, enabling the weapon to obtain the cartridge count for the magazine inserted into the weapon by requesting the cartridge count from the hardware processor of the magazine. As another example, the magazine may automatically provide the cartridge count to the weapon when inserted into the weapon via an optical connection to the weapon. In some embodiments, communication between the magazine and the weapon is encrypted or occurs over a secure channel. Further, the attempt to establish the communication connection with the magazine may occur over a particular time period. For example, the weapon may attempt to communicate with a magazine for 5, 10, 15, or 30 seconds, or for a minute. At decision block 3306, the hardware processor determines whether a communication connection was successfully established with the magazine. The hardware processor may determine whether the attempt to establish the communication connection was successful based at least in part on whether the transceiver receives an acknowledgment packet or response from the magazine. In some cases, the hardware processor determines whether the attempt to establish a communication connection was successful based on whether the acknowledgment packet or other response from the magazine is received within a particular time period. If it is determined at the decision block 3306 that a communication connection was not successfully established with the magazine, the hardware processor deregisters the magazine from the one or more magazines registered to the weapon at block 3308. Deregistering the magazine may include removing the identification of the magazine from a memory of the weapon. Alternatively, or in addition, deregistering the magazine may include marking an identifier of the magazine in the memory as no longer available. Further, in certain embodiments, deregistering the magazine may include removing a number of cartridges associated with the deregistered magazine from a total count of cartridges available to the user. If it is determined at the decision block 3306 that a communication connection was successfully established, the hardware processor determines at the block 3310 a count of the number of cartridges loaded in the magazine. Determining the count of the number of cartridges loaded in the magazine may include requesting count information from the magazine. Requesting the count information for the number of cartridges loaded in the magazine may cause the magazine to access its memory to determine the count of cartridges and to transmit the count information to the weapon. Alternatively, or in addition, requesting the count information for the number of cartridges loaded in the magazine may cause the magazine to perform a cartridge count process, such as the process 3100. At block 3312, the hardware processor adds the count obtained at the block 3310 to a total count of cartridges. The total count of cartridges may represent the total number of cartridges available to a user.
Alternatively, or in addition, the total count of cartridges may represent the total number of cartridges loaded into all of the magazines registered with the user or registered to the weapon. In other words, if there are five magazines registered to the weapon or to the user, the total count of cartridges may represent the cumulative number of cartridges loaded into the five magazines combined. At the decision block 3314, the hardware processor determines whether there are more magazines registered to the weapon. If there are more magazines registered to the weapon, the process 3300 returns to the block 3304 where the hardware processor repeats the operations of the block 3304 for a different magazine of the one or more magazines. If there are no more magazines registered to the weapon, the hardware processor outputs the total count of cartridges at the block 3316. The block 3316 may include one or more of the embodiments previously described with respect to the block 3114. In some embodiments, the total count of cartridges may be displayed on a display of the magazine inserted into the weapon, on a scope attached to the weapon, on a heads up display of a helmet, or on any other display accessible to the user. In some embodiments, the total count of cartridges may be transmitted wirelessly to a user other than the user carrying the weapon or otherwise associated with the weapon. For example, the total cartridge count may be transmitted to a display or system of a squad commander or other supervising user of the user carrying the weapon. In some embodiments, the total cartridge count may be transmitted to another location, such as a command center or other monitoring location. In some cases, the total count of cartridges is not output for display or is only output for display upon a request by a user. In some such cases, the total count of cartridges may be stored in a non-volatile memory of the weapon or in other non-volatile memory available to the user, such as non-volatile memory that may be included in a helmet (e.g., a helmet with a heads up display) of the user. In some embodiments, the total cartridge count may also include the cartridges loaded in a magazine that is inserted into the weapon. In some such cases, a count of the cartridges loaded in the magazine that is inserted into the weapon may be determined using the process 3100. The count of cartridges loaded in the magazine that is inserted into the weapon may then be added to the total count of cartridges available to a user across all magazines registered to the weapon. Further, in some embodiments, the total count of cartridges may include a cartridge in a chamber of the weapon. In some embodiments, the process 3300 or a modified version of the process 3300 may be used to identify ammunition counts for different types of ammunition carried by the user. For example, when each magazine is registered with the weapon, the type of ammunition loaded in the magazine may also be identified. Thus, when calculating the total count of cartridges available to the user, the hardware processor may separately count magazines that are loaded with different types of ammunition. In some such cases, the weapon may display the total ammunition available to the user and the total ammunition of each type of ammunition available to the user. Thus, for example, the weapon may display the total number of round nose cartridges and the total number of hollow point cartridges available to the user.
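As a non-limiting illustration of blocks 3304 through 3316, the aggregation across registered magazines might be sketched in C as follows; the connection and query routines are stand-ins for the UWB transceiver operations described above, and all names and values are assumptions of this example:

#include <stddef.h>

typedef struct {
    unsigned network_id;   /* identifier stored at registration */
    int      registered;   /* cleared at block 3308 on failed contact */
} magazine_ref_t;

/* Stand-ins for the transceiver operations of blocks 3304 and 3310. */
static int try_connect(unsigned network_id) { (void)network_id; return 1; }
static int query_cartridge_count(unsigned network_id) { (void)network_id; return 30; }

/* Blocks 3304-3316: poll each registered magazine, deregister those that
 * cannot be reached, and sum the counts of the rest. A chambered
 * cartridge, if any, is included in the total. */
static int total_cartridge_count(magazine_ref_t *mags, size_t n, int chambered)
{
    int total = chambered ? 1 : 0;
    for (size_t i = 0; i < n; i++) {
        if (!mags[i].registered)
            continue;
        if (!try_connect(mags[i].network_id)) {
            mags[i].registered = 0;                         /* block 3308 */
            continue;
        }
        total += query_cartridge_count(mags[i].network_id); /* blocks 3310-3312 */
    }
    return total;                                           /* output at block 3316 */
}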
In some embodiments, the weapon may count the total number of cartridges available to a user for magazines the user carries for a different weapon. For example, the user may register each magazine the user carries with a weapon that includes the ammunition count hardware. The weapon may then communicate with each of the magazines to determine the total cartridges available to the user regardless of whether the weapon can load each of the magazines. Advantageously, in certain embodiments, the weapon can count the total number of cartridges available for the user's rifle as well as the user's handgun or other weapon and display it on a single display, enabling the user to track the total ammunition available to the user. The display may identify separately the cartridge counts for the different weapons of the user.

Example Magazine Registration Process

FIG. 34 presents a flowchart of an embodiment of a magazine registration process 3400. The process 3400 can be implemented by any system that can register a magazine with a weapon. For example, the process 3400, in whole or in part, can be implemented by electronic circuitry included in the magazine 100, such as the electronic circuitry 122, and/or electronic circuitry in the weapon 1300, such as the electronic circuitry 2002. To simplify discussion and not to limit the disclosure, portions of the process 3400 will be described with respect to particular systems, such as the electronic circuitry 122 or 2002. However, it should be understood that operations of the process 3400 may be performed by other systems. For example, one or more operations of the process 3400 may be performed by a computing device that is configured to communicate with a weapon 1300 or one or more magazines. The process 3400 begins at block 3402 where, for example, a hardware processor included in the electronic circuitry 2002 identifies the existence of an unregistered magazine. The unregistered magazine may include any magazine that is not currently registered with the weapon, regardless of whether the magazine was registered with the weapon at some previous point in time. The hardware processor may identify the existence of the unregistered magazine when the unregistered magazine is inserted into the weapon. Upon insertion of the magazine into the weapon, the hardware processor may obtain an identifier for the magazine. The hardware processor may access a list or other data structure stored within a memory of the weapon and may determine whether the identifier for the magazine is included in the list or the data structure to determine whether the magazine is unregistered with the weapon. Alternatively, or in addition, the existence of an unregistered magazine may be determined when the magazine is brought within a particular distance of the weapon. For example, when the unregistered magazine is brought within radio distance of the weapon, the weapon may access a radio frequency identifier (RFID) tag or other type of tag of the magazine to obtain an identifier of the magazine. Using the identifier of the magazine, the hardware processor can determine whether the magazine is registered with the weapon. In some embodiments, when a user presses a button on the magazine or interacts with the user interface of the magazine, the magazine may transmit an identifier within a particular distance. In certain embodiments, this particular distance is generally a short distance (e.g., 10 feet or less) to prevent weapons of other users from receiving the identifier from the magazine.
Further, the identifier may be used by the weapon to determine whether the magazine is registered with the weapon. The identifier may also be used to facilitate the weapon communicating with the magazine to establish a network identifier for further communication between the magazine and the weapon. In certain embodiments, the user may deregister a magazine with the weapon by interacting with the user interface to indicate that a magazine inserted into the weapon is to be deregistered. Alternatively, or in addition, the user may interact with the user interface of the magazine to indicate to the weapon that the magazine within RFID communication distance of the weapon is to be deregistered from the weapon. At block 3404, the hardware processor receives a magazine identifier for the unregistered magazine. The magazine identifier may be a unique identifier associated with the magazine and can be used to distinguish the magazine from other magazines that may be within communication distance of the weapon. In some cases, the magazine identifier is a default identifier associated with any magazine that is not registered to a weapon. Upon registering the magazine with a weapon, the default identifier may be replaced with a unique identifier assigned by the weapon. Upon deregistering the magazine from a weapon, or upon the magazine no longer being within communication distance of the weapon, the identifier assigned by the weapon may be reset or replaced with the default identifier. In some embodiments, the identifier of the magazine may be or may be similar to a media access control (MAC) address. At block 3406, the hardware processor provides a weapon identifier to the unregistered magazine. The weapon identifier may be a unique identifier associated with the weapon that is registering the magazine. In some embodiments, the weapon identifier of the weapon may be or may be similar to a media access control (MAC) address. In certain embodiments, the magazine may combine the weapon identifier with the magazine identifier to create an identifier used for further communication with the weapon. In certain embodiments, instead of providing the weapon identifier to the unregistered magazine, the block 3406 may include providing a unique identifier to the unregistered magazine that is based at least in part on the magazine identifier and the weapon identifier, and which may be used to uniquely identify the magazine. In some embodiments, the block 3406 may be optional or omitted. For example, in some embodiments, the weapon may provide the network identifier created at the block 3408 below to the magazine. At block 3408, the hardware processor combines the magazine identifier and the weapon identifier to create a network identifier for the magazine. The network identifier may be a unique identifier that the weapon uses to identify the magazine. In some embodiments, the weapon may use the network identifier as a network address to establish communication with the magazine. Advantageously, in certain embodiments, by assigning each magazine a unique network identifier, each magazine within communication range of the weapon can determine whether a communication is intended for that magazine or another magazine registered with the weapon. At block 3410, the hardware processor stores the network identifier of the magazine. The network identifier may be stored in a non-volatile memory of the weapon. At block 3412, the hardware processor obtains a count of cartridges loaded into the magazine.
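As a non-limiting illustration of the identifier handling of blocks 3402 through 3410 described above, the following C sketch packs the two identifiers into one network identifier; the 32-bit identifier widths and the 64-bit packed form are assumptions of this example only:

#include <stdint.h>

/* Block 3408: combine the weapon and magazine identifiers into a single
 * 64-bit network identifier for further communication. */
static uint64_t make_network_id(uint32_t weapon_id, uint32_t magazine_id)
{
    return ((uint64_t)weapon_id << 32) | magazine_id;
}

/* Block 3402: an identifier absent from the weapon's stored list marks
 * the magazine as unregistered. */
static int is_registered(const uint64_t *known_ids, int n, uint64_t id)
{
    for (int i = 0; i < n; i++)
        if (known_ids[i] == id)
            return 1;
    return 0;
}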
In certain embodiments, the block 3412 may include performing the process 3100, or portions thereof. In some cases, the hardware processor may request the count of cartridges from the magazine by establishing a communication connection with the magazine via a secure channel and using the network identifier assigned to the magazine. In some embodiments, the block 3412 may include one or more of the embodiments described with respect to the block 3310. At block 3414, the hardware processor adds the count to a total count of cartridges registered with the weapon. The total count of cartridges registered with the weapon may be stored in the non-volatile memory of the weapon. In some embodiments, performing the block 3414 may include performing the process 3300, or parts thereof. In some embodiments, determining the total count of cartridges registered with the weapon may include aggregating the count of cartridges loaded within each magazine registered with the weapon, including a magazine inserted into the weapon. Further, determining the total count of cartridges registered with the weapon may include determining whether a cartridge is within a chamber of the weapon. In some embodiments, the block 3414 may include adding the count of cartridges for the magazine newly registered to the weapon to a running count of cartridges previously determined to be registered with the weapon. In other embodiments, the block 3414 may include performing a new count of cartridges available across all magazines registered with the weapon. In some embodiments, determining the total count of cartridges registered with the weapon may include subtracting the count of cartridges loaded into a magazine that is determined to be deregistered or no longer registered with the weapon. The deregistered magazine may include a magazine that the weapon is no longer able to communicate with due to, for example, a distance between the weapon and the magazine.

Example Weapon Display State Machine

FIG. 35 presents a flowchart of an embodiment of a weapon display state machine 3500. The weapon display state machine 3500 includes a number of states and decision points for a controller of a display of a weapon system and/or a hardware processor included in the weapon system. Each of the states is associated with performing a process as described herein. The process may be performed by a weapon system or a hardware processor of a weapon system. Although particular processes are identified, it should be understood that, in certain embodiments, alternative embodiments of the processes described herein may be performed. When the weapon system is initialized, a magazine display state 3510 may be entered, causing performance of a magazine display process 3800. Further, the magazine display process 3800 may be performed continuously or intermittently while a hardware processor and/or display controller is powered or active. Moreover, the weapon system may remain in the magazine display state 3510 when each of the decision blocks 3502-3506 returns a "no" determination. The magazine display process 3800 is described in more detail below with respect to FIG. 38. At decision block 3502, it is determined whether an edge buffer tube magnetic sensor has been triggered. If one or more edge buffer tube magnetic sensors have been triggered, the weapon system enters the chamber count display state 3512, causing performance of a chamber count display process 3700. The chamber count display process 3700 is described in more detail below with respect to FIG. 37.
At decision block 3504, it is determined whether a central buffer tube magnetic sensor has been triggered. If one or more central buffer tube magnetic sensors have been triggered, the weapon system enters a jam display state 3514, causing performance of the jam display process 3600. The jam display process 3600 is described in more detail below with respect to FIG. 36. At decision block 3506, it is determined whether a redraw has been triggered. If it is determined that a redraw has been triggered, the weapon system enters a redraw state, causing performance of the redraw process 3900. The redraw process 3900 is described in more detail below with respect to FIG. 39.

Example Jam Display Process

FIG. 36 presents a flowchart of an embodiment of a jam display process 3600. The process 3600 can be implemented by any system that can determine whether a weapon is jammed and display a jam state of the weapon. For example, the process 3600, in whole or in part, can be implemented by electronic circuitry included in the magazine 100, such as the electronic circuitry 122, and/or electronic circuitry in the weapon 1300, such as the electronic circuitry 2002. To simplify discussion and not to limit the disclosure, portions of the process 3600 will be described with respect to particular systems, such as the electronic circuitry 122 or 2002. However, it should be understood that operations of the process 3600 may be performed by other systems. For example, one or more operations of the process 3600 may be performed by a computing device that is configured to communicate with a weapon 1300 or one or more magazines. The process 3600 begins at block 3602 where, for example, a hardware processor included in the electronic circuitry 2002 detects a signal generated by a central buffer tube magnetic sensor. Detecting a signal at the central buffer tube magnetic sensor may include determining that the central buffer tube magnetic sensor generated a signal based on a magnet or magnetic field detected by the magnetic sensor. The signal may be generated by a magnetic sensor within the buffer tube that is between the two ends of the buffer tube in response to a magnetic field produced by a magnet within sensor detection range of the magnetic sensor. The central buffer tube magnetic sensors may include any of the magnetic sensors that are between the two ends of the buffer tube. For example, with reference to FIG. 20, some weapons may include five magnetic sensors 2018 within the buffer tube 2012 of the weapon 1300. The central buffer tube magnetic sensors may include the middle three magnetic sensors 2018 that are between the magnetic sensors at each end of the buffer tube 2012. In certain embodiments, the magnetic sensors 2018 may generate a signal regardless of detection of a magnet 2016. In some such embodiments, detecting the signal generated at the central buffer tube magnetic sensor 2018 may include determining that the signal generated by one of the central buffer tube magnetic sensors 2018 is stronger than a signal generated by an edge buffer tube sensor. At block 3604, the hardware processor initiates a jam detection timer. The jam detection timer may include a timer that counts the length of time that the signal is received from one or more of the central buffer tube magnetic sensors. Alternatively, or in addition, the jam detection timer may measure the amount of time that the strongest signal is received from one of the central buffer tube magnetic sensors as opposed to one of the edge buffer tube magnetic sensors.
Initiating the jam detection timer may include setting the timer to a value of zero and starting the timer. At decision block 3606, the hardware processor determines whether a signal has been generated at an edge buffer tube magnetic sensor. Determining whether a signal has been generated at the edge buffer tube magnetic sensor may comprise determining whether an edge buffer tube magnetic sensor has generated a signal in response to a magnet or magnetic field. In certain embodiments, the decision block 3606 includes determining whether the signal has been generated at a particular edge buffer tube magnetic sensor. In other words, the decision block 3606 includes determining whether the signal has been generated at the magnetic sensor 2018 at one edge of the buffer tube or by the magnetic sensor 2018 at the opposite edge of the buffer tube of FIG. 20. In other embodiments, the decision block 3606 includes determining whether a signal has been detected as being generated at either of the edge buffer tube magnetic sensors. In some embodiments, the decision block 3606 may include one or more of the embodiments associated with the block 3602, but with respect to the edge sensors instead of the central buffer tube magnetic sensors. In some embodiments, the decision block 3606 may include determining whether a signal generated at an edge buffer tube magnetic sensor is stronger than one or more signals that may be generated by one or more other buffer tube magnetic sensors 2018. In some cases, the magnetic field generated by the magnet 2016 may be sufficiently strong to be detected by multiple magnetic sensors 2018. Alternatively, or in addition, multiple magnetic sensors 2018 may be sensitive enough to detect a magnetic field generated by the magnet 2016. In some such cases, the determination of whether an edge magnetic sensor 2018 or a central magnetic sensor 2018 within the buffer tube generates a signal responsive to detecting a magnet, or magnetic field, may be based on the strength of the signal generated. In some cases, the decision block 3606 includes determining whether the strength of the signal at the edge buffer tube magnetic sensor exceeds the strength of the signal at the central buffer tube magnetic sensor by a threshold amount. If a signal is detected as being generated at the edge buffer tube magnetic sensor, or if the signal from the edge buffer tube magnetic sensor is stronger than signals from a central buffer tube magnetic sensor, the hardware processor resets a jam detected flag at the block 3608. In certain embodiments, the block 3608 may be optional or omitted. For example, if the jam detected flag was not previously set, or does not indicate the existence or potential existence of a jam, the block 3608 may be omitted. Further, the block 3608 may include resetting or stopping the jam detection timer initiated at the block 3604. If a signal is not generated at the edge buffer tube magnetic sensor, or if the signal generated at the edge buffer tube magnetic sensor is lower in strength than a signal generated at a central buffer tube magnetic sensor, the hardware processor determines whether the jam detection timer has expired at the decision block 3610. Determining whether the jam detection timer has expired may include determining whether a passage of time measured by the jam detection timer exceeds a threshold length of time. The threshold length of time may vary based on the type of weapon. In some embodiments, the user may set the threshold length of time.
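As a non-limiting illustration of blocks 3602 through 3610, the timer-based jam determination might be sketched in C as follows; the five-sensor layout mirrors the FIG. 20 example above, while the tick-based timer and threshold value are assumptions of this example:

#include <stdint.h>

#define NUM_SENSORS 5     /* mirrors the five-sensor FIG. 20 example */
#define JAM_TICKS   1500  /* illustrative expiry threshold, in ticks */

/* Index of the strongest of the buffer tube sensor signals. */
static int strongest_sensor(const uint16_t signal[NUM_SENSORS])
{
    int best = 0;
    for (int i = 1; i < NUM_SENSORS; i++)
        if (signal[i] > signal[best])
            best = i;
    return best;
}

/* Called periodically. The timer runs while a central sensor dominates
 * (blocks 3602-3604) and is reset when an edge sensor dominates
 * (block 3608); expiry at decision block 3610 reports a jam. */
static int jam_detected(const uint16_t signal[NUM_SENSORS], uint32_t *timer)
{
    int s = strongest_sensor(signal);
    if (s == 0 || s == NUM_SENSORS - 1) {
        *timer = 0;
        return 0;
    }
    (*timer)++;
    return *timer >= JAM_TICKS;
}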
If the jam detection timer has not expired, or has not exceeded the threshold length of time, the process 3600 returns to the decision block 3606 where the hardware processor continues to monitor whether a signal has been generated at the edge buffer tube magnetic sensor. If it is determined at the decision block 3610 that the jam detection timer has expired, the hardware processor sets a jam detected flag at the block 3612. Setting the jam detected flag may include setting a value of the jam detected flag to indicate that a jam has been detected in the weapon. Further, setting the jam detected flag may include storing, at a non-volatile memory of the weapon, an indication that the jam has been detected. Further, the hardware processor sets a redraw flag at the block 3614. Setting the redraw flag may trigger performance of the redraw process 3900 associated with the state 3516 as illustrated in FIG. 35. The redraw process 3900, as described below, may be used to output an indication to a user that the weapon is jammed. In certain embodiments, the blocks 3612 and/or 3614 may be optional or omitted. For example, in some embodiments, if the jam detection timer has expired, the hardware processor may cause an output of an indication that the weapon is jammed without setting any internal flags within the weapon.

Example Chamber Count Display Process

FIG. 37 presents a flowchart of an embodiment of a chamber count display process 3700. The process 3700 can be implemented by any system that can determine whether a cartridge is in a chamber of a weapon. For example, the process 3700, in whole or in part, can be implemented by electronic circuitry included in the magazine 100, such as the electronic circuitry 122, and/or electronic circuitry in the weapon 1300, such as the electronic circuitry 2002. To simplify discussion and not to limit the disclosure, portions of the process 3700 will be described with respect to particular systems, such as the electronic circuitry 122 or 2002. However, it should be understood that operations of the process 3700 may be performed by other systems. For example, one or more operations of the process 3700 may be performed by a computing device that is configured to communicate with a weapon 1300 or one or more magazines. The process 3700 begins at block 3702 where, for example, a hardware processor included in the electronic circuitry 2002 determines a current weapon state of a weapon. Determining the current weapon state of the weapon may include determining the configuration of a bolt of the weapon, whether a magazine is inserted into the weapon, a count of cartridges within the magazine inserted into the weapon, or any other status information for the weapon that may be used to determine whether a cartridge is loaded into a chamber of the weapon. As indicated by the state graph 3500, the process 3700 may be initiated when a signal is received from an edge buffer tube magnetic sensor. Accordingly, in certain embodiments, the current weapon state of the weapon is a state other than a jam state. At decision block 3704, the hardware processor determines whether a bolt of the weapon is open. Determining whether the bolt of the weapon is open may be based at least in part on the current weapon state of the weapon. Further, determining whether the bolt of the weapon is open may be based at least in part on one or more signals generated by one or more magnetic sensors within the buffer tube.
For example, if a signal is generated by the magnetic sensor 2018 that is closest to the butt or back end of the stock or buffer tube, it may be determined that the bolt is open. If instead the signal is generated by the magnetic sensor 2018 that is closest to the chamber or magazine, it may be determined that the bolt is closed. Further, if the signal is generated by a magnetic sensor other than the magnetic sensor closest to the butt of the stock, it may be determined that the bolt is not open. In some cases, determining which magnetic sensor generated a signal may include determining which signal generated by a plurality of magnetic sensors is strongest. If it is determined at decision block 3704 that the bolt is open, the hardware processor sets the chamber count to zero at block 3706. Setting the chamber count to zero indicates that there is not a cartridge within a chamber of the weapon. If it is determined at the decision block 3704 that the bolt is not open, the hardware processor determines whether the magazine count was decremented at the decision block 3708. Determining whether the magazine count was decremented may include obtaining a count of the number of cartridges within a magazine that is inserted into the weapon. The count of the number of cartridges within the magazine that is inserted into the weapon may be compared to a previously determined count of the number of cartridges within the magazine to determine whether the count of cartridges in the magazine was decremented. If it is determined at the decision block 3708 that the magazine count did not decrement, the hardware processor sets the chamber count to zero at block 3710. Setting the chamber count to zero indicates that a cartridge is not loaded within the chamber of the weapon. If it is determined at the decision block 3708 that the magazine count did decrement, the hardware processor sets the chamber count to one at block 3712. Setting the chamber count to one indicates that a cartridge is loaded within the chamber of the weapon. After setting the chamber count at one of the blocks 3706, 3710, or 3712, the hardware processor sets the redraw flag at block 3714. Setting the redraw flag may trigger performance of the redraw process 3900 associated with the state 3516 as illustrated in FIG. 35. This redraw process may be used to output to a user an indication of whether a cartridge is loaded into a chamber. Further, the redraw process may be used to output to a user an updated count of cartridges within a magazine that is inserted into the weapon. Optionally, at block 3716, the hardware processor resets a jam detection timer. Further, the hardware processor optionally resets the jam detected flag at block 3718. In certain embodiments, because the hardware processor is able to determine whether a cartridge was loaded into the chamber and was able to determine whether a cartridge count of a magazine inserted into the weapon decremented, the hardware processor is able to confirm that the weapon is not in a jam state or is no longer in a jam state. Accordingly, the blocks 3716 and 3718 may be used to reset a previous identification of a jam state and/or to reset the process used to determine whether the weapon is in a jam state. Blocks 3716 and 3718 may be optional or omitted because, for example, a jam was not detected or a jam detection timer was not active.

Example Magazine Display Process

FIG. 38 presents a flowchart of an embodiment of a magazine display process 3800.
The process 3800 can be implemented by any system that can determine and display the status of a magazine with respect to a weapon. For example, the process 3800, in whole or in part, can be implemented by electronic circuitry included in the magazine 100, such as the electronic circuitry 122, and/or electronic circuitry in the weapon 1300, such as the electronic circuitry 2002. To simplify discussion and not to limit the disclosure, portions of the process 3800 will be described with respect to particular systems, such as the electronic circuitry 122 or 2002. However, it should be understood that operations of the process 3800 may be performed by other systems. For example, one or more operations of the process 3800 may be performed by a computing device that is configured to communicate with a weapon 1300 or one or more magazines. The process 3800 begins at decision block 3802 where, for example, a hardware processor included in the electronic circuitry 2002 determines whether a magazine is detected in the weapon. Determining whether the magazine is inserted into the weapon may include communicating with the magazine via an optical connection. Alternatively, or in addition, determining whether the magazine is inserted in the weapon may include determining whether a mechanical switch or other physical indicator has been triggered. Advantageously, in certain embodiments, including a physical mechanism that is triggered when the magazine is inserted into the weapon enables the weapon to determine that a magazine is inserted even if the magazine is a conventional magazine that does not include the embodiments of the present disclosure. If it is determined at the decision block 3802 that a magazine is not detected in the weapon, the hardware processor sets the magazine count to zero at the block 3804. Setting the magazine count to zero indicates to the weapon that there are no cartridges available to the weapon. In some embodiments, setting the magazine count to zero indicates to the weapon that there are no cartridges available to the weapon other than the cartridge that may be within a chamber of the weapon. It should be understood that, in certain embodiments, the operations of the block 3804 are associated with a counter for cartridges currently available for the weapon to fire and may be distinct from the count of cartridges available to the user, for example, in magazines registered to the weapon but not inserted into the weapon. At the block 3806, the hardware processor clears the magazine inserted flag or sets the magazine inserted flag to a value indicating that a magazine is not inserted into the weapon. Clearing the magazine inserted flag may include modifying a register or memory of the weapon to indicate that a magazine is not inserted. If it is determined at the decision block 3802 that a magazine is detected in the weapon, the hardware processor obtains the magazine cartridge count at the block 3808. Obtaining the magazine cartridge count may include performing the process 3100 or portions thereof. In some embodiments, the block 3808 may be optional or omitted. For example, if a conventional magazine that does not include the embodiments disclosed herein is inserted into the weapon, the hardware processor of the weapon may be unable to determine the cartridge count for the inserted magazine.
In certain embodiments, if a conventional magazine is inserted into the weapon, the hardware processor may set a status unknown flag indicating that the weapon is unable to determine the number of cartridges loaded within the magazine. At the block3810, the hardware processor sets the magazine inserted flag indicating that a magazine is inserted into the weapon. Setting the magazine inserted flag may include setting a register or memory address to a value that indicates that a magazine is inserted into the weapon. In some embodiments, the magazine inserted flag is set to a value indicating a count of the cartridges within the magazine as obtained at the block3808. At the decision block3812, the hardware processor determines whether a change in the magazine cartridge count or a change in the magazine inserted flag has occurred. If it is determined at the decision block3812that a change in the magazine cartridge count or in the magazine inserted flag has not occurred, the process3800returns to the decision block3802where the hardware processor may continuously or intermittently determine whether a magazine is detected in the weapon. If it is determined at the decision block3812that there is a change in the magazine cartridge count or in the magazine inserted flag, the hardware processor sets the redraw flag at the block3814. Setting the redraw flag may trigger performance of the redraw process3900associated with the state3516as illustrated inFIG.35.
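Assuming the same hypothetical state dictionary as in the sketch above, the process3800reduces to a detect-read-compare loop. The weapon-facing calls (magazine_detected, read_magazine_count) are invented placeholders for whatever optical or mechanical detection the hardware actually provides.

def magazine_display_step(weapon, state):
    if not weapon.magazine_detected():                 # decision block 3802
        count, inserted = 0, False                     # blocks 3804 and 3806
    else:
        count = weapon.read_magazine_count()           # block 3808
        inserted = True                                # block 3810
    changed = (count != state.get("magazine_count") or
               inserted != state.get("magazine_inserted"))
    state["magazine_count"] = count
    state["magazine_inserted"] = inserted
    if changed:                                        # decision block 3812
        state["redraw"] = True                         # block 3814
    return state

Example Redraw Process FIG.39presents a flowchart of an embodiment of a redraw process3900. The process3900can be implemented by any system that can output weapon system status information for a weapon. For example, the process3900, in whole or in part, can be implemented by electronic circuitry included in the magazine100, such as the electronic circuitry122, and/or electronic circuitry in the weapon1300, such as the electronic circuitry2002. To simplify discussion and not to limit the disclosure, portions of the process3900will be described with respect to particular systems, such as the electronic circuitry122or2002. However, it should be understood that operations of the process3900may be performed by other systems. For example, one or more operations of the process3900may be performed by a computing device that is configured to communicate with a weapon1300or one or more magazines. The process3900begins at decision block3902where, for example, a hardware processor included in the electronic circuitry2002determines whether a redraw flag has been set. Determining whether the redraw flag has been set may include accessing a memory location in a memory of the weapon that stores the redraw flag or a state of the redraw flag. An indication that a redraw flag has been set may be used to cause a display of the weapon, or a display in communication with the weapon, to display information about a state of the weapon, such as whether the weapon is jammed, and/or information about the number of cartridges available to the user. The number of cartridges available to the user may indicate the number of cartridges in the magazine inserted into the weapon, whether a cartridge is within a chamber of the weapon, and/or the number of cartridges inserted in a set of one or more magazines registered to the weapon whether or not a magazine is inserted into the weapon.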
If it is determined at the decision block3902that a redraw flag has not been set, the process3900returns to the decision block3902where the hardware processor continuously or intermittently determines whether the redraw flag has been set. In certain embodiments, the determination that the redraw flag has not been set may indicate that the status of the weapon or of the cartridges available to the weapon has not changed. Moreover, in certain embodiments, instead of the hardware processor continuously or intermittently determining whether the redraw flag has been set, the hardware processor receives an interrupt signal or a push notification alerting the hardware processor that the redraw flag has been set. If it is determined at the decision block3902that a redraw flag has been set, the hardware processor determines at the decision block3904whether a magazine inserted flag has been set. Determining whether the magazine inserted flag has been set may include accessing a memory location in a memory of the weapon that stores the magazine inserted flag or a state of the magazine inserted flag. If it is determined at the decision block3904that a magazine inserted flag has been set, the magazine inserted status is displayed on a user interface or display of the weapon system at block3906. Displaying the magazine inserted status may include displaying a symbol corresponding to the magazine inserted status. In some embodiments, the display of the weapon system may be within the scope of the weapon or may be projected onto a scope of the weapon. Alternatively, or in addition, the magazine inserted status may be displayed on a user interface or display that is separate from the weapon, such as on a heads-up display of a helmet or any other visual system carried or used by the user. If it is determined at the decision block3904that a magazine inserted flag is not set, the magazine not inserted status is displayed on the user interface or the display of the weapon system at the block3908. The block3908may include one or more of the embodiments described with respect to the block3906, but with the output of the display corresponding to the magazine not inserted status. At decision block3910, the hardware processor determines whether a jam detected flag is set. Determining whether the jam detected flag has been set may include accessing a memory location in a memory of the weapon that stores the jam detected flag or a state of the jam detected flag. If a jam detected flag is set, a jam state notification is displayed on the user interface or the display of the weapon system at the block3912. The block3912may include one or more of the embodiments described with respect to the block3906, but with respect to displaying the jam state of the weapon. At decision block3914, the hardware processor determines whether an unknown state flag is set. Determining whether the unknown state flag has been set may include accessing a memory location in a memory of the weapon that stores the unknown state flag or a state of the unknown state flag. In some embodiments, determining whether the unknown state flag has been set may include accessing or attempting to access the magazine count for the inserted magazine. If the hardware processor is unable to determine the magazine state for the inserted magazine, the hardware processor may determine that the weapon system is in an unknown state.
In some embodiments, a weapon may be in an unknown state when first powered or initialized because, for example, it may be undetermined whether a cartridge is in a chamber. Once a user has cycled the bolt, the weapon can determine its state. If the weapon has a cartridge in the chamber, cycling the bolt may eject the cartridge. Further, if a magazine is inserted into the weapon, cycling the bolt may load a cartridge into the chamber if the magazine is not empty. One or more of the processes disclosed herein can be used to determine the ammunition or cartridge count loaded into a chamber of the weapon and/or a magazine inserted into the weapon. Accordingly, in certain embodiments, once a user has cycled the bolt, a weapon in an unknown state may transition to a known state and the unknown state flag may be reset or otherwise set to a value indicating a known weapon state. In certain embodiments, once the weapon is transitioned from an unknown state to a known state via, for example, the cycling of the bolt, the weapon may remain in a known state by performing the one or more embodiments disclosed herein for determining ammunition count and weapon state. In some embodiments, a display of the weapon may instruct the user to cycle the bolt upon turning on or powering the electronic circuitry of the weapon, enabling the weapon to determine its state before the user takes further action with the weapon. In some embodiments, a weapon may be unable to determine a number of cartridges within a magazine because, for example, the magazine may be a conventional magazine that does not include the ammunition count capabilities disclosed herein. In some such embodiments, the weapon may indicate an unknown ammunition count or an unknown state. In other such embodiments, the weapon may indicate a state of the chamber (e.g., loaded or not loaded), but may indicate that a state of a magazine is unknown. In some cases, the weapon may indicate whether or not a magazine is inserted, but may not display a cartridge count due, for example, to the magazine being a conventional magazine. If it is determined at the decision block3914that an unknown state flag is set, an unknown state notification is displayed on the user interface or the display of the weapon system at the block3916. The block3916may include one or more of the embodiments described with respect to the block3906, but with respect to displaying the unknown state. If it is determined at the decision block3914that an unknown state flag is not set, a magazine count and a chamber count are displayed on the user interface or the display of the weapon system at the block3918. Determining the magazine count and/or the chamber count for cartridges or ammunition available to the user may include performing one or more of the processes2800,3100, or3300. In some embodiments, determining the magazine count and/or the chamber count may include accessing a memory location in a memory of the weapon that stores the magazine count and/or the chamber count. Further, the block3918may include one or more of the embodiments described with respect to the block3906, but with respect to displaying the ammunition available to the user or included in magazines registered to the weapon.
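Tying the flags together, the redraw pass is a straight sequence of flag checks. The sketch below is again hypothetical Python: the display object and its show method stand in for whichever output device the embodiment uses (weapon-mounted display, scope overlay, or heads-up display).

def redraw_step(state, display):
    if not state.get("redraw"):                        # decision block 3902
        return                                         # nothing to update
    state["redraw"] = False
    if state.get("magazine_inserted"):                 # decision block 3904
        display.show("MAG INSERTED")                   # block 3906
    else:
        display.show("NO MAG")                         # block 3908
    if state.get("jam_detected"):                      # decision block 3910
        display.show("JAM")                            # block 3912
    if state.get("unknown_state"):                     # decision block 3914
        display.show("STATE UNKNOWN")                  # block 3916
    else:                                              # block 3918
        display.show("MAG {} / CHAMBER {}".format(
            state.get("magazine_count", 0), state.get("chamber_count", 0)))

Additional Embodiments A number of additional embodiments are possible based on the disclosure herein. For example, embodiments of the present disclosure can be described in view of the following clauses:1.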
A magazine configured to hold ammunition, the magazine comprising:a housing comprising an ammunition chamber, wherein the ammunition chamber is configured to store one or more cartridges of a particular ammunition type;an ammunition counter configured to determine a quantity of cartridges of the particular ammunition type within the ammunition chamber; andan optical transceiver configured to transmit a count of the quantity of cartridges of the particular ammunition type within the ammunition chamber to a weapon system when the magazine is installed in the weapon system.2. The magazine of clause 1, wherein the ammunition counter comprises:a magnet configured to generate a magnetic field that is at least partially within the ammunition chamber; anda Hall effect sensor positioned within the housing, the Hall effect sensor positioned with respect to the housing and the magnet, wherein the Hall effect sensor generates a signal when the magnet is within a particular distance of the Hall effect sensor.3. The magazine of clause 2, wherein the ammunition counter further comprises electronic circuitry configured to determine the quantity of cartridges of the particular ammunition type within the ammunition chamber based at least in part on one or more signals generated by the Hall effect sensor.4. The magazine of any one of the preceding clauses, further comprising a digital to optical signal adapter configured to convert a digital signal corresponding to the count of the quantity of cartridges to an optical signal for transmission by the optical transceiver.5. A weapon system comprising:a magazine configured to hold one or more cartridges; anda weapon comprising:an insertion port configured to accept the magazine, wherein the magazine is configured to be inserted into the insertion port of the weapon and to provide the one or more cartridges to the weapon for firing; andan optical transceiver configured to receive a count of the one or more cartridges within the magazine when the magazine is installed in the weapon system.6. The weapon system of clause 5, wherein the magazine comprises:a first magnet configured to generate a first magnetic field that is at least partially within an ammunition chamber of a housing of the magazine; anda first set of Hall effect sensors positioned along the housing, at least some of the first set of Hall effect sensors aligned with respect to the first magnet when the ammunition chamber includes a particular number of cartridges.7. The weapon system of any one of the preceding clauses, wherein the weapon further comprises a plurality of light-emitting diodes controllable to display the count of the one or more cartridges within the magazine.8. The weapon system of any one of the preceding clauses, wherein the weapon further comprises a plurality of light-emitting diodes controllable to display a jam state of a weapon when the magazine is loaded into the weapon.9. The weapon system of any one of the preceding clauses, wherein the weapon further comprises a second magnet configured to generate a second magnetic field, the second magnet located within a buffer tube of the weapon.10. The weapon system of any one of the preceding clauses as modified by clause 9, wherein the weapon further comprises a second set of Hall effect sensors positioned along the buffer tube and configured to detect a position of the bolt based at least in part on the second magnetic field.11. 
The weapon system of any one of the preceding clauses as modified by clause 10, wherein the weapon further comprises electronic circuitry configured to determine a count of cartridges within the weapon system.12. The weapon system of any one of the preceding clauses as modified by clause 11, wherein the electronic circuitry determines the count of cartridges within the weapon system based at least in part on one or more signals generated by the second set of Hall effect sensors and a signal received at the optical transceiver.13. The weapon system of any one of the preceding clauses, wherein the weapon further comprises an optical to digital signal adapter configured to convert an optical signal received from the magazine by the optical transceiver to a digital signal, the optical signal corresponding to the count of the one or more cartridges within the magazine.14. A weapon system comprising:a weapon comprising:an insertion port configured to accept a first magazine from one or more magazines, wherein the first magazine is configured to be inserted into the insertion port of the weapon and to provide one or more cartridges to the weapon for firing; anda transceiver configured to receive one or more magazine status signals corresponding to the one or more magazines, each magazine status signal corresponding to a different magazine from the one or more magazines.15. The weapon system of clause 14, further comprising the one or more magazines.16. The weapon system of any one of the preceding clauses, wherein each magazine status signal corresponds to a number of cartridges within the corresponding magazine.17. The weapon system of any one of the preceding clauses, wherein each magazine status signal corresponds to a use state of the corresponding magazine indicating whether the corresponding magazine has been used with the weapon.18. The weapon system of any one of the preceding clauses, further comprising a registration system configured to register the one or more magazines with the weapon system.19. The weapon system of any one of the preceding clauses as modified by clause 18, wherein the registration system comprises an optical scanner configured to scan a machine-readable code associated with the first magazine from the one or more magazines.20. The weapon system of any one of the preceding clauses as modified by clause 19, wherein the machine-readable code includes a unique identifier associated with the first magazine.21. The weapon system of any one of the preceding clauses as modified by clause 19, wherein the machine-readable code includes a unique identifier associated with the first magazine.22. The weapon system of any one of the preceding clauses as modified by clause 18, wherein the weapon further comprises the registration system.23. The weapon system of any one of the preceding clauses, wherein the weapon further comprises a user interface system configured to display a count of the number of cartridges within the first magazine when the first magazine is inserted into the weapon.24. The weapon system of any one of the preceding clauses, wherein the weapon further comprises a user interface system configured to display a count of the number of cartridges within the one or more magazines.25. The weapon system of any one of the preceding clauses, wherein the weapon further comprises a user interface system configured to display a count of the one or more magazines.26. 
The weapon system of any one of the preceding clauses as modified by clause 25, wherein the user interface system is further configured to display the count of the one or more magazines that include one or more cartridges.27. The weapon system of any one of the preceding clauses, wherein the transceiver is further configured to transmit a status of the one or more magazines to an external system that is separate from the weapon system.28. The weapon system of any one of the preceding clauses as modified by clause 27, wherein the external system comprises a headgear system.29. The weapon system of any one of the preceding clauses as modified by clause 27, wherein the external system comprises a computing system.30. The weapon system of any one of the preceding clauses as modified by clause 27, wherein the status is transmitted via a network to a central command center.31. The weapon system of any one of the preceding clauses, wherein the transceiver comprises an optical transceiver.32. The weapon system of any one of the preceding clauses, wherein the transceiver comprises a wireless transceiver.33. A magazine configured to hold ammunition, the magazine comprising:a housing comprising an ammunition chamber, wherein the ammunition chamber is configured to store one or more cartridges of a particular ammunition type;a magnet configured to generate a magnetic field that is at least partially within the ammunition chamber;a Hall effect sensor positioned within the housing, the Hall effect sensor positioned with respect to the housing and the magnet, wherein the Hall effect sensor generates a signal when the magnet is within a particular distance of the Hall effect sensor; andelectronic circuitry configured to determine a quantity of cartridges of the particular ammunition type within the ammunition chamber based at least in part on one or more signals generated by the Hall effect sensor.34. The magazine of clause 33, wherein the magnet is positioned in a follower of the magazine.35. The magazine of any one of the preceding clauses, wherein the one or more signals are generated based on a location of the magnet with respect to the Hall effect sensor.36. The magazine of any one of the preceding clauses, further comprising a plurality of Hall effect sensors within the housing, the plurality of Hall effect sensors including the Hall effect sensor, the plurality of Hall effect sensors positioned along the housing.37. The magazine of any one of the preceding clauses, further comprising a plurality of light-emitting diodes controllable to display the quantity of cartridges of the particular ammunition type.38. The magazine of any one of the preceding clauses as modified by clause 37, wherein the plurality of light-emitting diodes are further controllable to display a jam state of a weapon when the magazine is loaded into the weapon.39. The magazine of any one of the preceding clauses, further comprising an optical transceiver configured to transmit the quantity of cartridges of the particular ammunition type to a user interface device that is separate from the magazine.40. The magazine of any one of the preceding clauses as modified by clause 39, wherein the optical transceiver is further configured to receive a jam state of a weapon when the magazine is loaded into the weapon.41. The magazine of any one of the preceding clauses, further comprising an alignment pin configured to align a circuit board that includes the Hall effect sensor.42.
The magazine of any one of the preceding clauses, further comprising a spring-loaded plunger positioned below the ammunition chamber and configured to control the position of the plurality of cartridges within the ammunition chamber.43. The magazine of any one of the preceding clauses as modified by clause 42, further comprising a gap between a sealing cap of the magazine and the spring-loaded plunger, the gap configured to house at least some of the electronic circuitry.44. The magazine of any one of the preceding clauses, further comprising a power source in electrical communication with the electronic circuitry.45. A weapon system comprising:a magazine configured to hold one or more cartridges; anda weapon comprising an insertion port configured to accept the magazine, a buffer tube, and a bolt, wherein the magazine is configured to be inserted into the insertion port of the weapon and to provide the one or more cartridges to the weapon for firing, the magazine comprising:a first magnet configured to generate a first magnetic field that is at least partially within an ammunition chamber of a housing of the magazine; anda first set of Hall effect sensors positioned along the housing, at least some of the first set of Hall effect sensors aligned with respect to the first magnet when the ammunition chamber includes a particular number of cartridges.46. The weapon system of clause 45, wherein the weapon further comprises a second magnet configured to generate a second magnetic field, the second magnet located within the buffer tube of the weapon.47. The weapon system of clause 46, wherein the weapon further comprises a second set of Hall effect sensors positioned along the buffer tube and configured to detect a position of the bolt based at least in part on the second magnetic field.48. The weapon system of any one of the preceding clauses, further comprising electronic circuitry configured to determine an amount of cartridges within the ammunition chamber based at least in part on one or more signals generated by the first set of Hall effect sensors.49. The weapon system of any one of the preceding clauses as modified by clause 48, wherein the electronic circuitry is further configured to determine a total amount of available cartridges based at least in part on the amount of cartridges within the magazine and a determination of whether a cartridge is in a chamber of the weapon.50. The weapon system of any one of the preceding clauses as modified by clause 48, wherein the electronic circuitry comprises an application-specific integrated circuit.51. The weapon system of any one of the preceding clauses as modified by clause 48, wherein the electronic circuitry is further configured to determine a jam state of the weapon.52. The weapon system of any one of the preceding clauses as modified by clause 48, wherein at least a portion of the electronic circuitry is within the magazine.53. The weapon system of any one of the preceding clauses as modified by clause 48, wherein at least a portion of the electronic circuitry is within a handle of the weapon.54. The weapon system of any one of the preceding clauses, further comprising a mountable display mounted on a barrel of the weapon, the mountable display configured to at least display an available count of cartridges within the magazine.55. The weapon system of any one of the preceding clauses as modified by clause 54, wherein the mountable display is further configured to display a number of loaded magazines available to a user.56. 
The weapon system of any one of the preceding clauses as modified by clause 54, wherein the mountable display is further configured to display a number of cartridges fired.57. The weapon system of any one of the preceding clauses as modified by clause 54, wherein the mountable display is further configured to display a jam state of the weapon.58. A method of determining a number of available cartridges, the method comprising:generating a magnetic field using a magnet located in a magazine;detecting, using a set of sensors, a location of the magnet within the magazine, wherein each of the set of sensors is configured to generate a voltage based at least in part on the magnetic field, and wherein the location of the magnet is determined based at least in part on one or more voltage values generated by one or more of the sensors from the set of sensors; anddetermining a number of cartridges of ammunition within a magazine based at least in part on the location of the magnet.59. The method of clause 58, further comprising displaying a count of the number of cartridges on a display interface included with the magazine.60. The method of any one of the preceding clauses, further comprising transmitting count data corresponding to the number of cartridges to a separate display interface that is separate from the magazine.61. The method of any one of the preceding clauses as modified by clause 60, wherein the count data is transmitted using an optical transceiver.62. The method of any one of the preceding clauses, further comprising:generating a second magnetic field using a second magnet located in a weapon;detecting, using at least one sensor, a location of the second magnet within the weapon, the at least one sensor separate from the set of sensors; anddetermining whether a cartridge is within a chamber of the weapon based at least in part on the location of the second magnet.63. The method of any one of the preceding clauses as modified by clause 62, further comprising determining whether the weapon is jammed based at least in part on the location of the second magnet within the weapon.64. The method of any one of the preceding clauses, further comprising:determining a number of magazines assigned to a user;determining a number of cartridges within each of the number of magazines;summing the number of cartridges within each of the number of magazines and the number of cartridges of ammunition within the magazine to obtain a total number of available cartridges; andcausing the total number of available cartridges to be displayed to the user.65. A computer-readable, non-transitory storage medium storing computer executable instructions that, when executed by one or more computing devices, configure the one or more computing devices to perform operations comprising:generating a magnetic field using a magnet located in a magazine;detecting, using a set of sensors, a location of the magnet within the magazine, wherein each of the set of sensors is configured to generate a voltage based at least in part on the magnetic field, and wherein the location of the magnet is determined based at least in part on one or more voltage values generated by one or more of the sensors from the set of sensors; anddetermining a number of cartridges of ammunition within a magazine based at least in part on the location of the magnet.66. 
The computer-readable, non-transitory storage medium of clause 65, wherein the operations further comprise displaying a count of the number of cartridges on a display interface included with the magazine.67. The computer-readable, non-transitory storage medium of any one of the preceding clauses, wherein the operations further comprise transmitting count data corresponding to the number of cartridges to a separate display interface that is separate from the magazine.68. The computer-readable, non-transitory storage medium of any one of the preceding clauses as modified by clause 67, wherein the count data is transmitted using an optical transceiver.69. The computer-readable, non-transitory storage medium of any one of the preceding clauses, wherein the operations further comprise:generating a second magnetic field using a second magnet located in a weapon;detecting, using at least one sensor, a location of the second magnet within the weapon, the at least one sensor separate from the set of sensors; anddetermining whether a cartridge is within a chamber of the weapon based at least in part on the location of the second magnet.70. The computer-readable, non-transitory storage medium of any one of the preceding clauses as modified by clause 69 wherein the operations further comprise determining whether the weapon is jammed based at least in part on the location of the second magnet within the weapon.71. The computer-readable, non-transitory storage medium of any one of the preceding clauses, wherein the operations further comprise:determining a number of magazines assigned to a user;determining a number of cartridges within each of the number of magazines;summing the number of cartridges within each of the number of magazines and the number of cartridges of ammunition within the magazine to obtain a total number of available cartridges; andcausing the total number of available cartridges to be displayed to the user. Terminology The embodiments described herein are exemplary. Modifications, rearrangements, substitute processes, etc. may be made to these embodiments and still be encompassed within the teachings set forth herein. One or more of the steps, processes, or methods described herein may be carried out by one or more processing and/or digital devices, suitably programmed. Depending on the embodiment, certain acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the algorithm). Moreover, in certain embodiments, acts or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. The various illustrative logical blocks, modules, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. 
The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure. The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a processor configured with specific instructions, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The elements of a method, process, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of computer-readable storage medium known in the art. An exemplary storage medium can be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor. The processor and the storage medium can reside in an ASIC. A software module can comprise computer-executable instructions which cause a hardware processor to execute the computer-executable instructions. Conditional language used herein, such as, among others, “can,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or states. Thus, such conditional language is not generally intended to imply that features, elements and/or states are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or states are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” “involving,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Disjunctive language such as the phrase “at least one of X, Y or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y or Z, or any combination thereof (e.g., X, Y and/or Z). 
Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y or at least one of Z to each be present. The terms "about" or "approximate" and the like are synonymous and are used to indicate that the value modified by the term has an understood range associated with it, where the range can be ±20%, ±15%, ±10%, ±5%, or ±1%. The term "substantially" is used to indicate that a result (e.g., measurement value) is close to a targeted value, where close can mean, for example, the result is within 80% of the value, within 90% of the value, within 95% of the value, or within 99% of the value. Unless otherwise explicitly stated, articles such as "a" or "an" should generally be interpreted to include one or more described items. Accordingly, phrases such as "a device configured to" are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, "a processor configured to carry out recitations A, B and C" can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C. While the above detailed description has shown, described, and pointed out novel features as applied to illustrative embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As will be recognized, certain embodiments described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
DETAILED DESCRIPTION In general, the present disclosure provides ammunition magazines and followers that enable feeding ammunition cartridges into a bolt assembly of a firearm without enabling contact between the bullet portion of the cartridge and an interior surface of the receiver portion of the firearm. Use of ammunition magazines and followers consistent with those disclosed herein significantly reduces or even prevents cartridge misfeed errors and the dangers associated therewith. Referring now specifically toFIGS.1-10, ammunition magazines consistent with the present disclosure include a magazine housing10and a follower20. The follower20is disposed within a cavity120of the magazine housing, and is configured to force one or more ammunition cartridges towards the top end of the magazine housing10and into an upper receiver of a firearm (not shown). In some embodiments, the magazine housing10includes a catch110on its outer face; the catch110is configured to selectably mate with the magazine catch of a firearm's lower receiver (not shown) to secure the ammunition magazine to the lower receiver. The cavity120is sized to accommodate two stacked, overlapping columns of ammunition rounds (e.g., a "double-stack" magazine). For example, a magazine housing10consistent with the present disclosure may include a cavity120sized and shaped to contain a double-stacked arrangement of 0.223 ammunition rounds and/or a double-stacked arrangement of 5.56 ammunition rounds. In another embodiment, the magazine housing10may include a cavity120sized and shaped for a single nonoverlapping column of ammunition rounds. In some embodiments, the magazine housing10further includes a tail recess160extending vertically through the magazine housing10. The tail recess160, when present, is disposed near the proximal end112of the magazine housing10, and enables the follower20(described in more detail below) to travel vertically through the magazine housing10smoothly. The magazine housing10further includes a floor plate receiver140at the bottom of the magazine housing10. The floor plate receiver140enables the floor plate (not shown) to slidably mate to the bottom of the magazine housing10. Referring now toFIGS.6-10, a follower20consistent with the present disclosure and for use with a magazine housing10includes a cartridge-shaped protrusion220, a ramp230, optionally a front leg280, and optionally a rear leg270. The top surface210contacts a first layer of ammunition rounds (not shown) within the cavity120, and includes a cartridge-shaped protrusion220and a ramp230. In some embodiments, the cartridge-shaped protrusion220includes a contour similar to that of an ammunition round, or a portion thereof. In some embodiments, the top surface of the cartridge-shaped protrusion220is disposed at an angle β relative to the top surface210of the follower20. In such embodiments, the angle β may be about 0.2° to about 5°, for example about 0.2°, about 0.3°, about 0.4°, about 0.5°, about 0.6°, about 0.7°, about 0.8°, about 0.9°, about 1°, about 1.1°, about 1.2°, about 1.3°, about 1.4°, about 1.5°, about 1.6°, about 1.7°, about 1.8°, about 1.9°, about 2°, about 2.1°, about 2.2°, about 2.3°, about 2.4°, about 2.5°, about 2.6°, about 2.7°, about 2.8°, about 2.9°, about 3°, about 3.1°, about 3.2°, about 3.3°, about 3.4°, about 3.5°, about 3.6°, about 3.7°, about 3.8°, about 3.9°, about 4°, about 4.1°, about 4.2°, about 4.3°, about 4.4°, about 4.5°, about 4.6°, about 4.7°, about 4.8°, about 4.9°, or about 5°.
In some embodiments, the angle β is about 1.2° to about 2°. In some embodiments, the angle β is about 1.4° to about 1.8°. In some embodiments, the angle β is about 1.5°. The ramp230is disposed near the distal end of the follower20. The ramp230is configured to force the bullet-side tip of an ammunition cartridge upwards and towards the center of the chamber of an upper receiver (not shown) when the ammunition cartridge is advanced out of the magazine housing10. In some embodiments, the ramp230is configured such that the bullet portion of the cartridge does not contact an interior surface of the receiver of the firearm until the bullet portion reaches the barrel of the firearm. The ramp230is defined by an attack angle α measured between the surface of the ramp230and the surface210of the follower20. The attack angle α may vary depending on the make and model of the firearm, the upper receiver, and/or the cartridge to be used with the ammunition magazine. For example, the attack angle α for a ramp230of a follower20consistent with the present disclosure for use with a 5.7×30 bottleneck round may be about 11° to about 12°, such as about 11.5°. In another non-limiting example, the attack angle α for the ramp230of a follower20consistent with the present disclosure for use with a 5.56 caliber cartridge may be about 10° to about 11°, such as about 10.6°. More generally, however, the attack angle α is about 2° to about 20°, for example about 2°, about 3°, about 4°, about 5°, about 6°, about 7°, about 8°, about 9°, about 10°, about 11°, about 12°, about 13°, about 14°, about 15°, about 16°, about 17°, about 18°, about 19°, or about 20°. In some embodiments, the attack angle α is about 11°. In some embodiments, the attack angle α is about 10.6°. In some embodiments, the attack angle α is about 2.5° to about 20°. In some embodiments, the attack angle α is about 10° to about 12°. In some embodiments, the attack angle α is about 16°. Measured another way, depicted representatively inFIG.9B, the ramp230defines a guidance angle γ between the ramp surface230aand the longitudinal axis Baof the barrel B of the upper receiver UR sufficient that the bullet portion of the cartridge does not contact an interior surface of the barrel extension Beof the upper receiver UR until the bullet portion reaches the barrel B. The guidance angle γ may vary depending on the make and model of the firearm, the upper receiver, and/or the cartridge to be used with the ammunition magazine. For example, the guidance angle γ for a ramp230of a follower20consistent with the present disclosure for use with a 5.7×30 bottleneck round may be about 11° to about 12°, such as about 11.5°. In another non-limiting example, the guidance angle γ for use with a 5.56 caliber cartridge may be about 10° to about 11°, such as about 10.6°. More generally, however, the guidance angle γ is about 2° to about 20°, for example about 2°, about 3°, about 4°, about 5°, about 6°, about 7°, about 8°, about 9°, about 10°, about 11°, about 12°, about 13°, about 14°, about 15°, about 16°, about 17°, about 18°, about 19°, or about 20°. In some embodiments, the guidance angle γ is about 11°. In some embodiments, the guidance angle γ is about 10.6°. In some embodiments, the guidance angle γ is about 2.5° to about 20°. In some embodiments, the guidance angle γ is about 10° to about 12°. In some embodiments, the guidance angle γ is about 16°. 
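For a rough sense of the scale these angles imply, the vertical lift h that the ramp imparts to the cartridge nose follows from plane trigonometry as h = L tan α, where L is the horizontal length of ramp traversed. The disclosure does not specify a ramp length, so the figures below are purely illustrative, assuming a hypothetical L of 10 mm:

h = L \tan\alpha, \qquad h \approx (10\ \mathrm{mm}) \times \tan(10.6^\circ) \approx (10\ \mathrm{mm})(0.187) \approx 1.9\ \mathrm{mm}

Under that assumed travel, an attack angle of about 10.6° raises the bullet tip by roughly 2 mm, which illustrates how even a modest attack angle can steer the bullet tip toward the center of the chamber.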
In some embodiments, the ramp230includes a concave top surface230aand/or a longitudinal channel230aoriented parallel to the longitudinal length of the ammunition cartridge and configured to align the ammunition cartridge with a center line CL of the ramp230and towards the chamber as the ammunition cartridge is advanced into the chamber. In some embodiments, the shoulder of the last ammunition cartridge within the magazine10contacts and slides along the concave top surface230aas the last ammunition cartridge is advanced into the chamber, but the bullet portion of the ammunition cartridge does not contact the ramp230. In some embodiments, the bullet portion of the ammunition cartridge also does not contact a feed ramp of the lower receiver, but instead is forced into the chamber without contacting any interior surface of the lower receiver. In some embodiments, a channel240is disposed proximal to the ramp230and is configured to align the ammunition cartridge with the center line CL of the ramp230. The channel240, when present, includes an interior radius orthogonal to the longitudinal length of an associated ammunition cartridge and approximately the same radius as or slightly larger than the outer radius of the ammunition cartridge. The tail fin260slidably mates with the tail recess160. The tail fin260generally has a width of about 25% to about 50% of the overall width of the magazine housing10, for example about 25%, about 26%, about 27%, about 28%, about 29%, about 30%, about 31%, about 32%, about 33%, about 34%, about 35%, about 36%, about 37%, about 38%, about 39%, about 40%, about 41%, about 42%, about 43%, about 44%, about 45%, about 46%, about 47%, about 48%, about 49%, or about 50% of the overall width of the magazine housing10. In operation, the first loaded ammunition round is inserted laterally to contact the ramp230, and is held to one side of the cavity120due to the cartridge-shaped protrusion220contacting the ammunition round casing. The second loaded ammunition round contacts both the first loaded ammunition round and the cartridge-shaped protrusion220, and is held to the other side of the cavity120. As shown best inFIGS.9A-9B, the follower20may also include a spring hook290between the front leg280and the rear leg270. The spring hook290, when present, reversibly mates with the top end of the spring250. In some embodiments, the present disclosure provides a magazine housing10and associated follower20that, together with a spring, form an ammunition magazine. The follower20is disposed within a cavity120of the magazine housing, and is configured to force one or more ammunition cartridges towards the top end of the magazine housing10and into an upper receiver of a firearm (not shown). The magazine housing10includes a catch110on its outer face, a cavity120in which the follower20and an associated spring250are disposed, a tail recess160extending vertically through the magazine housing10, and a floor plate receiver140; while the follower20includes a cartridge-shaped protrusion220and a ramp230disposed on its top surface210, a front leg280, a rear leg270, and a tail fin260disposed on the rear leg270. The upper surface of the cartridge-shaped protrusion220is disposed at an angle β of about 1.4-1.8° relative to the top surface210of the follower20, while the top surface of the ramp230is disposed at an angle α of about 10-16° relative to the top surface210of the follower20.
The follower20and its ramp230operate to direct cartridges loaded within the cavity120of the ammunition magazine into substantially the center of a chamber of a lower receiver (not shown) without the bullet portion of the cartridge contacting an inner surface of the lower receiver until the bullet portion enters the barrel of the firearm. In some embodiments, such as those generally consistent withFIGS.1-10, the ammunition magazine is configured to hold two stacks of cartridges that are partially interlaced (e.g., a double stack configuration). In other embodiments, the ammunition magazine is configured to hold a single stack of cartridges, each resting on top of the cartridge below (e.g., a single stack configuration). In such embodiments, the follower20includes the ramp230and optionally the channel240, but does not include the cartridge-shaped protrusion220. The ramp230and, if present, the channel240are disposed mid-way laterally across the width of the follower surface210, rather than being offset towards one side of the follower surface210as shown in the double stack embodiment depicted specifically inFIGS.1-10. Regardless of whether the ammunition magazine enables a single stack or a double stack configuration of loaded cartridges, the magazine housing10and the follower20cooperate to enable feeding of an ammunition cartridge from the cavity120into a chamber of an upper receiver without enabling contact between the bullet portion of the cartridge and the feed ramp of the upper receiver (not shown). By reducing or eliminating contact between the ammunition cartridge and the feed ramp, damage to the cartridge, and therefore the risk and rate of cartridge misfeed and misfire, is also reduced or eliminated. In some embodiments, the rate of cartridge misfeed and misfire associated with an ammunition magazine of the present disclosure is reduced by at least about 50% compared to an ammunition magazine including a follower inconsistent with the present disclosure, for example by about 50%, by about 55%, by about 60%, by about 65%, by about 70%, by about 75%, by about 80%, by about 85%, by about 90%, by about 95%, by about 96%, by about 97%, by about 98%, by about 99%, or by about 100%. Referring now toFIGS.11-13, an alternative embodiment of a magazine housing is shown, intended to be mounted into a rifle receiver of a firearm. In one embodiment the magazine housing includes a base10a, a removable plate11, a left-hand feed lip insert800and a right-hand feed lip insert900; a left-hand feed ramp insert810and a right-hand feed ramp insert910; a magazine follower20and a spring40. Referring specifically toFIG.12, a left-hand feed lip insert800and left-hand feed ramp insert810are disposed on the inside of the base10a. A right-hand feed lip insert900and right-hand feed ramp910are disposed on the inside of a removable housing plate11(not shown), which is substantially a mirror image of the magazine housing base10aillustrated inFIG.12. The removable housing plate11including the right-hand feed lip insert900and the right-hand feed ramp910reversibly mates with the magazine housing base piece10aincluding the left-hand feed lip insert800and the left-hand feed ramp810to define a magazine housing. A magazine follower20is disposed within the magazine housing, and is pressurized by a spring40configured to elevate a cartridge50against left-hand feed lip insert800and right-hand feed lip insert900.
The left-hand feed lip insert800and the right-hand feed lip insert900cooperate to hold the cartridge in place generally midway between the outer wall of the magazine housing base10aand the outer wall of the removable housing plate11. Referring now toFIG.13, the magazine housing base10amay in some embodiments include one or more key seat slotted pockets911,912,913disposed in the left inner side wall to accept and mate with tabs1111,1112,1113respectively of the left-hand feed lip insert800. In such embodiments, the key seat slotted pockets911,912,913are configured to accept tabs1111,1112,1113such that the tabs1111,1112,1113are slidably engaged with the magazine housing base10ato enable smooth and reliable up-and-down movement of the left-hand feed lip insert800within the magazine housing. Similarly, the magazine housing plate11may include one or more key seat slotted pockets921,922,923disposed on the interior of magazine housing plate11to accept tabs1121,1122,1123of the right-hand lip insert900. Key seat slotted pockets921,922,923are configured to slidably mate with tabs1121,1122,1123to enable smooth and reliable up-and-down movement of the right-hand feed-lip insert900within the magazine housing. Referring now toFIGS.14-16, a rear cutaway view of one embodiment of a magazine of the present disclosure shows an assembled magazine box10holding a cartridge50in position between the left-hand feed lip insert800and the right-hand feed lip insert900. Referring specifically toFIG.14, a rear view of an assembled magazine box shows the left-hand feed lip insert800and the right-hand feed lip insert900in their topmost positions of the magazine housing10. The magazine housing base10aand removable plate11are shown assembled for use as magazine housing10. Magazine follower20raises the cartridge into position between left-hand feed lip insert800and right-hand feed lip insert900due to upward pressure from follower spring40. The key seat slotted pocket911of the magazine housing base10aand the key seat slotted pocket921of the removable plate11both have empty clearances at bottom locations with the left-hand feed lip insert800and the right-hand feed lip900both in their topmost positions. Cartridge50is shown at the magazine top opening, resting on inside corners815and915of left-hand feed lip insert800and right-hand feed lip insert900, respectively. FIG.15shows a cartridge50pressed downward (arrow) against inside corners815and915of left-hand feed lip insert800and right-hand feed lip insert900, respectively. Magazine follower20and spring40are also shown moving away from cartridge50in a downward direction (arrow). When the cartridge50is pressed against inside corners815and915of left-hand feed lip insert800and right-hand feed lip insert900respectively, any empty clearance remaining for key seat slotted pocket911is filled by the tab1111until the left-hand feed lip insert800reaches location D. Simultaneously, the key seat slotted pocket921is filled by the tab1121until the right-hand feed lip900reaches location C. Upon discontinuation of travel of the left-hand feed lip insert800and the right-hand feed lip insert900, continued downward pressure of cartridge50will cause left-hand feed lip insert800and right-hand feed lip insert900, both of flexible construction, to widen into positions B and A as shown inFIG.15.
When the left-hand feed lip insert800flexes to position B and the right-hand feed lip insert900flexes to position A, the cartridge50may pass by inside corners815and915and into the interior of the magazine housing. As shown inFIG.16, a cartridge50has already been forced beyond the inside corner815of the left-hand feed lip and the inside corner915of the right-hand feed lip insert900. The cartridge50is retained in this configuration until the bolt of a firearm forces the cartridge50from the magazine into a chamber, or until another cartridge50is forced into the magazine on top of the first cartridge50. In some embodiments, both the left-hand feed lip800and the right-hand feed lip900are sufficiently flexible to enable their respective side walls to relax from their flexed positions A/B (FIG.15) to prevent cartridge50from exiting the top of the magazine opening. The flexible characteristics of the left-hand feed lip800and the right-hand feed lip900also enable the cartridge50to be gently but firmly secured approximately centrally between the magazine housing base10aand the removable housing plate11(e.g., laterally central). Decompression of spring40will elevate the magazine follower20, pressing the cartridge50into the inner surfaces of the left-hand feed lip insert800and the right-hand feed lip insert900to allow both to be elevated into their top-most position. The process of loading cartridges as described above may be repeated until the magazine housing10reaches full capacity. In one embodiment, loading cartridges50from top to bottom is improved by the elasticity of both the left-hand feed lip insert800and the right-hand feed lip insert900molding to the cartridge50, holding the cartridge50in the desired location. Another improvement provided by the elastic material of the left-hand feed lip insert800and right-hand feed lip insert900is the prevention of scoring to any surfaces of cartridges50during use. In some embodiments, the left-hand feed lip insert800and the right-hand feed lip insert900cooperate to position the top-most cartridge50substantially centrally between the left-hand feed lip insert800and the right-hand feed lip insert900, reducing the rate of cartridge misfeeds into the chamber of an associated lower receiver/firearm. Referring now toFIGS.17A-17B, the left-hand feed ramp insert810may in some embodiments include a ramping surface810-C. A right-hand feed ramp910(e.g., consistent with that shown inFIG.11) may be substantially a mirror image of the left-hand feed ramp insert810shown inFIGS.17A-17B, and may include a ramping surface910-C (not shown). The ramping surfaces810-C/910-C, when present, improve alignment of the cartridge50while the bolt forces the cartridge50into a chamber. Referring now specifically toFIGS.18-22, in some embodiments a magazine housing assembly10including a base10a, a left-hand feed lip insert800, left-hand feed ramp insert810, a right-hand feed lip insert900(not shown for clarity), a right-hand feed ramp insert910(not shown for clarity), a magazine follower20, and a spring40, is assembled to form a magazine box, which is then mounted onto a receiver400. Receiver400is shown in cutaway view for clarity, and includes a bolt600configured to move towards the barrel500to force cartridges50from the magazine box into the firing chamber501of barrel500. In one embodiment, the receiver400also includes an integral feed ramp401to further assist in guiding the cartridge50into position in the firing chamber501.
Referring now toFIG.19, the bolt600is in a retracted position (away from the barrel500). When the bolt600is in this retracted position, cartridge50is forced upward into position between the left-hand feed lip insert800and the right-hand feed lip insert900. After the cartridge50is elevated to this position, the bolt600may be moved towards the chamber501to slidably engage the cartridge50. As the bolt600continues to advance towards the barrel500, the shoulder50-A of cartridge50contacts the ramping surface810-C of left-hand feed ramp insert810and the ramping surface910-C of the right-hand feed ramp insert910(not shown for clarity). Continued forward movement of the bolt600continues to force the cartridge50along the ramping surface810-C of the left-hand feed ramp insert810and along the ramping surface910-C of the right-hand feed ramp insert910(not shown), forcing the bullet60portion of the cartridge50to move past the feeding ramp401without contacting the feeding ramp401. As bolt600continues to advance, the cartridge50continues to slide towards the barrel500, releasing cartridge50from the left-hand feed lip insert800and the ramping surface810-C, and from the right-hand feed lip insert900and the ramping surface910-C (not shown). Upon release of the cartridge50into the chamber501, the bolt600retracts and the next cartridge50available in the magazine10will be forced upward to the left-hand feed lip insert800and the right-hand feed lip insert900(not shown) and into position for loading into the chamber501. Upon retraction of the bolt600, the spent cartridge50is ejected from the receiver400and the next cartridge50available in the magazine10can repeat the cycle described above, with each cartridge50contacting the left-hand feed ramp insert810and the right-hand feed ramp insert910as each cartridge is forced from the magazine10into the chamber501, until all cartridges50in the magazine10have been expended. In some embodiments, no bullet portion60of cartridges50within a magazine10contacts inner surfaces of the firearm before the bullet portions60are expelled through the barrel of the firearm (at which point the bullet portions60may each contact inner surfaces of the barrel). Magazines10consistent with the present disclosure operate to prevent the bullet portions60of cartridges50from disruption, damage, or scoring during the process of transferring the cartridge50and its associated bullet portion60from the magazine10to the chamber501. Further, the left-hand feed ramp insert810and the right-hand feed ramp insert910comprise, consist, or consist essentially of a flexible material, such as rubber, plastic, foam, polymer, or a combination thereof, which substantially reduces or prevents scoring of the cartridge case during the loading process. Referring specifically toFIGS.23-29, another embodiment of a magazine10consistent with the present disclosure includes a magazine housing base10a, a follower20, and a spring40. Also shown inFIG.23is a bottle-neck style cartridge50consistent with the type of ammunition round used by at least one embodiment of the present disclosure. The magazine10may have a double-row or double-stack design as shown inFIGS.23-29. In another embodiment, the magazine housing10may be configured to hold only a single row of cartridges50. FIG.24illustrates a magazine housing base10aincluding a left-hand feed ramp surface810integrally formed as a protrusion on the inner surface of magazine housing base10a.
Similarly, right-hand feed ramp surface910is substantially a mirror image of the left-hand feed ramp surface810, and is also integrally formed on the inner surface of the opposite half of the magazine housing base10a(not shown for clarity), which may be substantially a mirror image of the illustrated half of the magazine housing base10a. As shown inFIG.25, cartridge50is loaded into magazine10by compressing the magazine follower20and the magazine spring40downward. When the follower20and the spring40are compressed, the cartridge50is held against the left-hand feed lip800and the right-hand feed lip900(not shown). Cartridge50may have a conventional bottle-neck design including a shoulder50-A that contacts the feed ramp surfaces810/910. An additional cartridge50loaded into the magazine10will depress the initial cartridge50into the interior of magazine10along with the follower20by further compressing the spring40. As shown inFIG.26, the bolt600of a firearm initially contacts the rear surface of the cartridge50when advancing towards the chamber501.FIG.27shows a magazine housing10aligning a cartridge50with a barrel extension700that includes a left-hand loading slot701and a right-hand loading slot702. Loading slots701and702are configured to help guide a cartridge from a conventional double-stack ammunition magazine into the chamber501. FIG.28is an isometric view ofFIG.27, and illustrates the cartridge50being advanced by the bolt600until the cartridge shoulder50-A contacts the feed ramp surfaces810/910of the magazine10. The feed ramp surfaces810/910guide cartridge shoulder50-A away from the outer walls of the magazine10and towards the center of the chamber501such that the bullet60does not contact the loading slots701and702of the barrel extension700, or any other interior surface of the barrel extension700. Referring specifically toFIGS.30-32, a magazine10is shown mounted to a receiver400with its bolt600pushing the cartridge50along the surface of feed ramps810/910to guide the cartridge50and its bullet60through the barrel extension700and into the chamber501of the barrel500. The bullet60of the cartridge50does not contact interior surfaces of the barrel extension700as it is forced into the firing chamber501of barrel500. Upon retraction of the bolt600, the spent cartridge50is ejected from the receiver400and the next cartridge50available in the magazine10can repeat the cycle described above, with each successive cartridge50alternately contacting the left-hand feed ramp surface810and the right-hand feed ramp surface910, until all of the cartridges50in the magazine10have been expended. In such embodiments, all cartridges50in a double-stack magazine10consistent with the present disclosure are expelled from the magazine10without disruption or scoring of the bullets60during the loading process.
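The alternating ramp contact described above follows from the double-stack geometry: successive cartridges sit alternately in the left and right columns, so each contacts the ramp surface on its own side. A minimal sketch of that feeding order, assuming a simple even/odd side assignment that is not spelled out in the disclosure:

```python
# Minimal sketch of double-stack feeding order: successive cartridges sit
# alternately in the left and right columns, so each one contacts the feed
# ramp surface on its own side (810 on the left, 910 on the right).
def feed_order(num_cartridges: int):
    for i in range(num_cartridges):
        side = "left ramp 810" if i % 2 == 0 else "right ramp 910"
        yield (i + 1, side)

for round_number, ramp in feed_order(6):
    print(f"cartridge {round_number}: guided by {ramp}")
```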
In some embodiments, the present disclosure provides an ammunition magazine configured to bottom-feed a plurality of ammunition cartridges into an upper receiver of a firearm, the ammunition magazine comprising: a magazine housing including a catch and a floor plate receiver; a spring within the magazine housing; a floor plate retainer disposed at a bottom end of the spring; a floor plate disposed at a bottom end of the magazine housing and reversibly mated with the floor plate retainer; and a magazine follower disposed at a top end of the spring, the magazine follower including: a cartridge-shaped protrusion on a top surface, a ramp on the top surface and disposed adjacent to the cartridge-shaped protrusion, a front leg disposed on a distal end of the magazine follower, and a rear leg disposed generally opposite the front leg. In some embodiments, the ammunition round is a 0.223 inch round. In some embodiments, the ammunition round is a 5.56 mm round. In some embodiments, the magazine follower further comprises a tail fin configured to slide vertically within a tail recess of the magazine housing. In some embodiments, the ramp is configured to enable a cartridge to be fed from the ammunition magazine to a chamber of an upper receiver of a firearm without enabling a bullet portion of the cartridge to contact a feed ramp of the upper receiver. In some embodiments, the present disclosure provides a magazine follower comprising: a cartridge-shaped protrusion on a top surface, a ramp on the top surface and disposed adjacent to the cartridge-shaped protrusion, a front leg disposed on a distal end of the magazine follower, and a rear leg disposed generally opposite the front leg. In some embodiments, the magazine follower further comprises a tail fin disposed adjacent the rear leg and configured to slidably mate with a tail recess of a magazine housing. In some embodiments, the ramp is configured to enable a cartridge to be fed from the ammunition magazine to a chamber of an upper receiver of a firearm without enabling a bullet portion of the cartridge to contact a feed ramp of the upper receiver. In some embodiments, the present disclosure provides an ammunition magazine comprising a magazine follower configured to orient all ammunition cartridges housed therewithin at an angle of about 0.2° to about 5° relative to a longitudinal axis of an associated chamber. In some embodiments, the angle is about 1.5°. In some embodiments, the magazine follower does not contact a bullet portion of the ammunition cartridges. In some embodiments, a bullet portion of the ammunition cartridges does not contact a feed ramp proximate to the associated chamber. In some embodiments, the follower includes a ramp on its top surface, wherein the ramp is configured to orient a first ammunition cartridge housed within the ammunition magazine at an angle of about 2° to about 20° relative to the top surface. In some embodiments, the present disclosure provides a magazine follower configured to orient a first ammunition cartridge housed within an associated ammunition magazine at an angle of about 0.2° to about 5° relative to a top surface of the magazine follower. In some embodiments, the magazine follower comprises a ramp on its top surface, wherein the ramp is configured to orient the first ammunition cartridge housed within the associated ammunition magazine at an angle of about 0.2° to about 5° relative to the top surface.
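The orientation angles recited above relate to follower geometry through simple trigonometry: a cartridge supported over a length L and lifted by a ramp rise h tilts by roughly atan(h/L). The sketch below works this backward for the angles named in the disclosure; the support length is an assumed illustrative value, not a dimension from the patent.

```python
# Relate the follower ramp geometry to the recited cartridge tilt angles:
# a cartridge resting over a contact length L and raised by a rise h at one
# end tilts by approximately theta = atan(h / L).
import math

cartridge_support_length = 0.045  # m, assumed contact length on the follower

for theta_deg in (0.2, 1.5, 5.0):  # angles named in the disclosure
    rise = cartridge_support_length * math.tan(math.radians(theta_deg))
    print(f"{theta_deg:>4}° tilt -> ramp rise of about {rise * 1000:.2f} mm")
```

Under this assumed support length, the recited 0.2° to 5° range corresponds to ramp rises of well under a fraction of a millimeter up to a few millimeters.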
In some embodiments, the follower is configured to not contact a bullet portion of the ammunition cartridge. In some embodiments, the ramp includes a concave top surface and/or a groove oriented along its longitudinal length. In some embodiments, the ammunition cartridge is a shouldered cartridge. In some embodiments, the ammunition cartridge is a 0.223 cartridge, a 5.7×30 cartridge, or a 5.56 cartridge. In some embodiments, the present disclosure provides a single stack magazine comprising a magazine housing base including a left-hand feed ramp insert, a left-hand feed lip insert including a plurality of tabs, and a plurality of key seat slotted pockets configured to slidably mate with the tabs of the left-hand feed lip insert; a removable plate including a right-hand feed ramp insert, a right-hand feed lip insert including a plurality of tabs, and a plurality of key seat slotted pockets to slidably mate with the tabs of the right-hand feed lip insert; a magazine spring disposed within a magazine housing defined by the magazine housing base and the removable plate; and a magazine follower disposed at a top end of the magazine spring. In some embodiments, the base includes key seat slotted pockets to accept corresponding tabs of a left-hand feed lip insert, allowing up-and-down movement of the left-hand feed lip insert. In some embodiments, the left-hand feed lip comprises an elastic material. In some embodiments, the key seat slotted pockets are configured to cooperate with the tabs of the right-hand feed lip insert to enable up-and-down movement of the right-hand feed lip insert within the magazine housing. In some embodiments, the right-hand feed lip insert comprises an elastic material. In some embodiments, the magazine is configured to enable top-loading of an ammunition cartridge into the magazine housing. In some embodiments, the magazine follower comprises a top surface including a cartridge-shaped protrusion and a ramp disposed adjacent to the cartridge-shaped protrusion. In some embodiments, the cartridge has a common bottle-neck design including a shoulder. In some embodiments, the left-hand feed ramp insert and the right-hand feed ramp insert each contact a shoulder of an ammunition cartridge to guide the cartridge into a chamber without enabling a bullet portion of the cartridge to contact an inner surface of the chamber. In some embodiments, the present disclosure provides a double stack ammunition magazine comprising a magazine housing base having an integrated left-hand feed ramp; a removable magazine box plate having an integrated right-hand ramp insert; a magazine spring disposed within a magazine housing defined by the magazine housing base and the removable magazine box plate; and a magazine follower disposed at a top end of the magazine spring. In some embodiments, the right-hand ramp inserts and the left-hand ramp inserts are each configured to guide a cartridge including a bullet portion from the magazine housing into a barrel extension without the bullet portion contacting an interior surface of the barrel extension. In some embodiments, the right-hand feed ramp insert and the left-hand feed ramp insert each contact a shoulder of the cartridge to guide the cartridge into a chamber without enabling a bullet portion of the cartridge to contact an inner surface of the chamber. In some embodiments, the barrel extension includes a right-hand loading slot and a left-hand loading slot configured to guide a cartridge from the magazine into a chamber.
In some embodiments, the left-hand feed ramp insert is configured to contact a shoulder of a cartridge, but not a bullet portion of the cartridge, upon the cartridge being advanced towards the barrel extension by a bolt. In some embodiments, the right-hand feed ramp insert is configured to contact a shoulder of a cartridge, but not a bullet portion of the cartridge, upon the cartridge being advanced towards the barrel extension by a bolt. In some embodiments, the cartridge has a common bottle-neck design including a shoulder. While the present disclosure has been shown and described herein by way of illustrating its results and advantages over the prior art, the disclosure is not limited to those specific embodiments. Thus, the forms of the disclosure shown and described herein are to be taken as illustrative only, and other embodiments may be selected by one having ordinary skill in the art without departing from the scope of the present invention. | 34,912 |
11859937 | DETAILED DESCRIPTION Magazines for use with firearms are known in the art. The conventional magazine stores one or more ammunition cartridges in a magazine cavity. The typical magazine also has a spring which biases the cartridges towards a chamber of the firearm to which the magazine is attached. When the topmost cartridge is ejected from the firearm, e.g., towards a target, the magazine spring pushes the next cartridge into the firearm chamber. In the prior art, magazines are configured to be reusable since their design and manufacture is complex and costly. Prior art magazines are therefore manufactured using high-grade materials, such as aluminum, heavy-duty plastic, et cetera, to allow the magazines to be reused a large number of times. A shooter, e.g., an officer, a soldier, or another user, first loads the prior art magazine with cartridges. Once the magazine is loaded, the shooter inserts the magazine into the firearm and shoots the firearm. When the magazine runs out of cartridges, the shooter disassociates the magazine from the firearm and loads a new magazine into the firearm. The expended magazine is then typically kept on the shooter's person so that the expended magazine may be reloaded for further use. In other words, the magazine is not intended to be disposed of after only one use. The repeated loading and reloading of cartridges into the prior art magazine may be a time-consuming and cumbersome process. Further, the shooter may not wish to carry around empty magazines after the cartridges therein have been depleted. The weight of the prior art magazine, in part due to the fact that it is made to be durable and is typically constructed of heavy-duty materials, is not insignificant, and may cause discomfort to a shooter carrying a plurality of such magazines. On top of this, the prior art magazines take up a large amount of space (i.e., volume) to store, thus limiting the number of magazines a shooter may carry, whether they are loaded or expended. Another disadvantage associated with the conventional magazines is that valuable time must be taken to store the expended magazine when reloading the firearm with a fresh magazine. In some situations, this is time that the shooter cannot afford to waste. The shooter may desire to have preloaded magazines that the shooter can use as desired and simply discard thereafter. But the cost of the materials involved in making a magazine, along with the complexity of the prior art magazine design and the manufacturing costs of the various parts of the magazine, have heretofore been prohibitive such that a single-use magazine has been commercially unviable. FIG.1shows a prior art magazine10as is known in the art.FIG.2shows the internals of the prior art magazine10. The magazine10has a housing12forming a cavity14for the reception of cartridges (i.e., ammunition). The magazine10typically includes a follower16, a spring18, a retainer plate20, and a base plate22. The housing shell12typically includes at least two parts that are fusion welded together to form the completed housing12. As is known, the follower16is in contact with an upper end of the spring18. The follower16helps to compress the spring18when rounds (i.e., ammunition, cartridges, etc.) are inserted into the magazine10. The follower16pushes the rounds up as the topmost round is removed (e.g., by being fired from a firearm). The retainer plate20is in contact with the lower end of the spring18, opposite the follower16.
The base plate22is removably secured to the magazine housing12and ensures that the spring18does not undesirably decouple from the magazine10. Prior art magazines may further include a pin which assists in the retention of the spring18. In some prior art magazines, a flat piece of plastic with a nub is situated at the bottom of the spring. The base plate has an opening corresponding to the nub. When the base plate is slid on a track at the bottom of the housing, the nub protrudes through the opening in the base plate to lock the base plate in place. The nub can be pushed into the magazine, akin to a button, to unlock the base plate. As noted, the prior art magazine is not configured for one-time use, at least in part because of the high manufacturing and assembling costs of the magazine. In terms of cost, welding one piece of the magazine housing to the other to form the completed housing12is a significant contributor. Substantial costs are also incurred in making a separate mold for the base plate22, as the base plate22is distinct from the housing12and is configured to be removably coupled thereto. Further, the industry is fixated on making long-lasting magazines that can be reused numerous times, and for this reason, prior art magazines are made of high-quality materials that also increase the costs to make these magazines. Because of these costs, the artisan would consider the idea of a single-use magazine commercially infeasible. The present disclosure relates to a magazine that, in stark contrast to the entrenched market philosophy, is configured for single use. The disclosed magazine may not require that the pieces of the magazine housing be welded together. Further, the disclosed single-use magazine may do away with a separate base plate and retainer plate. The materials used to make the magazine may be relatively inexpensive because the magazine, unlike conventional magazines, is not intended to be reused numerous times. Thus, material durability and resistance to wear may become less significant factors when choosing a magazine material. Instead, a cheaper and/or lighter material may be used. FIGS.3-5show a single-use magazine100, according to an embodiment. The magazine100comprises a housing102having a first portion104and a second portion106, a spring register108, and a spring and follower akin to the prior art. The spring and the follower may be joined together as a solitary component. In embodiments, the single-use magazine100does not have a separable base plate or a retainer plate. The housing first portion104and the second portion106may be collectively configured to form a cavity for retaining cartridges that are fed to the firearm with the assistance of the spring and follower. The housing first portion104may comprise a primary or base wall104A, side walls or edges104B and104C (seeFIGS.3-4), and a bottom wall104D. The side walls104B and104C and the bottom wall104D may each extend generally perpendicular to the base wall104A and may face the housing second portion106when the magazine100is in the assembled configuration. The base wall104A, side walls104B and104C, and the bottom wall104D may collectively form a part105A of the cavity105(seeFIG.9) within which the cartridges are housed. In embodiments, a reinforced rail104E may be provided outwardly adjacent one of the side walls (e.g., outwardly adjacent the side wall104C (FIG.4)). Further, a double rail104F may be provided inwardly adjacent the other side wall (e.g., inwardly adjacent the sidewall104B (FIG.4)).
The outwardly adjacent reinforced rail104E and the inwardly adjacent double rail104F may each contribute to the structural integrity of the magazine100. The housing second portion106may likewise comprise a primary or base wall106A, side walls or edges106B and106C, and a bottom wall106D. The side walls106B and106C and the bottom wall106D may each be generally perpendicular to the base wall106A and may extend towards the housing first portion104when the magazine100is in the assembled configuration. The second portion base wall106A, side walls106B and106C, and the bottom wall106D may collectively form a part105B of the cavity105(seeFIG.9) within which the cartridges will be housed. In embodiments, a reinforced rail106E may be provided outwardly adjacent one of the side walls (e.g., outwardly adjacent the side wall106C as shown (FIG.4)). A double rail106F may be provided inwardly adjacent the other side wall (e.g., inwardly adjacent the sidewall106B (FIG.4)). As discussed with respect to the first portion104, the outwardly adjacent reinforced rail106E and the inwardly adjacent double rail106F of the second portion106may each contribute to the structural integrity of the magazine100. To eliminate the need for welding, the housing first portion104and the housing second portion106may be configured to be mechanically joined to form the cavity105for housing of cartridges and the spring and follower. The apparatus for mechanically joining the housing first portion104and the housing second portion106may include fasteners and/or integral joints. For example, nuts and bolts, screws, pins, rivets, seams, et cetera, may be used to mechanically join the first portion104and the second portion106. In an embodiment, the first portion104and the second portion106may have female and corresponding male mating members configured to readily mate with each other to cause the first portion104to mechanically interlock with (e.g., snap to) the second portion106. For example, in the embodiment illustrated inFIGS.3-5, the second portion106may have a plurality of protruding members112and the first portion104may have a plurality of corresponding openings110. Each protruding member112on the housing second portion106may have a mushroom-shaped head112H (seeFIGS.6-7) and a corresponding mushroom opening110may be configured on the housing first portion104to lockingly accept the mushroom-shaped head112H to collectively cause the housing first portion104and the housing second portion106to couple together (e.g., the housing first portion104and the housing second portion106may click together). In embodiments, the configuration of the female members110and the corresponding male members112may be such that once the housing first portion104and the second portion106are snapped together, they remain joined to each other during normal use. That is, the coupling between the first portion104and the second portion106may be configured to be permanent. For example, the mushroom-shaped protruding members112may be designed to be strong enough to lockingly engage with the female members110for ordinary use, but may be designed to break apart if a user forcibly attempts to separate the first portion104from the second portion106after the two portions104,106have been snapped together.
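The "strong enough for ordinary use, but breaks apart when forced" behavior can be bounded with a crude shear-failure estimate: each mushroom head112H lets go roughly when the shear stress across its stem reaches the material's shear strength. The sketch below makes such an estimate under stated assumptions; the stem diameter, strength, and fastener count are illustrative values, not figures from the disclosure.

```python
# Crude estimate of the force needed to shear the mushroom-shaped heads 112H
# off their stems during forcible separation. All values are illustrative
# assumptions; the patent gives no dimensions or material strengths.
import math

shear_strength = 30e6   # Pa, typical order for an inexpensive polymer (assumed)
stem_diameter = 0.0025  # m, stem diameter below the head (assumed)
num_fasteners = 10      # number of engaged heads (assumed)

# Each head fails roughly when its stem cross-section shears through.
area_per_stem = math.pi * (stem_diameter / 2) ** 2
separation_force = shear_strength * area_per_stem * num_fasteners

print(f"Approximate force to break all heads: {separation_force:.0f} N")
```

Under these assumptions the joint resists on the order of a thousand newtons, which is consistent with a coupling that survives handling yet fails if deliberately pried apart.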
For ensuring a secure lock, the protruding members112may be provided on or proximate each of the side walls106B and106C and the bottom wall106D of the housing second portion106, and the corresponding openings may likewise be provided on or proximate each of the side walls104B and104C and the bottom wall104D of the housing first portion104. By virtue of the number, arrangement, and configuration of the protruding members112and the corresponding openings110, when the first portion104and the second portion106are locked together, the side wall104B, side wall104C, and bottom wall104D of the first portion104may respectively be adjacent and in contact with the side wall106B, side wall106C, and bottom wall106D of the second portion106. In an embodiment, at least some of the protruding members112may be provided on the reinforced rail106E of the second portion106, and the mushroom openings110corresponding thereto may be provided on the reinforced rail104E of the first portion104. Having at least a portion of the protruding members112and the corresponding openings110on the reinforced rails106E and104E, respectively, may further fortify the locking mechanism. The artisan will understand from the disclosure herein that the locking system comprising the female members110and the male members112depicted in the figures is exemplary and is not intended to be independently limiting. For example, whileFIGS.3-5show that a certain number of male members112are arranged in a vertical line along each of the side walls106B,106C of the housing second portion106, and that a particular number of male members112are arranged laterally along the bottom wall106D of the housing second portion106, this arrangement of the male members112, together with the corresponding arrangement of the female members110, is exemplary. For instance, a different number of male members112may be provided on or proximate the side walls106B,106C and/or bottom wall106D, and the protruding members112may be arranged in locations other than those shown. Further, the male and the female members112,110may be arranged such that each housing portion104,106may include at least one of each. Other variations of the disclosed locking mechanism, and other locking mechanisms that do not employ welding, will become apparent to the artisan from the disclosure herein. The housing102, including the first portion104and/or the second portion106thereof, may be made of light-weight and/or durable material, such as bamboo/poly mix, Kevlar, polypropylene/polyethylene mix, various plastics/polymers, recycled material, et cetera. However, since the magazine100is intended to be a single-use magazine, unlike in the prior art, the raw materials may not be selected with an eye towards ensuring that the magazine is capable of being reused a number of times. The artisan would understand that the specific shape, size, and configuration of the magazine100shown in the figures is one exemplary embodiment of many, and that embodiments of the single-use magazine described herein are not limited to what is depicted in the figures. The artisan would also understand that embodiments of the single-use magazine may be any suitable shape, size, and/or configuration of ammunition-holding magazine (e.g., an assault rifle magazine, a carbine magazine, a pistol magazine, a shotgun magazine, etc.) now known or subsequently developed. For example, embodiments of the single-use magazine may comply with NATO's STANAG standards.
As another example, embodiments of the single-use magazine may have any desirable capacity (e.g., the single-use magazine may have a ten-round capacity, a twenty-round capacity, a thirty-round capacity, et cetera). As yet another example, embodiments of the single-use magazine may retain ammunition in a single column configuration, a double stack configuration, a casket configuration, a drum configuration, a saddle-drum configuration, a horizontal configuration, a rotary configuration, a pan/disc configuration, a helical configuration, et cetera. As still another example, embodiments of the single-use magazine may be configured to retain any suitable caliber of ammunition, such as 5.56×45 mm rounds, 5.45×39 mm rounds, 7.62×51 mm rounds, .22 LR rounds, 9×19 mm rounds, .45 ACP rounds, 12-gauge shotgun rounds, et cetera. The spring register108(seeFIG.3) may, as shown inFIG.4, comprise a first portion108A and a second portion108B. The spring register first portion108A may be part of the housing first portion104, and specifically, be situated on the bottom wall104D thereof. Similarly, the spring register second portion108B may be part of the housing second portion106and be situated on the bottom wall106D thereof. The first spring register portion108A may be complementary to the second spring register portion108B (e.g., the first portion108A may be generally identical to the second portion108B and be a mirror opposite thereof). Locking engagement of the housing first portion104to the housing second portion106may cause the register portions108A,108B to come adjacent and in contact with each other to complete the spring register108. The spring register108may be configured, for example, to securely hold the spring18and follower16in place, and eliminate the need for a pin as is used in certain prior art magazines. In embodiments, the spring register108may be specifically configured to mate with and retain the spring18, such as through the use of grooves, recesses, mechanical locks, apertures, et cetera. The double rail106F may ensure improved registration of the flexible spring within the housing102. The artisan will understand that the spring may be in contact with the spring register108and that the spring and the follower may be sandwiched between the housing first and second portions104and106. In embodiments, the primary wall104A, side walls104B and104C, bottom wall104D, rails104E and104F, and the spring register portion108A of the housing first portion104, may all be part of the mold of the housing first portion104. Similarly, the primary wall106A, side walls106B and106C, bottom wall106D, rails106E and106F, and the spring register portion108B of the housing second portion106, may all be part of the mold of the housing second portion106. Such may eliminate the need to have separate molds for the various components as in the prior art (e.g., a separate mold for the base plate). This, together with the fact that the magazine portions need not be welded together and may be made of low-priced materials, may allow the magazine100to be manufactured quite inexpensively relative to the prior art magazine10. To form the magazine, the housing first portion104and the housing second portion106may be placed together in a jig and compressed with a spring and follower therebetween, which may cause the male and female locking members110,112to interlock such that the spring comes in registry with the completed spring register108.
Because the magazine100is not configured to be opened, it may be loaded with ammunition200(FIG.9) before the first portion104and the second portion106are lockingly engaged. The loaded magazine100(seeFIG.9) may be sold to the consumer in a package, and appropriate indicia (e.g., branding, instructions for use, legal information, et cetera) may be placed on the magazine100and/or the packaging thereof. Because of the advancements discussed herein, the magazine100may be manufactured inexpensively and be configured for single use. The shooter may purchase the preloaded magazine100, fire the cartridges200therein, and then simply discard the magazine100, thereby obviating the need to carry around empty magazines and the hassle associated with loading and reloading same. In embodiments, the magazine100or portions thereof may be configured to be recycled. In some embodiments, the magazine100may be provided with a cap300that may be placed over the top of the magazine100. In operation, the cap300may serve to retain the ammunition200stored within the magazine100and/or protect it from the elements (e.g., moisture, heat, et cetera). The cap300may be removed prior to loading the magazine100into the firearm. The cap300may be made of the same or similar material as the magazine100, and may likewise be disposable after use. Many different arrangements of the various components depicted, as well as components not shown, are possible without departing from the spirit and scope of the present disclosure. Embodiments of the present disclosure have been described with the intent to be illustrative rather than restrictive. Alternative embodiments that do not depart from the scope of the disclosure will become apparent to those skilled in the art. A skilled artisan may develop alternative means of implementing the aforementioned improvements without departing from the scope of the present disclosure. It will be understood that certain features and subcombinations are of utility and may be employed without reference to other features and subcombinations and are contemplated within the scope of the claims. | 19,564 |
11859938 | DETAILED DESCRIPTION Referring toFIGS.1and2, an embodiment of a multi-channel magazine for a toy gun according to the disclosure is adapted to receive numerous bullets1and includes a magazine body2, a dividing unit3and a bullet pressing unit4. With reference toFIGS.1,3,4and5, the magazine body2is elongated in a lengthwise direction (Z), and includes a magazine housing21, an expansion housing22and a magazine cap23. The magazine housing21includes two magazine body halves212which are coupled with each other to define a bullet outlet211therebetween, and an outer sleeve213which surrounds and is sleeved around the magazine body halves212. In this embodiment, the bullet outlet211is formed at an end face to discharge the bullets1therefrom. The outer sleeve213has two protrusions214which are respectively formed at two sides thereof in a front-rear direction (Y), and two grooves215which are formed at an end thereof and opposite to the protrusions214in a width direction (X) and further opposite to the bullet outlet211in the lengthwise direction (Z). In this embodiment, the lengthwise direction (Z), the width direction (X) and the front-rear direction (Y) are perpendicular to one another. Each of the expansion housing22and the magazine cap23is selectively and removably connected with and mounted on the magazine housing21in the lengthwise direction (Z) (as shown inFIGS.2and9). The expansion housing22has a first sleeve222of a U-shape which defines a first mounting opening221for sleeving on the magazine housing21from the first mounting opening221, two first hooks223which are formed on an inner surface of the first sleeve222to be respectively engageable with the grooves215, two first slots224which are formed in the inner surface of the first sleeve222to be respectively engageable with the protrusions214, and four expansion channels225which are opened toward one side thereof. The magazine cap23has a second sleeve232of a U-shape which defines a second mounting opening231for sleeving on the magazine housing21from the second mounting opening231, two second hooks233which are formed on an inner surface of the second sleeve232to be respectively engageable with the grooves215, two second slots234which are formed in the inner surface of the second sleeve232to be respectively engageable with the protrusions214, and a wall235which extends transverse to the second sleeve232. With reference toFIGS.1,3and4, the dividing unit3is formed within the magazine housing21and cooperates with the magazine housing21to define within the magazine housing21a converging channel31which is in spacial communication with the bullet outlet211, a plurality of storage channels32(the number of the storage channels32is S) which extend in the lengthwise direction (Z), a plurality of first narrow channels33(the number of the first narrow channels33is M) which are in spacial communication with the storage channels32, and a plurality of second narrow channels34(the number of the second narrow channels34is N) which are formed downstream of the first narrow channels33and upstream of the converging channel31. The storage channels32are configured to arrange the bullets1filled and held therein in the width direction (X) such that a plurality of rows of the bullets1(the number of rows of the bullets1is R) are arranged in the storage channels32. Each of M, N and R is even, and R>S>N, S=M. In this embodiment, for example, R=8, and S=M=4. 
That is, within the magazine housing21, by the dividing unit3, four storage channels32, four first narrow channels33and two second narrow channels34are formed, and two rows of the bullets1are arranged in each storage channel32such that eight rows of the bullets1can be filled in the storage channels32. Moreover, each of the first narrow channels33, the second narrow channels34and the converging channel31has a width configured to allow only one of the bullets1to pass through. With reference toFIGS.1,3,4and5, in this embodiment, the dividing unit3includes a dividing plate36which is disposed within and divides the magazine housing21into two half compartments35, and two dividing ribs37which extend transversely from the dividing plate36and each of which divides a respective one of the half compartments35into two of the storage channels32. The dividing unit3further includes four guiding rib assemblies38. Each guiding rib assembly38extends from at least one of the magazine housing21and the dividing plate36in the front-rear direction (Y), and has a plurality of rib portions arranged in the lengthwise direction (Z). Each pair of the guiding rib assemblies38is disposed within the corresponding half compartment35and is spaced apart from the corresponding dividing rib37in the width direction (X) to cooperate with the corresponding dividing rib37to define within the corresponding half compartment35two of the first narrow channels33and one of the second narrow channels34. Specifically, the two guiding rib assemblies38of each pair extend and converge toward each other to define within the corresponding half compartment35the second narrow channel34. The bullet pressing unit4is mounted on the magazine body2, and includes four pressing elements41and four biasing elements42. Each pressing element41is adapted to press the bullets1. Each biasing element42is disposed between the magazine body2and the respective pressing element41to urge the corresponding pressing element41to move toward the bullet outlet211. With reference toFIGS.1and3, when the magazine cap23is connected with and mounted on the magazine housing21, each pressing element41is movably disposed in the respective storage channel32, and each biasing element42is disposed between and abuts against the respective pressing element41and the wall235of the magazine cap23. Thus, the pressing elements41press the bullets1from the storage channels32through the first and second narrow channels33,34and discharge the bullets1from the converging channel31. During this, as shown inFIGS.4to8, the eight rows of the bullets1arranged in the storage channels32enter the first narrow channels33and are rearranged into four rows. Then, the four rows of the bullets1in the first narrow channels33enter the second narrow channels34and are rearranged into two rows. Subsequently, the two rows of the bullets1in the second narrow channels34enter the converging channel31and are converged into one row. Finally, as shown inFIG.3, the bullets1are discharged one by one from the bullet outlet211. Referring toFIGS.1and3, it is noted that, when the magazine cap23is mounted on the magazine housing21, the magazine cap23is connected with the magazine housing21in the front-rear direction (Y) from the second mounting opening231while the second hooks233are engaged with the grooves215and the second slots234are engaged with the protrusions214for the magazine cap23to be securely sleeved on the magazine housing21.
Likewise, the magazine cap23is forced in a direction away from the magazine housing21to disengage the second hooks233and second slots234from the grooves215and the protrusions214for the magazine cap23to be removed from the magazine housing21. With reference toFIGS.1,9and10, when the expansion housing22is connected with and mounted on the magazine housing21, each expansion channel225is in spacial communication with the respective storage channel32, each pressing element41is movably disposed in the respective expansion channel225and the respective storage channel32, and each biasing element42is disposed between the respective pressing element41and the expansion housing22. Thus, the pressing elements41press the bullets1by virtue of the biasing action of the biasing elements42from the expansion channels225and through the storage channels32, the first narrow channels33and the second narrow channels34, and the bullets1then enter the converging channel31to be discharged from the bullet outlet211. It is noted that, during assembling of the expansion housing22, the pressing elements41and the biasing elements42are respectively moved down into the expansion channels225from the storage channels32when the expansion housing22is mounted on and connected with the magazine housing21. Alternatively, the pressing elements41and the biasing elements42may be removed from the magazine housing21when the magazine cap23is detached from the magazine housing21, and are moved into the expansion channels225of the expansion housing22. Moreover, when the expansion housing22is mounted on the magazine housing21, the expansion housing22is connected with the magazine housing21in the front-rear direction (Y) from the first mounting opening221while the first hooks223are engaged with the grooves215and the first slots224are engaged with the protrusions214for the expansion housing22to be securely sleeved on the magazine housing21. Likewise, the expansion housing22is forced in a direction away from the magazine housing21to disengage the first hooks223and the first slots224from the grooves215and the protrusions214for the expansion housing22to be removed from the magazine housing21. The number of the storage channels32and the expansion channels225is not limited to four. In other embodiments, it may be two, three, or more than four. As illustrated, with each storage channel32receiving and holding two rows of the bullets1, the capacity of the magazine is increased. With the pressing element41and the biasing element42disposed in the respective storage channel32, the bullets1can be pressed toward the bullet outlet211without the need to frequently refill the magazine, reducing the number of bullet-filling operations. Moreover, with the first narrow channels33and the second narrow channels34, eight rows of the bullets1can be rearranged and converged to four rows in the first narrow channels33, and then to two rows in the second narrow channels34. Hence, the bullets1can be moved and converged gradually to a one-by-one arrangement in one row in the converging channel31, which avoids jostling of the bullets1against one another and allows the bullets1to be pressed and discharged smoothly. Furthermore, with the expansion housing22and the movable bullet pressing unit4, the magazine housing21is selectively connected with one of the magazine cap23and the expansion housing22to meet the capacity requirement of usage.
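The channel counts recited for this embodiment obey a small set of arithmetic constraints (each of M, N, and R even; R > S > N; S = M), and the described convergence halves the number of rows at each stage (8 to 4 to 2 to 1). A brief consistency check follows; the halving rule is an illustrative reading of the described convergence rather than language from the patent.

```python
# Consistency check of the channel counts described for the embodiment:
# R rows of bullets in S storage channels feed M first narrow channels,
# then N second narrow channels, then a single converging channel.
def check_channel_counts(R: int, S: int, M: int, N: int) -> bool:
    all_even = all(v % 2 == 0 for v in (M, N, R))
    return all_even and R > S > N and S == M

# Embodiment from the disclosure: R=8, S=M=4, N=2.
assert check_channel_counts(R=8, S=4, M=4, N=2)

# Illustrative halving convergence: 8 rows -> 4 -> 2 -> 1 discharged row.
rows = 8
stages = ["storage channels", "first narrow channels",
          "second narrow channels", "converging channel"]
for stage in stages:
    print(f"{rows} row(s) in the {stage}")
    rows = max(1, rows // 2)
```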
While the disclosure has been described in connection with what is considered the exemplary embodiment, it is understood that this disclosure is not limited to the disclosed embodiment but is intended to cover various arrangements included within the spirit and scope of the broadest interpretation so as to encompass all such modifications and equivalent arrangements. | 10,704 |
11859939 | DETAILED DESCRIPTION FIG.1illustrates an example spring piston air gun100according to aspects of this disclosure. More specifically,FIG.1is a side exterior view of one implementation of the air gun100, which includes features for self- or automatic-cocking.FIG.1illustrates the air gun100as generally including a barrel102, a stock104, and a trigger106. The air gun100also includes a housing108extending generally between the barrel102and the stock104. The housing108may retain and/or conceal components of the air gun100, as detailed further herein. Without limitation, aspects of this disclosure include components for self- or automatic-cocking of the air gun100, which may be disposed in, attached to, or otherwise associated with the housing108. The barrel102extends generally from a breech end110to a muzzle end112. Although not illustrated inFIG.1, a bore extends through the barrel102, from the breech end110to the muzzle end112. The bore provides a hollow interior space within the barrel102through which compressed air and a projectile, such as a pellet, can pass, as will be described in greater detail below. The barrel102is sufficiently strong to contain high-pressure gases introduced into the barrel102to fire the projectile. In implementations, the bore may be smooth, or the bore may be rifled, e.g., to impart a stabilizing spin on the projectile as it passes through the bore. The trigger106may be any lever, button, or the like, configured for user interaction to fire the air gun100. As detailed further herein, in some instances the trigger106is a part of a trigger assembly that, among other features, prevents unintended firing of the air gun100. For example, and without limitation, the trigger assembly may prevent firing of the air gun100while the air gun100is automatically cocking after firing a projectile. The stock104may be any conventional size or shape. In some instances, the stock104may be removably secured to the housing108, e.g., to promote removal and/or replacement of the stock104. Moreover, and as discussed below, removal of the stock104may facilitate access to an interior of the housing108, e.g., to service working components of the air gun100. Although not illustrated inFIG.1, a portion of the housing108may include rails extending generally longitudinally, and the stock104can be configured with receptacles that engage and slide along the rails. Without limitation, the housing108may be extruded and the rails may be a portion of the extrusion, although in other instances the rails may be separately manufactured and secured to the housing108. In still further examples, the stock104may include one or more rails that cooperate with one or more receptacles on the housing108. The use of rails may reduce the number of fasteners required to secure the stock104to the housing108and/or may provide a more pleasing aesthetic. In the example ofFIG.1, the stock104includes a removable portion114. The removable portion114is removable to expose a hollow compartment or receptacle in which components of the air gun100may be stored. For example, and without limitation, a power source (not shown) for powering components that promote automatic cocking of the air gun100may be retained in the stock104. For example, the removable portion114may be removed to expose a battery compartment. Although the removable portion114is shown as a cheek portion of the stock104, in other examples, the removable portion114can be formed at a butt end of the stock104, e.g., as a portion of a recoil pad116.
The housing108is generally provided to contain components of the air gun100. For instance, and as detailed further below, the housing108may contain, support, and/or conceal aspects that facilitate automatic cocking and/or action of the air gun100. The shape and size of the housing108inFIG.1are for illustration. Other shapes, sizes, and compositions are contemplated. Components of the housing may be made of any conventional materials, including, but not limited to, metal, such as aluminum, or polymers. Additional details of the air gun100will now be discussed with reference to additional figures. FIG.2is a cross-section of the air gun100taken generally in the X-Y plane ofFIG.1. InFIG.2, the stock104and portions of the housing108are removed for clarity. As shown, the housing108of the air gun100includes a chamber wall200defining a chamber202. As detailed herein, the chamber202retains, aligns, and/or otherwise supports a compression tube206, a compression piston208, and a spring210disposed in the chamber202. The compression tube206is of a type generally well known in the art. The compression tube206is disposed to slide in the chamber202, e.g., generally along a longitudinal axis211of the air gun100. The compression tube206generally includes a cylindrical sidewall212extending between an open end214, closer to the stock104along the longitudinal axis211, and a closed end216, closer to the barrel102along the longitudinal axis211. The sidewall212includes an outer surface218separated from an inner surface220by a wall thickness. The outer surface218is disposed proximate the chamber wall200. For example, the outer surface218is disposed to move relative to the chamber wall200, e.g., via a lubricated guiding interface. The inner surface220of the sidewall212, together with an inner face222of the closed end216generally define a compression tube volume224. The compression tube206may be formed from any number of rigid materials, including but not limited to metal, for performance, safety and durability factors. The compression piston208includes a sidewall226extending between an open end228and a closed end230. The compression piston208is configured to slide, generally along the longitudinal axis211, relative to the chamber wall200and the compression tube206. As illustrated inFIG.2, the closed end230of the compression piston208is sized to be received in the compression tube volume224and includes one or more seals232sealing the compression piston208relative to the inner surface220of the compression tube206. Proximate the open end228, the compression piston208is configured to slide relative to the chamber wall200. For example, the compression piston208may be sized proximate the open end228to contact the chamber wall200. Although not illustrated inFIG.2, one or more rings, e.g., guide rings, may be disposed to promote movement and alignment of an outer surface of the compression piston208proximate the open end228, relative to the chamber wall200. As also illustrated inFIG.2, the compression piston208defines an inner piston volume234generally accessible via the open end228. The compression piston208also includes, proximate the open end228, a searing surface236. The piston searing surface236may take the form of an annular groove in the outer surface of the compression piston208. For example, when the searing surface236is formed as an annular groove, the trigger106may engage it at any rotational position of the compression piston208, eliminating the need to control orientation of the compression piston208relative to the trigger106.
As detailed further herein, the piston searing surface236engages with the trigger106(or a member coupled to the trigger106) to retain the compression piston208in a cocked position. An outer surface of the sidewall226of the compression piston208is illustrated as being contoured inFIG.2. The contour can include a plurality of protrusions237, e.g., annular protrusions, formed as tail guides to create multiple points of contact with the compression tube206and/or the chamber wall200. These points of contact maintain the compression piston208in a concentric orientation with the compression tube206to increase efficiency of the compression piston208and reduce noise upon movement of the compression piston208from the cocked to the fired position. In some examples, the protrusions237can be annular protrusions extending around the entire circumference of the compression piston208. In other examples, the protrusions237may provide multiple points of contact about the circumference, e.g., three points of contact. These three points may be the minimum number of contacts required to keep the compression piston208concentric to the compression tube206. The protrusions237can be located at a variety of circumferential positions, although it may be advantageous to symmetrically locate the protrusions about the 360-degree circumference of the piston body. Thus, each set of protrusions237may include three protrusions located at 120° intervals about the longitudinal axis211. This arrangement may minimize the frictional losses associated with the protrusions237. The protrusions237can be located anywhere on the circumference of the sidewall226and anywhere along the longitudinal dimension of the sidewall226. AlthoughFIG.2illustrates the protrusions237on the sidewall226of the compression piston208as integrated into the sidewall226and generally rectangular in shape, in other implementations, the protrusions237may be generally spherical or hemispherical and/or may be separate members retained within corresponding recesses in the sidewall226. In other examples, the protrusions237can include faceted, apex, line, or point contact surfaces. It is also understood that the number of protrusions237can range from one to ten or more, depending on the desired operating characteristics and design construction. In some examples, it may be desirable to have the protrusions embodied as separable pieces which may be made of plastic, for example. For instance, the protrusions237may be formed of a variety of materials, including, but not limited to, polymers such as nylon, Acetal (POM), PTFE, and PTFE-coated nylon, or filled polymers with lubricants such as graphite, TFE, or molybdenum. While numerous configurations of the protrusions237are non-metal, it is understood that various alloys and metals, such as oil-impregnated bronze, can be used for the protrusions237. The spring210is in communication with the compression piston208and is configured to bias the compression piston208toward the barrel102, e.g., along the longitudinal axis211. In the embodiment ofFIG.2, the spring210is a gas spring having a gas spring body238and a gas spring piston240. The gas spring body238is disposed in the inner piston volume234and defines a sealed interior chamber242containing a compressed gas. The gas spring piston240extends into and is moveable relative to the sealed interior chamber242.
As will be appreciated, as the gas spring piston240is forced into the sealed interior chamber242during cocking, e.g., as the gas spring body238is moved relatively away from the barrel102(toward the stock104, not shown), the effective volume of the interior chamber242is reduced. The increased pressure creates a force on the compression piston208, urging the compression piston208toward the barrel102. InFIG.2the spring210is embodied as a longitudinal gas spring. The spring210can be longitudinally compressed or extended, but returns to its former configuration when released. In some instances, the spring210may be a coil that expands and contracts generally along a longitudinal axis of the spring210. The spring210can be any of a variety of configurations, including metal coil or helical springs, composite or alloy coil or helical springs, as well as struts or gas springs. These and other springs are generally well known in the industry. The air gun100further includes an actuator assembly244. As detailed further herein, the actuator assembly244facilitates automatic cocking of the air gun100, e.g., cocking without user intervention or user action. More specifically, the actuator assembly244is coupled to the compression tube206, to selectively move the compression tube206between a firing position and a cocking position, as detailed further herein. As shown inFIG.2, the actuator assembly244generally includes a carriage246, a drive screw248(also referred to herein as a lead screw248), a drive screw nut250(also referred to herein as a lead screw nut250) threaded on the drive screw248, and a rotary actuator252. The carriage246is coupled to the compression tube206, such that movement of the carriage246in a direction parallel to the longitudinal axis causes a corresponding movement of the compression tube206in the chamber202. The carriage246generally includes a first end254spaced longitudinally from a second end256. In the illustrated example, the first end254and the second end256are embodied as plates, although other configurations may also be used. The carriage246may also include one or more sidewalls258extending between the first end254and the second end256. As also illustrated inFIG.2, the carriage246includes a protrusion260proximate the first end254. The protrusion260may be a pin, bar, plate, or other feature that is received in a corresponding receptacle of the compression tube206. Of course, this arrangement is for example only; any mechanical coupling that causes movement of the carriage246to move the compression tube206may be used. In at least one example, the carriage246may include a receptacle, and the compression tube206may include a protrusion received in the receptacle. The carriage246is configured to move along the drive screw248. More specifically, in the example ofFIG.2, the first end254and the second end256include openings extending therethrough that provide clearance for the drive screw248. The drive screw248passes through the carriage246. The drive screw nut250threadedly engages the drive screw248. As shown, the carriage246is disposed such that the drive screw nut250is disposed (longitudinally) between the first end254and the second end256. Also in the example, a spring262is disposed (longitudinally) between the drive screw nut250and the second end256. The rotary actuator252is disposed to drive the drive screw248, e.g., by causing the drive screw248to rotate about its longitudinal axis.
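The cocking mechanics of the gas spring described at the start of this passage can be approximated isothermally with Boyle's law: reducing the sealed chamber volume raises the pressure, and that pressure acting over the gas spring piston area yields the force urging the compression piston toward the barrel. The charge pressure, effective piston diameter, and stroke below are assumptions for illustration; the disclosure gives no numeric values.

```python
# Isothermal (Boyle's law) sketch of the gas spring 210: pushing the gas
# spring piston 240 into the sealed chamber 242 reduces the gas volume,
# raising pressure and hence the force on the compression piston 208.
# All numeric values are illustrative assumptions.
import math

charge_pressure = 1.5e6  # Pa, pre-charge in the sealed chamber (assumed)
bore_diameter = 0.016    # m, effective gas spring piston diameter (assumed)
chamber_length = 0.120   # m, effective gas column length (assumed)
cocking_stroke = 0.080   # m, piston travel during cocking (assumed)

area = math.pi * (bore_diameter / 2) ** 2
v0 = area * chamber_length
v1 = area * (chamber_length - cocking_stroke)

p1 = charge_pressure * v0 / v1  # Boyle's law: p0 * v0 = p1 * v1
force_cocked = p1 * area        # force urging the piston toward the barrel

print(f"Pressure when cocked: {p1 / 1e6:.2f} MPa")
print(f"Force on compression piston: {force_cocked:.0f} N")
```

Real gas springs heat during rapid compression, so the isothermal figure is only a lower-bound sketch of the cocked force.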
In the illustrated example, a plurality of gears264are provided between a shaft266of the rotary actuator252and the drive screw248. The gears264may provide a decreased rotational velocity of the drive screw248, e.g., relative to a rotational velocity of the shaft266, thereby increasing the torque of the drive screw248. For example, the gears264may provide a gear ratio of from about 1:1.5 to about 1:20. Although only two gears are shown inFIG.2, more gears may be included. In some instances, the gears264may be embodied in a gear box. The gears264are provided for example only; other components and systems for transferring power from the rotary actuator252to the drive screw248also are contemplated. Moreover, in some examples, the shaft266may be directly coupled to the drive screw248, e.g., such that the shaft266and the drive screw248rotate about the same axis. As also shown inFIG.2, an end of the drive screw248opposite the end coupled to the rotary actuator252is supported by a bearing268, which may be a ball bearing, needle bearing, sleeve bearing, or the like. In operation, the actuator252drives the drive screw248, causing the drive screw nut250to move in the longitudinal direction. For example, when the shaft266of the actuator252rotates in a first rotational direction, the drive screw nut250moves in a first longitudinal direction, and, when the shaft266rotates in a second rotational direction, opposite the first rotational direction, the drive screw nut250moves in a second longitudinal direction, opposite the first longitudinal direction. In the illustrated example, when the drive screw nut250moves in a direction generally away from the barrel102and toward the stock104, the drive screw nut250contacts the first end254of the carriage246, causing the carriage246to move in the same direction. Conversely, when the drive screw nut250moves in a direction generally toward the barrel102and away from the stock104, the drive screw nut250contacts the spring262, which in turn contacts the second end256of the carriage246. The spring262is sufficiently rigid that, absent some impediment to travel of the compression tube206, the force applied by the drive screw nut250to the spring262is almost entirely transferred to the second end256of the carriage246. The spring262may facilitate non-destructive overtravel of the drive screw nut250, e.g., after the compression tube206has reached an advanced, firing position, as detailed further below. In the illustrated example, the carriage246may be configured to travel relative to the housing108along rails incorporated into the housing. For example, the rails may extend generally parallel to the drive screw248and the carriage246includes mating grooves that slide along the rails. This arrangement may act as a linear bearing system that also functions to resolve the torque moment forces resulting from the offset distance between an axis of the spring210and an axis of the lead screw248. The bearing system can also resolve torque moment forces resulting from friction between the lead screw and lead screw nut transferred to the carriage. Although not illustrated inFIG.2, the carriage246may also include an anti-rotation key that prevents (or significantly restricts) rotation of the lead screw nut250within the carriage sidewall258, while allowing axial movement of the lead screw nut250against the over-travel spring262. The actuator assembly244, the compression tube206, the compression piston208, and the spring210cooperate to selectively cock and fire projectiles from the air gun100.
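To make the gearing and lead-screw relationship concrete, the short sketch below computes the linear travel rate of the drive screw nut from an assumed motor speed, an assumed screw lead, and a reduction within the roughly 1:1.5 to 1:20 range noted above. The specific values are assumptions chosen to land near the example travel rates discussed later in this description; they are not specified here.

```python
# Minimal kinematics of the geared lead-screw drive: the gears reduce the
# motor's rotational speed (increasing torque), and the screw lead converts
# drive screw revolutions into linear travel of the drive screw nut.
# Motor speed and screw lead are assumed values for illustration.

MOTOR_RPM = 9000.0        # assumed rotary actuator speed
GEAR_RATIO = 10.0         # assumed reduction, within the ~1:1.5 to ~1:20 range noted above
SCREW_LEAD_IN = 0.2       # assumed nut travel per drive screw revolution, in inches

screw_rpm = MOTOR_RPM / GEAR_RATIO
nut_speed_in_per_s = screw_rpm / 60.0 * SCREW_LEAD_IN
print(f"nut linear speed: {nut_speed_in_per_s:.1f} in/s")           # 3.0 in/s

TOTAL_TRAVEL_IN = 8.0     # assumed round-trip travel of a full cock-and-return cycle
print(f"cycle time: {TOTAL_TRAVEL_IN / nut_speed_in_per_s:.1f} s")  # ~2.7 s
```

With these assumed inputs, the nut travels at about 3 in/sec and completes an 8-inch cycle in under three seconds, consistent with the example figures given below in connection with the automatic cocking cycle.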
In examples, projectiles, such as a projectile270, are loaded into the air gun100proximate the breech end110of the barrel102, via a magazine receptacle272formed as an opening in the housing108. In the illustrated example, the magazine receptacle272is configured as an opening sized and shaped to receive a magazine274carrying one or more projectiles. In examples, the magazine274may be an automatically indexing magazine, including one or more projectile holding passages arranged in a circular pattern and rotatable about a central pivot, e.g., as in a carousel-type arrangement, to selectively present a single projectile for firing. More specifically, the magazine274may further include an entry port and an axially-aligned outlet port, which also are aligned with the bore204. Although not illustrated inFIG.2, seals, such as ring seals, may be provided on the magazine274and/or in the magazine receptacle272to limit or prevent compressed air from leaking at the interfaces associated with the magazine274. As also shown inFIG.2, a hollow probe276extends from the closed end of the compression tube206, in a longitudinal direction away from the compression tube206. The hollow probe276provides a fluid passageway from the compression tube volume224of the compression tube206into the barrel bore204of the barrel102, through which compressed air passes to fire the projectile270. As detailed further below, the hollow probe276also passes at least partially through the magazine274, to advance the projectile270out of the magazine274, via the outlet port, and into the barrel102for firing. In the illustrated example, a seal277, which may be an o-ring, a wiper seal, or the like, is disposed to create a seal between the hollow probe276and the barrel bore204. Although illustrated as being secured to the barrel102, the seal277may be fixed to a distal end of the hollow probe276in other examples. In still further examples, the seal277may be disposed in the magazine274and/or at a position on the left side (in the image ofFIG.2) of the magazine274, e.g., to seal the probe276relative to the housing108. As will be appreciated, the seal277and/or other or additional sealing mechanisms may be disposed to ensure that compressed air forced through the probe276and into the barrel bore204exits the air gun100only via the barrel102. As also shown inFIG.2, a retention block278is disposed in a rear, e.g., relatively closer to the stock104(not shown), end of the chamber202. The retention block278is generally provided to terminate the end of the chamber202. Moreover, in the illustrated example, the retention block278defines a threaded opening for receiving a threaded plug280. When received in the retention block278, the threaded plug280may be adjusted (e.g., by threading) to contact the spring210, e.g., to provide a desired loading to the spring210. As also shown, the retention block278may be retained in the chamber202via one or more fasteners282. The fasteners282are illustrated as two pan head screws inFIG.2, although other fasteners may be used to secure the retention block278. An example configuration including a retention block is illustrated inFIG.8and discussed in more detail below. FIG.2also illustrates the trigger106as part of a trigger assembly284. In the illustrated example, the trigger assembly284includes a first linkage286and a sear288. A trigger searing surface290protrudes from the sear288.
As detailed further below in connection withFIGS.3A-3C, the trigger searing surface290contacts the piston searing surface236to retain the air gun100in a cocked configuration. The trigger106, the first linkage286, and the sear288cooperate via a number of surfaces, protrusions, recesses, and the like, such that movement of the trigger106results in movement of the first linkage286and the sear288, e.g., to fire the air gun100by releasing the piston searing surface236, and/or such that movement of the sear288results in movement of the first linkage286and the trigger106, e.g., to cock the air gun100by engaging the trigger searing surface290with the piston searing surface236. Although the trigger assembly284is illustrated as including three components, this disclosure is not limited to the illustrated configuration. For instance, the trigger assembly284can include as few as a single component, e.g., the trigger106, or more components. Generally, the trigger assembly284functions to retain the air gun100in a cocked configuration (discussed further below) and/or to fire the air gun100in response to a user squeezing the trigger106. The air gun100illustrated inFIG.2, and just described, is configured for automatic or automated cocking. More specifically, the actuator assembly244facilitates automatic cocking of the air gun100by selectively moving the compression tube206via the carriage246.FIGS.3A-3Cillustrate aspects of this automatic cocking. Specifically,FIGS.3A-3Care cross-sectional views of the air gun100in three different configurations, including a fired configuration300, a cocking configuration302, and a firing configuration304, each of which is described in turn below. FIG.3Ashows the air gun100in the fired configuration300, in which the air gun100has just been fired. In the fired configuration300, the compression tube206is in an advanced, firing position, in which the hollow probe276extends through the magazine274. Also in the fired configuration300, the compression piston208is in an advanced, fired position, in which the compression piston208is generally disposed in the compression tube206and the spring210is in an extended position. The fired configuration300may correspond to a normal, or un-cocked, state of the air gun100, in which the spring210is not compressed. After firing, the actuator assembly244(automatically) causes the air gun to advance to the cocking configuration302, shown inFIG.3B. More specifically, the actuator252imparts a rotational motion on the drive screw248that causes the drive screw nut250to move in a first linear direction306generally toward the stock (not shown) and away from the barrel102. As the drive screw nut250is driven in the first linear direction306, the drive screw nut250imparts a force on the first end254of the carriage246that causes the carriage246also to move in the first linear direction306. As the carriage246is coupled to the compression tube206, the compression tube206also moves in the first linear direction306. In turn, the compression tube206drives the compression piston208in the first linear direction306, which causes the spring210to compress. Specifically, the movement of the compression tube206via the carriage246has sufficient force to overcome the force of the spring210. Continued movement of the carriage246by the actuator252causes the compression tube206to advance to a compression tube cocking position, shown inFIG.3B, which may correspond to a cocked position of the compression piston208.
One or more sensors (examples of which are shown inFIGS.4A and4B, discussed below) may be provided to generate information that confirms that the compression piston208is in the cocked position. Such information may also be used to control the actuator252to stop continued actuation to drive the drive screw nut250in the first linear direction306and/or to impart an opposite rotational force on the drive screw248. As shown inFIG.3B, in the cocking configuration302, the piston searing surface236of the compression piston208engages with the trigger searing surface290, to place the compression piston208in a cocked position, with the spring210in a fully compressed position. As also illustrated inFIG.3B, with the hollow probe276retracted from the magazine274, the magazine274automatically indexes to present the projectile270in line with the bore204. As noted above, the magazine274may be automatically indexing, such that retraction of the hollow probe276causes the projectile270to automatically advance into the position shown inFIG.3B. Alternatively, in embodiments the magazine274may be adapted to receive power, mechanical energy, and/or control signals from the air gun100that may control the indexing of the projectile270. With the compression piston208in the cocked position, the actuator assembly244causes the air gun100to advance to the firing configuration304, shown inFIG.3C. Advancement to the firing configuration304ofFIG.3Cmay occur automatically in some embodiments, e.g., after firing the air gun100. In other embodiments, advancement to the firing configuration304may be at least partially manual. For instance, advancement to the firing configuration304may be initiated by a user. In some examples, a sensor, switch, or similar user interface element (not shown) may be provided that is configured to receive a user input, e.g., based on proximity, contact, or the like, of the user. The element (or a component associated with the element) may generate a signal that causes a controller in the air gun100to determine that the user desires to configure the air gun100in the firing configuration304, and the controller may cause the air gun100to be configured in the firing configuration304. The transition to the firing configuration304may be implemented by the actuator252imparting a rotational motion on the drive screw248that causes the drive screw nut250to move in a second linear direction308opposite the first linear direction306. As the drive screw nut250is driven in the second linear direction308, the drive screw nut250imparts a force on the spring262, which in turn imparts a force on the second end256of the carriage246, causing the carriage246to move in the second linear direction308. As the carriage246is coupled to the compression tube206, the compression tube206also moves in the second linear direction308, returning the compression tube206to a firing position, as inFIG.3A. UnlikeFIG.3A, however, in the firing configuration304ofFIG.3Cthe compression piston208remains in the cocked position, via engagement of the trigger searing surface290with the piston searing surface236. As also shown inFIG.3C, as the compression tube206moves to the firing position, the hollow probe276contacts and advances the projectile270into the bore204for firing. In the firing configuration304ofFIG.3C, because the compression piston208remains in the cocked position, the compression piston208is spaced from the inner surface222of the closed end216of the compression tube206, thereby creating a volume310.
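The three configurations ofFIGS.3A-3Ccan be summarized as a simple repeating state cycle. The sketch below is a schematic restatement of that cycle; the event names are descriptive labels introduced here for illustration, not terms from this description.

```python
# Schematic state cycle of the automatically cocking air gun, restating the
# three configurations of FIGS. 3A-3C. Transition names are descriptive labels
# assumed for this sketch.

from enum import Enum

class GunState(Enum):
    FIRED = "fired configuration 300"      # tube forward, piston forward, spring extended
    COCKING = "cocking configuration 302"  # tube retracted, piston seared, spring compressed
    FIRING = "firing configuration 304"    # tube forward again, piston still seared

def next_state(state: GunState, event: str) -> GunState:
    transitions = {
        (GunState.FIRED, "retract_tube"): GunState.COCKING,   # actuator drives carriage rearward
        (GunState.COCKING, "advance_tube"): GunState.FIRING,  # actuator returns tube; probe chambers projectile
        (GunState.FIRING, "pull_trigger"): GunState.FIRED,    # sear releases piston; projectile fires
    }
    return transitions.get((state, event), state)  # ignore events that do not apply

state = GunState.FIRED
for event in ("retract_tube", "advance_tube", "pull_trigger"):
    state = next_state(state, event)
    print(event, "->", state.value)
```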
With the air gun100in the firing configuration304, the air gun100is ready for firing. Specifically, pulling the trigger106will cause the trigger searing surface290to disengage from the piston searing surface236, thereby allowing the spring210to extend, driving the compression piston208in the second linear direction308. The movement of the compression piston208in this manner forces air in the volume310through the hollow probe276and out the bore204, firing the projectile270. The air gun100is then returned to the fired configuration300ofFIG.3A. As will be appreciated from the foregoing, the actuator assembly244controls the drive screw248to ready the air gun100for firing. Because the actuator assembly244cocks the air gun100, the user need not perform actions normally associated with conventional spring guns, such as barrel breaking, pumping, or the like. Such manual labor between every shot can be fatiguing and time-consuming. Additionally, the automatic cocking techniques described herein may allow a user to continue to effectively aim the air gun100at a target during cocking, which may not be possible with conventional guns. The actuator assembly244may automatically cycle from the fired configuration300to the cocking configuration302and then to the firing configuration304, e.g., upon the projectile270being fired. In other examples, the air gun100may be provided with a user interface, e.g., one or more buttons, levers, switches, or the like, that allow the user to control the automatic cocking. For example, a user may interact with the user interface to cause the air gun100to cycle through the configurations shown inFIGS.3A-3C. In some examples, the actuator252and the gears264may provide rapid movement of the drive screw248and the drive screw nut250. For example, and without limitation, the drive screw nut250may move at a rate of about 3 in/sec (about 0.08 m/s) in the first linear direction306and/or the second linear direction308. In at least one example, the drive screw nut250may advance from the fired configuration300to cock the gun in the cocking configuration302, and back to the firing configuration304, a distance of about 8 inches, in about three seconds. In some examples, the air gun100includes a number of sensing components to facilitate automatic cocking as described herein. Generally, the air gun100can include one or more sensors or components to determine when the air gun100is in the fired configuration300, the cocking configuration302, or the firing configuration304. The air gun100can also include one or more sensors or components to determine that the compression tube206is in the firing position and/or the cocking position, that the compression piston208is in the fired position and/or the cocked position, and/or that the spring210is in the compressed and/or extended position. Examples of sensing components will be described now with reference toFIGS.4A,4B, and5. FIG.4Ais a partial cross-sectional diagram of a rear portion400(e.g., proximate the stock—not shown) of the air gun100. The rear portion400includes features additional to those discussed above, including features for determining that the compression piston208of the air gun100is in the cocked position. In some implementations, the components shown inFIGS.4A and4Bmay be optional. Elements introduced previously are given the same reference numerals inFIGS.4A and4B. In more detail,FIG.4Aillustrates a probe402extending through the retention block278, generally in a longitudinal direction.
Specifically, the probe402includes a body404extending through the retention block278and a head406on a side of the retention block278nearer the stock (not shown). For instance, the retention block278may have an aperture or sleeve formed therethrough that provides a clearance fit for the body404of the probe402, but through which the head406cannot pass. In the illustrated example, the body404and the head406are generally cylindrical, although such is not required. Other shapes and profiles are anticipated and could function similarly. As also illustrated inFIG.4A, a spring408biases the probe402away from the sensor412, e.g., generally in a direction410. As noted above, the head406cannot pass through the retention block278. Accordingly, the probe402will generally maintain the illustrated position ofFIG.4Awhen the air gun100is not cocked. For instance,FIG.4Acorresponds to the air gun100being in the fired configuration300ofFIG.3A. FIG.4Aalso includes a schematic representation of a sensor412arranged proximate the head406of the probe402. As discussed below, the sensor412is disposed to detect a presence/absence of the head406of the probe402. Although disposed to sense the head406of the probe402in the example ofFIG.4A, the sensor412may sense other aspects of the probe402and/or other components. The sensor412may be a conventional sensor, including but not limited to an optical sensor, a mechanical sensor, an electromagnetic sensor, such as a Hall-effect sensor, a pressure sensor, a vibration sensor, a strain sensor, an orientation sensor, or any other sensor that can be used to detect conditions and provide signals from which the proximity of the head406(or other portions) of the probe402can be determined. The sensor412may also be otherwise positioned and/or other or additional sensors may be provided, e.g., to detect other positions or a range of positions corresponding to the cocked position. For example, an alternative arrangement may include a sensor and/or other mechanism that detects that the compression piston208is located in a position corresponding to the cocking position. In examples, the sensor412may make a binary determination (present/absent) of whether the probe402is sensed. In the example, the sensor412is disposed on a circuit board414, also shown schematically. The circuit board414may be sized and shaped for retention in the housing108and/or the stock104in some examples. The circuit board414is also illustrated as supporting additional electronic components416. Without limitation, the additional electronic components416can include power sources, resistors, memory, integrated circuits, systems on a chip, microprocessors, microcontrollers, a field-programmable gate array (FPGA), a programmable logic device (PLD), programmable array logic (PAL), an application-specific integrated circuit (ASIC), or other digital control systems, as well as hardwired electronic control systems, or the like. The circuit board414hardware can form a logic control unit, which receives input signals from the various sensors associated with the air gun100and sends control signals to other components of the air gun100, including but not limited to the actuator252.
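By way of illustration only, such a logic control unit might implement the cocking portion of its duties as a simple polling loop. In the sketch below, the sensor and actuator interfaces are hypothetical placeholders for the hardware I/O, and the timeout is an assumed safeguard, not a parameter from this description.

```python
# Hypothetical skeleton of the logic control unit described above: it polls the
# probe sensor 412 and reverses the actuator 252 once the compression piston is
# confirmed in the cocked position. The read_probe_sensor() and set_actuator()
# callables are assumed stand-ins for real hardware I/O.

import time

def run_cocking_cycle(read_probe_sensor, set_actuator, timeout_s: float = 5.0) -> bool:
    """Drive the actuator rearward until the cocked-position probe is sensed.

    Returns True if the piston seared within the timeout, False otherwise."""
    set_actuator("retract")                    # move compression tube toward cocking position
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_probe_sensor():                # probe 402 detected: piston is cocked
            set_actuator("advance")            # reverse to return tube to firing position
            return True
        time.sleep(0.005)                      # poll at ~200 Hz
    set_actuator("stop")                       # timed out: likely a jam; signal an error upstream
    return False

if __name__ == "__main__":
    # Trivial stand-ins: pretend the probe is sensed on the third poll.
    readings = iter([False, False, True])
    print(run_cocking_cycle(lambda: next(readings, True),
                            lambda cmd: print("actuator:", cmd)))
```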
In embodiments, such a logic control unit can execute instructions stored in memory and can, for example and without limitation, include a microprocessor incorporating suitable look-up tables and/or control software executable by the microprocessor to cause the air gun100to operate according to the control software stored in the memory, and based at least in part on data from sensors, as described herein. In examples, the electronic components416may control aspects of the air gun100. Without limitation, the circuit board414and/or the electronic components416can be configured to function as a controller associated with the air gun100to perform one or more of: receiving data, e.g., from the sensor412and/or other sensors associated with the air gun (as detailed further herein), controlling aspects of the actuator assembly, e.g., to automatically cock the air gun as detailed herein, controlling aspects of one or more user interface elements, e.g., to indicate to the user that the air gun100is ready for firing, is cocking, and/or needs maintenance, and/or logic and/or control operations.FIGS.9and10, discussed further below, illustrate examples of control processes that may be implemented by the electronic components416and/or the circuit board414. In some examples, the circuit board414may include or be coupled to a port to which external devices may be physically connected, or a wireless communication system enabling external devices to be connected to the circuit board414using contactless communication technologies, including but not limited to radio frequency communications such as Wi-Fi, Bluetooth, or near field communications, as well as optical communications, including but not limited to infrared communications. Such communications can be used to improve or adjust programming, or to examine stored information in the air gun100, such as fault determinations, shot counts, and/or any other information related to operation and/or status of the air gun100. For instance, such information may be stored in a memory coupled to the circuit board414. Although described herein as a circuit board414, it is not essential that the components be mounted to a single substrate. For example, and without limitation, various components of the circuit board414may be distributed within the air gun100to meet functional, simplicity, aesthetic, or other objectives with respect to the air gun100. In the example ofFIG.4A, the circuit board414is connected to a power source418. The power source418may be a battery. For example, the battery may be stored in a compartment in the stock (not shown) of the air gun100and may be electrically connected to the circuit board414, e.g., via one or more cables, leads, or the like. In other examples, the power source418may be an external power source, e.g., a battery pack, a cord, or the like. In some examples, the power source418may be selectively attached to the air gun100, e.g., to charge a battery on the air gun100, which may be one of the electronic components416. As noted above, in the example ofFIG.4Athe air gun100is in the fired configuration300previously shown inFIG.3A. In this example, the head406of the probe402is spaced from the sensor412, such that the sensor412does not detect the head406of the probe402. However, as the air gun100is cocked, the compression piston208moves toward the cocked configuration shown inFIG.4B. This brings the compression piston208into contact with the probe402, causing the probe402to move in a direction420, opposite the direction410.
This arrangement is illustrated inFIG.4B. Specifically, the force associated with moving the compression tube206into the cocking position, and thus the compression piston208into the cocked position, overcomes the spring force of the spring408, and the head406of the probe402is advanced to a position in closer proximity to the sensor412, such that the sensor412detects the probe402. The portion of the air gun100illustrated inFIG.4Bmay generally remain the same when the air gun100is in the cocking configuration302discussed above in connection withFIG.3Band/or the firing configuration304discussed above in connection withFIG.3C. Specifically, whenever the compression piston208is in the cocked position, the sensor412will detect the probe402, confirming the spring210is compressed. Conversely, when the probe402is not detected by the sensor412, the air gun100is not cocked, and thus cannot be fired. In some examples, the circuit board414may be connected to a light or other multi-state visible signaling device indicating the state of the air gun100. For example, a first light color may indicate that the air gun100is cocked and a second light color may indicate that the air gun100is not cocked. Alternatively, a portion of the probe402may be visible from outside of the air gun100, and adapted to provide visible indicia that the air gun100is cocked (or not cocked). Without limitation, the probe402may have a portion that has one color that is visible through a portal or window in the housing108when the air gun100is in the cocked configuration and a second color that is visible when the air gun100is not cocked. Other visible indicia, such as symbols, text, and/or the like may also or alternatively be used. The probe402and the sensor412may be arranged such that the probe402is sensed prior to the compression piston208contacting the retention block278, e.g., during cocking. In this manner, a "presence" signal generated by the sensor412can be transmitted to the circuit board414to stop continued movement of the actuator in time to avoid a collision of the compression piston208with the retention block278. Stated differently, the configuration of the probe402and the sensor412allows for some overtravel of the compression piston208. Similarly, the trigger searing surface290and the piston searing surface236may be positioned to sear the compression piston208at a position spaced longitudinally from the retention block278, and/or the piston searing surface236may be oversized in the longitudinal direction to accommodate such overtravel without the trigger searing surface290becoming dislodged from the piston searing surface236. FIG.5illustrates additional sensor modalities and functionality. Specifically,FIG.5is a partial cross-sectional view of aspects of the air gun100proximate the breech end110of the barrel102. As illustrated inFIG.5, the air gun100includes a magazine sensor502proximate the receptacle272in the housing108configured to receive the magazine274. The magazine sensor502is illustrated schematically and generally functions to confirm a presence/absence of the magazine274. The sensor502may be disposed on a circuit board (not shown) or may be mounted to or otherwise supported by the housing108. In the example ofFIG.5, the magazine274includes a magnet504integrated therein. The magnet504is detectable by the sensor502. Specifically, the sensor502confirms presence of the magazine274when the magnet504is sensed and detects an absence of the magazine274when the magnet504is not sensed.
In some examples, the magnet504may be overmolded during production of the magazine274, e.g., such that some or all of the magnet504is embedded in a body of the magazine. In other instances, the magnet504may be coupled to the magazine274, e.g., via an adhesive, a press fit, mechanical means, or otherwise. Although the sensor502and the magnet504are used in the example ofFIG.5, other features for detecting presence/absence of the magazine274also are contemplated and include, but are not limited to, mechanical switches, optical sensors, or the like. For example, the magnet504may not be required in some alternate sensing configurations. The magazine sensor502may be in communication with a controller associated with the air gun100, which may be embodied as the electronic components416. For example, the controller may prohibit movement of the actuator assembly244(not shown inFIG.5), e.g., to prevent cocking of the air gun100, absent an indication from the sensor502that the magazine274is loaded. FIG.5also includes a schematic representation of a carriage sensor506coupled to the housing108, e.g., below the bore204. The carriage sensor506is disposed to sense a presence/absence of the carriage246. The carriage sensor506may be disposed on a circuit board (not shown) or may be mounted to or otherwise supported by the housing108. In at least some examples, the magazine sensor502and the carriage sensor506may be disposed on, or in communication with, the same circuit board. In the illustrated example, a magnet508is secured to the sidewall258of the carriage246. The magnet508is illustrated schematically and is disposed to be sensed by the carriage sensor506when the carriage246is in a predetermined, e.g., front-most inFIG.5, position. For example, inFIG.5, the air gun100is illustrated in the firing configuration304corresponding toFIG.3C, in which the air gun100is cocked and ready for firing. In the example ofFIG.5, the carriage sensor506detects the magnet508only when the carriage246is in the illustrated position. When the carriage246is anywhere other than the position shown inFIG.5, the carriage sensor506indicates an absence of the magnet508. The carriage sensor506may be in communication with a controller associated with the air gun100, which may be embodied as the electronic components416. For example, the controller may require a signal from the carriage sensor506confirming the presence of the magnet508before configuring the air gun100for firing. For instance, the user may be prevented from firing the air gun100, e.g., via an electronic trigger lock, switch, or the like, until the carriage246is confirmed to have returned to the illustrated position. As will be appreciated, in the firing configuration illustrated, the hollow probe276is in position to transmit compressed air from the compression tube206into the bore204of the barrel102. Firing the air gun100with the carriage246(and thus the compression tube206) in a position rearward (relatively closer to the stock) of the firing position may cause compressed air to be released into a volume between the compression tube206and the bore204, which may cause jamming, damage, and/or other problems. Moreover, failure of the carriage246to reach the position illustrated inFIG.5may indicate a malfunction, such as two projectiles in the bore204, which could result from and/or lead to jamming of the air gun100.
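Taken together, the magazine sensor502and the carriage sensor506suggest straightforward interlock logic of the kind described above. The sketch below is one possible formulation, with assumed boolean inputs standing in for the sensor signals; it is illustrative only.

```python
# Illustrative interlock logic combining the magazine sensor 502 and carriage
# sensor 506: cocking is permitted only with a magazine present, and firing is
# permitted only when the carriage (and thus the compression tube) is confirmed
# in the forward firing position. Inputs are assumed booleans from the sensors.

def may_cock(magazine_present: bool) -> bool:
    # Controller prohibits actuator movement without a loaded magazine.
    return magazine_present

def may_fire(magazine_present: bool, carriage_in_firing_position: bool,
             piston_cocked: bool) -> bool:
    # Firing with the tube rearward could vent compressed air between the
    # compression tube and the bore, so all three conditions must hold.
    return magazine_present and carriage_in_firing_position and piston_cocked

assert not may_fire(magazine_present=True, carriage_in_firing_position=False, piston_cocked=True)
assert may_fire(magazine_present=True, carriage_in_firing_position=True, piston_cocked=True)
```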
In addition to sensing the position of the compression tube206for safe firing of the air gun100, data from the sensor506can also be used to stop travel of the carriage246, e.g., by stopping the actuator252. In examples, the compression tube206may come to rest upon contacting the end of the chamber202, even with continued rotation of the drive screw248. As will be appreciated, in the illustrated arrangement, the drive screw nut250can continue to travel without causing the carriage246to move further. Also in the arrangement, the spring262can provide resistance to this "overtravel." In some examples, the resistance provided by the spring262can be detected, e.g., via an increased current load, and used to signal the actuator252to stop. That is, in some contemplated examples the sensor506can be used to detect a presence of the carriage246, and other sensor modalities may be used to control the actuator252. The carriage sensor506and the magnet508are one example for detecting presence/absence of the carriage246in the illustrated position ofFIG.5. Other features for detecting presence/absence of the carriage246also are contemplated and include, but are not limited to, mechanical switches, optical sensors, or the like. Moreover, although the magnet508is shown as integrated in the sidewall258of the carriage246, in other examples, the magnet508may be disposed on other portions of the carriage246, including but not limited to the first end254or the second end256of the carriage246. In further examples, the magnet508may be secured to the drive screw nut250, although the potential for overtravel, as discussed above, may make this arrangement less desirable in some instances. As just described,FIG.5includes the carriage sensor506to verify that the carriage246, and thus the compression tube206, is in position for firing. The carriage sensor506may prevent firing of the air gun100prior to completion of the cocking cycle. The air gun100may also include additional sensors and/or enable sensing techniques for determining a status of the air gun100. Without limitation, the actuator252may, in some examples, include an encoder or resolver. The encoder/resolver may provide velocity and/or positional feedback to a controller associated with the air gun100. Such feedback may simplify controlling the position of the compression tube206and could augment or replace other monitoring, sensor-based, and/or timer-based functions. The air gun100may also include additional features to prevent inadvertent firing. Specifically,FIGS.6A and6Bare used to illustrate a mechanical trigger lock for preventing inadvertent discharge when the compression tube206is not in the proper position for firing. FIG.6Ais a cross-sectional view of the air gun100showing aspects of an optional trigger lock assembly600. InFIGS.6A and6B, some components (like the actuator252) have been removed for clarity. The trigger lock assembly600generally prevents inadvertent firing of the air gun100when the carriage246is other than in the forward-most, or firing, position. As shown, the trigger lock assembly600includes a mounting plate602, a locking plate604, and a rod606coupled to the locking plate604as detailed further herein. The mounting plate602is generally fixed relative to the air gun100. The mounting plate602includes a slotted opening608sized to provide a clearance fit for a trigger protrusion610that extends laterally (e.g., normal to the X-Y plane ofFIG.6A) from the trigger106.
The slotted opening608is generally arcuate, although other shapes and sizes will be appreciated with the benefit of this disclosure. The mounting plate602also includes a post612protruding laterally (e.g., normal to the X-Y plane ofFIG.6A) therefrom. The mounting plate602also includes mounting features614for securing a spring616to the mounting plate602. The locking plate604includes a slot618configured to receive the post612of the mounting plate602therein. As detailed further below, the locking plate604is movable relative to the mounting plate602via movement of the slot618about the post612. Although obscured by the perspective ofFIG.6A, the spring616is coupled to the locking plate604. The spring616is arranged to bias the locking plate604into a locking position illustrated inFIG.6A. In the locking position, the locking plate604obstructs movement of the trigger protrusion610in the slotted opening608of the mounting plate602. Specifically, in the illustrated example, pulling the trigger106causes the trigger protrusion610to contact a lower edge of the locking plate604, thereby impeding continued movement of the trigger106, and preventing the trigger searing surface290from releasing the piston searing surface236. The spring616is also coupled to the rod606. As illustrated, the rod606extends in a longitudinal direction from an attachment620at the spring616to a distal end622on a side of the carriage246relatively closer to the barrel102. The rod606also includes a biasing member624fixed along the length of the rod606. In some instances, the rod606may be formed of a metal wire, such as music wire, although this disclosure is not so limited. In other examples the rod606may be a polymeric material, a composite material, or the like. In examples, the rod606is sufficiently rigid such that application of a force to the biasing member624in a longitudinal direction626causes the rod606to move longitudinally and with sufficient force to overcome the spring force of the spring616. In the example ofFIG.6A, the compression tube206is between the cocking position and the firing position, discussed above. For example, the actuator assembly244may be returning the compression tube206to the firing position after moving the compression piston208to the cocked position. In this position, the spring616, which is coupled to the locking plate604, biases the locking plate604into the locking position, impeding actuation of the trigger106, as described above. In the example ofFIG.6B, the compression tube206has advanced to the firing position. In this position, the second end256of the carriage246has contacted the biasing member624and displaced the biasing member624, and therefore the rod606, generally in the longitudinal direction626. The movement of the rod606overcomes the spring force of the spring616, causing the locking plate604to slide relative to the mounting plate602, generally in the direction626. With the locking plate604in this advanced position, the path of the trigger protrusion610in the slotted opening608is unobstructed. Pulling the trigger106with the locking plate604in the advanced position results in disengagement of the trigger searing surface290from the piston searing surface236, allowing the air gun100to be fired, as described above. The air gun100is movable between multiple configurations, and, depending upon a current configuration, different components are located in different positions.
For instance, in both the fired configuration300and the firing configuration304, the compression tube206is in an advanced, firing position, in which the hollow probe276extends through the magazine274, e.g., into the barrel102. However, in the fired configuration300no projectile270is in the barrel102, whereas the projectile270is in the barrel102in the firing configuration304. Moreover, in the cocking configuration302, the hollow probe276does not extend through the magazine274, but a projectile270may be in line with the barrel102. As will be described now with reference toFIG.7, aspects of the present disclosure allow for removal and/or reloading of the magazine274regardless of the current state of the air gun100. FIG.7is a schematic representation of a magazine700in a first magazine configuration702, a second magazine configuration704, and a third magazine configuration706. More specifically, the magazine700, which may be the magazine274, includes a housing708and a carousel710disposed to rotate in and relative to the housing708. In the depiction of the magazine700associated with the first configuration702, a portion of the housing708is removed to illustrate the position of the carousel710in the housing708. Moreover, for clarity in the following description, each representation of the magazine configurations702,704,706includes a separate depiction of the carousel710, to show the position of the carousel710relative to the housing708. In more detail, the housing708defines an opening712, which, with the magazine700fixed to the air gun100, generally aligns with the barrel102. As discussed herein, the hollow probe276extends partially into the barrel102in the fired configuration300and the firing configuration304. In those configurations, when the magazine700is used, the hollow probe276extends through the opening712. The carousel710generally includes a plurality of receptacles714and a shutter716circumferentially-spaced about a rotational axis718. In the illustrated example, eight receptacles are shown, although more or fewer may be included in other arrangements. The shutter716generally comprises a solid wall or stop, as will be described further herein. Although not illustrated inFIG.7, the magazine700further includes an indexer, which may be embodied as a spring-loaded ratchet pawl cooperating with a torsion spring or the like, that causes the carousel710to rotate about the rotational axis718to serially align the receptacles714and the shutter716with the opening712in the housing708. More specifically, the first magazine configuration702may be a loading configuration, e.g., in which the magazine700is first placed into the receptacle272of the air gun100. In the first magazine configuration702, a blank receptacle714aof the receptacles714is aligned with the opening712. In this configuration, the opening712is free of obstructions that could prevent the magazine700from being properly seated in the air gun100in either the fired configuration300or the firing configuration304, e.g., in which the hollow probe276is extended into the barrel102. Stated differently, the blank receptacle714aallows for loading of the magazine700over the extended hollow probe276, thereby obviating the need to cycle the air gun100to a position at which the hollow probe276is retracted. As will be appreciated, should the magazine700be loaded into the air gun100with the hollow probe276in the retracted position, the magazine700will automatically index to the second magazine configuration704.
In the second magazine configuration704, the carousel710has been indexed in the direction of an arrow720(relative to the position in the first magazine configuration702) to present a loaded receptacle714bof the receptacles714in line with the opening712. Specifically, the loaded receptacle714bcontains a projectile722, which may be the projectile270. With the projectile722in the opening712, as the air gun100cycles to the firing configuration304, the projectile722is pushed out of the opening712into the barrel102as detailed herein. After firing, as the air gun100cycles through the cocking configuration302and back to the firing configuration304, the magazine700will again index to present a next one of the loaded receptacles714bin line with the opening712. As the magazine700indexes in this manner, a visual indicator724may be updated to show a remaining number of projectiles in the magazine700. In one example, an opening or window726may be provided in the housing708and a printed indication728of a plurality of printed indications on the carousel710may align with the window726to be visible to a user. As the projectiles722are fired from the air gun100, the magazine700continues to index as just described. Upon firing of the last projectile722, the magazine700indexes to the third magazine configuration706. In this configuration, the shutter716aligns with the opening712. As noted above, the shutter716is a solid wall and prevents the hollow probe276from passing through the opening712. Because the air gun100cannot be advanced to the firing configuration304with the magazine700in the third magazine configuration706, a user cannot continue to fire the air gun100without replacing the magazine700. In the example illustrated, the visual indicator724includes an icon that alerts the user to the empty magazine700. In some embodiments, the air gun100includes a current sensor that senses the current used in the motor, e.g., the motor of the rotary actuator252, that drives the hollow probe276forward. When the shutter716closes the opening712and the hollow probe276drives against the shutter716, the current in the motor changes, and these changes can be sensed by a microprocessor connected to the sensor. When such changes are detected, the microprocessor reverses the current in the motor to withdraw the hollow probe276, e.g., to return the air gun100to the cocking configuration302or some intermediate position between the firing configuration304and the cocking configuration302. Optionally, the microprocessor can also cause an audible, visual, or tactile indicator to emit a signal indicating that the magazine700must be changed. As will be appreciated from the foregoing, the magazine700may be coupled to the air gun100regardless of a state of the air gun100. That is, the magazine700can be replaced with the hollow probe276extended or retracted. For example, circumstances may arise in which the hollow probe276has advanced a projectile into the bore of the air gun100but a user wishes to swap ammunition or to load a more fully loaded magazine onto the air gun100while maintaining a readiness to fire as loaded. In such cases the user must load the magazine700onto (or over) the hollow probe276. Such replacement, however, is not possible with the shutter716positioned in the opening712, nor is it possible if a projectile is positioned in the opening712. Accordingly, the magazine700includes the blank receptacle714athat functions as a passageway for the hollow probe276.
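The carousel710, with its loaded receptacles, blank receptacle, and shutter, can be modeled as a small circular buffer. The sketch below is a schematic restatement with an assumed slot ordering (blank receptacle first, then the loaded receptacles, then the shutter); it is not an implementation taken from this description.

```python
# Schematic model of the magazine 700 carousel: eight receptacles (one blank,
# seven loadable) plus a solid shutter, indexed one position at a time. The
# slot ordering is assumed for illustration; the shutter facing the opening is
# what blocks the hollow probe once the magazine is empty.

BLANK, SHUTTER = "blank", "shutter"

class Carousel:
    def __init__(self, projectiles: int = 7):
        # Blank receptacle first (loading position), then loaded receptacles,
        # then the shutter, arranged about the rotational axis.
        self.slots = [BLANK] + ["projectile"] * projectiles + [SHUTTER]
        self.position = 0  # slot currently aligned with the opening 712

    def index(self) -> str:
        """Advance one position and return what now faces the opening."""
        self.position = (self.position + 1) % len(self.slots)
        return self.slots[self.position]

mag = Carousel()
print("loaded with:", mag.slots[mag.position])      # blank: probe can pass during loading
while (facing := mag.index()) == "projectile":
    print("chambering a projectile")
print("now facing:", facing)                        # shutter: probe blocked; replace magazine
```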
In the example ofFIG.7, then, the magazine700is rated for seven projectiles but includes eight receptacles714. As discussed above, the blank receptacle714ais positioned between the shutter716and the first loaded receptacle714b. As illustrated, however, the blank receptacle714amay be similar in size and configuration to a loaded receptacle714b. In these configurations, the blank receptacle714amay facilitate loading of a separate, additional projectile into the air gun100. Specifically, as noted above, when the hollow probe276is retracted, the magazine700will automatically index to align the first loaded receptacle714bwith the opening712, because the hollow probe276will not prevent this indexing. Instead of allowing such indexing, a user may elect to place a projectile in the blank receptacle714a, thereby facilitating loading of an additional round. In other examples, the blank receptacle714amay be sized or shaped differently than the loaded receptacles714b, e.g., to facilitate the magazine replacement process. For instance, the blank receptacle714acan include magnets or shaped surfaces to help a user to more rapidly and precisely align the magazine700in the receptacle272during loading. It will be appreciated that when loading the magazine700of the embodiment ofFIG.7, the carousel710must be rotated so that the shutter716is moved. Conventionally, this has positioned one of the loaded receptacles714bin line with the opening712. Because the opening712is at the vertical bottom of the magazine700, gravity can cause the projectile722disposed in the opening712to fall out while attempting loading. However, because the magazine700aligns the blank receptacle714awith the opening712, there is no pellet to complicate the loading process. In the embodiment illustrated inFIG.7, the shutter716is integrated into the carousel710. In other embodiments, however, a separate shutter, movable between a shutter blocking position and a shutter open position, may be provided. For instance, such an alternative shutter can interact with the carousel710such that as the carousel710is moved after firing of a final stored projectile, the carousel710drives the shutter from the open position to the blocking position. In one such embodiment the shutter can have a catch that interferes with movement of a driving surface of the carousel710such that rotation of the carousel710drives the shutter from the open position to the blocking position. Other forms of interaction can be used, including but not limited to magnetic interaction. In still other embodiments the magazine700and/or the shutter may be driven by an actuator on the air gun100, with the actuator being used to synchronize movement of the carousel710as well as movement of the shutter. In these alternative arrangements, the shutter may be separate from the carousel710and/or the magazine700, and, when in its blocking position, the shutter likewise obstructs the hollow probe276from advancing into the barrel102. The foregoing has discussed components and functionality associated with the automatic-cocking air gun100. In some aspects of this disclosure, the air gun100may also include features to facilitate ready assembly of the air gun100. Specifically,FIG.8provides an end view800and a partial cross-sectional view802of a portion of the air gun100proximate the stock104(not shown). More specifically,FIG.8illustrates additional aspects of a retention block804, which may be used in place of the retention block278discussed above.
In this example, the housing108of the air gun100includes a first rail806and a second rail808, which form part of a profiled or contoured inner surface of the housing108, defining at least a portion of the chamber202. The retention block804has a profile that is configured to cooperate with the rails806,808. More specifically, the retention block804may be inserted into the chamber202by sliding the retention block804along the rails806,808. The retention block804may be secured in a desired longitudinal position using one or more fasteners810, shown generally as set screws inFIG.8. In this example, two fasteners810(per rail) are illustrated to secure the retention block804in the longitudinal direction, although more or fewer may be used. Any number and/or type of fasteners may be used that facilitate securement of the retention block804with adequate force to prevent longitudinal motion of the retention block804during operation. Moreover, fasteners, like the fasteners282, may be used to secure the retention block804relative to the housing108, e.g., proximate a top of the retention block804. The retention block804includes an opening812, which, in the example, is a threaded opening configured to receive the threaded plug280. As illustrated, the threaded plug280contacts the rod240of the spring210. The threaded plug280can be moved to increase/decrease a loading on the spring210, e.g., by moving the rod240. For example, with the air gun100in the fired position, the threaded plug280may be "tightened" relative to the opening812to increase a pre-loading on the spring210. To facilitate such adjustment, the threaded plug280is illustrated as including a receptacle814configured to receive a tool for facilitating movement of the threaded plug280in the opening812. With this arrangement, the spring210, which, as discussed above, may be a gas spring, can be pre-loaded in the chamber202, obviating the need for expensive and specialized equipment for pre-loading and calibrating the gas spring prior to assembly of the air gun100. As also shown in the example ofFIG.8, the retention block804can include a pair of spaced-apart legs816having contoured outer surfaces for cooperating with the rails806,808. The legs816may provide easier assembly, e.g., by allowing for some lateral movement of the legs816, e.g., relative to each other, to account for manufacturing tolerances, or the like. In some examples, the spaced-apart legs816may include a lateral outward bias, such that the legs816must be moved laterally toward each other for insertion into the chamber202via the rails806,808. In this example, the legs816may provide an outward force on the housing108proximate the rails806,808, to increase a holding force of the retention block804in the chamber202. Moreover, the spaced-apart legs816can define a cavity818therebetween. The cavity818may house components, e.g., cabling, leads, circuit boards, and/or other electronic and/or electro-mechanical components, or the like. FIG.8also illustrates a stop block820secured to the rails806,808. In the example, the stop block820contacts a rear surface of the retention block804proximate the legs816and is secured to the rails806,808via fasteners822, which may be the same as the fasteners810. The stop block820may be optional in some examples. Without limitation, the stop block820may be integrated into the retention block804.
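The pre-load adjustment afforded by the threaded plug280reduces to simple thread arithmetic, sketched below. The thread pitch and the effective spring rate near the pre-load point are assumed values for illustration only; neither is specified in this description, and a gas spring's rate is only approximately linear over a small adjustment range.

```python
# Illustrative pre-load arithmetic for the threaded plug 280: each turn of the
# plug advances it by the thread pitch, displacing the gas spring piston and
# raising the spring's pre-load. Pitch and effective spring rate are assumed
# values, not taken from this description.

THREAD_PITCH_MM = 1.5           # assumed plug advance per full turn
EFFECTIVE_RATE_N_PER_MM = 40.0  # assumed effective rate of the gas spring near pre-load

def preload_increase(turns: float) -> float:
    """Approximate pre-load increase (N) for a given number of plug turns."""
    return turns * THREAD_PITCH_MM * EFFECTIVE_RATE_N_PER_MM

for turns in (0.5, 1.0, 2.0):
    print(f"{turns:3.1f} turn(s) -> +{preload_increase(turns):6.1f} N pre-load")
```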
Although not illustrated inFIG.8, in some examples the legs816can incorporate slots to receive and locate a pin, such as a sear pivot pin, to secure aspects of the sear288, and therefore the trigger searing surface290, relative to the housing108. The rails806,808may also cooperate with the stop block820to support loads acting on the pin by the sear288when the compression piston208is seared. The stop block820may also provide mounting points for the trigger assembly284and for the trigger lock assembly600. The stop block820is secured to the rails806and808and may be in direct contact with the retention block804. In this arrangement, the stop block820can assist the retention block804in anchoring the reaction force of the spring210to the housing108. The air gun100discussed herein provides improved automatic cocking that reduces user interaction. A process for cocking the air gun100was generally discussed above with reference toFIGS.3A-3C.FIGS.9-11illustrate additional example processes in accordance with embodiments of the disclosure. These processes are illustrated as logical flow graphs, each operation of which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes. It should be appreciated that the subject matter presented herein may be implemented as a computer process, a computer-controlled apparatus, a computing system, or an article of manufacture, such as a computer-readable storage medium. In examples, the air gun100can include a control system for implementing the processes900,1000,1100, as well as other functionality, of the air gun100. For instance, the control system can include the sensors412,502,506, the circuit board414, the electronic components416, and/or other components. While the subject matter described with respect to the processes900,1000, and1100is presented in the general context of operations that may be executed on and/or with one or more computing devices, those skilled in the art will recognize that other implementations may be performed in combination with various program/controller modules. Generally, such modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Those skilled in the art will also appreciate that aspects of the subject matter described with respect to the process900, the process1000, and/or the process1100may be practiced on or in conjunction with other computer system configurations beyond those described herein, including multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, handheld computers, mobile telephone devices, tablet computing devices, special-purposed hardware devices, network appliances, and the like.
More specifically,FIG.9is a flow diagram illustrating an example process900for operating an air gun, such as the air gun100with automatic cocking. In some examples, the process900may be performed by a controller, aspects of which may be retained in the stock104. The example process900includes, at an operation902, confirming a magazine is loaded. For example, aspects of this disclosure may require that the magazine274be loaded in the magazine receptacle272for proper operation of the air gun100, e.g., to confirm that the breech end110of the barrel102is not exposed. As shown inFIG.5, the housing108can include a magazine sensor502configured to sense presence of the magazine274. In some examples, the magazine274may have an integrated magnet504sensed by the magazine sensor502. At an operation904, the process900includes, with the air gun in a fired position, controlling an actuator to move a compression tube toward a cocking position. For example,FIG.3Ashows the fired configuration300in which the compression tube206is in the firing position, the compression piston208is in the fired position, and the spring210is expanded. The operation904includes using the actuator to begin moving the compression tube206(and the compression piston208) to compress the spring210. In the example of the air gun100, the actuator is the rotary actuator252configured to drive the drive screw248, although in other examples, other actuators may be used. For example, aspects of the actuator assembly244can be replaced with one or more different servo, pneumatic, hydraulic, or other actuators, including but not limited to linear servo actuators. At an operation906, the process900includes causing the piston to sear. For example, and as illustrated inFIG.3B, with continued movement of the compression tube206to compress the spring210, the piston searing surface236on the compression piston208will engage the trigger searing surface290. This engagement places the compression piston208in the cocked position shown inFIG.3B. At an operation908, the process900includes determining whether the piston travelled to the cocked position. For example, and as illustrated inFIGS.4A and4B, the air gun100includes the sensor412for determining that the compression piston208has reached the cocked position. As described herein, the sensor412may detect the presence of a probe402which moves into the detection field of the sensor412when contacted by the compression piston208during cocking. If, at the operation908, it is determined that the piston has travelled to the cocked position, the process900proceeds to an operation910including reversing the actuator. For example, because the compression tube206has reached a position at which the compression piston208is seared, the actuator will reverse direction to return the compression tube206to the firing position. At an operation912, the process900includes determining whether the piston remained in the cocked position. For example, the probe402detected by the sensor412is biased, e.g., via a spring, such that when the compression piston208is no longer in the cocked position, the probe402will return to a normal position spaced from the field of view of the sensor412. Thus, for example, if the gun does not sear properly, and the compression piston208returns with the compression tube206during the movement of the actuator in the operation910, the sensor412will detect an absence of the probe402.
If it is determined at the operation912that the piston has remained in the cocked position, at an operation914the process900includes determining whether the compression tube has returned to the firing position. In the example ofFIG.5, the carriage246, e.g., the sidewall258of the carriage246, is sensed by the carriage sensor506when in the firing position. In that example, the sidewall258includes the magnet508that is sensed by the carriage sensor506, although other examples are contemplated. The operation914can also be based at least in part on a time lapse. For instance, the process900may require that the compression tube206return to the firing position in a predetermined amount of time. If it is determined at the operation914that the compression tube has returned to the firing position, at an operation916the process900can include stopping the actuator. As discussed above in connection withFIG.5, the presence of the compression tube in the firing position, as detected by the carriage sensor506, can signal that the air gun is in the firing position, causing the actuator to stop. As also discussed, an increased resistance to movement of the actuator, e.g., resulting from contact of the spring262by the screw drive nut250, can be detected and cause the process900to stop the actuator. At an operation918, the process900can also include signaling ready to fire. For instance, the operation918can include controlling a user interface to indicate to a user that the air gun100is ready for firing. In some examples, an LED or other light source visible to the user may change from red to green or provide some other visual cue to indicate that the air gun100is ready for firing. Also, and as detailed above in connection withFIGS.6A and6B, the air gun100can include a trigger lock assembly that prevents pulling the trigger106until the compression tube is returned to the firing position. In other examples, the operation918can include removing an electronic trigger lock, providing an audible or tactile output corresponding to the ready to fire state, or the like. If it is determined at the operation908that the piston has not travelled to the cocked position, at the operation912that the piston has not remained in the cocked position, and/or at the operation914that the compression tube has not returned to the firing position, the process900can proceed to an operation920, at which an error is signaled. The operation920can include indicating to the user that the air gun100is malfunctioning, e.g., jammed or the like. Without limitation, the operation920can include providing a visual, audible, tactile, and/or other warning to the user that the air gun100is not ready for firing. In more detail,FIG.10shows a process1000for controlling an air gun in response to an error, such as the error determined at the operation920. Although the process1000may be in response to the error at the operation920, the process1000does not require the processing described above in connection withFIG.9, and the process900need not result in implementation of the process1000. In more detail, at an operation1002, the process1000includes receiving a jammed signal. In some examples, the jammed signal may be the error signal resulting from the operation920. In other examples, the jammed signal may result from an increased resistance to movement of the actuator, e.g., as determined by an increased current load. In still further examples, the jammed signal may result from a user input.
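Aggregating those three sources, the last of which is elaborated just below, a jammed signal might be computed as in the following sketch. The 4.0 A figure is invented for illustration, as the patent says only that an increased current load can indicate increased resistance.

def jam_signaled(error_from_process_900: bool,
                 actuator_current_amps: float,
                 user_requested_service: bool,
                 current_limit_amps: float = 4.0) -> bool:
    # Any one source suffices to enter the unjam handling of process 1000.
    # The current limit is an assumed threshold, not a value from the patent.
    overloaded = actuator_current_amps > current_limit_amps
    return error_from_process_900 or overloaded or user_requested_service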
For instance, the air gun100can include a user interface, e.g., a button, switch, or the like, that the user can interact with to signal that the user would like to perform maintenance on the air gun100. At an operation1004, the process1000includes controlling an actuator to move a compression tube a predetermined distance toward a cocking position. For example, the operation1004can include moving the compression tube206via operation of the actuator252to a position in which the hollow probe276is spaced from the magazine274. In this “unjam” position, the magazine274can be removed from the magazine receptacle272. At an operation1006, the process1000includes outputting an indication of unjam mode. For instance and without limitation, the operation1006can include providing a visual, audible, tactile, and/or other indication to the user that the air gun100is in the “unjam” position. For instance, the indication may indicate to a user that the user can perform maintenance, e.g., to clear a jam, replace a magazine, or the like. At an operation1008, the process1000includes receiving an unjammed signal. For example, once an obstruction is cleared, a magazine is replaced, or the like, the user may interact with a user interface to so indicate. At an operation1010, the process1000includes optionally determining whether a piston is in a cocked position. For instance, as detailed above, the probe402may be sensed by the sensor412when the compression piston208is in the cocked position. A state of the sensor412may be determined at the operation1010. If, at the operation1010, it is determined that the piston is in the cocked position, an operation1012includes controlling the actuator to return to the firing position. For example, the operation1012can include moving the compression tube206to a position at which the carriage sensor506confirms presence of the magnet508. Alternatively, if it is determined at the operation1010that the piston is not in the cocked position, an operation1014includes controlling the actuator to cock the air gun. For example, regardless of a state of the air gun100prior to entering the “unjam mode,” upon completion of unjamming the air gun100, the air gun100may be placed in a cocked or ready to fire position. As noted, the operations1010,1012, and1014are optional. In other examples, the air gun100may be returned to a different configuration upon receiving the unjammed signal. At an operation1016, the process1000includes providing the user with an indication of ready to fire. For example, and without limitation, the operation1016can include configuring a user interface to indicate, e.g., visually, audibly, tactilely, or the like, that the air gun100is no longer jammed and/or ready to fire. FIG.11provides an improved process1100of manufacturing an air gun, like the air gun100. As with the processes900,1000, the process1100is illustrated as a series of steps in a flowchart. The order of the steps is for example only and the process1100may be implemented with more or fewer steps. At an operation1102, the process1100includes providing a housing with an opening to a chamber. For example, as shown inFIG.8, the air gun100can include the housing108including the chamber202, at least partially defined by the chamber wall200. The chamber wall200can be profiled, e.g., to include the rails806,808. At an operation1104, the process1100includes inserting a cylinder, a piston, and a gas spring into the housing via the opening.
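Before elaborating the operation1104, the unjam flow of the process1000described above can be restated as a short sketch. The method names and the notion of a stored unjam setpoint are assumptions layered onto the patent's description, not details from it.

def unjam(actuator, sensors, ui) -> None:
    # Hypothetical sketch of operations 1004-1016 of the process 1000.
    actuator.move_to_unjam_position()          # operation 1004: probe 276 clears magazine 274
    ui.indicate("unjam mode")                  # operation 1006
    ui.wait_for_unjammed_signal()              # operation 1008: user confirms the jam is cleared
    if sensors.piston_cocked():                # optional operation 1010
        actuator.return_to_firing_position()   # operation 1012
    else:
        actuator.cock()                        # operation 1014
    ui.indicate("ready to fire")               # operation 1016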
As detailed herein, the compression tube206is configured for insertion into the chamber202and for movement relative to the chamber202. Similarly, the compression piston208is at least partially received in the open end of the compression tube206, and the spring210is positioned to bias the compression piston208toward the compression tube206. In the example ofFIGS.2and8, the compression tube206, the compression piston208, and the spring210are inserted, in order, and generally along the longitudinal axis211into the chamber202. At an operation1106, the process1100includes inserting a retention block into the housing via the opening. In examples, with the compression tube206, the compression piston208, and the spring210inserted in the chamber in the operation1104, the retention block is inserted into the chamber202. In the example ofFIG.8, the retention block820is inserted into the chamber202in a longitudinal direction, along the rails806,808. The operation1106, or another operation, may in some instances include sliding, as an assembly, the retention block820, the trigger assembly284and the trigger lock assembly600into the cavity818and onto rails806and808of the housing108. At an operation1108, the process1100includes securing the retention block in the chamber. In the example ofFIG.8, discussed above, the retention block820is retained by one or more of the fasteners810, which may be threaded fasteners that pass through the retention block820and contact the rails806,808. Other fasteners, including mechanical fasteners, adhesives, or the like, also or alternatively may be used. At an operation1110, the process1100includes inserting an adjustment feature into an orifice in the retention block. In the example ofFIG.8, the retention block820includes the opening812into which the plug280is threaded. The plug280contacts the spring210. At an operation1112, the process1100includes adjusting the spring loading using the adjustment feature. For example, by selectively moving the plug280along the longitudinal axis211, e.g., by turning the threaded plug280, the spring210can be selectively compressed or expanded, with the spring contained in the chamber202. In examples, the process1100obviates the need for expensive and elaborate fixtures and tools for setting the spring tension prior to inserting the spring into the air gun. Moreover, the arrangements described herein provide for ready disassembly, e.g., for maintenance and/or repair of components of the air gun100. The subject matter described above is provided by way of illustration only and should not be construed as limiting. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure. Various modifications and changes may be made to the subject matter described herein without following the examples and applications illustrated and described, and without departing from the spirit and scope of the present invention, which is set forth in the following claims. | 82,637 |
11859940 | DETAILED DESCRIPTION A hop-up device or system for an airsoft gun has been developed that allows a user to adjust the tension applied to a BB during the chambering of the BB in the airsoft gun. The hop-up device or system has an adjustable tension adjustment sleeve configured to constrict a desired amount around the BB. The tension adjustment sleeve provides tension on the BB when the air nozzle is seating, or chambering, the BB in the bucking lips of a rubber bucking prior to firing. By adjusting the tension adjustment sleeve, the user can increase or decrease the bucking lip tension on the BB to reduce or essentially eliminate the adverse effects from manufacturing tolerances, wear, and the like. These adverse effects can cause double feeding of the BBs, jamming, inconsistent firing, slow firing, midcap syndrome (the feed spring in the magazine is too strong and forces BBs against the air nozzle with excessive force), and the like. FIG.1represents a cross-section of a hop-up device or system100that allows a user to adjust the tension applied to a BB during the chambering of the BB in the airsoft gun.FIG.2represents a proximal view of a barrel104with the hop-up device or system100showing the threaded dial118.FIG.3represents a distal view of a barrel104with the hop-up device or system100showing the threaded dial118.FIG.4represents an axial view of the hop-up device or system100showing the threaded dial118.FIG.5represents a disassembled view of an airsoft gun system200with the hop-up device or system100that allows the assembly of one airsoft gun having one of three optional configurations. InFIG.1, the hop-up device or system100includes a rubber bucking102mounted on a proximal end of a barrel104. An adjustable hop arm106is disposed longitudinally on a top of the hop-up device or system100to apply a desired force downward on a hop nub108that extends into the barrel104to apply backspin to a BB110when the BB110is fired through the barrel. While the hop arm106is illustrated, the hop-up device or system100may use other mechanisms for adjusting the engagement of the hop nub108. While the hop nub108is illustrated with a particular configuration, other configurations of a hop nub may be used. As with conventional hop-ups, the rubber bucking102has a tapered proximal end or a bucking lip112in which the BB110will be seated by the air nozzle (seeFIG.5) of the airsoft gun (seeFIG.5) in order to chamber the BB110that is loaded from the magazine (seeFIG.5). The rubber bucking102forms a hollow core having a diameter for the BB110to pass through to the barrel104. The hop-up device or system100is provided with a tension adjustment sleeve114axially situated adjacent to the tapered end or bucking lip112of the rubber bucking102. A tension adjustment screw116is operatively connected to the opposite end of the tension adjustment sleeve114. The tension adjustment screw116has multiple axial locations to change the pressure applied to the tension adjustment sleeve114. The tension adjustment screw116applies a different pressure to the tension adjustment sleeve114at each axial location. There are at least two axial locations of the tension adjustment screw116adjacent to the tension adjustment sleeve114; however, there may be numerous if not essentially infinite axial locations, but preferably there are from 10 through 15 axial locations, and more preferably 12 axial locations.
The tension adjustment screw116is thus configured to move the tension adjustment sleeve114along the axial locations toward (tighten) and away from (untighten) the tapered end or bucking lip112. When tightened, the tension adjustment screw116increases the pressure applied to the tension adjustment sleeve114. When untightened, the tension adjustment screw116decreases the pressure applied to the tension adjustment sleeve114. When the pressure against the tension adjustment sleeve114increases or decreases, the tension adjustment sleeve114compresses or decompresses accordingly against the bucking lip112and changes a tension force, the bucking lip tension, on the BB110being chambered in the rubber bucking102. The tension adjustment sleeve114and tension adjustment screw116are hollow so that the air nozzle can pass through to position the BB in the tapered end or bucking lips112of the rubber bucking102. The tension adjustment sleeve114is formed of a resilient material that is housed to allow axial movement of the sleeve. By tightening the tension adjustment screw116to cause the tension adjustment sleeve114to compress against the bucking lips112, the hollow diameter through the tension adjustment sleeve114is reduced, causing more force to be needed to seat the BB110in the rubber bucking102. Conversely, by loosening the tension adjustment screw116and therefore relaxing the tension adjustment sleeve114from against the rubber bucking102, the BB110is able to be moved through and seated in the rubber bucking102more easily. As seen inFIGS.2-4, the tension adjustment screw116inFIG.1may be arranged as a hollow body that is threaded on an outer perimeter, and configured to be rotated, and thus moved axially, by a threaded dial118. The dial118may have numbers or other indicia provided on a surface for a user to conveniently dial a desired level of tension on the tension adjustment sleeve114inFIG.1. InFIG.1, the tension adjustment screw116may be arranged immediately behind the tension adjustment sleeve114, and may be arranged such that the tension adjustment sleeve114is not twisted when compressed or relaxed. The interface between the tension adjustment screw116and the tension adjustment sleeve114may be treated to reduce or eliminate such a twisting friction. In addition, an intermediate material may be provided between the tension adjustment screw116and the tension adjustment sleeve114. The tension adjustment sleeve114may be configured to envelop the tapered end or bucking lip112of the rubber bucking102. The tension adjustment sleeve114forms an opening at the bottom to allow BBs to pass from the magazine to be chambered. By moving the tension adjustment sleeve114axially, with a front end of the tension adjustment sleeve114fixed at the tapered end or bucking lip112of the rubber bucking102, the tension adjustment sleeve114can be adjusted to provide a desired amount of tension on the BB110being chambered, to tune the chambering process to the rubber bucking102in the hop-up device or system100. By compressing the tension adjustment sleeve114, the tension adjustment sleeve114provides more pressure on the BB110, and by relaxing the tension adjustment sleeve114, the tension adjustment sleeve114provides less pressure on the BB110.
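Because the patent specifies the preferred 12 axial locations but neither a thread pitch nor a detent spacing for the dial118, the following worked sketch in Python assumes a pitch purely for illustration, mapping dial positions to the axial travel of the tension adjustment sleeve114.

# Assumed values for illustration only; neither figure comes from the patent.
THREAD_PITCH_MM = 0.75     # hypothetical axial travel per full turn of dial 118
POSITIONS_PER_TURN = 12    # one detent per preferred axial location

travel_per_position_mm = THREAD_PITCH_MM / POSITIONS_PER_TURN
for position in range(POSITIONS_PER_TURN + 1):
    advance = position * travel_per_position_mm
    print(f"dial position {position:2d}: sleeve advanced {advance:.3f} mm")

Under these assumed numbers, each detent advances the sleeve 0.0625 mm, illustrating how a threaded dial converts coarse user input into fine changes in bucking lip tension.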
The hop-up device or system for an airsoft gun may include a rubber bucking102configured for a BB110to travel through into a barrel104, a mechanism for adjusting the engagement of a hop nub108into a top portion of the rubber bucking102to place a backspin on the BB110traveling through into a barrel104, and a tension adjustment sleeve114provided adjacent a proximal end or bucking lip112of the rubber bucking102and configured to apply a desired tension on a BB110being positioned to enter the rubber bucking102. The tension adjustment sleeve114may share a longitudinal axis with the rubber bucking102. The tension adjustment sleeve114may rest against an inner chamber of the hop-up device or system100at a lowest tension, and gradually decreases in diameter as tension is increased. The tension adjustment sleeve114may be configured to compress axially to decrease an inner diameter in order to increase the tension. The hop-up device or system100may further include a tension adjustment screw116provided at a back end of the tension adjustment sleeve114, and configured to axially compress the tension adjustment sleeve114when turned in a first direction, and to relax the tension adjustment sleeve114when turned in a second direction. The rubber bucking102may have a tapered proximal end or bucking lip112, and the tension adjustment sleeve114may be configured to fit around the tapered proximal end or bucking lip112to prevent loss of compressed air applied to a BB110entering the rubber bucking102. The tension adjustment sleeve114may form an opening at a lower portion configured to allow a BB110to pass from a loading magazine into the tension adjustment sleeve114. FIG.5represents a disassembled view of an airsoft gun system200with the hop-up device or system100that allows the assembly of airsoft guns having optional configurations. The hop-up device or system100is connected to the end of the barrel104as previously discussed, and these are disposed in a lower receiver250with one of the optional cylinders or engines252connected to the hop-up device or system100. A magazine254is connected to the lower receiver250and feeds BBs to the hop-up device or system100. The cylinder or engine252is connected to one of the optional stock buffer tubes260that are disposed inside a stock262. An optional stock264may be used. An upper receiver with rail256is connected to the lower receiver250with pins (not shown) with the barrel104extending through the upper receiver with rail256. Optional barrels266may be used in place of barrel104. Optional rails258may be used in place of the rail in the upper receiver with rail256. The airsoft gun system200provides an airsoft gun with the hop-up device or system100that allows a user to adjust the tension applied to a BB during the chambering of the BB in the airsoft gun as previously discussed. The hop-up device or system100may be disposed in airsoft guns having other configurations and other or additional components. Unless otherwise indicated, all numbers expressing quantities and properties such as amounts, distances, and the like used in the specification and claims are to be understood as indicating both the exact values as shown and as being modified by the term “about”. Thus, unless indicated to the contrary, the numerical values of the specification and claims are approximations that may vary depending on the desired properties sought to be obtained and the margin of error in determining the values.
At the very least, and not as an attempt to limit the application of the doctrine of equivalents to the scope of the claims, each numerical parameter should at least be construed in light of the margin of error, the number of reported significant digits, and by applying ordinary rounding techniques. The terms “a”, “an”, and “the” used in the specification and claims are to be construed to cover both the singular and the plural, unless otherwise indicated or contradicted by context. No language in the specification should be construed as indicating any non-claimed element to be essential to the practice of the invention. Note that spatially relative terms, such as “up,” “down,” “right,” “left,” “beneath,” “below,” “lower,” “above,” “upper” and the like, may be used for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. Spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over or rotated, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the exemplary term “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors interpreted accordingly. This detailed description provides an understanding of the structures and fabrication techniques described herein for the hop-up device or system for use in airsoft guns. The simplified diagrams and drawings do not illustrate all the various connections and assemblies of the various components; however, those skilled in the art will understand how to implement such connections and assemblies based on the illustrated components, figures, and provided descriptions. The description of well-known functions and constructions may be simplified and/or omitted for increased clarity and conciseness. While various aspects of the hop-up device or system are described, it will be apparent to those of ordinary skill in the art that other embodiments and implementations are possible within the scope of the invention. Accordingly, the hop-up device or system is not to be restricted except in light of the attached claims and their equivalents. | 12,581 |
11859941 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure can be, but are not necessarily, references to the same embodiment; such references mean at least one of the embodiments. If a component is not shown in a drawing, then this provides support for a negative limitation in the claims stating that that component is “not” present. However, the above statement is not limiting and, in another embodiment, the missing component can be included in a claimed embodiment. Reference in this specification to “one embodiment,” “an embodiment,” “a preferred embodiment” or any other phrase mentioning the word “embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure and also means that any particular feature, structure, or characteristic described in connection with one embodiment can be included in any embodiment or can be omitted or excluded from any embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others and may be omitted from any embodiment. Furthermore, any particular feature, structure, or characteristic described herein may be optional. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments. Where appropriate, any of the features discussed herein in relation to one aspect or embodiment of the invention may be applied to another aspect or embodiment of the invention. Similarly, where appropriate, any of the features discussed herein in relation to one aspect or embodiment of the invention may be optional with respect to and/or omitted from that aspect or embodiment of the invention or any other aspect or embodiment of the invention discussed or disclosed herein. The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Certain terms that are used to describe the disclosure are discussed below, or elsewhere in the specification, to provide additional guidance to the practitioner regarding the description of the disclosure. For convenience, certain terms may be highlighted, for example using italics and/or quotation marks. The use of highlighting has no influence on the scope and meaning of a term; the scope and meaning of a term is the same, in the same context, whether or not it is highlighted. It will be appreciated that the same thing can be said in more than one way. Consequently, alternative language and synonyms may be used for any one or more of the terms discussed herein. No special significance is to be placed upon whether or not a term is elaborated or discussed herein. Synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms.
The use of examples anywhere in this specification, including examples of any terms discussed herein, is illustrative only, and is not intended to further limit the scope and meaning of the disclosure or of any exemplified term. Likewise, the disclosure is not limited to various embodiments given in this specification. Without intent to further limit the scope of the disclosure, examples of instruments, apparatus, methods, and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control. It will be appreciated that terms such as “front,” “back,” “top,” “bottom,” “side,” “short,” “long,” “up,” “down,” “aft,” “forward,” “inboard,” “outboard” and “below” used herein are merely for ease of description and refer to the orientation of the components as shown in the figures. It should be understood that any orientation of the components described herein is within the scope of the present invention. FIGS.1A-10are generally directed to powered barrel or nozzle assemblies and a variety of interchangeable or replaceable nozzle components or accessories that are configured to be used with blasters that fire projectiles. In other examples, the invention may be used in conjunction with other types of blasters. Exemplary blasters are shown inFIGS.1A-2. FIG.1Ais a perspective view of a blaster assembly or blaster10. The blaster10is a Surge 1.5 blaster made and sold by Gel Blaster. In other examples, the blaster10may be other Surge blasters, such as older or newer Surge models. The blaster10fundamentally has physical features of a conventional handgun or a pistol. The blaster10may be a plastic, metal, wood, and/or glass construction. The blaster10may fire spherical projectiles or Gellets that are made and sold by Gel Blaster. The Gellets are water-based soft beads that burst on impact and eventually evaporate. The blaster10may include a handle12, a nozzle or tip member14, a barrel15, a trigger17, and a hopper19. The handle12may be used to hold and aim the blaster10. The handle12may be hollow. The handle12may house one or more internal components of the blaster10, such as a printed circuit board, a battery, and/or a motor. The handle12may be attached to the barrel15. The barrel15may be hollow. In some examples, the barrel15may house one or more internal components of the blaster10, such as a printed circuit board, a battery, and/or a motor, in lieu of the handle12. In some examples, the barrel15may define or house a bore23. In some examples, the bore23may extend out of the barrel15. Gellets may travel through the bore23once the blaster10is fired and exit the bore23to be projected towards a target. The trigger17may be pulled to fire the blaster10. In some examples, the blaster10may have a safety mechanism such as a decocker that prevents the firing of the blaster10unless disengaged or deactivated. The hopper19may store a supply of Gellets ready to be fired. The hopper19may feed Gellets to a chamber or the bore23in preparation for firing of the blaster10. In some examples, a single Gellet may be fired with a single pull of the trigger17.
In some examples, multiple Gellets may be fired consecutively with a single pull of the trigger17. In some embodiments, the blaster10may have an input device such as a button, a switch, or a wireless electronic device with which a user may set the number of Gellets to be fired with a single pull of the trigger17. Other user-configurable settings are also contemplated. For example, a user may adjust the fire rate of the blaster10, which may include choosing among single fire, semi-automatic, and automatic firing modes. The nozzle or tip member14shown inFIG.1Amay be a default or base type. The tip member14may be removable and, once removed, expose a female part or a receptacle that is shaped and sized to receive other tip members, nozzle assemblies, and/or nozzle accessories. FIG.1Cis a perspective view of the blaster10with a nozzle assembly35attachment. The nozzle assembly35may be attached to the blaster10by removing the tip member14(seeFIG.1A) to expose female part(s) or receptacle(s) and mating or coupling male part(s) of the nozzle assembly35with the female part(s) or receptacle(s). In other examples, the nozzle assembly35may be attached over the tip member14without removing the tip member14from the blaster10via conventional fastening methods such as snap fitment, screws, clips, and/or the like. The nozzle assembly35may be attached to the blaster10from or near a distal end37of the barrel15. The nozzle assembly35may include a projectile opening26. The projectile opening26may have a circular shape. The projectile opening26may coincide with the bore23(seeFIG.1A) when the nozzle assembly35is attached to the blaster10.
The positioning of the projectile opening26relative to the bore23may be such that the path of a projectile being fired is not structurally interfered with by the projectile opening26. The nozzle assembly35may include electrical components. The electrical components may include a laser diode, an LED, a light bulb, or any other light source by example. The light source(s) may project light having different shapes, visual patterns, dynamic patterns, frequencies, wavelengths, colors, brightness, etc. For example, the light source(s) may project refracted patterns, reticles, aiming shapes, and the like.FIG.1Cshows the nozzle assembly35having multiple laser diodes or emitters30and a flashlight39. A user may generally provide lighting in the aimed direction to improve visibility and line of sight via the flashlight39and may improve shot accuracy by visualizing aim via the laser projections emitted from the laser emitters30. The laser emitters30may be disposed around the projectile opening26. The flashlight39may be disposed under the projectile opening26. The positioning and quantity of the laser emitters30and the flashlight39shown inFIG.1Care only by example. Other examples as well as the positional configurations, powering, and controls of the electrical components are discussed in later paragraphs. FIG.2is a perspective view of a blaster assembly or blaster42with a nozzle assembly35attachment. The blaster42fundamentally has physical features of a conventional rifle. The blaster42may be a plastic, metal, wood, and/or glass construction. The blaster42may fire spherical projectiles or Gellets that are made and sold by Gel Blaster. The blaster42may include a handle44, a butt45, a barrel46, a trigger48, and a hopper50. The handle44may be used to hold and aim the blaster42. The handle44may be hollow. The handle44may house one or more internal components of the blaster42, such as a printed circuit board, a battery, and/or a motor. The butt45may be used to rest the blaster42against a body of a user when aiming and firing the blaster42. The handle44may be attached to the barrel46. The barrel46may be hollow. In some examples, the barrel46may house one or more internal components of the blaster42, such as a printed circuit board, a battery, and/or a motor, in lieu of the handle44. In some examples, the barrel46may define or house a bore similar to the bores23,33(seeFIGS.1A-1B). In some examples, the bore may extend out of the barrel46. Gellets may travel through the bore once the blaster42is fired and exit the bore to be projected towards a target. The trigger48may be pulled to fire the blaster42. In some examples, the blaster42may have a safety mechanism such as a decocker that prevents the firing of the blaster42unless disengaged or deactivated. The hopper50may store a supply of Gellets ready to be fired. The hopper50may feed Gellets to a chamber or the bore in preparation for firing of the blaster42. In some examples, a single Gellet may be fired with a single pull of the trigger48. In some examples, multiple Gellets may be fired consecutively with a single pull of the trigger48. In some examples, the blaster42may have an input device such as a button, a switch, or a wireless electronic device with which a user may set the number of Gellets to be fired with a single pull of the trigger48. Other user-configurable settings are also contemplated. For example, a user may adjust the fire rate of the blaster42, which may include choosing among single fire, semi-automatic, and automatic firing modes.
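One way such user-configurable settings could be represented in a blaster's firmware is sketched below in Python. The type names and the per-mode semantics are assumptions for illustration, since the patent states only that the settings exist.

from dataclasses import dataclass
from enum import Enum, auto

class FireMode(Enum):
    SINGLE = auto()
    SEMI_AUTO = auto()
    AUTO = auto()

@dataclass
class FireConfig:
    mode: FireMode = FireMode.SINGLE
    gellets_per_pull: int = 1  # user-set count fired per trigger pull

def gellets_for_pull(config: FireConfig) -> int:
    # Hypothetical semantics: single fire always launches one Gellet;
    # the other modes honor the configured per-pull count.
    return 1 if config.mode is FireMode.SINGLE else config.gellets_per_pull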
The blaster42may have a nozzle or tip member similar to the tip member14shown inFIG.1A. The tip member may be removable and, once removed, expose a female part or a receptacle that is shaped and sized to receive other tip members, nozzle assemblies, and/or nozzle accessories. InFIG.2, the blaster42is shown featuring the nozzle assembly35, which includes multiple laser emitters30and a flashlight39. FIG.3is an isolated front perspective view of a replaceable nozzle20andFIG.4is an isolated rear perspective view of the replaceable nozzle20. The nozzle20may be used in the laser aiming nozzle assembly16(seeFIG.5), the light nozzle assembly18(seeFIG.6), and the nozzle assembly35(seeFIG.1C), which is a hybrid. The nozzle20may be compatible with blaster10(seeFIG.1A), blaster25(seeFIG.1B), and blaster42(seeFIG.2). The nozzle20includes a main body portion21. The body portion21includes mechanical connection members: an upper connection member22aand a lower connection member22b, collectively referred to as connection members22, extending therefrom. The connection members22may have arm, claw, or hook-like structures. The connection members22may fasten to a distal end of a barrel of a blaster. The distal end of the barrel may have a female part or a receptacle, such as a recess, cavity, hole, groove, and the like, that is shaped and sized to receive one of the connection members22. The mating may be a snap fitment by example. The nozzle20may include one or more connection flanges28. The connection flanges28may fasten to a distal end of a barrel of a blaster, thereby providing added structural support and reinforcement to the connection members22for the mating between the nozzle20and a blaster. The distal end of the barrel may have a female part or a receptacle, such as a recess, cavity, groove, channel, and the like, that is shaped and sized to receive one of the connection flanges28. The mating may be a snap fitment by example. The nozzle20may include a projectile opening26. The projectile opening26may be aligned with a bore of a blaster such that a projectile being fired is not interfered with by the nozzle20along its path. The nozzle20may further include one or more tunnels24. Three tunnels24are shown inFIG.3by example. In other examples, there may be more or fewer tunnels24. The three tunnels24may be disposed around the projectile opening26to form a triangle as shown inFIG.3. Each of the tunnels24may receive an electronic component such as a laser emitter30(seeFIG.5) and/or a light emitter32(seeFIG.6). The nozzle20may have a button36. In other examples, the button36may be a switch, dial, knob, and/or the like. The button36may control power transmitted to the electrical components of the nozzle assemblies. For example, the button36may be pressed to turn a laser emitter30on and off. In some examples, the button36may perform different functions based on variation in input, such as sequential pressing or duration of pressing. For example, pressing the button36once may turn on one electrical component, pressing the button36longer than a predetermined duration may turn on all electrical components, and pressing the button36a predetermined number of times sequentially may turn on some electrical components. FIG.5is a magnified perspective view of a nozzle assembly or a laser aiming nozzle assembly16attached to the blaster10. The nozzle20(seeFIGS.4-5) is incorporated into the laser aiming nozzle assembly16.
As such, the laser aiming nozzle assembly16includes a main body portion21, an upper connection member22aand a lower connection member22bextending from the main body portion21, and connection flanges28. Each of the tunnels24of the nozzle20houses a laser emitter30. The projectile opening26may be distally offset from the bore23such that the bore23does not extend out of the nozzle20. This may allow for the laser emitters30to have a clear path for projection that is not interrupted by the bore23. FIG.6is a magnified perspective view of a nozzle assembly or a light nozzle assembly18attached to the blaster10. The nozzle20(seeFIGS.4-5) is incorporated into the light nozzle assembly18. As such, the light nozzle assembly18includes a main body portion21, an upper connection member22aand a lower connection member22bextending from the main body portion21, and connection flanges28. Each of the tunnels24of the nozzle20houses a light emitter32, which may be an LED. One or more lenses34may be disposed distally from each light emitter32. The lenses34may provide protection for the light emitters32as well as focus the light emitted. The projectile opening26may be distally offset from the bore23such that the bore23does not extend out of the nozzle20. This may allow for the light emitters32to have a clear path for projection that is not interrupted by the bore23. FIG.7is a schematic of the laser aiming nozzle assembly16and exemplary projectile trajectories of a blaster equipped with laser aiming nozzle assembly16. All text on the drawings is incorporated by reference herein. The blaster10is shown inFIG.7as an example. The schematic ofFIG.7illustrates the purpose of the positional arrangement of the laser emitters30. The laser emitters30may be positioned around the projectile opening26such that a triangle is formed among the laser emitters30. The laser emitters30refer to a laser emitter30aof an upper tunnel24a, a laser emitter30bof a lower right tunnel24b, and a laser emitter30cof a lower left tunnel24c. The laser emitter30ais set at 0° such that the laser emitter30ais vertically aligned with the projectile opening26. This positioning allows the laser emitter30ato stay true to the projectile opening26. The laser emitters30b,30care positioned to show where a fired projectile may fall based on the fall or drop rate and the left-right error, drift, or lateral error rate. In some examples, the laser emitters30b,30cmay be angled up or down in place at a predetermined angle. The predetermined angle may also be based on the drop rate and the lateral error rate. The drop rate may refer to the tendency of the projectile to fall relative to the originally aimed location. The lateral error rate may refer to the tendency of the projectile to land to the left or the right of the originally aimed location. The drop rate and the lateral error rate may be calculated over distance. For example, as shown inFIG.7, firing at a target 5 m (meter in metric units) away may result in a change of 25 mm, firing at a target 10 m away may result in a change of 50 mm, and firing at a target 20 m away may result in a change of 10 cm. FIG.8is an exploded perspective partial view of the blaster10. The blaster10may include one or more electrical connection members or terminals11a. All nozzle assemblies discussed herein may further include one or more electrical connection members or terminals11b. The tip member14is shown inFIG.8as an example for simplification. The terminals11a,11bmay be collectively referred to as terminals11.
The terminals11amay be positioned anywhere within a front nozzle recess38. In some examples, the terminals11amay be mounted on posts40. In other examples, a flange, a shelf, or a wall may also be used for mounting the terminals11a. The terminals11bmay be positioned anywhere such that the terminals11make contact with each other once the nozzle assembly is attached to the blaster10. In an example, the terminals11bmay be integrated with the connection members22and/or the connection flanges28. In such examples, once the nozzle assembly is mechanically coupled to the blaster10, the nozzle assembly may also be electrically coupled to the blaster10. FIG.9is a perspective partial view of the blaster25. The blaster25may include one or more electrical connection members or terminals11. The terminals11may be positioned anywhere within a front nozzle recess38. In some examples, the terminals11may be mounted on posts40. In other examples, a flange, a shelf, or a wall may also be used for mounting the terminals11. It will be appreciated that the terminals or electrical connections that mate with or connect to the terminals11in the front nozzle recess38may be located proximally on any nozzle assembly discussed herein. The terminals on the nozzle assemblies may be positioned anywhere such that the terminals make contact with each other once the nozzle assembly is attached to the blaster25. In some examples, once the nozzle assembly is mechanically coupled to the blaster25, the nozzle assembly may also be electrically coupled to the blaster25. FIG.10is a schematic of electrical components of a blaster10and a nozzle assembly35. The blaster10and the nozzle assembly35are shown inFIG.10by example and not to exclude other blasters and nozzle assemblies discussed herein, which may have the same or similar specifications. The blaster10may include a PCB52, a battery54, and a terminal11a. The PCB52may connect electronic components of the blaster10to each other. In some examples, the PCB52may be replaced by another conventional electronic medium. The electronic components may be attached to the PCB52via SMT or another method known in the art. The PCB52may include a processor to execute machine-readable instructions and a controller to actuate other electronic components of the blaster10. The battery54may provide power to the blaster10and other devices, components, and/or accessories electrically coupled to the blaster10. The battery54may transmit power to an output terminal11ato power external connections. The terminal11aand the battery54may be electrically connected to each other via the PCB52. The nozzle assembly35may include an input terminal11bto receive power from external sources. The terminal11bmay be electrically connected to electrical components56, which include laser emitters30(seeFIG.1C) and a flashlight39(seeFIG.1C) by example. Once the terminal11aand the terminal11bcontact each other to conduct electricity, the electrical components56may draw power from the battery54and turn on. In some embodiments, the nozzle assembly35may also include a PCB or the like for controlling the nozzle assembly35(e.g., turning electrical components56on and off, adjusting the brightness of diodes, cycling through or choosing different colors, flashing, etc.). The electrical components56may also be mounted on or connected to the PCB.
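As one illustration of the input handling described earlier for the button36, a decoder mapping press duration and press count to the electrical components56might look like the sketch below. The one-second threshold and the component mapping are invented for illustration, as the patent leaves both open.

LONG_PRESS_S = 1.0  # assumed value for the patent's "predetermined duration"

def decode_button(press_duration_s: float, press_count: int) -> set[str]:
    # Hypothetical decoder for button 36 inputs; the returned set names
    # which of the electrical components 56 a controlling PCB might enable.
    if press_duration_s >= LONG_PRESS_S:
        return {"laser_emitters", "flashlight"}  # long press: all components on
    if press_count == 1:
        return {"laser_emitters"}                # single press: one component
    return {"flashlight"}                        # sequential presses: some components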
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” As used herein, the terms “connected,” “coupled,” or any variant thereof, mean any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof. Additionally, the words “herein,” “above,” “below,” and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the above Detailed Description of the Preferred Embodiments using the singular or plural number may also include the plural or singular number, respectively. The word “or” in reference to a list of two or more items covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list. The above-detailed description of embodiments of the disclosure is not intended to be exhaustive or to limit the teachings to the precise form disclosed above. While specific embodiments of and examples for the disclosure are described above for illustrative purposes, various equivalent modifications are possible within the scope of the disclosure, as those skilled in the relevant art will recognize. Further, any specific numbers noted herein are only examples: alternative implementations may employ differing values, measurements or ranges. Although the operations of any method(s) disclosed or described herein either explicitly or implicitly are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In another embodiment, instructions or sub-operations of distinct operations may be implemented in an intermittent and/or alternating manner. The teachings of the disclosure provided herein can be applied to other systems, not necessarily the system described above. The elements and acts of the various embodiments described above can be combined to provide further embodiments. Any measurements or dimensions described or used herein are merely exemplary and not a limitation on the present invention. Other measurements or dimensions are within the scope of the invention. Any patents and applications and other references noted above, including any that may be listed in accompanying filing papers, are incorporated herein by reference in their entirety. Aspects of the disclosure can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further embodiments of the disclosure. These and other changes can be made to the disclosure in light of the above Detailed Description of the Preferred Embodiments. While the above description describes certain embodiments of the disclosure, and describes the best mode contemplated, no matter how detailed the above appears in text, the teachings can be practiced in many ways. Details of the system may vary considerably in its implementation details, while still being encompassed by the subject matter disclosed herein.
As noted above, particular terminology used when describing certain features or aspects of the disclosure should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features or aspects of the disclosure with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the disclosure to the specific embodiments disclosed in the specification unless the above Detailed Description of the Preferred Embodiments section explicitly defines such terms. Accordingly, the actual scope of the disclosure encompasses not only the disclosed embodiments, but also all equivalent ways of practicing or implementing the disclosure under the claims. While certain aspects of the disclosure are presented below in certain claim forms, the inventors contemplate the various aspects of the disclosure in any number of claim forms. For example, while only one aspect of the disclosure is recited as a means-plus-function claim under 35 U.S.C. § 112, ¶6, other aspects may likewise be embodied as a means-plus-function claim, or in other forms, such as being embodied in a computer-readable medium. (Any claims intended to be treated under 35 U.S.C. § 112, ¶6 will include the words “means for”). Accordingly, the applicant reserves the right to add additional claims after filing the application to pursue such additional claim forms for other aspects of the disclosure. Accordingly, although exemplary embodiments of the invention have been shown and described, it is to be understood that all the terms used herein are descriptive rather than limiting, and that many changes, modifications, and substitutions may be made by one having ordinary skill in the art without departing from the spirit and scope of the invention. | 29,715 |
11859942 | DETAILED DESCRIPTION OF THE INVENTION While this invention may be embodied in many different forms, there are described in detail herein specific embodiments of the invention. This description is an exemplification of the principles of the invention and is not intended to limit the invention to the particular embodiments illustrated. For the purposes of this disclosure, like reference numerals in the figures shall refer to like features unless otherwise indicated. FIG.1shows an embodiment of an archery bow10. In some embodiments, a bow10comprises a riser12arranged to support a first limb assembly16and a second limb assembly18. In some embodiments, a bow10comprises a non-compound bow (not shown) and a bowstring extends between the limb assemblies16,18. In some embodiments, a bow10comprises a compound bow, for example comprising rotatable members and a compound cabling arrangement. In some embodiments, the first limb assembly16supports a first rotatable member17and the second limb assembly18supports a second rotatable member19. In some embodiments, the first limb assembly16supports a first axle26and the first axle26supports the first rotatable member17. In some embodiments, the second limb assembly18supports a second axle28and the second axle28supports the second rotatable member19. In some embodiments, the first limb assembly16comprises a first limb member30and a second limb member32that collectively support the first axle26. In some embodiments, the second limb assembly18is configured similarly to the first limb assembly16. In some embodiments, the first limb assembly16comprises a support member40. In some embodiments, the support member40is supported by the riser12. In some embodiments, the support member40provides support to the first limb member30. In some embodiments, the support member40provides support to the first limb member30and the second limb member32. FIG.2shows an embodiment of a bow10in greater detail.FIG.3shows a similar view with the limb assembly16detached from the riser12. In some embodiments, a limb assembly16comprises a limb cup20. In some embodiments, a limb cup20is supported by the riser12. In some embodiments, the limb cup20supports the limb member(s)30,32. In some embodiments, the limb cup20supports the support member40. In some embodiments, the limb cup20and limb member(s)30,32comprise features as disclosed in U.S. Pat. No. 8,453,635, the entire disclosure of which is hereby incorporated herein by reference. In some embodiments, the limb cup20comprises a first cavity22for the first limb member30and a second cavity24for the second limb member32. In some embodiments, the limb cup20comprises a cavity42for the support member40. In some embodiments, each cavity22,24,42of the limb cup20comprises a protrusion/recess engaging arrangement with the respective limb member30,32as disclosed in U.S. Pat. No. 8,453,635. In some embodiments, the protrusion/recess engaging arrangement is used with the support member40. In some embodiments, the support member40is engaged with the limb cup20at a location near the riser12and extends away from the limb cup20in the same direction as the limb member(s)30,32. In some embodiments, the first limb member30and second limb member32are spaced apart laterally defining a gap31. In some embodiments, the support member40is positioned in the gap31between the limb members30,32. In some embodiments, a length of the support member40is less than a length of the limb member(s)30,32.
In some embodiments, an end of the support member40is aligned with a midportion of the limb member(s)30,32. In some embodiments, the support member40is arranged to support the midportion of the limb member(s)30,32. In some embodiments, the support member40contacts the first limb member30. In some embodiments, the support member40contacts the second limb member32. In some embodiments, a limb member30,32comprises a tension side36and a compression side38, and the support member40contacts the compression side38. In some embodiments, the support member40supports a crossmember or axle46, and the axle46supports a limb member30,32. In some embodiments, the support member40comprises a cavity41and the axle46extends through the cavity41. In some embodiments, a portion of the axle46located to a first side of the support member40supports the first limb member30and a portion of the axle46located to a second side of the support member40supports the second limb member32. In some embodiments, a support member40comprises a roller48. In some embodiments, the roller48is arranged to contact the limb member30. In some embodiments, the axle46supports the roller48and the roller48rotates with respect to the support member40. In some embodiments, the support member40comprises a first roller48arranged to contact the first limb member30and a second roller49arranged to contact the second limb member32. In some embodiments, a bearing assembly, such as a roller bearing, is oriented between the axle46and a roller48,49. In some embodiments, a bearing assembly, such as a roller bearing, is oriented between the support member40and axle46. In some embodiments, the limb cup20comprises a first cavity22and a second cavity24that are aligned with one another, and a cavity42that is offset from the first and second cavities22,24in at least one orthogonal direction. In some embodiments, ends of the first limb member30and second limb member32are aligned with one another in the limb cup20, and an end of the support member40is offset from the ends of the limb members30,32. In some embodiments, a limb cup20supports a limb member30,32at a first location70. In some embodiments, the first location70comprises a location where the limb cup20applies a supporting force to the compression side38of the limb member30,32. In some embodiments, a limb member30,32supports an axle26at a second location72. In some embodiments, a support member40is arranged to apply a force to a limb member30,32at a support location74. In some embodiments, a support location74comprises a location where the support member40applies a supporting force to the compression side38of the limb member30,32. In some embodiments, the support location74is oriented between the first location70and the second location72along the length of the limb member30,32. FIG.4shows an exploded view of an embodiment of a limb assembly16.FIG.5shows a cross-sectional view of an embodiment of a bow10. With reference toFIGS.3-5, in some embodiments, a limb assembly16is attached to the riser12via the limb cup20. In some embodiments, the limb cup20is attached to the riser12with a moment connection comprising a compression portion and a tension portion. In some embodiments, the compression portion comprises a compression bearing66. In some embodiments, the tension portion comprises a tension connection68comprising a limb bolt60. In some embodiments, a limb bolt60engages a barrel nut62as known in the art.
In some embodiments, the support member40comprises an aperture44or cavity that provides clearance for the tension connection68. In some embodiments, the support member40comprises an aperture44and a limb bolt60passes through the aperture44. In some embodiments, the aperture44continues to an end of the support member40and comprises a slot formed in the end of the support member40. In some embodiments, the limb cup20comprises an aperture21for the limb bolt60. In some embodiments, the aperture44in the support member40is aligned with the aperture21in the limb cup20. In some embodiments, the limb cup20comprises a first limb pad50for the first limb member30and a second limb pad52for the second limb member32. In some embodiments, a limb pad50,52comprises the first location70where the limb cup20supports a limb member30,32. In some embodiments, the limb cup20comprises a compression pad54for the support member40. In some embodiments, the compression pad54is oriented in the cavity42of the limb cup20that receives the support member40. In some embodiments, the compression pad54is shaped differently from the first limb pad50. In some embodiments, the compression pad54is laterally aligned with the first limb pad50and the second limb pad52. In some embodiments, a longitudinal axis33of a limb member30,32comprises curvature along its length. In some embodiments, a longitudinal axis35of a support member40comprises curvature along its length. In some embodiments, a longitudinal axis35of a support member40extends nonparallel to a longitudinal axis33of a limb member30,32. In some embodiments, the longitudinal axis35of a support member40comprises a curved portion comprising a higher degree of curvature than any portion of the longitudinal axis33of the limb member30,32. In some embodiments, the longitudinal axis35of a support member40and the longitudinal axis33of a limb member30,32comprise portions that extend parallel with one another. FIG.6shows another embodiment of a bow10. In some embodiments, a support member40comprises a low friction pad58comprising a bearing surface59that contacts a limb member30,32. A low friction pad58desirably comprises a material having a lower coefficient of friction than material of the limb member30,32. In some embodiments, a pad58comprises PTFE or another suitable polymer comprising a relatively low coefficient of friction. FIG.7shows another embodiment of a bow10. In some embodiments, the support member40comprises a crossmember comprising a pin47that engages the limb members30,32. In some embodiments, the support member40supports the pin47and the pin47supports the limb members30,32. In some embodiments, a limb member30,32comprises a cavity and the pin47is oriented in the cavity. In some embodiments, the support member40extends from the limb cup20in a direction that is substantially parallel to the limb members30,32and remains aligned with the limb members30,32near the location of the pin47. In some embodiments, an entire longitudinal axis35of a support member40extends parallel to a longitudinal axis33of a limb member30,32. In some embodiments, a bow10can be provided with multiple support members40having different strength characteristics and changing the support members40can change the bow10without adjusting other components of the bow10. For example, a bow10can be arranged with a first set of support members40to have a predetermined draw weight. 
The first set of support members40can be removed and replaced with a second set of support members40, wherein the second set of support members40have a greater amount of strength than the first set. The bow10configured with the second set of support members40will have a higher draw weight, which is achieved without adjusting other portions of the bow10, such as the limb members30,32, limb bolts, cams or cam modules, etc. The above disclosure is intended to be illustrative and not exhaustive. This description will suggest many variations and alternatives to one of ordinary skill in this field of art. All these alternatives and variations are intended to be included within the scope of the claims where the term “comprising” means “including, but not limited to.” Those familiar with the art may recognize other equivalents to the specific embodiments described herein which equivalents are also intended to be encompassed by the claims. Further, the particular features presented in the dependent claims can be combined with each other in other manners within the scope of the invention such that the invention should be recognized as also specifically directed to other embodiments having any other possible combination of the features of the dependent claims. For instance, for purposes of claim publication, any dependent claim which follows should be taken as alternatively written in a multiple dependent form from all prior claims which possess all antecedents referenced in such dependent claim if such multiple dependent format is an accepted format within the jurisdiction (e.g. each claim depending directly from claim1should be alternatively taken as depending from all previous claims). In jurisdictions where multiple dependent claim formats are restricted, the following dependent claims should each be also taken as alternatively written in each singly dependent claim format which creates a dependency from a prior antecedent-possessing claim other than the specific claim listed in such dependent claim below. This completes the description of the preferred and alternate embodiments of the invention. Those skilled in the art may recognize other equivalents to the specific embodiment described herein which equivalents are intended to be encompassed by the claims attached hereto. | 12,693 |
11859943 | DETAILED DESCRIPTION OF THE INVENTION All illustrations of the drawings are for the purpose of describing selected versions of the present invention and are not intended to limit the scope of the present invention. The present invention is to be described in detail and is provided in a manner that establishes a thorough understanding of the present invention. There may be aspects of the present invention that may be practiced or utilized without the implementation of some features as they are described. It should be understood that some details have not been described in detail in order not to unnecessarily obscure the focus of the invention. References herein to “the preferred embodiment”, “one embodiment”, “some embodiments”, or “alternative embodiments” should be considered to be illustrating aspects of the present invention that may potentially vary in some instances, and should not be considered to be limiting to the scope of the present invention as a whole. In reference toFIG.1through8, the present invention is a dart pistol comprising a pistol body20, a launch channel21, a carriage22, an operating bar24, an engagement feature25, a mainspring23, a trigger member26, and a retainer pin27. The pistol body20constitutes a rigid structural body generally resembling a pistol or other handgun-analog as shown inFIGS.1and2, though the broader conceptions of the present invention enable the pistol body20to be extended or remolded into any form factor as may be realized by a reasonably skilled individual. Across multiple embodiments, the pistol body20comprises a fore body end28and a rear body end29analogous to the muzzle and the butt of most conventional weapons, respectively. The launch channel21traverses into the pistol body20from the fore body end28as shown inFIG.1. The launch channel21provides guidance and support to the firing assembly of the present invention, specifically guiding the carriage22along a linear path defined within the launch channel21. Accordingly, the carriage22is slidably mounted along the launch channel21with appropriate tolerances to the inner surfaces of the launch channel21to permit free motion therethrough. The mainspring23constitutes a metallic coil-spring of suitable strength to propel the carriage22forward under retraction, thereby providing the necessary acceleration to the at least one projectile52to effect a launch. Accordingly, the mainspring23is connected in between the pistol body20and the carriage22as shown inFIG.5through7. This connection is ideally releasable to allow a user to exchange the mainspring23for another iteration of greater or lesser power, affording a player the ability to extend or limit the launch-range of the present invention. Further, the mainspring23may be exchanged to accommodate various embodiments of the at least one projectile52, i.e., embodiments of greater or lesser total mass. The carriage22is terminally connected to the operating bar24as shown inFIGS.1and6as a combined cocking handle and as a portion of the fire-control group, in combination with the trigger member26and the retainer pin27. More specifically, the operating bar24is positioned along the launch channel21. As shown inFIGS.2,6, and7, the operating bar24traverses out of the pistol body20from the rear body end29to provide a manual cocking lever and to visually indicate that the present invention is ready to fire. 
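Since the mainspring23may be exchanged for a coil of greater or lesser power as noted above, a rough energy balance indicates how that choice translates into launch velocity and range. The following sketch is illustrative only: the spring rate, carriage travel, masses, loss factor, and launch angle are assumed placeholder values, not figures from this disclosure, and aerodynamic drag is ignored.

# Illustrative only: rough energy-balance estimate for a spring-driven
# dart pistol of the kind described above. All numeric values are
# hypothetical assumptions, not taken from the patent.
import math

SPRING_RATE_N_PER_M = 400.0   # k of the mainspring23 (assumed)
DRAW_LENGTH_M = 0.10          # carriage22 travel when cocked (assumed)
CARRIAGE_MASS_KG = 0.020      # mass of the carriage22 (assumed)
PROJECTILE_MASS_KG = 0.015    # mass of the projectile52 (assumed)
EFFICIENCY = 0.7              # fraction of spring energy not lost to friction (assumed)

def launch_velocity() -> float:
    """Speed at separation: usable spring energy (1/2*k*x^2, scaled by the
    loss factor) becomes kinetic energy of the carriage plus projectile."""
    stored_energy = 0.5 * SPRING_RATE_N_PER_M * DRAW_LENGTH_M ** 2
    usable_energy = EFFICIENCY * stored_energy
    moving_mass = CARRIAGE_MASS_KG + PROJECTILE_MASS_KG
    return math.sqrt(2.0 * usable_energy / moving_mass)

def level_range(v0: float, launch_angle_deg: float = 10.0) -> float:
    """Drag-free range over level ground for a given launch angle."""
    g = 9.81
    theta = math.radians(launch_angle_deg)
    return v0 ** 2 * math.sin(2.0 * theta) / g

v0 = launch_velocity()
print(f"separation velocity ~ {v0:.1f} m/s, level range ~ {level_range(v0):.1f} m")

Because the usable energy scales with the square of both spring stiffness and draw length, swapping the mainspring23for a stiffer coil raises range quickly; the same balance shows why a heavier projectile52(the "greater total mass" embodiments) launches slower from a given spring.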
Further, the retainer pin27is laterally mounted into the launch channel21, and the engagement feature25is laterally integrated into the operating bar24, adjacent to the carriage22as indicated inFIG.8. Positioning the engagement feature25onto the retainer pin27secures the operating bar24, and the carriage22by extension, in a rearward position within the launch channel21as shown inFIGS.1and6. This constitutes the ‘cocking’ action, as the present invention is now ready to fire. As shown, the mainspring23is fully extended and places the carriage22under tension between the retainer pin27and the fore body end28. The trigger member26is pivotably mounted to the pistol body20, adjacent to the launch channel21, positioned to separate the engagement feature25from the retainer pin27when the user squeezes the trigger member26as shown inFIG.6. Dislodging the operating bar24from the retainer pin27with the trigger member26causes the carriage22to travel forward along the launch channel21under force from the mainspring23, constituting the ‘firing’ function of the present invention. This may be done with or without at least one projectile52, enabling a user to ‘de-cock’ the present invention when unloaded. Accordingly, with the present invention positioned in the ‘cocked’ configuration, the present invention may further comprise at least one projectile52releasably positioned between the carriage22and the pistol body20as shown inFIG.1. The at least one projectile52is ideally cradled against the carriage22to prevent accidental displacement of the at least one projectile52, but no permanent connections are otherwise considered. Likewise, the at least one projectile52is configured to slide along the pistol body20with minimal contact during firing such that the friction between the at least one projectile52and the pistol body20is minimized. Positioned thusly, the present invention is ‘loaded’ and may be fired by actuating the trigger member26against the operating bar24. More specifically, the trigger member26is operatively coupled to the operating bar24as shown inFIG.6, wherein the trigger member26is used to selectively release the retainer pin27from the engagement feature25. This constitutes ‘firing’ the present invention by allowing the carriage22to travel along the launch channel21under force from the mainspring23. Momentum imparted to the at least one projectile52during the acceleration of the carriage22under force from the mainspring23provides all necessary launch energy to deliver at least one projectile52into a target area. The target area is broadly considered to encompass all types of target backstops, electronic game boards, target zones, or other elements of gamification as may be realized by any reasonably skilled individual. The structure and features of the carriage22are essential to a successful launch of the at least one projectile52, requiring that the at least one projectile52is guided during acceleration but not limited during separation. In reference toFIG.4, the carriage22may comprise a base section31, a connecting strut32, and a projectile mount33, while the launch channel21may comprise an elongated chamber34, a guide slot35, and a projectile track36. As shown inFIGS.5and7, the elongated chamber34centrally traverses into the pistol body20from the fore body end28. Further, the guide slot35laterally traverses through the pistol body20and into the elongated chamber34. 
The projectile track36is externally connected to the pistol body20and is perimetrically positioned about the guide slot35, completing an uninterrupted forward-profile of the launch channel21. Accordingly, the connecting strut32is connected in between the base section31and the projectile mount33with the base section31being slidably mounted along the elongated chamber34. The connecting strut32is positioned within the guide slot35, and the projectile mount33is externally positioned to the pistol body20, adjacent to the projectile track36. As illustrated inFIG.5, this slidable mechanical engagement ensures that the linear path of the carriage22is indexed to the launch channel21along the full length-of-travel of the carriage22. The projectile mount33protrudes from the launch channel21to serve as a backstop for at least one projectile52, cradling and indexing at least one projectile52into a ‘launch-ready’ position attached to the carriage22. The pistol body20ideally utilizes a modular construction to enable a shooter to customize and optimize their own iteration of the present invention to suit individual styles and techniques. Accordingly, the pistol body20comprises a receiver38, a grip section39, a trigger pocket40, and at least one bracket41. The receiver38constitutes a containment structure for the operable components of the present invention excluding the trigger member26, with the grip section39constituting an ergonomic haft containing and supporting the trigger member26. The separation of the receiver38from the grip section39enables the grip section39to be exchanged for various alternate embodiments of said grip section39. This modularity enables a user to adapt the present invention with regards to grip angle, handle size, finger contouring, or other ergonomic considerations by installing an appropriate instance of the grip section39. Likewise, the receiver38may be exchanged to adapt any embodiment of the present invention to utilize alternate embodiments of the at least one projectile52. Further, at least one bracket41constitutes a universal mating component for these various instances of the receiver38and the grip section39, enabling the modular exchange of these components via a common interconnecting element. As illustrated inFIGS.1and2, the grip section39is laterally mounted to the receiver38by the at least one bracket41, releasably fixing the receiver38to the grip section39utilizing any suitable means of mechanical fastener. The grip section39is positioned adjacent to the rear body end29, mimicking the ergonomic styles of a conventional automatic pistol. The trigger pocket40is integrated into the grip section39, adjacent to the receiver38as indicated inFIG.5through6, further replicating the familiar structure of a conventional pistol. The trigger member26is pivotably mounted within the trigger pocket40to allow the necessary articulation of the trigger member26to disconnect the engagement feature25from the retainer pin27, as previously outlined. The trigger pocket40, and the positioning of the trigger member26therein, may be adjusted or adapted according to the various forms and styles of grip section39as may be selected by any individual user. According to the primary and intended functions of the present invention, the operation of the trigger member26should closely approximate or simulate the function of a conventional firearm. 
A key element of a proper trigger-squeeze is a smooth, well-controlled pull of the trigger member26(or equivalent fire-control component) that does not deviate the point-of-aim from a selected target. In reference toFIG.5through7, the trigger member26may comprise a lever body43, a finger groove44, a pivot pin45, and at least one cam lobe46configured to simulate this functionality within the present invention. The finger groove44is terminally integrated into the lever body43, ideally defining a concave surface feature matching the inner curve of a user's finger. The finger groove44is generally analogous to conventional trigger profiles as may be realized by any reasonably skilled individual and may be exchanged according to user preferences. Further, the at least one cam lobe46is terminally positioned along the lever body43, opposite the finger groove44. The at least one cam lobe46is configured to convert any rotational motion imparted upon the finger groove44into linear motion of the operating bar24, by extension. Accordingly, the lever body43is pivotably connected to the pistol body20about the pivot pin45to define and support a limited rotation of the lever body43relative to the pistol body20. As shown inFIG.6, the pivoting action of the lever body43about the pivot pin45brings the at least one cam lobe46into contact with the operating bar24. Under force from the user via the finger groove44, the at least one cam lobe46gradually translates the rotating motion of the lever body43into a linear motion of the operating bar24. This motion eventually lifts the operating bar24clear of the retainer pin27, allowing the carriage22to spring forward as previously described. It is proposed that the use of at least one cam lobe46, as opposed to a single acute point of contact, enables the trigger member26to simulate the full, smooth draw of a conventional firearm trigger. It is further considered that the effective difficulty of using the present invention may be increased, proportional to the skill of a user. More specifically, a mechanism capable of ‘punishing’ an over-extension of the trigger member26with a near-guaranteed missed shot may be implemented. This over-extension is a rough equivalent to a hasty or sloppy trigger squeeze on a conventional firearm (that may result in a shift in point-of-aim), further enhancing the simulacra of firearm handling skills provided by the present invention. Referring toFIG.7, wherein the trigger member26and the operating bar24are arranged into an overextended configuration, the operating bar24is trapped between the launch channel21and the trigger member26. More specifically, the launch channel21may comprise a friction-inducing section48with the friction-inducing section48positioned adjacent to the rear body end29. As indicated inFIG.9, the friction-inducing section48is laterally positioned against the operating bar24as the user over-squeezes the trigger member26, effectively delaying the movement of the carriage22towards the fore body end28. Consequently, at least one projectile52launches with lower initial velocity and likely fails to hit a target regardless of the accuracy of the user's point-of-aim. It is further proposed that the simulated trigger-break (i.e., the position wherein a conventional trigger activates a firing sequence for a conventional weapon) of the present invention may be adjusted. In conventional firearms a sensitive trigger, or ‘hair-trigger’, requires very little force to actuate. 
This type of adjustment may be replicated by the present invention to further moderate the difficulty of executing a proper shot with the present invention or may be adjusted to suit individual user preferences. As indicated inFIGS.8and9, the pistol body20may further comprise a conduit50. The conduit50laterally traverses out of the launch channel21and into the rear body end29, which allows the retainer pin27to be threadedly engaged into the conduit50. This configuration enables a user (or armorer) to advance or retract the retainer pin27along the conduit50, thereby increasing or decreasing the length of the retainer pin27that is exposed within the launch channel21. Accordingly, the engagement feature25of the operating bar24has greater or lesser purchase on the retainer pin27, requiring a variable degree of force via the trigger member26to dislodge the operating bar24. It is further considered that advancing the retainer pin27into the conduit50limits the clearance between the retainer pin27and the friction-inducing section48of the launch channel21, thereby reducing the range of viable trigger-squeezes without causing the penalty-braking function of a trigger over-squeeze. It is further considered that the present invention may comprise at least one projectile52specifically configured for use with the novel firing mechanism described herein. The at least one projectile52is ideally configured as a modified dart of unconventional dimensions and weight, though it is possible for the at least one projectile52to define a common throwing dart of a normal type or style. As shown inFIG.1, the at least one projectile52is releasably positioned against the carriage22and the pistol body20. The carriage22, as explained previously, imparts momentum to the at least one projectile52via the mainspring23once the user releases the operating bar24. The pistol body20serves as a launching platform as outlined above, wherein the projectile track36guides and directs the at least one projectile52under acceleration. In reference toFIG.5, a preferred embodiment of the at least one projectile52comprises a missile body54, a contact spike55, a dorsal fin56, a guide fin57, and at least one lateral wing58. The contact spike55is terminally connected to the missile body54, providing a means for the at least one projectile52to puncture or stick to a target once struck. The contact spike55and the missile body54are generally similar to the tip of a conventional dart and the shaft of said dart, respectively. However, in a broader understanding of the present invention, the contact spike55may define any type of magnetic, adhesive, ablative, or conductive element that may indicate a strike-location on a target. The dorsal fin56, guide fin57, and at least one lateral wing58are terminally connected to the missile body54, opposite to the contact spike55as illustrated inFIG.10. The dorsal fin56and the guide fin57are positioned opposite to each other about the missile body54to control the yaw of the missile body54in flight, ideally defining any suitable static control surfaces as may be recognized by a reasonably skilled individual. In reference toFIG.10, the at least one lateral wing58is positioned in between the guide fin57and the dorsal fin56about the missile body54, wherein the at least one lateral wing58is distinct in both position and geometry from the dorsal fin56or the guide fin57. 
Unlike conventional dart-fins, wherein the fins are generally uniform in construction, the at least one lateral wing58presents a glide-type wing profile. This configuration separates the at least one projectile52from common dart-analogues by flattening the ballistic trajectory of the at least one projectile52. This ‘glide’ flight path ideally presents the contact spike55forward to a greater extent than a simple stabilized ballistic arc, thereby maximizing the chances for the contact spike55to make effective contact with a target. The at least one projectile52is further configured to temporarily engage into the carriage22and the launch channel21to guide a proper loading operation. More specifically, the guide fin57is engaged into the launch channel21and the dorsal fin56is engaged into the carriage22as indicated inFIG.1. As shown, the guide fin57may be truncated to fit within the launch channel21, and the dorsal fin56may be narrowed to slot into the projectile mount33of the carriage22. As shown inFIG.5, the at least one projectile52further comprises a variable mass60. The variable mass60is mounted into the missile body54to enable a user to adjust and adapt the at least one projectile52to suit a variety of game standards. This type of adjustment may comprise an adjustment of total mass, wherein the maximum range of the at least one projectile52using a given mainspring23is limited. Further, the variable mass60is positioned at a user-selected position along the missile body54. This adjustability enables the center-of-mass of the at least one projectile52to be adapted to ensure a stable flight path between the pistol body20and any target. Although the invention has been explained in relation to its preferred embodiment, it is to be understood that many other possible modifications and variations can be made without departing from the spirit and scope of the invention as hereinafter claimed. | 19,084 |
11859944 | The same reference numerals refer to the same parts throughout the various figures. DESCRIPTION OF THE CURRENT EMBODIMENT An embodiment of the sling slider element of the present invention is shown and generally designated by the reference numeral10. FIG.1illustrates the improved sling slider element10of the present invention. More particularly,FIG.1shows the sling slider element in use attached to a weapon sling strap12connected to a rifle14. The weapon sling strap has a first end16connected to a front portion18of the rifle by a rifle engagement element20. The weapon sling strap has a second strap portion22having an opposed end24connected to a rear portion26of the rifle by a rifle engagement element28. The sling slider element10has a frame30and a rotor32pivotally connected to the frame. The rotor32includes a handle element34with an attached handle36. The weapon sling strap12has a second end38connected to the frame, a first intermediate portion40passing movably between the frame and the rotor, and a second intermediate portion42forming a loop. The second strap portion22has a first end44slidably engaging the loop via connector46. The frame and rotor are configured to receive the weapon sling strap. The rotor is movable with respect to the frame between a disengaged position in which the weapon sling strap is free to slide with respect to the rotor and an engaged position in which the weapon sling strap is engaged to the frame and rotor. Sliding of the weapon sling strap with respect to the rotor changes the size of the loop formed by the second intermediate portion, thereby altering the overall length of the weapon sling strap between the rifle engagement element20and the rifle engagement element28. FIG.2illustrates the improved sling slider element10of the present invention. More particularly, the frame30is shown inverted and has a top48, bottom50, front52, rear54, left side56, and right side58. The frame defines an aperture60, which is rectangular in the current embodiment. The left side defines a pivot pin aperture62that receives a pivot pin64. The right side defines a pivot pin aperture66that receives a pivot pin68. The front of the frame includes a frame strap support bar70that defines a recess72that receives the weapon sling strap12. The rear of the frame includes a frame strap support bar74that defines a recess76that receives the weapon sling strap. The frame strap support bars are opposed, parallel and spaced-apart. In the current embodiment, the frame is a planar body. The rotor32has a planar body portion78that is received in the aperture60of the frame30. In the current embodiment, the planar body portion of the rotor is rectangular. The planar body portion has a top80, bottom82, front84, rear86, left side88, and right side90. The planar body portion defines an aperture92. The left side defines a pivot pin aperture94that receives the pivot pin64to pivotally connect the left side of the planar body portion to the left side56of the frame by spanning the aperture of the frame. The right side defines a pivot pin aperture96that receives the pivot pin68to pivotally connect the right side of the planar body portion to the right side58of the frame by spanning the aperture of the frame. The front of the planar body portion includes a rotor strap support bar98that defines a recess100that receives the weapon sling strap12. The rear of the planar body portion includes a rotor strap support bar102that defines a recess104that receives the weapon sling strap. 
The rotor strap support bars are opposed and parallel. The handle element34extends away from the top80of the planar body portion78of the rotor32. In the current embodiment, the handle element extends perpendicularly from the planar body portion. The handle element includes a handle attachment facility106that enables attachment of the handle36to the handle element. FIG.3illustrates the improved sling slider element10of the present invention. More particularly, the sling slider element is shown with the rotor32having been pivoted counterclockwise within the aperture60of the frame30into one of the two disengaged positions in which the weapon sling strap12is free to slide with respect to the rotor to adjust the overall length of the weapon sling strap. The rotor is pivoted clockwise within the aperture of the frame to be placed in the other of the two disengaged positions. Pivoting of the rotor can be accomplished by either pulling on the handle36or pushing on the handle element34in the desired direction. The top and bottom front edges of the planar body portion78of the rotor can be radiused to facilitate operation of the sling slider element. FIG.4illustrates the improved sling slider element10of the present invention. More particularly, the sling slider element is shown with the rotor32in the engaged position in which the weapon sling strap12is engaged to the frame30and rotor. When the rotor is in the engaged position, the weapon sling strap is prevented from sliding with respect to the rotor, thus maintaining the overall length of the weapon sling strap at a selected amount. The planar body portion78of the rotor is coplanar with the frame when the rotor is in the engaged position. It should also be appreciated that the weapon sling strap passes between the planar body portion78of the rotor and the handle attachment facility106. FIGS.5A-Cillustrate the improved sling slider element10of the present invention. More particularly,FIG.5Ashows the rotor32in the engaged position, andFIGS.5B& C show the rotor in the two disengaged positions. The rotor is placed in the two disengaged positions by pivoting the rotor either clockwise or counterclockwise within the aperture60of the frame30. The rotor strap support bars98,102are each proximate an associated frame strap support bar70,74when the rotor is in the engaged position, and the rotor strap support bars are spaced apart from the associated frame strap support bars when the rotor is in the disengaged position. A gap108is defined between the rotor strap support bar102and the frame strap support bar74. The width of the gap is adjustable between a narrower condition when the rotor is in the engaged position and a wider condition when the rotor is in one of the two disengaged positions. The weapon sling strap12passes on a first side of the frame (bottom50) and on an opposite side of the planar body portion78of the rotor (top80). It should be appreciated that two thicknesses of the weapon sling strap110pass between the frame and the planar body portion of the rotor through the gap between them where the second end38of the weapon sling strap connects to the frame. When the rotor is in the engaged position, the two thicknesses of weapon sling strap are pinched together so that the friction between the two thicknesses prevents the weapon sling strap from sliding with respect to the rotor. 
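The pinch-lock behavior just described can be approximated with a simple Coulomb-friction model: the force resisting strap slip scales with the clamp force across the stacked thicknesses and with the friction coefficients of the surfaces touching the moving thickness. The sketch below is illustrative only; the clamp force and coefficients are assumed placeholder values, not figures from this disclosure.

# Illustrative only: Coulomb-friction estimate of the holding force on the
# pinched sling strap. All numeric values are hypothetical assumptions.
CLAMP_FORCE_N = 60.0       # pinch force across the two strap thicknesses (assumed)
MU_STRAP_ON_STRAP = 0.6    # webbing-on-webbing friction coefficient (assumed)
MU_STRAP_ON_METAL = 0.3    # webbing-on-metal friction coefficient (assumed)

def holding_force(clamp_force: float) -> float:
    """Approximate resistance to pulling one strap thickness through the
    closed gap: the clamp force acts across each interface that touches
    the moving thickness (rotor/strap and strap/strap here), so their
    friction contributions add."""
    interfaces = (MU_STRAP_ON_METAL, MU_STRAP_ON_STRAP)
    return sum(mu * clamp_force for mu in interfaces)

print(f"approx. holding force: {holding_force(CLAMP_FORCE_N):.0f} N")
# With the rotor disengaged the clamp force collapses toward zero,
# and the holding force vanishes with it.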
When the rotor is in one of the two disengaged positions, the gap is widened relative to the engaged position such that the friction between the two thicknesses is lessened sufficiently to permit the weapon sling strap to slide freely with respect to the rotor. The equilibrium state of the rotor is the engaged position when the weapon sling strap is in a state of tension to prevent the weapon sling strap from sliding with respect to the rotor. While a current embodiment of a sling slider element has been described in detail, it should be apparent that modifications and variations thereto are possible, all of which fall within the true spirit and scope of the invention. Although rifles have been disclosed, the sling slider element is also suitable for use with shotguns, light and medium machine guns, and other firearms. With respect to the above description then, it is to be realized that the optimum dimensional relationships for the parts of the invention, to include variations in size, materials, shape, form, function and manner of operation, assembly and use, are deemed readily apparent and obvious to one skilled in the art, and all equivalent relationships to those illustrated in the drawings and described in the specification are intended to be encompassed by the present invention. Therefore, the foregoing is considered as illustrative only of the principles of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation shown and described, and accordingly, all suitable modifications and equivalents may be resorted to, falling within the scope of the invention. | 8,637 |
11859945 | REFERENCE SIGNS LIST 10: slide; 11: mounting unit; 12: rail unit; 12A: coupling protrusions; 12B: insertion groove; 12B′: spring fixing groove; 12C: through-hole; 13: lever; 13A: push guide groove; 13B: pin coupling hole; 13C: spring fixing groove; 14: coupling pin; 15: elastic spring; 20, 20A, 20B, 20C, and 20D: adapter plate; 21: base plate; 22: sight; 23: push member inserting groove DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT Hereinafter, the preferred embodiment of the invention will be described in detail with reference to the accompanying drawings. The invention is made to provide a handgun equipped with an adapter plate and a slide for mounting a dot-sight with an improved assembly structure, the handgun enabling the adapter plate for mounting the dot-sight to be securely and easily assembled into a slide even without using a fastening member such as a screw. The invention has a configuration in which an adapter plate (20) slides toward one side and is assembled into a slide (10), and then a projecting lever restricts sliding of the adapter plate (20) in an opposite direction such that the adapter plate (20) is not unintentionally detached from the slide (10). Hereinafter, as illustrated inFIGS.1and2, configurations of the slide (10) and the adapter plate (20) will be described in detail with an embodiment of the invention. In addition, for convenience of description, a side of a muzzle is referred to as a ‘front side’, and an opposite side thereof is referred to as a ‘rear side’ in the following description. In the configuration, the slide (10) is configured to be provided on a frame (not illustrated) of the handgun and be moved horizontally toward front and rear sides, thereby moving a bullet in a magazine (not illustrated) to a location at which the bullet can be fired and causing the handgun to come into a loaded state. As illustrated inFIGS.1and2, the slide (10) is formed to have a predetermined length based on the frame of the handgun, and an undersurface of the slide (10) has an accommodation groove having a predetermined depth into which a barrel (not illustrated), a recoil spring assembly (not illustrated), or the like is accommodated. A mounting unit (11) having a predetermined length in a front-rear direction is formed on the rear side (rear sight position) of the slide (10), and the mounting unit (11) is formed to have a flat surface on which the adapter plate (20) to be described below can be mounted by sliding in a horizontal direction. In addition, the mounting unit (11) has a rail unit (12) formed to project by a predetermined height and have a predetermined length in the front-rear direction, and coupling protrusions (12A) having a predetermined size are formed to project from both side surfaces of the rail unit (12). In this manner, the rail unit (12) and the coupling protrusions (12A) form a “T”-shaped cross section. In this case, the coupling protrusion (12A) can be formed all along the length of the rail unit (12), or a plurality of coupling protrusions (12A) can be formed to have a predetermined length in a front-rear direction of the rail unit (12). In a configuration of the rail unit (12) and the coupling protrusions (12A) described above, the adapter plate (20) is fitted and assembled into the mounting unit (11) by sliding horizontally from a rear side toward a front side in a state of being in close contact with the mounting unit (11). 
In addition, an insertion groove (12B) having a predetermined depth is formed on a rear side of the rail unit (12), and a lever (13) having a predetermined size is rotatably provided in the insertion groove (12B). In this respect, as illustrated inFIG.3, the rail unit (12) has a through-hole (12C) having a predetermined diameter which penetrates both side surfaces of the insertion groove (12B), and a coupling pin (14) having a predetermined length is inserted to penetrate the through-hole (12C) as illustrated inFIG.4such that the lever (13) is fixed to have one end that is rotatable by the coupling pin (14) inside the insertion groove (12B). A bottom surface on one side of the insertion groove (12B) has a spring fixing groove (12B′) having a predetermined depth, and one end (lower end) of an elastic spring (15) is inserted into the spring fixing groove (12B′) by a predetermined depth and has a fixed position. On the other hand, as illustrated inFIG.5, the lever (13) can have a push guide groove (13A) having a predetermined length in the front-rear direction at a top surface of the lever. Hence, one end of a push member (not illustrated) is accurately guided to a location by the push guide groove (13A) such that the adapter plate (20) fixed (locked) by the lever (13) can be unfixed (unlocked). In addition, a pin coupling hole (13B) having a predetermined diameter is formed to penetrate both side surfaces at one end of the lever (13), and the coupling pin (14) is inserted into the pin coupling hole (13B). An undersurface of the lever (13) has a spring fixing groove (13C) having a predetermined diameter as illustrated inFIG.6, and the other end (upper end) of the elastic spring (15) is inserted into the spring fixing groove (13C). In this manner, the elastic spring (15) is stably fixed at a location between the insertion groove (12B) and the lever (13) to elastically support a free end (one end which is not fixed with the coupling pin (14)) of the lever (13). The adapter plate (20) is configured to be mounted on the mounting unit (11) of the slide (10) and used to mount an aiming-assist sight such as a dot-sight. As illustrated inFIG.8, the adapter plate (20) includes a base plate (21) that has a predetermined length and comes into surface contact with the mounting unit (11) of the slide (10) and a sight (22) that is positioned at one end of the base plate (21) and is formed to project perpendicularly by a predetermined height. As illustrated inFIG.9, an undersurface of the base plate (21) has a rail coupling groove (21A) which has a predetermined depth and is formed along a length thereof corresponding to a position of the rail unit (12) of the slide (10), and the rail coupling groove (21A) has a lever locking groove (21B) having a predetermined depth corresponding to a position of the lever (13) in a state where the adapter plate (20) is completely assembled into the mounting unit (11). In addition, the rail coupling groove (21A) is formed to have a “T”-shaped cross section corresponding to the shape of the coupling protrusion (12A) of the rail unit (12). Additionally, the sight (22) fulfils a rear sight function in a state where a separate dot-sight is not mounted, and therefore a groove (not assigned with Reference sign) having a predetermined depth for an aimed shot may be formed at an upper central side of the sight (22) as illustrated inFIGS.8and10. 
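The elastic spring (15) supporting the free end of the lever (13), described above, sets the force a push member must apply to unlock the adapter plate (20); a torque balance about the coupling pin (14) estimates it. The sketch below is illustrative only: the spring rate, preload, deflection, and lever-arm lengths are assumed placeholder values not given in this disclosure.

# Illustrative only: torque balance for depressing the spring-supported
# lever (13) with the push member. All numeric values are hypothetical.
SPRING_RATE_N_PER_MM = 2.0    # rate of the elastic spring (15) (assumed)
SPRING_PRELOAD_N = 4.0        # spring force with the lever fully raised (assumed)
UNLOCK_DEFLECTION_MM = 1.5    # lever travel to clear the lever locking groove (21B) (assumed)
ARM_SPRING_MM = 10.0          # pivot-to-spring distance along the lever (assumed)
ARM_PUSH_MM = 18.0            # pivot-to-push-member contact distance (assumed)

def push_force_to_unlock() -> float:
    """Force the push member must apply so its torque about the coupling
    pin (14) overcomes the spring torque at full unlock deflection."""
    spring_force = SPRING_PRELOAD_N + SPRING_RATE_N_PER_MM * UNLOCK_DEFLECTION_MM
    # torque balance about the pivot: F_push * arm_push = F_spring * arm_spring
    return spring_force * ARM_SPRING_MM / ARM_PUSH_MM

print(f"approx. push force to unlock: {push_force_to_unlock():.1f} N")

Because the push-member contact sits farther from the pivot than the spring, the required force is modest; a stiffer spring or shorter push arm would make accidental unlocking correspondingly harder.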
A push member inserting groove (23) that communicates with the rail coupling groove (21A) and the lever locking groove (21B) is formed on a side of the sight (22), and the lever (13) positioned inside on the rear side of the adapter plate (20) is exposed through the push member inserting groove (23) as illustrated inFIG.10. In this respect, as illustrated inFIG.11, the push member (not assigned with Reference sign) having a predetermined length can be inserted between the adapter plate (20) and the mounting unit (11) of the slide (10) through the push member inserting groove (23). As a result, the push member can press the lever (13) positioned inside such that the adapter plate (20) is unlocked, and thereby the adapter plate (20) can be detached from the slide (10). On the other hand, the base plate (21) of the adapter plate (20) can have position fixing holes or screw coupling holes formed at various positions depending on a dot-sight model. For example, as illustrated inFIG.12, the base plate (21) of the adapter plate (20) can have four coupling holes at predetermined intervals and two position fixing holes at a predetermined interval from the coupling holes, the position fixing holes formed to project. Alternatively, adapter plates (20A and20B) that have two coupling holes and two position fixing holes formed at predetermined intervals can be realized. Alternatively, an adapter plate (20C) that has two coupling holes and one position fixing hole can be realized. Alternatively, an adapter plate (20D) that has two position fixing holes and two coupling holes, the position fixing holes being formed to project by a predetermined height, can be realized. Although not described, the number or position of the coupling holes and the position fixing holes can be variously modified depending on various commercial models of the dot-sight. As described above, according to the invention, the adapter plate slides to be assembled into the slide of the handgun, and thereby the adapter plate is fixed by the lever to have a fixed position. In this manner, the adapter plate can be easily and quickly assembled even without using a fastening member such as a screw. In addition, the push member is inserted into the push member inserting groove formed at the rear side of the adapter plate, and thereby the lever between the slide and the adapter plate is pressed to unlock (unfix) the adapter plate. In this manner, the adapter plate can be easily replaced according to various types of dot-sights. In the above, for the convenience of explanation, the drawings illustrating the preferred embodiments and the configurations shown in the drawings have been described with reference numerals and names. 
However, as an embodiment according to the present invention, the scope of the invention should not be interpreted as limited to the shapes shown in the drawings and the names given. While the present invention has been described with respect to the specific embodiments, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims. | 10,754 |
11859946 | The same reference numerals refer to the same parts throughout the various figures. Description of the Current Embodiment A current embodiment of the rifle scope with the locking device of the present invention is shown and generally designated by the reference numeral10. FIGS.1-4illustrate the improved rifle scope with zero lock10of the present invention. More particularly, the rifle scope with zero lock has an elevation turret12mounted to a main tube14of the rifle scope. Within the main tube, at least one adjustable element, such as a reticle, lens assembly, or other optical or electrical elements (not shown), may be movably mounted in a substantially perpendicular orientation relative to a longitudinal tube axis16. The main tube further includes a seat18, which has a bore20sized to receive the elevation turret. The bore includes threads22formed on an interior wall or shoulder that mate with corresponding exterior threads24on a turret flange26to releasably secure the elevation turret to the main tube when the elevation turret is installed. The bore20defines a slot28that is sized to receive one end of a plunger30that protrudes below the turret flange26. The plunger is connected to an elevation adjustment spindle32by a threaded end120threadedly received within a threaded bore122in the elevation adjustment spindle. The plunger30extends into main tube14and is constrained from rotating about vertical axis/knob axis34by the slot so that rotation of the elevation adjustment spindle is translated into linear motion of the plunger along the vertical axis, thereby adjusting a position of the adjustable element within the main tube. The elevation adjustment spindle32includes a lower base portion (not visible) that receives the turret flange26and an upper neck portion36, which preferably is smaller in diameter than the lower base portion. The turret flange surrounds the lower base portion of the elevation adjustment spindle and retains the elevation adjustment spindle against seat18of main tube14. The exterior threads24on the turret flange are sized to mesh with threads22in the bore. Thus, the elevation adjustment spindle is captured against the main tube and allowed to rotate about vertical axis34, but is constrained from traveling along the vertical axis by the turret flange. An outer sleeve38surrounds the elevation adjustment spindle and the turret flange, but leaves the threads24on the turret flange uncovered. Two set screws40received in threaded bores110threadedly secure the bottom42of the outer sleeve against the turret flange immediately above threads24. The top44of the outer sleeve defines windows46and includes indicia48on either side of the windows. The turret flange26has an interior surface54that faces and surrounds the elevation adjustment spindle32to provide tactile and/or audible feedback to the shooter when the elevation turret12is rotated. The interior surface of the turret flange includes regularly spaced apart features (shown inFIGS.3&4), which preferably include splines or a series of evenly spaced vertical grooves or ridges. Other engagement features may include a series of detents, indentations, apertures, or other suitable features. A click pin56with a ramped surface58is configured to engage the regularly spaced apart features of the interior surface. The click pin is housed within a bore60in the elevation adjustment spindle that has an open end facing the interior surface. 
A spring or other biasing element (not shown) urges the click pin to extend outwardly from within the bore and engage the interior surface. In operation, rotational movement of the elevation turret about vertical axis34causes the click pin to move out of contact with one groove and into a neighboring groove, thereby producing a click that is either audible, tactile, or both. Each click may coincide with an adjustment amount to alert the user about the extent of an adjustment being made. The click pin continues clicking as long as the elevation turret is rotated. A revolution indicator/indicator skirt64surrounds the elevation adjustment spindle32and at least a portion of the index ring52. The revolution indicator is surrounded by the outer sleeve38, except for a small portion of the revolution indicator that is exposed by the windows46in the top44of the outer sleeve. The exterior66of the revolution indicator has indicia68, which denote 0, 20, 40, and 60 Minutes Of Angle (MOA) in the current embodiment. The top70of the revolution indicator has exterior threads72. The top of the interior74of the revolution indicator includes a guideway76having a curved clearance surface78extending around and facing vertical axis34. The guideway includes a ramp130, a notch/skirt stop surface82, and an overtravel stop80. The ramp, notch, and overtravel stop are located above indicia68, and the notch extends in a radial direction relative to the vertical axis. A dial/knob84is mounted over the revolution indicator64and the elevation adjustment spindle32for rotation about vertical axis/knob axis34when elevation turret12is installed on the main tube14. The dial includes a cylindrical gripping surface86that may be notched, fluted, knurled, or otherwise textured to provide a surface for the user to grip when manually rotating the dial. The dial has a fine scale composed of parallel longitudinal indicia88spaced apart around the circumference of the dial to facilitate fine adjustments. The dial includes three threaded bores90equidistantly spaced around the circumference of the dial and sized to receive threaded set screws92. It should be appreciated that any number of bores, with a corresponding number of set screws, may be provided on the dial. The set screws rigidly couple the dial to the upper portion94of the elevation adjustment spindle so the dial and elevation adjustment spindle rotate together as a unit. Thus, the dial is operably connected to the optical adjustor to position the optical adjustor based on a rotational position of the dial. A tool, such as a hex key (not shown), can be used to tighten the set screws such that the set screws bear against the upper portion of the spindle. Similarly, the tool can be used to loosen the set screws so that the dial can be rotated relative to the elevation adjustment spindle about the vertical axis or be removed and replaced with a different dial if desired. In other embodiments (not shown), the dial is coupled or releasably coupled to the elevation adjustment spindle in a manner other than by set screws. An index ring52includes an exterior tooth62that engages with a vertical slot/channel124(shown inFIGS.3&4) on an interior surface of the revolution indicator64. The exterior tooth is constrained for movement within the vertical slot, and the vertical slot is parallel to the vertical axis34to prevent rotation of the revolution indicator about the vertical axis when the dial84is rotated. 
Because the revolution indicator is constrained from rotating about the vertical axis, rotation of the dial is translated into linear motion of the revolution indicator along the vertical axis, thereby changing the portion of indicia68that are viewable through windows46of the outer sleeve38. Thus, the axial position of the revolution indicator is based on the rotational position of the dial. Grip surface86of dial84defines an aperture96with a slot98that is sized to closely receive a locking push button100having a locking pin/knob stop surface102received in an aperture104. The locking push button is operably associated with the locking pin and is manually depressible to urge the locking pin out of a locked position and thereby allow the dial to be manually rotated about vertical axis34away from the locked position. The locking pin has a cylindrical lower portion106that is slidably received by slot98and guideway76. The locking pin can be considered to be a post extending on a post axis126parallel to the vertical axis. The locking pin has a flat end surface128. The locking pin is configured to travel along the guideway, riding against the end of slot98and not touching the curved clearance surface78in response to rotation of the dial. The locking push button includes a pair of openings (not visible) sized to interact with a pair of springs108or other biasing elements. The springs bias the locking push button and the locking pin in a radial direction relative to the dial so as to urge movement of the locking pin when the dial is rotated. When elevation turret12is in a locked position, locking pin102has a knob stop surface103that is aligned with and seated in notch/skirt stop surface82, thereby constraining dial84and preventing inadvertent rotation of the dial relative to the main tube14. Thus, the knob stop surface and the skirt stop surface are configured to positively contact each other to establish a limit of rotational travel of the dial. For the purposes of the specification, “positively” means where direct contact is made by two surfaces that abut each other without a substantial wedging effect. Examples of “non-positive” are any screw threads or multi-start screw threads with a helical angle of less than 45°, but a substantially sloped surface, such as a 45° angle, would be considered “positive” because there is substantially no wedging effect. The preferred embodiment with surfaces that are perpendicular to their direction of approach is an ideal example of “positive.” Even though the locking pin and notch are curved surfaces rather than flat, the line of contact at some point is perpendicular. Another way of describing positive contact is when the surfaces approach each other with more of a face-to-face approach than a sliding approach. Furthermore, the knob stop surface and skirt stop surface are parallel to the vertical axis/knob axis34such that they contact each other in an abutting manner without a wedging effect. The notch serves as a channel receiving the locking pin at the limit of rotational travel and has a closed end providing the skirt stop surface. The channel is concentric to the vertical axis. In this position, springs108urge the cylindrical lower portion106of locking pin102into notch82. To unlock the elevation turret, locking push button100is depressed inwardly toward the vertical axis to urge the locking pin out of the notch. From this position, dial84can be manually rotated about the vertical axis away from the locked position. 
Thus, the knob stop surface is movable radially with respect to the vertical axis between a locked position in which the rotation of the dial is prevented and an unlocked position in which rotation of the dial is enabled. Furthermore, the knob stop surface is connected to a movable button (the locking push button) protruding radially from the dial. As the dial is rotated (i.e., as the user is making a desired adjustment), the locking button can be released, and the locking pin rides away from the notch and along the ramp and curved clearance surfaces. The ramp surface is a flat surface parallel to the vertical axis that defines a recess in the form of notch82. The ramp130is shaped to help create and define the notch82. As the dial rotates, the revolution indicator64descends within the dial and the outer sleeve38. Once the dial has completed a rotation around the vertical axis, the revolution indicator has descended sufficiently so the locking pin does not engage with overtravel stop80, the ramp130, or the notch on the second and subsequent rotations. Thus, the dial can continue to turn for multiple rotations without locking. As the dial completes a rotation around the vertical axis, the portion of indicia68viewable through the windows46and aligned with indicia48changes, which enables the user to readily determine how many rotations of the dial about the vertical axis have been completed. The user can continue turning the dial until the revolution indicator64bottoms out against the flange26somewhere between 60 and 80 MOA of adjustment, or the rifle scope itself runs out of internal elevation travel, whichever comes first. At that point, further rotation of the dial in this direction is prevented. The dial can still be rotated in an opposite direction for further fine adjustment and/or to return the dial to its zero point/home position where the dial automatically locks by engagement of the cylindrical lower portion106of locking pin102in notch82. The overtravel stop80is a surface that keeps the dial from rotating past 0 even when the locking push button is pressed. The curved surface to the left of the overtravel stop has that shape because of the tool geometry used to cut guideway76. During the first rotation of the dial, the locking pin is not prevented from moving further out radially by curved clearance surface78; in most tolerance conditions, the locking pin never touches the curved clearance surface. Instead, the locking pin is prevented from moving out radially by the end of the slot98so the locking pin does not drag on the curved clearance surface during the first rotation (which would result in an undesirable tactile feel). The locking pin only drags on ramp130to compress the springs and move the locking pin radially inward, allowing the locking pin to then return outward into the notch created by the ramp. The revolution indicator, locking push button, and locking pin are preferably constructed of or coated with a rigid, durable, and wear-resistant material, such as nylon, PTFE polymers (e.g., Teflon®), steel, aluminum, or other suitable material, to withstand wear due to friction as the locking pin slides along or within the revolution indicator. In other embodiments, the locking push button may be manufactured from one material, and the locking pin may be manufactured from a different material. 
For instance, since the locking push button may not experience as much wear from friction as compared to the locking pin, the locking push button may be constructed from anodized aluminum or other material to provide a balance of component weight, wear-resistance, and strength. On the other hand, since the sliding action of the locking pin on or along the revolution indicator will wear the locking pin over time, the locking pin may be manufactured from or coated with a different material, such as stainless steel, for strength, wear-resistance, and corrosion-resistance. FIGS.2-4illustrate how the indicia68exposed by windows46indicate whether dial84is in the zero point locked position and also indicate the number of rotations of the dial. Simply by considering the relative positions of indicia68and indicia48, the user can quickly determine the state of the dial (i.e., whether it is locked and/or the number of rotations about vertical axis34). In an example operation, when the dial is in a locked position (during which locking pin102is received within notch82), locking push button100is in a first position, such as illustrated inFIG.3in solid lines. In this first position, the locking push button extends outwardly from grip surface86. Indicia88show the indicium for 0 MOA centered over indicia68, and indicia68have the indicium for 0 MOA visible through right hand window46and aligned with right hand indicium48. To unlock dial84, the user may depress locking push button100inwardly toward the vertical axis34until the locking push button is substantially flush in relation to grip surface86(the position shown in dashed lines inFIG.3). Depression of the locking push button compresses springs108and urges cylindrical lower portion106of locking pin102out of alignment with notch82and onto ramp surface130as previously described. The dial is unlocked and can be manually rotated in a single direction about vertical axis34. The overtravel stop80obstructs the cylindrical lower portion of the locking pin to prevent the dial from being manually rotated in the opposite direction. As the dial is rotated, the locking button can be released and the pin slides on the ramp. The locking push button and locking pin return to the locked position under the influence of the springs, and the locking pin is stopped by the end of slot98in the dial. The dial remains unlocked because the locking pin is in or above guideway76(i.e., throughout all rotations of adjustment until the cylindrical lower portion of the locking pin is engaged with the ramp in the process of being returned to the notch). As the dial rotates, the revolution indicator64descends to expose a different portion of indicia68through the windows46denoting increasing amounts of adjustment until further rotation of the dial is prevented as described previously when 60 to 80 MOA of adjustment is reached. The cross-sectional view inFIG.4illustrates the position of the locking pin after the dial has been rotated once about the vertical axis. Reversing rotation of the dial84at any point causes the same functions to be performed in reverse. For example, when the dial is rotated in the reverse direction, the revolution indicator64ascends within the dial and outer sleeve38to expose a different portion of indicia68through the windows46denoting decreasing amounts of adjustment.
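The locking behavior walked through above reduces to a small amount of state: the dial's rotational position, the push button's depressed or released state, and the axial descent of the revolution indicator, which is what keeps the notch out of the pin's reach after the first turn. The following is a minimal sketch of that logic under stated assumptions; the class, the degree-based positions, and the method names are illustrative, not the patented implementation. Only the four-start, 48-pitch lead figure comes from the text.

# Minimal sketch of the locking-turret behavior described above. Names,
# units, and positions are illustrative assumptions, not the patent's
# implementation. Four-start 48-pitch threads give an axial lead of
# 4/48 ~= 0.083 inch of indicator descent per dial revolution.

LEAD_IN_PER_REV = 4 / 48

class ElevationTurret:
    def __init__(self):
        self.angle_deg = 0.0          # dial rotation from the zero point
        self.button_pressed = False   # locking push button state

    def indicator_drop(self):
        # The indicator skirt descends in proportion to dial rotation.
        return (self.angle_deg / 360.0) * LEAD_IN_PER_REV

    def pin_seated(self):
        # The springs urge the pin outward into the notch, but only at the
        # zero point and only when the button is not depressed.
        return self.angle_deg == 0.0 and not self.button_pressed

    def rotate(self, delta_deg):
        if self.pin_seated():
            raise ValueError("locked at zero point; depress the button first")
        if self.angle_deg + delta_deg < 0.0:
            # The overtravel stop prevents rotation past zero even with
            # the button depressed.
            raise ValueError("overtravel stop reached")
        self.angle_deg += delta_deg

turret = ElevationTurret()
turret.button_pressed = True
turret.rotate(720.0)    # two full turns of adjustment
turret.button_pressed = False
turret.rotate(-720.0)   # dialing back to zero, where the turret re-locks
assert turret.pin_seated()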
As the dial is turned back into the zero point locked position, cylindrical lower portion106of locking pin102is forced radially inward by ramp130until the locking pin is urged into notch82by springs108acting on locking push button100to automatically lock the dial. The locking push button is also returned to the locked position where the locking push button extends outwardly from gripping surface86. The elevation turret12of the current invention allows for more available rotations of the dial84than traditional elevation turrets having a zero point lock capability and provides a zero point lock capability at a reduced cost of manufacture compared to traditional approaches. A critical difference of the elevation turret of the current invention is the threading of the revolution indicator64to the dial with multi-start threads72on the revolution indicator and multi-start threads112on the interior114of the dial (shown inFIGS.3&4). The multi-start threads (four start threads in the current embodiment) enable the elevation turret to be built without timing threads or additional adjustable components, which helps reduce cost. In conventional elevation turrets having a zero point lock capability, the height between the dial/pin and the locking feature/notch is fixed. The conventional locking mechanism has a path that wraps around and curls inside itself allowing two or three revolutions. However, more than two or three revolutions would make the conventional dial prohibitively large in diameter. By making the locking feature/notch move away from the locking pin in the current invention during the first revolution, multiple additional revolutions are enabled. By using four start threads72,112, the current invention allows for more engagement of cylindrical lower portion106of locking pin102with notch82than a similar one start thread would (one start maximum engagement for 48 pitch threads would be 1/48=0.021″, whereas four start maximum engagement for 48 pitch threads would be 1/48*4=0.083″). Thus, the revolution indicator/indicator skirt64is threadedly engaged to the dial/knob84by threads having a selected pitch providing a selected axial offset of the revolution indicator with respect to the dial from one rotation of the dial. Furthermore, the revolution indicator has indicia68that include rotation indicators spaced apart by a distance equal to the selected axial offset. The indicia are a plurality of parallel lines. The use of four start threads also minimizes the amount of variation in that engagement by starting on the correct thread. This can be accomplished by keeping a tight enough tolerance on the height from the notch to where the threads start, in combination with alignment features that indicate which orientation the dial and notch need to be held for the correct thread start to catch and engage when assembling the revolution indicator to the dial. If assembled correctly, the height of the total dial and revolution indicator assembly will be within a band that is the width of 1 thread (48 pitch thread results in a band 0.021″ wide) plus the tolerance of the revolution indicator and the dial. Correct assembly can be checked with calipers or a gauge. When installed with one start threads, the engagement of the cylindrical lower portion106of the locking pin102with notch82would vary from 0″ to 0.021″, whereas correctly installed four start threads will allow the use of a 0.021″ range of the 0.083″ total engagement available. 
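The engagement figures quoted above follow from the standard relation between thread lead, number of starts, and pitch count. Writing n for the number of starts and P for the thread count per inch, the lead (and hence the maximum available pin-to-notch engagement) is:

\[
L = \frac{n}{P}, \qquad
L_{1\text{-start}} = \frac{1}{48} \approx 0.021'', \qquad
L_{4\text{-start}} = \frac{4}{48} \approx 0.083''.
\]

With the correct thread start engaged at assembly, the assembled height varies over at most one pitch, so the realized engagement falls within a band of width 1/P ≈ 0.021″ out of the 0.083″ of total engagement available.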
For example, once the tolerance stack is considered, the ideal engagement may be 0.054″ to 0.075″ to make sure there is always good engagement of the locking pin with the notch and the dial and revolution indicator assembly never bottoms out before the locking pin engages with the notch. This would not be possible without timed threads using a one start thread and, even if timed threads were used, it would be significantly more susceptible to wear and damage because of the extremely limited 0.021″ maximum engagement of the locking pin with the notch, which would have to be limited even further due to tolerance considerations. In some embodiments, the locking pin102could be threaded into the locking push button100so as to be adjustable to maximize engagement with the locking notch82when using single start threads and/or compensate for the variation caused by untimed threads. In some embodiments, the rifle scope with a locking device10may include sealing devices and other features to minimize entry of foreign materials, such as dust, dirt, or other contaminants, to help prevent rust, wear, or other damage to the components of the rifle scope with the locking device. The seals may be hermetic seals, and the interior of the main tube14may be filled with a dry gas, such as nitrogen or argon, to help prevent fogging that may otherwise be caused by condensation of moisture vapor on surfaces of lenses and other optical elements within the main body. For example, in some embodiments, elevation turret12may include a pair of contaminant seals116,118sandwiched between the turret flange26and the elevation adjustment spindle32to seal any openings or gaps between the two components and the bore20. The contaminant seals are preferably O-rings formed of rubber or another elastomeric material, but may be formed by any other suitable sealing material, such as plastic, nylon, or PTFE polymers (e.g., Teflon®). FIGS.5&7-9illustrate a current embodiment of the improved rifle scope with zero stop200of the present invention. More particularly, the rifle scope with zero stop has an elevation turret212mounted to a main tube14of the rifle scope. Within the main tube, at least one adjustable element, such as a reticle, lens assembly, or other optical or electrical elements (not shown), may be movably mounted in a substantially perpendicular orientation relative to a longitudinal tube axis16. The main tube further includes a seat18, which has a bore20sized to receive the elevation turret. The bore includes threads22formed on an interior wall or shoulder that mate with corresponding exterior threads224on a turret flange226to releasably secure the elevation turret to the main tube when the elevation turret is installed. The bore20defines an aperture28that is sized to receive one end of a plunger230that protrudes below the turret flange226. The plunger is connected to an elevation adjustment spindle232by a threaded end320threadedly received within a threaded bore322in the elevation adjustment spindle. The plunger230extends into main tube14and is constrained from rotating about vertical axis34so that rotation of the elevation adjustment spindle is translated into linear motion of the plunger along the vertical axis, thereby adjusting a position of the adjustable element within the main tube. The elevation adjustment spindle232includes a lower base portion (not visible) that receives the turret flange226and an upper neck portion236, which preferably is smaller in diameter than the lower base portion.
The turret flange surrounds the lower base portion of the elevation adjustment spindle and retains the elevation adjustment spindle against seat18of main tube14. The exterior threads224on the turret flange are sized to mesh with threads22in the bore. Thus, the elevation adjustment spindle is captured against the main tube and allowed to rotate about vertical axis34, but is constrained from traveling along the vertical axis by the turret flange. In the current embodiment, an O-ring316is sandwiched between the lower base portion of the elevation adjustment spindle and the base of the seat. An index ring252surrounds the elevation adjustment spindle and the turret flange, but leaves the threads224on the turret flange uncovered. The index ring has a rear vertical slot262. A revolution indicator264surrounds the elevation adjustment spindle232and at least a portion of the index ring252. The exterior266of the revolution indicator has indicia268, which denote 0, 15, 30, 45, and 60 Minutes Of Angle (MOA) in the current embodiment. The top270of the revolution indicator has exterior threads272. A zero stop boss274protrudes upwards from the top of the revolution indicator. A tooth276protrudes inwardly towards the vertical axis34from the interior278of the revolution indicator. A dial284is mounted over the revolution indicator264and the elevation adjustment spindle232for rotation about vertical axis34when elevation turret212is installed on the main tube14. The dial includes a cylindrical gripping surface286that may be notched, fluted, knurled, or otherwise textured to provide a surface for the user to grip when manually rotating the dial. The dial has a fine scale composed of parallel longitudinal indicia288spaced apart around the circumference of the dial to facilitate fine adjustments. The dial includes two threaded bores (not visible) spaced around the circumference of the dial and sized to receive threaded set screws292. It should be appreciated that any number of bores, with a corresponding number of set screws, may be provided on the dial. The set screws rigidly couple the dial to the upper neck portion236of the elevation adjustment spindle so the dial and elevation adjustment spindle rotate together as a unit. A tool, such as a hex key (not shown), can be used to tighten the set screws such that the set screws bear against the upper neck portion of the spindle. Similarly, the tool can be used to loosen the set screws so that the dial can be rotated relative to the elevation adjustment spindle about the vertical axis or be removed and replaced with a different dial if desired. In other embodiments (not shown), the dial is coupled or releasably coupled to the elevation adjustment spindle in a manner other than by set screws. A flanged portion294on the upper neck portion helps prevent the dial from lifting upward in a direction along the vertical axis. The tooth276of the revolution indicator264engages with rear vertical slot262in the index ring252to prevent rotation of the revolution indicator about the vertical axis34when the dial284is rotated. The index ring is prevented from rotating when the dial is rotated by a press fit and/or adhesive between the index ring and the flange226. Because the revolution indicator is constrained from rotating about the vertical axis, rotation of the dial is translated into linear motion of the revolution indicator along the vertical axis, thereby changing the portion of indicia268that is viewable below the dial.
Referring now toFIG.6, the underside296of the dial284defines a curved slot298. The slot closely receives the zero stop boss274at one end300when the dial is positioned at the zero point, thereby constraining the dial and preventing further rotation of the dial about the vertical axis34beyond the zero point relative to the main tube14. From this stopped position, the dial can be manually rotated about the vertical axis away from the zero point position. As the dial is rotated (i.e., as the user is making a desired adjustment), the zero stop boss rides away from the stopped position and along the curved slot. As the dial rotates, the revolution indicator264descends within the dial. Once the dial has completed a rotation around the vertical axis, the revolution indicator has descended sufficiently so the zero stop boss does not engage with the end or any other portion of the curved slot on the second and subsequent rotations. Thus, the dial can continue to turn for multiple rotations without stopping. As the dial completes a rotation around the vertical axis, the portion of indicia268viewable below the dial changes, which enables the user to readily determine how many rotations of the dial about the vertical axis have been completed. The user can continue turning the dial until the revolution indicator264bottoms out against the turret flange226somewhere between 60 and 75 MOA of adjustment or the scope itself runs out of internal elevation travel, whichever comes first. At that point, further rotation of the dial in this direction is prevented. The dial can still be rotated in an opposite direction for further fine adjustment and/or to return the dial to its zero point/home position where the dial automatically stops by contact between the zero stop boss and the end of the curved slot. The revolution indicator, dial, and zero stop boss are preferably constructed of or coated with a rigid, durable, and wear-resistant material, such as nylon, PTFE polymers (e.g., Teflon®), steel, aluminum, or other suitable material, to withstand wear from the zero stop boss stopping further rotation when hitting the end of the zero stop slot. The zero stop boss never touches the outside edges of the slot298in the dial. The zero stop boss only touches the stop face300when the adjustment reaches zero to prevent further rotation. This interface is critical because the user may hit the stop quite hard, damaging the zero stop boss if the zero stop boss is not sufficiently durable to withstand that force. In other embodiments, the dial may be manufactured from one material, and the zero stop boss may be manufactured from a different material. For instance, since the dial may not experience as much wear from stopping the rotation due to the amount of material supporting the zero stop interface as compared to the zero stop boss, the dial may be constructed from anodized aluminum or other material to provide a balance of component weight, wear-resistance, and strength. On the other hand, since the zero stop boss is smaller and has less strength due to less supporting material, the zero stop boss may be manufactured from or coated with a different material, such as stainless steel, for strength, wear-resistance, and corrosion-resistance. FIGS.7-9illustrate how the indicia268exposed below dial284indicate whether the dial is in the zero point stopped position and also indicate the number of rotations of the dial.
Simply by considering the relative position of indicia268and the bottom304of the dial, the user can quickly determine the state of the dial (i.e., whether it is stopped and/or the number of rotations about vertical axis34). In an example operation, when the dial is in a stopped position (during which zero stop boss274is received within curved slot298and is obstructed by end300), the revolution indicator264is in a first position, such as illustrated inFIG.7. In this first position, indicia268have the indicium for 0 MOA visible. When dial284is in the zero point position, the dial can be manually rotated in a single direction about vertical axis34. The end300of the curved slot298obstructs the zero stop boss274to prevent the dial from being manually rotated in the opposite direction. As the dial is rotated, the zero stop boss slides in the curved slot. As the dial rotates, the revolution indicator264descends to expose a different portion of indicia268below the dial denoting increasing amounts of adjustment until further rotation of the dial is prevented as described previously when 60-75 MOA of adjustment is reached. The diagonal cross-sectional view inFIG.9illustrates the position of the revolution indicator after the dial has been rotated once about the vertical axis. Reversing rotation of the dial284at any point causes the same functions to be performed in reverse. For example, when the dial is rotated in the reverse direction, the revolution indicator264ascends within the dial to expose a different portion of indicia268below the dial denoting decreasing amounts of adjustment. As the dial is turned back into the zero point stopped position, the zero stop boss274is obstructed by end300of the curved slot298, which prevents further rotation of the dial past the zero point. The elevation turret212of the current invention allows for more available rotations of the dial284than traditional elevation turrets having a zero point stop capability, and provides a zero point stop capability at a reduced cost of manufacture compared to traditional approaches. A critical difference of the elevation turret of the current invention is the threading of the revolution indicator264to the dial with multi-start threads272on the revolution indicator and multi-start threads312on the interior314of the dial (shown inFIGS.6-9). The multi-start threads (four start threads in the current embodiment) enable the elevation turret to be built without timing threads, which helps reduce cost. In conventional elevation turrets having a zero point stop capability, the height between the dial/stop and the stopping feature/curved slot end is fixed. By making the zero stop boss move away from the stopping feature/curved slot end in the current invention during the first revolution, multiple additional revolutions are enabled. By using four start threads272,312, the current invention allows for more engagement of zero stop boss274with curved slot end300than a similar one start thread would (one start maximum engagement for 48 pitch threads would be 1/48=0.021″, whereas four start maximum engagement for 48 pitch threads would be 1/48*4=0.083″). The use of four start threads also minimizes the amount of variation in that engagement by starting on the correct thread. 
This can be accomplished by keeping a tight enough tolerance on the height from the curved slot end to where the threads start, in combination with alignment features that indicate which orientation the dial and curved slot end need to be held for the correct thread start to catch and engage when assembling the revolution indicator to the dial. If assembled correctly, the height of the total dial and revolution indicator assembly will be within a band that is the width of 1 thread (48 pitch thread results in a band 0.021″ wide) plus the tolerance of the revolution indicator and the dial. Correct assembly can be checked with calipers or a gauge. When installed with one start threads, the engagement of the zero stop boss274with the end of the zero stop slot300would vary from 0″ to 0.021″, whereas correctly installed four start threads will allow the use of a 0.021″ range of the 0.083″ total engagement available. For example, once the tolerance stack is considered, the ideal engagement may be 0.054″ to 0.075″ to make sure there is always good engagement of the zero stop boss with the curved slot end and the dial and revolution indicator assembly never bottoms out before the zero stop boss engages with the curved slot end. This would not be possible without timed threads using a one start thread and, even if timed threads were used, it would be significantly more susceptible to wear and damage because of the extremely limited 0.021″ maximum engagement of the zero stop boss with the curved slot end, which would have to be limited even further due to tolerance considerations. Furthermore, the four start threads allow the revolution indicator to move vertically four times as far per dial revolution as standard one start threads, enabling the zero stop boss on the revolution indicator to be well clear of the curved slot, including the end denoting the zero point, on the second and subsequent revolutions of the dial. In some embodiments, the zero stop boss274could be a separate component that is adjustably threaded into the revolution indicator264in order to be made from a stronger material and/or to be adjustable to maximize engagement when using single start threads and/or compensate for the variation caused by untimed threads. In some embodiments, the rifle scope with zero stop200may include sealing devices and other features to minimize entry of foreign materials, such as dust, dirt, or other contaminants, to help prevent rust, wear, or other damage to the components of the rifle scope with the zero stop. The seals may be hermetic seals, and the interior of the main tube14may be filled with a dry gas, such as nitrogen or argon, to help prevent fogging that may otherwise be caused by condensation of moisture vapor on surfaces of lenses and other optical elements within the main body. For example, in some embodiments, elevation turret212may include a pair of contaminant seals316,318sandwiched between the turret flange226and the elevation adjustment spindle232to seal any openings or gaps between the two components and the bore20. The contaminant seals are preferably O-rings formed of rubber or another elastomeric material, but may be formed by any other suitable sealing material, such as plastic, nylon, or PTFE polymers (e.g., Teflon®).
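The caliper or gauge check described above amounts to verifying that the assembly landed on the correct one of the four thread starts, i.e., that the measured height sits within roughly a one-pitch band of nominal and that the resulting engagement falls in the ideal window. A minimal sketch of that acceptance test follows; only the 48-pitch figure and the 0.054″–0.075″ ideal window come from the text, while the nominal height, the part tolerance, and the exact band arithmetic are placeholder assumptions.

# Sketch of the assembly-height acceptance check described above. Only the
# 48-pitch figure and the ideal engagement window come from the text; the
# nominal height and part tolerance are placeholder assumptions.

PITCH_IN = 1 / 48                      # one-thread band width, ~0.021"
IDEAL_ENGAGEMENT_IN = (0.054, 0.075)   # ideal window from the tolerance example

def on_correct_start(measured_height_in, nominal_height_in,
                     part_tolerance_in=0.005):
    """True if the dial/indicator assembly height is within one thread
    (plus part tolerance) of nominal, meaning the correct start engaged."""
    band = PITCH_IN + part_tolerance_in
    return abs(measured_height_in - nominal_height_in) <= band

def engagement_ok(engagement_in):
    """True if the boss-to-slot-end engagement is in the ideal window."""
    low, high = IDEAL_ENGAGEMENT_IN
    return low <= engagement_in <= high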
In the context of the specification, the terms "rear" and "rearward," and "front" and "forward" have the following definitions: "rear" or "rearward" means in the direction away from the muzzle of the firearm, while "front" or "forward" means in the direction towards the muzzle of the firearm. While a current embodiment of a rifle scope with a locking device and a current embodiment of a rifle scope with zero stop have been described in detail, it should be apparent that modifications and variations thereto are possible, all of which fall within the true spirit and scope of the invention. With respect to the above description then, it is to be realized that the optimum dimensional relationships for the parts of the invention, to include variations in size, materials, shape, form, function and manner of operation, assembly and use, are deemed readily apparent and obvious to one skilled in the art, and all equivalent relationships to those illustrated in the drawings and described in the specification are intended to be encompassed by the present invention. Therefore, the foregoing is considered as illustrative only of the principles of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation shown and described, and accordingly, all suitable modifications and equivalents may be resorted to, falling within the scope of the invention. | 39,208 |
11859947 | DETAILED DESCRIPTION The following text sets forth a detailed description of numerous different embodiments. However, it should be understood that the detailed description is to be construed as exemplary only and does not describe every possible embodiment since describing every possible embodiment would be impractical. In light of the teachings and disclosures herein, numerous alternative embodiments may be implemented. It should be appreciated that while the following disclosure refers to bows and other low-velocity projectile weapons, embodiments of the invention may be utilized with other types of weapons. In some exemplary embodiments of the invention, the targeting system interacts with a firearm, a grenade launcher, artillery and other large projectile weapons, a missile, a rocket, a torpedo, or a weapon associated with a vehicle (such as an aircraft, a ship, a tank, an armored personnel carrier, a mobile artillery piece, or the like). It should therefore be noted that throughout the description, “bow” may be replaced by “projectile weapon” or any of the above-mentioned examples; “arrow” may be replaced by “projectile” or any projectile associated with the above-mentioned examples; and “operator” could be replaced with “user,” “hunter,” “gunner,” “shooter,” “driver,” or the like. Various bows utilize sights to assist an operator with aligning the bow to aim an arrow at a target and strike the target with the arrow. For ranging a target, an illuminated display surface may present or project a dot for use with ranging to the target, but the projection may appear to move as the frame-of-reference changes because the beam may be projected relative to the frame of reference of the sight assembly. Small movements of the bow in elevation or azimuth angles may cause the sighting point of the ranging module to shift substantially. Many conventional bows include a peep sight that is attached to or incorporated within the bow string to aim at a target using a pin or LED in a target sighting window of a conventional scope attached to the bow. The peep sight typically forms a small, circular opening through which a target sighting window, which includes a calibrated pin or LED, and a target scene is viewed by the user. The typical location of the peep sight on the bow string requires the bow to be fully drawn (the bow string is pulled by the user to an anchor point in the fully drawn position) and thus limits its use to that position. As a result, use of a conventional peep sight establishes an aiming sight line (a line of sight) extending through the peep sight and the conventional target sighting window to align a target with a pin or LED in the target sighting window when viewed from a user's eye position. Although the peep sight may be positioned close to the user's eye, even slight movement or rotation of the user in the fully drawn position may cause misalignment of the bow and result in errant ranging or shot of the arrow. Similarly, many conventional rifles integrate on a top surface of the rifle a front sight and a rear sight, both of which are aligned by the user when aiming the rifle at a desired target. Typically, one of the rifle sights is a vertical post and the other rifle sight is shaped such that it has a central U-shaped or V-shaped opening through which the user looks to align the rifle properly to strike a desired target. 
The two sights on a top surface of a conventional rifle, or any projectile weapon, enable a user to properly orient the rifle because the sight points serve as two points that align the user's aiming sight line to the barrel of the rifle. Some conventional rifle sights include a projector engine operable to output a holographic image that projects one or more sighting elements onto a surface viewed by a user such that the projected elements appear to be located closer to a target (i.e., the projected sighting elements are projected onto a target plane) when the surface is viewed from a perspective corresponding to a user's eye position. To aid with properly aiming at a target for determining a range to the target, some conventional targeting systems include a target sighting window including a reticle or a ranging module including a laser diode operable to output a visible light (e.g., a laser) on a desired target. For targeting systems that provide information relating to a recommended orientation (e.g., vertical or lateral angular adjustment), it is important that a range is being accurately measured for the desired target instead of a nearby object. Embodiments of the present invention enable precise ranging to such a target using a combination of a laser sighting reticle and a fixed sighting mark presented within a sight without requiring use of a peep sight located on a bow string or a laser diode that outputs a visible light on the target. An arrow's velocity may be impacted by its characteristics, a full bow draw distance, and the type of string release mechanism. Arrows vary by weight and length, as well as various other characteristics. The Earth's gravitational pull causes an arrow to drop after its release while it travels to a target. When aiming for a target, it is well-known for a user (operator) to account for the gravitational pull of the Earth based on a distance to the target and then utilize one or more pins calibrated for various distances to successfully strike a target with the arrow. Additionally, conventional bow sights enable an operator to align, before an arrow is released, one of a plurality of vertically-aligned pins, each of which is calibrated for a known distance, with a desired target based on a distance to the target. For instance, five vertically-aligned pins within a conventional scope may be calibrated for distances of 10 yards, 20 yards, 30 yards, 40 yards, and 50 yards for a particular type of arrow. Calibration of the pins to correspond to each of the desired calibration distances may include adjusting a vertical position of each pin and/or a position of the scope including the pins on the bow. Once a distance to a target is known, the user of a conventional scope may utilize one or more of the calibrated pins to aim at and strike a target. Therefore, determining an accurate range (distance) to a desired target can assist the operator with successfully striking the target. For example, an operator may use a pin calibrated for 20 meters to strike a target located 20 meters away from the bow by aligning the target with the calibrated pin. When a desired target is located at a distance that does not correspond to one or a plurality of pins calibrated for predetermined distances, an operator typically determines a tilt angle of the bow based on the available pins, each calibrated for a predetermined distance.
For instance, an operator may use vertically-aligned pins calibrated for predetermined distances of 20 meters and 40 meters to strike a target located at a range of 30 meters by orienting the bow such that the target is positioned halfway between the two pins calibrated for the predetermined distances of 20 meters and 40 meters, respectively. The targeting system disclosed herein implements features and techniques to aid a user in aligning his aiming sight line to a path of a ranging module beam and adjust an orientation of the bow to successfully strike a target with an arrow. The targeting system may use a distance to the target to determine and present information on a target sighting window relating to an orientation of the bow and arrow. For instance, the presented information may include a variable compensated sighting mark determined based on a determined range to the target, arrow characteristics (e.g., arrow weight, length, velocity, etc.), an inclination of the bow or operator, a direction or speed of wind, or other criteria. Thus, embodiments of the targeting system aid an operator with accurately aiming the bow and arrow towards a target using a target sighting window, accurately ranging (determining a distance to) the target, and determining an orientation of the bow based on a compensated sighting mark presented on the target sighting window. The targeting system may present in the target sighting window two sighting elements that are utilized to align his aiming sight line to a path of a ranging module beam. The two sighting elements are presented such that one sighting element appears closer to the target than the other sighting element when the target window is viewed from a perspective corresponding to a user's eye position. A user may utilize the two sighting elements to align the bow and arrow to correspond to a location at which a ranging module is directed. In embodiments, a ranging module beam is output from a location of the targeting system that is one or two inches adjacent to the center of the target sighting window. For example, the sighting elements may be presented on one transparent surface such that a first sighting element is positioned or presented on the surface and a second sighting element is projected onto the same surface, where the projected second sighting element appears to be closer to the target than the first sighting element when the target window is viewed from a perspective corresponding to a user's eye position. Alternatively, the sighting elements may be presented on parallel, transparent surfaces such that a first sighting element is positioned or presented on a first surface and a second sighting element is positioned or presented on a second surface, where the second surface is closer to the target than the first surface when the surfaces are viewed from a perspective corresponding to a user's eye position. In embodiments, a processor of the targeting system may be configured to present a first sighting element (a fixed sighting mark) on a target sighting window and control a projector to present a second sighting element (a laser sighting reticle) onto the target sighting window. In embodiments, the first sighting element (the fixed sighting mark) is permanently affixed to a surface of the target sighting window such that it is visible at all times. 
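The halfway-between-pins practice described above is, in effect, linear interpolation of the aim point between the two nearest calibrated distances. A brief sketch of that arithmetic follows; the pin heights are illustrative placeholder values, not figures from the text.

# Sketch of interpolating an aim point between two calibrated pins, as in
# the 20 m / 40 m / 30 m example above. Pin heights are placeholder values.

PINS = {20: 0.0, 40: 10.0}  # distance (m) -> pin height (mm below the top pin)

def interpolated_height(distance_m, lower_m=20, upper_m=40):
    """Linearly interpolate the aim height for a range between two pins."""
    frac = (distance_m - lower_m) / (upper_m - lower_m)
    return PINS[lower_m] + frac * (PINS[upper_m] - PINS[lower_m])

print(interpolated_height(30))  # 5.0 -> halfway between the 20 m and 40 m pins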
As detailed below, the combination of the fixed sighting mark and the projected laser sighting reticle on the target sighting window provides two points along a user's aiming sight line that aid with aligning his aiming sight line to a path of a ranging module beam and with properly orienting the targeting system for ranging to a desired target. When the combination of sighting marks presented in the target sighting window is combined with a peep sight, the user has three sighting references to accurately aim the bow and arrow to the target, thus improving the user's aim. However, a peep sight is not required to align the bow with the user's aiming sight line towards the desired target because the combination of the fixed sighting mark and the projected laser sighting reticle (both sighting marks presented on the target sighting window) provides the two points a user needs to properly orient the targeting system to align with the path of the ranging module beam. The two sighting marks are fixed relative to each other and enable a user to aim at a target desired to be ranged without requiring the bow string to be fully drawn to bring a peep sight into a position through which it may be viewed from a user's eye position. Thus, in embodiments of the targeting system including a peep sight for use when the bow string is fully drawn, the user has three sight points for reference when aiming at a target (if the peep sight is aligned to the line of sight corresponding to the sighting marks presented on the target sighting window for a predetermined distance, such as 20 yards). As discussed below, the bow string is drawn by the user to bring the peep sight near the user's eye position and used for alignment with a variable compensated sighting mark presented in the target sighting window when the user is ready to release the arrow from the bow towards the target. The targeting system may also include a plurality of vertically-aligned light sources presented on or projected onto a surface of the target sighting window for use after a desired target has been ranged. Specifically, after a desired target has been ranged (i.e., a distance from the bow and arrow to the target has been determined), the processor may be further configured to present on the target sighting window a variable sighting mark indicative of a compensated targeting mark. The vertically-aligned light sources are substantially continuous such that any point along a center vertical line extending between the top and bottom of the target sighting window may be illuminated by the processor, and the position of the compensated targeting mark on the target sighting window is determined at least in part based upon a range determined by the ranging module to a target. For instance, the targeting system processor may utilize a distance to a target, as well as other criteria, to determine a location (point) along the vertical line at which the user should aim to strike the target and then control the corresponding light source to be illuminated. Similar to the fixed sighting mark, the vertically-aligned light sources may be positioned or presented on the target sighting window. One of the vertically-aligned light sources is illuminated by the processor to present the variable compensated sighting mark. For instance, the light sources may be OLED light sources mounted to a transparent strip affixed to a target sighting window.
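Because the light sources form a substantially continuous vertical line, presenting the compensated mark reduces to quantizing the computed aim point to the nearest source on the strip. A minimal sketch of that mapping follows; the source count and strip span are assumed for illustration only.

# Sketch: pick the light source nearest the computed aim point along the
# vertical strip. Source count and strip span are illustrative assumptions.

NUM_SOURCES = 128       # assumed number of vertically-aligned sources
STRIP_SPAN_IN = 1.5     # assumed usable strip length, inches

def source_index(offset_in):
    """Index of the source to illuminate, for an aim point measured in
    inches below the topmost source."""
    if not 0.0 <= offset_in <= STRIP_SPAN_IN:
        raise ValueError("aim point falls outside the strip")
    step = STRIP_SPAN_IN / (NUM_SOURCES - 1)
    return round(offset_in / step)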
Alternatively, the light sources may be located in a housing of the targeting system and reflected onto a reflective side of the target sighting window. An operator may use the targeting system in a variety of manners. For instance, an operator positioned at a location that is level with (having similar elevation to) a location of a desired target may use the combination of the laser sighting reticle and the fixed sighting mark presented on the target sighting window to aim the ranging module at a target, provide a user input to cause a processor of the targeting system to determine a range (distance) to the desired target, determine an orientation of the bow and arrow based on the determined range (distance), and then present on the target sighting window a variable compensated sighting mark, which the user uses to aim for the desired target before an arrow is released. For the ranging step, a ranging module of the targeting system determines a distance to a target once the operator has oriented the bow and ranging module to aim towards a desired target such that a beam of the ranging module may be output from the ranging module and reflect back to the ranging module. The ranging module beam is output from a predetermined position relative to the target sighting window (e.g., one or two inches adjacent to the center of the target sighting window). The combination of the fixed sighting mark and the projected laser sighting reticle on the target sighting window provides two points along which to align a user's aiming sight line to the path of a ranging module beam. The target sighting window of the targeting system may present a fixed sighting mark and a laser sighting reticle to enable an operator to aim the bow and ranging module towards the target by orienting the bow and ranging module to a desired target such that the ranging module may accurately determine a range to the desired target. Because the fixed sighting mark and laser sighting reticle align only when the bow and ranging module are properly aimed at a target, the operator may utilize both sighting marks to aim at a desired target to which the ranging module may determine a range. Such use of two sighting marks presented in the target sighting window does not require aligning a peep sight with the user's eye, which typically requires fully drawing a bow string to an anchor, and thereby avoids inadvertently determining a range to an object other than the desired target. The processor may determine a distance to the target based on the duration of time that passed for the beam to travel to the target and reflect back to the ranging module. In embodiments, the targeting system may include a depressible trigger (e.g., a button, switch, etc.) for the processor to receive user input that the bow has been oriented to aim towards a desired target and is ready for ranging. For instance, the targeting system may be configured such that the user engages and holds the trigger during the aiming process and then releases the trigger once the bow has been oriented towards a target (providing an input to the processor to determine a distance to the target). Alternatively, the targeting system may be configured such that the user does not engage the trigger during the aiming process and then engages the trigger once the bow has been oriented towards a target to determine a distance to the target.
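The range computation itself is the usual laser time-of-flight relation: the beam covers the distance twice (out and back) at the speed of light, so the one-way range is half the round-trip time multiplied by c. A minimal sketch of that arithmetic; the function name is illustrative.

# Sketch of the time-of-flight range calculation described above.

C_M_PER_S = 299_792_458.0  # speed of light

def range_m(round_trip_s):
    """One-way distance to the target from the beam's round-trip time."""
    return C_M_PER_S * round_trip_s / 2.0

print(range_m(200e-9))  # a 200 ns round trip is roughly a 30 m target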
The processor may determine an angle at which the bow and arrow should be tilted vertically (up or down), rotated laterally (left or right), or any combination thereof, based at least partially on the determined range (distance) to the target and present on the target sighting window a variable compensated sighting mark, which the user aligns with the desired target before an arrow is released. The user orients the bow and arrow to the determined angle by aligning the variable compensated sighting mark with the target. The processor may determine a location on the target sighting window onto which the variable sighting mark is presented and control a corresponding light source to be illuminated. The processor may utilize the determined range to the target, an inclination, a sensed direction or speed of wind, a stored velocity of an arrow, or any combination thereof, to determine the location of the variable sighting mark that is presented on the target sighting window. The operator may align the variable sighting mark with the desired target before releasing an arrow with which the operator desires to strike the target. As a result, a user may utilize the variable sighting mark to orient the bow vertically (up or down), laterally (to left or right side), or any combination thereof, to strike the target. Thus, the targeting system ensures that elements of a target sighting window and a beam of a ranging module properly align to aim at a common target for accurate ranging to the target. The targeting system utilizes the range (distance) to provide a compensated sighting mark in the target sighting window to aid a user with adjusting his aim towards the target before an arrow is released. A discrepancy between the ranging step and orienting the bow based on the determined range may cause the arrow to miss the desired target.
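One conventional way to turn the measured range, the sensed inclination, and a stored arrow velocity into a vertical correction is a drag-free drop model in which only the horizontal component of the slant range drives gravity drop, consistent with the horizontal-component treatment described for the inclinometer below. This is a hedged sketch of that style of calculation, not the patent's algorithm; windage handling is omitted and all values are illustrative.

import math

G_MPS2 = 9.81  # gravitational acceleration

def drop_m(slant_range_m, incline_rad, arrow_speed_mps):
    """Approximate gravity drop for an inclined shot, ignoring drag.
    A simplified textbook model, not the patent's method."""
    horizontal_m = slant_range_m * math.cos(incline_rad)
    time_of_flight_s = horizontal_m / arrow_speed_mps
    return 0.5 * G_MPS2 * time_of_flight_s ** 2

# e.g. a level 30 m shot at 90 m/s needs roughly 0.55 m of holdover
print(drop_m(30.0, 0.0, 90.0))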
The target sighting window108is substantially transparent, with a reflective layer such that it is operable to allow light to pass through to observe the target218and to direct a targeting projection to the operator. As discussed more below, the surface of the target sighting window108may be partially reflective (for example, within a range of 10-50%), polarized, and/or may incorporate a narrow-band reflectivity to enhance the visibility of the various projected reticles. The projector is operable to project onto the target sighting window108a fixed sighting mark110and/or a laser sighting reticle112that substantially aligns line of sight208to the ranging module transmit axis212. The projector is further operable to project a variable compensated sighting mark114onto the target sighting window108. The variable compensated sighting mark114is associated with a compensated targeting axis210, which is determined at least in part based upon the range indication. In embodiments, the color of the variable compensated sighting mark114may be the same color as the fixed sighting mark110or the variable compensated sighting mark114may be a different color to increase visibility of the variable compensated sighting mark114. Targeting system102may include, in embodiments, a projector housing406enclosing a processor, a memory, a ranging module500, an inclinometer, an accelerometer, a battery, and other components. The targeting system102may include a processor (which may be the microcontroller illustrated inFIG.11). The processor provides processing functionality for the targeting system102and may include any number of processors, micro-controllers, or other processing systems, and resident or external memory for storing data and other information accessed or generated by the targeting system102. To provide examples, the processor may be implemented as an application specific integrated circuit (ASIC), an embedded processor, a central processing unit associated with targeting system102, etc. The processor may execute one or more software programs that implement the techniques and modules described herein. The processor is not limited by the materials from which it is formed or the processing mechanisms employed therein and, as such, may be implemented via semiconductor(s) and/or transistors (e.g., electronic integrated circuits (ICs)), and so forth. It is to be understood that the processor of targeting system102may be implemented as any suitable type and/or number of processors. For example, the processor may be a host processor of targeting system102that executes functions and methods relating to the information presented on target sighting window108as well as functions and methods relating to ranging module500. It should also be appreciated that the discussed functions and methods performed by the processor of the targeting system102may be performed by the processor of the ranging module500. In embodiments, ranging module500includes a separate processor and the described structure of the processor may also describe corresponding structure on the processor of the ranging module500. The targeting system may also include a communications element (not illustrated) that permits the targeting system102to send and receive data between different devices (e.g., the ranging module500, the inclinometer, other components, peripherals, and other external systems) and/or over the one or more networks. The communications element includes one or more Network Interface Units. 
An NIU may be any form of wired or wireless network transceiver known in the art, including but not limited to transceivers configured for communications over one or more networks. Wired communications are also contemplated such as through universal serial bus (USB), Ethernet, serial connections, and so forth. Targeting system102may include multiple NIUs for connecting to different networks or a single NIU that can connect to each necessary network. The targeting system102may also include a memory (not illustrated). The memory is an example of device-readable storage media that provides storage functionality to store various data associated with the operation of the targeting system102, such as the software program and code segments discussed below, or other data to instruct the processor and other elements of the targeting system102to perform the techniques described herein. A wide variety of types and combinations of memory may be employed. The memory may be integral with the processor, a stand-alone memory, or a combination of both. The memory may include, for example, removable and non-removable memory elements such as RAM, ROM, Flash (e.g., SD Card, mini-SD card, micro-SD Card), magnetic, optical, USB memory devices, and so forth. In embodiments of the targeting system102, the memory may include removable ICC (Integrated Circuit Card) memory such as provided by SIM (Subscriber Identity Module) cards, USIM (Universal Subscriber Identity Module) cards, UICC (Universal Integrated Circuit Cards), and so on. The targeting system102may also comprise an inclinometer operable to determine an inclination of a ranging module transmit axis212relative to horizontal (e.g., relative to an artificial horizon). The compensated targeting axis210is determined at least in part by a horizontal component to the range indication. As the target218may be above or below targeting system102and its ranging module500, the range indication can be expressed as a vertical component and a horizontal component (being the vertical and horizontal sides of a right triangle, with a line from the ranging module500to the target218being the hypotenuse). Because the drop induced by the force of gravity accrues over travel in the horizontal direction, only the horizontal component (or some associated ratio) may be used in calculating the compensated targeting axis210. The targeting system102may also comprise an accelerometer (illustrated schematically inFIG.11) operable to detect a shot from the bow100. The accelerometer detects accelerations or other motion of the targeting system102. If the detected acceleration is above a certain threshold, the accelerometer (or the processor) may register a shot. The determination that the operator has shot the bow100may then be used for various purposes. For example, during the calibration process the processor may prompt the user, via the alphanumeric display116, to input whether the arrow106struck the target218and/or any targeting error between the sighted point and the impact point. As another example, following the calibration process, the processor may instruct the projector600to turn off the variable sighting mark114. The targeting system102may also comprise an ambient light sensor (illustrated schematically inFIG.11) operable to detect an ambient light level at the bow100. A characteristic of the variable compensated sighting mark114is determined by the ambient light level (or more specifically, determined by an ambient light reading from the ambient light sensor).
Characteristics of the variable compensated sighting mark114that may change include a brightness level, a color, a shape or a size, or other visual characteristic. The characteristic is changed such that the operator can still see the variable compensated sighting mark114as well as the target218without the variable sighting mark114being too obtrusive. For example, in low light scenarios, a smaller and/or dimmer variable sighting mark114will allow the operator to observe both the variable sighting mark114and the target218. The variable sighting mark114may also be in the red spectrum so as to reduce night blindness in the operator. In brightly lit scenarios, a larger and/or brighter variable sighting mark114may be used to ensure that the operator can see the variable sighting mark114. In embodiments of the invention, the changing of the characteristic is performed without operator input (e.g., the processor selects the characteristics of the variable sighting mark114without prompting the operator). The operator may additionally or alternatively be provided with a selection for the variable sighting mark114(for example, the operator may indicate that a brighter variable sighting mark114is generally desired by the operator, or that the operator prefers the variable sighting mark114to be a certain color). In embodiments of the invention, the targeting system102includes an alphanumeric display116to present information to the operator (as illustrated inFIG.7). In embodiments, the alphanumeric display116may comprise an LCD (Liquid Crystal Diode) display, a TFT (Thin Film Transistor) LCD display, an LEP (Light Emitting Polymer) or PLED (Polymer Light Emitting Diode) display, an OLED (Organic Light-Emitting Diode), and so forth, configured to display text and/or graphical information such as a graphical user interface. The alphanumeric display116could also be a three-dimensional display, such as a holographic or semi-holographic display. The alphanumeric display116may be backlit via a backlight such that it may be viewed in the dark or other low-light environments, as well as in bright sunlight conditions. The alphanumeric display116may be provided with a screen for presentation of information and entry of data and commands. In one or more implementations, the screen comprises a touch screen. For example, the touch screen may be a resistive touch screen, a surface acoustic wave touch screen, a capacitive touch screen, an infrared touch screen, optical imaging touch screens, dispersive signal touch screens, acoustic pulse recognition touch screens, combinations thereof, and the like. Capacitive touch screens may include surface capacitance touch screens, projected capacitance touch screens, mutual capacitance touch screens, and self-capacitance touch screens. The alphanumeric display116may therefore present an interactive portion (e.g., a “soft” keyboard, buttons, etc.) on the touch screen. In some embodiments, the alphanumeric display116may also include physical buttons integrated as part of targeting system102that may have dedicated and/or multi-purpose functionality, etc. In other embodiments, the alphanumeric display116includes a cursor control device (CCD) that utilizes a mouse, rollerball, trackpad, joystick, buttons, or the like to control and interact with the alphanumeric display116. FIG.2shows a side view of the bow100in both drawn and undrawn positions. A bow string206,216provides an exemplary form of propulsion for arrow106. 
Bow string206corresponds to bow100in the fully drawn position where bow string206and arrow106have been pulled by the user to an anchor point. Bow string216corresponds to bow100when in the undrawn position. The targeting system102is aligned with bow100or positioned in front of bow100using the attachment arm200. The attachment arm200places the targeting system102approximately 0.6 to 0.8 meters from an eye position202of the user when bow100is drawn. In some embodiments, such as bow100being a compound bow, a peep sight204is attached to or incorporated within bow string206. The peep sight204forms a small, circular opening through which the target scene and target sighting window108are viewed by the user from eye position202. A line of sight208extends from eye position202, through peep sight204, through the target sighting window108, to a target218while bow100is in the drawn position. Movement of peep sight204attached to bow string206from an unused initial position214to a drawn position is illustrated using a broken line. To help illustrate use of targeting system102, a plurality of axes are described herein only for illustrative purposes. It is to be understood that two or more of the axes may be directed in the same direction at some moments in time and each axis may be directed in different directions at other moments in time. A first axis, a line of sight208, extends from eye position202through the target sighting window108to a target218. When bow100is in the drawn position, line of sight208extends through peep sight204. A second axis, a compensated targeting axis210, corresponds to a trajectory of the arrow106after release. A third axis, a ranging module transmit axis212, corresponds to the beam output from ranging module500towards target218. It is to be understood thatFIG.2is not drawn to scale, but the compensated targeting axis210is generally illustrative of an initial inclination of the trajectory of the arrow106after release, and is generally aligned with (e.g., parallel to) a ranging module transmit axis212(discussed below). The arrow106follows a trajectory220through the air to a desired point on target218. For instance, if arrow106travels a significant distance from bow100to reach a target218located at a similar height as bow100, trajectory220rises to an apex before gravity and air resistance cause the arrow to descend to the target218. It should therefore be appreciated that a compensated targeting axis210may be raised such that arrow106is aiming above the target218. The compensated targeting axis210is the axis along which the arrow106travels initially upon leaving the bow100. For a target218located at a similar height to bow100, the compensated targeting axis210is typically above a target sight line222extending from eye position202to the target218, such that (from the operator's perspective) the trajectory220of arrow106appears to be above the target218. The location of variable compensated sighting mark114, as discussed in depth below, is determined by the processor to enable the operator to orient bow100such that the compensated sighting mark114is placed onto a location of the desired target218(by viewing target218through the target sighting window108). In embodiments, the targeting system102comprises a ranging module500(illustrated inFIG.5), a target sighting window108, and a projector (best illustrated inFIG.6).
In embodiments, the targeting system102comprises a ranging module500(illustrated inFIG.5), a target sighting window108, and a projector (best illustrated inFIG.6). The ranging module500is operable to determine a range to a target218and has an associated ranging module transmit axis212along which a beam is transmitted to the target218(a reflection of the beam from target218may follow the same path). In some embodiments of the invention, the targeting system102may be integrated into the bow100. In other embodiments of the invention, the targeting system102is a standalone device that is secured to the bow100. In still other embodiments of the invention, the targeting system102is a standalone device that may additionally or optionally interface with other external devices (such as a bow camera, a smart phone, a location element, or other device). FIG.3shows the targeting system102detached from the bow100. The targeting system102may include target sighting window108as well as various sensors and circuitry to calculate a range from bow100to a target218, determine an orientation of bow100, or determine environmental conditions (e.g., via a wind sensor, an ambient light sensor, etc.). Targeting system102may include a housing formed from a unitary assembly or combined in a semi-permanent configuration containing the components of targeting system102. As discussed below, the operator may align the targeting system102using the fixed sighting mark110such that line of sight208intersects (coincides with) ranging module transmit axis212at a certain distance when target window108is viewed from a perspective corresponding to eye position202. As seen inFIG.6, line of sight208and ranging module transmit axis212are separated by a predetermined distance (e.g., 1-2 inches) and originate from eye position202and beam source508, respectively. The separation between line of sight208and ranging module transmit axis212is identified by reference "D." Therefore, fixed sighting mark110enables a user to ensure that the target being aimed at from eye position202corresponds to the beam output from ranging module500for accurately ranging the target218. The attachment arm200therefore may be operable to be adjusted by the operator to provide this alignment of line of sight208and ranging module transmit axis212(a geometric sketch of this convergence follows this paragraph). Such proper alignment is confirmed and adjusted as needed during the calibration process.
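As a hedged geometric sketch of this convergence: two axes that start a distance D apart and are toed in toward each other intersect at exactly one range. The separation of 1.5 inches (within the 1-2 inch example above) and the 20-yard crossing range below are assumptions for illustration.

```python
# Sketch: toe-in angle for line of sight208and transmit axis212to
# cross at a chosen calibration range; values are assumptions.
import math

D_M = 1.5 * 0.0254      # assumed axis separation of 1.5 inches, in meters
ZERO_M = 20 * 0.9144    # assumed 20-yard crossing (zero) range, in meters

toe_in = math.atan2(D_M, ZERO_M)  # angle that makes the axes cross at ZERO_M
print(f"toe-in: {math.degrees(toe_in):.3f} degrees")

# Residual separation between the axes at other ranges
# (zero at the crossing range, negative once the axes have crossed).
for r_yd in (5, 10, 20, 40):
    r_m = r_yd * 0.9144
    offset_mm = (D_M - r_m * math.tan(toe_in)) * 1000
    print(f"{r_yd:>2} yd: axes separated by {offset_mm:.1f} mm")
```

The small residual offsets at other ranges suggest why aligning the fixed sighting mark110to the beam is sufficient for ranging purposes across typical archery distances.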
The attachment arm200may be adjusted in a variety of manners to enable proper use of targeting system102with bow100. For instance, the attachment arm200may include translation adjustments, angle elevation adjustments (which may be referred to as "pitch"), azimuth adjustments (which may be referred to as "yaw"), and/or rotation adjustments (which may be referred to as "roll"). The attachment arm200may include or couple to an alignment mechanism300that provides translation of the targeting system102in elevation and azimuth to align a fixed sighting dot to the ranging module transmit axis212as well as the nominal trajectory220of the arrow106. Exemplary components of the alignment mechanism300, such as those for rack and pinion elevation adjustments302and azimuth adjustments304, are shown. Further examples could include a rotational adjustment306, which provides rotation or roll of the targeting system102. A yaw sight adjustment308moves the targeting system102in a yaw direction, and a pitch sight adjustment310moves the targeting system102in the pitch direction. It should be appreciated that these adjustments are made relative to the bow100on which the attachment arm200is mounted. FIG.4illustrates an exemplary targeting system102. Generally, the processor determines a range (distance) to a target218and controls projector600to present a variable compensated sighting mark114on target sighting window108so that the operator can orient the bow100and arrow106based on the location of the variable compensated sighting mark114to strike target218with the arrow106. Ranging module500is communicatively coupled to the processor of targeting system102such that a distance to target218, as well as additional information, is determined by the targeting system102.FIG.4illustrates a backside view of the targeting system102with a detail of a region of the window in proximity of the fixed sighting mark110. In embodiments of the invention, the fixed sighting mark110is a dot or other shape presented on target sighting window108. For instance, the fixed sighting mark110may be reflected onto a reflective side of the target sighting window108or permanently etched or printed on the target sighting window108. If fixed sighting mark110is permanently shown on target sighting window108, the fixed sighting mark110is visible without activating the targeting system102. The fixed sighting mark110is used with laser sighting reticle112for initial targeting of the target218for ranging calculations. In embodiments, the processor may present one or more alignment guidance marks408on target sighting window108to assist a user with orienting himself or bow100to bring fixed sighting mark110and laser sighting reticle112near or closer to each other when target window108is viewed from a perspective corresponding to eye position202. In other words, because an initial orientation of bow100relative to a position of the operator's eye202may result in the fixed sighting mark110not being proximate to the laser sighting reticle112, the processor of the targeting system102may present alignment guidance marks408indicating the direction in which bow100must be moved (oriented) to bring fixed sighting mark110and laser sighting reticle112near or closer to each other within an alignment region400(a sketch of this selection logic follows this paragraph). The alignment region400represents a general area in which the fixed sighting mark110and the laser sighting reticle112are projected or presented. It should be appreciated that typically the alignment region400is not physically shown on the target sighting window108, but is included inFIG.4for illustrative purposes. The alignment guidance marks408assist a user with orienting himself (his eye position202) or the bow100to which the targeting system102is attached to enter a small eye-box or viewing area in proximity of the operator's eye202and align fixed sighting mark110with laser sighting reticle112, thereby confirming that the line of sight208intersects (coincides with) ranging module transmit axis212when target window108is viewed from a perspective corresponding to eye position202. In embodiments, each alignment guidance mark408may include or integrate arrows (or similar indicators of direction) to help the user properly align the bow100and targeting system102to reach alignment region400. For instance, in certain orientations of the bow100relative to the user's eye202, only one alignment guidance mark408may be visible from eye position202.
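A minimal sketch of one plausible selection logic for such guidance marks follows (Python; assumed logic, not the patented algorithm): given the apparent offset between laser sighting reticle112and fixed sighting mark110, pick the direction cues that would close the gap. The offset units and dead-band threshold are assumptions.

```python
# Hedged sketch: choose which alignment guidance cues to present based
# on the reticle-to-fixed-mark offset (arbitrary display units).

def guidance_arrows(dx: float, dy: float, dead_band: float = 1.0) -> list[str]:
    """Return direction cues that would move the reticle onto the fixed mark."""
    cues = []
    if dx > dead_band:
        cues.append("move left")
    elif dx < -dead_band:
        cues.append("move right")
    if dy > dead_band:
        cues.append("move down")
    elif dy < -dead_band:
        cues.append("move up")
    return cues or ["aligned"]

print(guidance_arrows(4.0, -2.5))  # e.g., ['move left', 'move up']
print(guidance_arrows(0.2, 0.3))   # ['aligned']
```

The dead band stands in for the alignment region400: once the offset is small enough, no directional cue is needed.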
The processor may control projector engine616and light array610to output any combination of the fixed sighting mark110, laser sighting reticle112, and alignment guidance marks408. For example, in some embodiments, projector engine616may output a holographic image including one or more alignment guidance marks408and the laser sighting reticle112, and light array610may output fixed sighting mark110. In other embodiments, projector engine616may include an active display device, such as a liquid crystal on silicon (LCoS) device, which changes an orientation of the projected first sighting mark (laser sighting reticle112) and/or the alignment guidance marks408on the target sighting window108. Movement of the projected first sighting mark may allow aiming at a compensated distance to target218without the use of a peep sight204. As a result, in such embodiments, both the alignment guidance marks408and the laser sighting reticle112appear to be located closer to target218than fixed sighting mark110when target window108is viewed from a perspective corresponding to eye position202. In another example, projector engine616may output a holographic image including the laser sighting reticle112, and light array610may output fixed sighting mark110and one or more alignment guidance marks408. As a result, in such embodiments, the laser sighting reticle112appears to be located closer to target218than fixed sighting mark110and the alignment guidance marks408when target window108is viewed from a perspective corresponding to eye position202. In yet another example, the projector engine616may output a holographic image including alignment guidance marks408, and light array610may output laser sighting reticle112and fixed sighting mark110. As a result, in such embodiments, the alignment guidance marks408appear to be located closer to target218than fixed sighting mark110and laser sighting reticle112when target window108is viewed from a perspective corresponding to eye position202. FIG.4illustrates the target sighting window108including a window402disposed diagonally (as best illustrated inFIG.6) to line of sight208and the ranging module transmit axis212. The window402is enclosed, at least in part, by a window housing404. The window housing404reduces glare on the window402and also provides structural strength and protection for the window402. The window housing404may be substantially tubular and the window402may present an elliptical shape. As such, the window402is disposed within the tubular window housing404at a diagonal angle (e.g., approximately 45 degrees, between 40 and 50 degrees, between 30 and 60 degrees, etc.) as viewed from above or below (as illustrated inFIG.6). The targeting system102also includes a projector housing406adjacent to the window housing404. The projector600(as illustrated inFIG.6) is disposed within the projector housing406. The target sighting window108provides an unobstructed view of the target218as the projector600and the projector housing406are adjacent thereto. The projector housing406may also contain other components described herein such as the processor, the memory, the ranging module, the inclinometer, the accelerometer, a battery, or other components. FIG.5illustrates a cross section of a ranging module500, and also shows an illuminated reticle pattern generator502with associated folding optics504. The ranging module500includes a transmit beam tube506having a beam source508therein. The beam source508outputs a beam directed along ranging module transmit axis212to the target218. The ranging module500further includes a receive beam tube510having a beam receptor512therein.
The beam receptor512detects reflected beams from target218and also detects various properties of the reflected beams (such as time of flight, frequency, wavelength, angle received from, or other property). In embodiments of the invention, the ranging module500is configured to emit a beam and receive a reflected beam from a target218. The emitted beam may be a laser beam, an energized beam, a light beam, a radar beam, an infrared beam, a sonar beam, an ultraviolet beam, or other electromagnetic or physical beam. In some embodiments of the invention, the ranging module500may include a plurality of sensors that utilize any or all of the above-discussed types of signals. The ranging module500may alternatively or additionally include a camera for the detection of visible light. Typically, the ranging module500will be oriented relative to the bow100outward in a certain range or field, as set by the attachment arm200discussed above. Signals reflected back to the ranging module500are analyzed by the processor of the targeting system102to determine a distance and/or direction to target218from bow100and arrow106. Typically, the processor may determine the distance from the ranging module500to the target218(as illustrated inFIG.2) by analyzing the reflected beam and the outputted beam to calculate a duration of time that passed for the beam to travel to target218and reflect back to ranging module500(a sketch of this calculation follows this paragraph). Signals received outside a certain range may be discarded as not being reflected from the target218. It is to be understood that the processor of targeting system102performs the analysis of the reflected signals and a portion of the processor may be within the ranging module500. This analysis determines the range (distance) to the target218as measured from the ranging module500. The ranging module500may also detect an altitude of the target218relative to the ranging module500. The relative altitude may be measured by use of an inclinometer.
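For a laser beam, the time-of-flight computation described above reduces to range = c·t/2, since the measured round trip covers the distance twice. A hedged sketch follows; the gating window used to discard out-of-range returns is an assumed example, consistent with the statement above that signals outside a certain range may be discarded.

```python
# Sketch: laser time-of-flight ranging with a simple range gate.
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(round_trip_s: float,
                   min_range_m: float = 2.0,     # assumed gate values
                   max_range_m: float = 300.0) -> float | None:
    """Return range in meters, or None if the echo falls outside the gate."""
    distance = C * round_trip_s / 2.0
    if not (min_range_m <= distance <= max_range_m):
        return None  # reject returns that cannot have come from the target
    return distance

print(range_from_tof(266.9e-9))  # ~267 ns round trip -> about 40.0 m
```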
The hardware associated with the projection of any sighting marks projected on target sighting window108is illustrated inFIG.6, and projector600will now be discussed in detail. As discussed above, in embodiments, the sighting elements may be presented on one transparent surface such that one element is projected onto the surface and appears to be closer to the target than the other sighting element when target window108is viewed from a perspective corresponding to eye position202. For instance, the targeting system102may be configured to present a first sighting element (a fixed sighting mark110) on a target sighting window108and control projector600to present a second sighting element (a laser sighting reticle112) onto the target sighting window108. The projector600outputs various lights, beams, or other energized particles toward the window402that includes a reflective side602and a transmissive side604. The beam is reflected off the reflective side602of the target sighting window108and toward the operator's sight. A direct beam606may reflect directly toward the operator, while a wide-angle beam608may reflect outward. During ranging to a target218, a laser sighting reticle112is projected on target sighting window108in such a manner that it appears to be closer to target218than the fixed sighting mark110when target window108is viewed from a perspective corresponding to eye position202. When both sighting marks are aligned, the line of sight208intersects (coincides with) ranging module transmit axis212to enable accurate ranging to a target218when target window108is viewed from a perspective corresponding to eye position202. After ranging, one or more variable sighting marks114are presented on the surface of the target sighting window108to enable a user to orient bow100and arrow106to strike target218. Unless the target is located at a distance corresponding to fixed sighting mark110, the variable sighting mark114is presented on target sighting window108at a location different from the location of the fixed sighting mark110based on a determined distance to a target218as well as other criteria. In embodiments, sighting marks110,114,900are not visible from the transmissive side604of the targeting system102. As such, the target218(such as an animal) will not be able to see the sighting marks110,114,900or other information presented on the reflective side602of the target sighting window108, so as to increase the stealth of the operator in firing at the target218and to prevent any light from being transmitted toward the target218. In embodiments of the invention, the processor is operable to utilize and control a light array610to present one or more of sighting marks110,114,900(e.g., thirty, sixty, etc.). Selection of the separation and number of sighting marks110,114,900is based on the targeting accuracy and maximum compensated distance to the target218. At a given distance, the physical size of the desired target zone relates to the separation of sighting marks110,114,900. Each sighting mark110,114,900is produced by one or more LEDs of light array610emitting light. As can be seen inFIG.6, the light emitted by the LEDs is directed toward the diagonal window402of the target sighting window108and then reflected therefrom to the operator's eye202. The LEDs may have an adjustable brightness level, as influenced by ambient conditions (as detected by an ambient light sensor discussed below) and/or user selection. In some embodiments, sighting marks110,114,900may originate from projector engine616and are projected onto target sighting window108. In other embodiments, sighting marks110,114,900may originate from a light array610and are reflected onto target sighting window108. For example, light array610, a combiner mirror612and a window opening614allow light from the light array610to reflect onto target sighting window108. The light array610may limit visible light that may be output toward the target218(such as an animal that may become alerted by visible light). The light array610is adjacent to, but distinct from, a projector engine616. The combiner mirror612combines the visual field of the light array610with the floating laser sighting reticle112projected from projector engine616indicating the ranging module transmit axis212. The targeting system102may further include a light array610that is disposed vertically. The light array610is itself the projector600or is a component of the projector600. The light array610is disposed vertically (relative to an artificial horizon) because the sighting marks will be disposed vertically. The adjustments due to the distance to the target218are vertical adjustments. As such, the sighting marks110,114,900are typically disposed vertically, as each sighting mark110,114,900will be aligned on a vertical axis (a sketch of selecting a mark along this axis follows this paragraph).
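The following hedged sketch shows one way a computed vertical offset could be mapped to a single LED of such a vertically disposed array. The array geometry (LED count and angular pitch) is assumed for illustration only; the disclosure states only that marks are selected along a vertical axis.

```python
# Sketch: map a desired angular offset below the fixed sighting mark
# to the nearest LED of a hypothetical 60-element vertical array.

def led_index_for_offset(offset_deg: float,
                         num_leds: int = 60,       # assumed array size
                         pitch_deg: float = 0.1) -> int:  # assumed pitch
    """Pick the LED whose angular position best matches the offset.

    Index 0 is assumed to sit at the fixed sighting mark; larger
    indices sit progressively lower on the target sighting window.
    """
    index = round(offset_deg / pitch_deg)
    return max(0, min(num_leds - 1, index))  # clamp to the physical array

print(led_index_for_offset(1.2))  # -> LED 12 under these assumptions
```

The clamp reflects the physical limit noted above: the number and separation of marks bound the maximum compensated distance the array can represent.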
In other embodiments, each sighting mark110,114,900may be offset from the vertical axis, to provide improved visibility and compensate for other known variables. For example, if the targeting system102has access to weather information, such as through a smart phone or other internet-accessible device, a wind sensor, or manual input, the variable sighting mark114presented on target sighting window108may provide a compensation for wind based upon a known orientation of the targeting system102as received from a magnetometer, which is schematically illustrated inFIG.11(a simplified wind-hold sketch follows this paragraph).
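As a rough, hedged sketch of such a wind compensation (not the disclosed computation): given a wind report and the bow heading from a magnetometer, the crosswind component multiplied by the flight time gives a crude upper bound on lateral drift. All numeric values are assumptions, and a real arrow drifts considerably less than this full-value estimate.

```python
# Sketch: crude constant-drift wind hold; all values are assumptions.
import math

def wind_hold_m(distance_m: float, arrow_speed_mps: float,
                wind_speed_mps: float, wind_from_deg: float,
                bow_heading_deg: float) -> float:
    """Approximate lateral displacement; positive means drift to the right."""
    flight_time = distance_m / arrow_speed_mps
    # Crosswind component perpendicular to the shot direction.
    relative = math.radians(wind_from_deg - bow_heading_deg)
    crosswind = wind_speed_mps * math.sin(relative)
    return crosswind * flight_time  # full-value drift, an upper bound

# 40 m shot, 85 m/s arrow, 5 m/s wind from 90 degrees off the shot line:
print(f"hold up to {wind_hold_m(40, 85, 5, 90, 0):.2f} m into the wind")
```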
In some embodiments of the invention, the light array610includes a set of light emitting diodes (LEDs). The set of LEDs of light array610may be disposed vertically so as to provide a variable sighting mark114at a location on target sighting window108that corresponds with the compensated targeting axis210determined based on distance, wind and other variables. The light array610may additionally or alternatively include a liquid crystal display (LCD), a liquid crystal on silicon display (LCOS), an organic light emitting diode display (OLED), and/or another type of light projection. The light array610may be operable to output a set of vertically-aligned sighting marks110,114,900that reflect onto the target sighting window108. In other embodiments, light array610may be operable to project the vertically-aligned sighting marks onto the target sighting window108. Upon a detection of a reflected signal by the beam receptor512, the processor may analyze the reflected signal to determine a distance to the target218and various characteristics of the target218as indicated by the reflected signal, via a computer program stored on the memory (being a non-transitory computer readable storage medium). The processor may analyze the target data in raw reflected signal information and determine target indications. Also illustrated inFIG.6is a power source618. The power source618provides power to the various electronic components of the targeting system102. An example of a power source618could be two AA batteries, as illustrated in cross-section inFIG.6. The power source618may be removable and replaceable upon becoming depleted, or may be internally rechargeable (such as by plugging the targeting system102into an external power system). The power source618may additionally or alternatively include a solar panel (not illustrated) for providing at least a portion of the power that is consumed by the targeting system102. In embodiments of the invention, the targeting system102presents the variable sighting mark114and fixed sighting mark110on target sighting window108, such as by placement on target sighting window108or reflection onto target sighting window108. Additionally, the targeting system102presents the laser sighting reticle112, such as via a projection, on the target sighting window108such that laser sighting reticle112appears closer to the target than the variable sighting mark114and fixed sighting mark110when target window108is viewed from a perspective corresponding to eye position202. The processor stores in memory a location of the laser sighting reticle112and a location of the fixed sighting mark110. The processor presents the laser sighting reticle112and fixed sighting mark110and, when the laser sighting reticle112and fixed sighting mark110align, the line of sight208intersects (coincides with) ranging module transmit axis212to enable accurate ranging to a target218when target window108is viewed from a perspective corresponding to eye position202. In some embodiments, the processor controls light array610and/or projector engine616to present the variable sighting mark114after the ranging module500has determined a range to the target. For example, the variable sighting mark114may be presented on target sighting window108when triggered by a detected movement of the weapon, a detected pressure against a component of the weapon, a selection or powering by the operator, a detection that the weapon is about to fire (such as by the disabling of a safety device or pulling back of the bow string206), or other triggering event. In other embodiments, the variable sighting mark114is presented on the target sighting window108when a target218is detected. In still other embodiments, the variable sighting mark114is permanently displayed while the bow100is operational and/or loaded. In some embodiments, the operator may manually select to view one or more sighting marks110,114,900when desired. Upon a triggering event, the one or more sighting marks110,114,900are presented on the respective display and may include an alert to the operator (such as a visual and/or audio signal indicative that the one or more sighting marks110,114,900are being shown). It should also be appreciated thatFIG.7andFIG.9illustrate exemplary displays and that various embodiments of the invention may include displays that look different from those ofFIG.7andFIG.9. The processor controls light array610and/or projector engine616to present a simplified interface so as to clearly present information to the operator on target sighting window108without providing excessive information that may obstruct the operator's view of the target218or distract the user. The processor projects or otherwise displays a variable sighting mark114on the target sighting window108so as to enable the operator to observe the target218with a substantially unobstructed view. The alphanumeric display116presents information indicative of the target218determined by the processor. The target description may be a string, a number, a graphic, a representation, or other illustration based upon the available information for the target218. The target description therefore provides the operator with the available information about the target218in an easy-to-read representation. This information may include distance, direction, location, angle upward or downward, the horizontal component of the distance, or some combination thereof. It should be appreciated that, as used herein, "icon" and "graphic" may refer to any graphical representation of the respective information. An "icon" or a "graphic" may include graphics, pictures, photographs, words, numbers, symbols, lines, colors, opacity, cross-hatching, and other fill textures and visual representations of information. The "icon" or "graphic" may also change, alter, update, and delete as new information is obtained. For example, as a target218and/or bow100moves, additional or updated information may be displayed. Similarly, if the target218is no longer detected by the processor, the processor may cause the target description to be removed from the alphanumeric display116. In embodiments of the invention, the target description is indicative of a determined distance to the target218. In some embodiments, the distance to the target218may be expressly shown on the alphanumeric display116.
FIG.7shows a simplified view of the targeting system102containing the target sighting window108, alphanumeric display116and a plurality of user interface elements700(e.g., a set of depressible switches). The targeting system102, upon the processor determining activation of one or more of said user interface elements700(or another power switch, not illustrated), may illuminate one or more sighting marks110,114,900for low-light conditions. A pressure switch702coupled with the processor is typically mounted on or near the arrow rest104to allow simple control over the sighting/ranging process. Upon sustained depression of the pressure-sensitive switch702, the processor may activate the alphanumeric display116and present a measured distance704to target218. The processor may control projector600to present two sighting elements110,112in such a manner that one of the sighting elements appears to be closer to the target218than the other sighting element when target window108is viewed from a perspective corresponding to eye position202. For example, the processor may utilize projector600(including a light array610and a projector engine616) to present on a transparent target sighting window108a first sighting element that appears to be positioned or presented on the surface of the target sighting window108and a second sighting element projected onto the target sighting window108such that the second sighting element appears to be closer to the target218than the first sighting element when target window108is viewed from a perspective corresponding to eye position202. Similarly, the two sighting elements110,112may be presented on parallel, transparent target sighting windows108such that a first sighting element is positioned or presented on a first target sighting window108and a second sighting element is positioned or presented on a second target sighting window108, where the second target sighting window108is closer to the target than the first target sighting window108when target window108is viewed from a perspective corresponding to eye position202. For example, in embodiments, the processor controls projector engine616of projector600to project onto target sighting window108the first sighting element, such as the laser sighting reticle112, and light array610of projector600to present on the target sighting window108the second sighting element, such as the fixed sighting mark110. In such an example, the projected laser sighting reticle112appears to be closer to the target218than the fixed sighting mark110that appears to be positioned on the reflective side602of the target sighting window108when target window108is viewed from a perspective corresponding to eye position202. The projected sighting element, such as the laser sighting reticle112, has an apparent origin at some distance in the far field (for example, 5 yards or 20 yards) in front of the targeting system102. In this example, the projected laser sighting reticle112appears to float relative to the fixed sighting mark110, which appears to be fixed on the target sighting window108. When both sighting elements110,112are aligned by the user, the user's line of sight208intersects (coincides with) ranging module transmit axis212to enable accurate ranging to a target218when target window108is viewed from a perspective corresponding to eye position202. As a result, the laser sighting reticle112is aligned to the ranging module transmit axis212and corresponds to a location at which a beam output by ranging module500is pointing.
The alignment region400surrounding the fixed sighting mark110may represent the area over which the collimated laser sighting reticle112is projected. The size of projected elements, such as laser sighting reticle112, is correlated to the physical envelope of the collimation optics504(e.g., a mirror) and illumination source502contained within the targeting system102. The linear extent706of the laser sighting reticle112extends beyond the nominal eye-box to allow viewing portions of the cross-pattern well past the extent of the alignment region400. The large projection angular extent provides the capability to view the laser sighting reticle112even if viewed significantly off-axis. In embodiments, upon release of the depressible trigger702, a distance704to the target is presented on alphanumeric display116, and a target-specific variable compensated sighting mark114is presented on target sighting window108. The processor may determine the location of the variable compensated sighting mark114on target sighting window108based on a measured range (distance) to target218and an elevation inclination of the ranging module500when a beam was output to range the target (a sketch of one common inclination adjustment follows this paragraph).
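One widely used archery approximation for combining range and inclination is to aim using the horizontal component of the slant range, d·cos(inclination), since gravity only acts over the horizontal travel. The sketch below assumes that approximation; the disclosure does not specify which inclination formula is used.

```python
# Hedged sketch: horizontal-equivalent range for an inclined shot.
import math

def compensated_range_m(slant_range_m: float, inclination_deg: float) -> float:
    """Horizontal-equivalent range used to pick the sighting mark."""
    return slant_range_m * math.cos(math.radians(inclination_deg))

# A 40 m shot at 30 degrees downhill behaves roughly like a 34.6 m level shot:
print(f"{compensated_range_m(40.0, 30.0):.1f} m")
```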
In embodiments, the laser sighting reticle112presents a shape adapted to allow the operator to center the laser sighting reticle112around the fixed sighting mark110. When the fixed sighting mark110is centered within the laser sighting reticle112when target window108is viewed from a perspective corresponding to eye position202, both sighting elements110,112are aligned and the user's line of sight208intersects (coincides with) ranging module transmit axis212to enable accurate ranging to a target218. The shape of the laser sighting reticle112may be generally x-shaped, cross shaped, crosshair shaped, reticle shaped, an angularly extending line shape, or another shape. The operator aligns an intersection of the laser sighting reticle112center with the fixed sighting mark110so as to center the sighting elements110,112, thereby aligning the user's line of sight208so that it intersects (coincides with) ranging module transmit axis212when target window108is viewed from a perspective corresponding to eye position202. User interface elements700couple with the processor and provide operator access to a variety of modes of operation. An exemplary layout of user interface elements700could include an enter switch708, a left/down switch710, a right/up switch712, and a back switch714. It should be appreciated that other layouts and switches may be utilized to allow the operator to enter the requested information and to perform the various desired functions. FIG.8illustrates an optical block diagram. One embodiment provides both a projected laser sighting reticle112focused into the far-field and a series of selectable elevation-offset variable compensated sighting marks114roughly appearing to be presented on the target sighting window108. In embodiments, the collimated laser sighting reticle112does not fill the entire display region of the target sighting window and may provide a collimated colored pattern superimposed and centered around the fixed sighting mark110to indicate the ranging module transmit axis212for both initial alignment during calibration and to indicate the orientation of the bow100during the active shot targeting process. Continuing withFIG.8, the projected laser sighting reticle112may originate in projector engine616followed by a diffuser802and patterned optical mask804. The patterned optical mask804is projected into the far-field using collimating lens806followed by light array610and a combiner mirror612directing the projection towards the reflective side602of the target sighting window108. The reflective side602of the window402has a partially reflective coating while the transmissive side604has an anti-reflective coating to prevent the formation of a double image when the operator views the projected sighting marks. The laser sighting reticle112appears to surround the fixed sighting mark110when the ranging module transmit axis212is parallel with the arrow trajectory220defined by the peep sight204and fixed sighting dot. Located immediately behind the folding mirror is a light array610that projects a line of dots indicating the variable sighting mark114when viewed off the target sighting window108. One or more sighting dots are presented or projected towards the viewer when the targeting system102is active. The vertically-aligned array of sighting marks110,114,900fills a relatively large field of view in elevation as illustrated by a beam envelope of the light array610. To increase the size of the alignment region400, sighting mark openings radiating out from the center of the reticle allow viewing outside the central alignment region400angular field of coverage. FIG.9shows a series of sighting point positions generated by the light array610presented on window402of target sighting window108. As opposed to the laser sighting reticle112, which is used in the sighting process to determine a range (distance) to a target218, one or more of the variable sighting marks114produced by the light array610are selected by the processor based on the measured distance and inclination of the shot. In embodiments, a user-defined sighting mark900will relate to a specific distance and inclination of the shot, as discussed in depth below. Additional information may be provided by the projector through the activation of additional user-defined sighting marks900or other informational sighting marks surrounding the target-specific sighting mark based on measured distance and inclination. The color of the various sighting marks may change based on day or night use, to improve color contrast against a background, or to indicate different targeting conditions. As shown inFIG.9, the processor may present on target sighting window108a variable sighting mark114as well as one or more user-defined sighting marks900. Each user-defined sighting mark900is associated with a known targeting axis at a certain targeting distance. Each user-defined sighting mark900may be determined and stored in memory within a user profile during the calibration process. Each user-defined sighting mark900has a related targeting distance. For instance, the fixed sighting mark110may correspond to a baseline distance of 20 yards to the target218when laser sighting reticle112is centered with the fixed sighting mark110(aligning the user's line of sight208with ranging module transmit axis212at 20 yards) when target window108is viewed from a perspective corresponding to eye position202. Each subsequent user-defined targeting point may be associated with an interval relative to the baseline distance (for example, 30 yards, 40 yards, and 50 yards). The variable sighting mark114that is associated with each interval distance is determined during the calibration process, as discussed below. It should be appreciated, however, that sighting marks110,114,900apply for a particular operator, bow100, and type of arrow106.
If any of these conditions are changed (for example, the targeting system102is attached to a different type of bow100, or the operator switches to a heavier arrow106), the operator may then re-calibrate the targeting system102for the changed condition and store the calibration information in the memory of the targeting system102. In embodiments, the user interface elements700may be utilized to switch between calibrations, which may be stored in memory as profiles. If the profile is changed, the processor may present each of sighting marks110,114,900differently to account for the changed conditions. FIG.9further illustrates a target-specific sighting mark902of the variable sighting mark114being displayed with sighting marks110,114,900. For instance, the variable sighting mark114may include a target-specific sighting mark902disposed between two user-defined sighting marks900of the set of user-defined sighting marks900. The acquired range indication (for the current target218) is between the two certain targeting distances associated with each of said two user-defined sighting marks900. This is because, to reach a target218between the two certain targeting distances, the variable sighting mark114is displayed between them. However, it should be appreciated that the relationship between the distances and the variable sighting marks114is not linear. This is because the drop of the arrow106results from acceleration toward the earth due to the pull of gravity. The following discussion describes procedures that can be implemented in a targeting system102. The procedures can be implemented as operational flows in hardware, firmware, software, or a combination thereof. These operational flows are shown below as a set of blocks that specify operations performed by one or more devices and are not necessarily limited to the orders shown for performing the operations by the respective blocks. The features of the operational flows described below are platform-independent, meaning that the operations can be implemented on a variety of device platforms having a variety of processors. Before discussing the steps performed by the targeting system102, exemplary steps performed by the operator will be discussed to provide an example of how embodiments of the invention may be used. In this example, the operator uses a bow100. An operator observing a target218picks up the bow100and momentarily depresses a trigger702located in close proximity to the handle of the bow100. The targeting system102becomes active and alphanumeric display116displays the last distance measurement and the elevation offset associated with that distance. If the operator is roughly in the same relative position to the target218, the bow string206may be drawn and an arrow fired using the previous distance and elevation information. If the operator's position relative to the target218has changed, the trigger702may be depressed and held, resulting in the activation of the integrated ranging module500and inclination sensor. The alphanumeric display116and laser sighting reticle112become active, providing the display of measured distance and, optionally, inclination. The measurement rate may be operator selectable or may default to a nominal rate (such as 10 Hz) to minimize measurement lag when tracking a moving target218. The fixed sighting mark110and laser sighting reticle112may be utilized to aim for a target218to be ranged.
Once the operator is comfortable with an aim point on the target218, the processor senses a release of the depressible trigger702, resulting in the processor calculating a distance to the target218and a recommended elevation aim point compensation along with activation of a corresponding fixed sight offset indication and/or variable sighting mark114. If the bow string206was not drawn during ranging, the bow string206is drawn with the variable sighting mark114used to aim towards a target and a shot is taken by the operator. If the trigger702is not depressed or activated for a predetermined time-out period, the targeting system102deactivates the various sighting marks112,114and the alphanumeric display116. If an acceptable target218is identified at a similar distance and inclination, the trigger702can be momentarily depressed and a new shot can be made without a new range measurement. If a new target218appears with a different range or inclination condition, the trigger702depression time can be extended to reinitiate a new distance measurement and update the compensated sighting point. Another example may include controlling the targeting system102using the trigger702to toggle the measurement state of the targeting system102. A mode selected through the system menu initiates fixed-frequency operation of the ranging module500, the alphanumeric display116and the laser sighting reticle112. Depressing the trigger702terminates this looped ranging operation, freezing the distance display while activating the appropriate variable sighting mark114based on the held distance and inclination. The processor may cause a set of sighting marks110,114,900stored in memory to be presented on target sighting window108upon sensing successive depressions of trigger702. Similarly, if the operator desires to start a new sighting process, the trigger702can be depressed again momentarily and the ranging module500restarts along with associated display elements. If the operator wishes to store a measurement and put the targeting system102to sleep, the trigger702can be depressed for a longer period. A momentary depression of the trigger702may re-wake the system with an indication of the previously saved distance and compensated sight point. Thus, use of the trigger702allows quick targeting without repeating the initial steps of the above-discussed process. Lack of activity for a period of time may still cause shifting to a sleep state as in the previous operational scenario. Exemplary steps performed by the operator in setting up the bow100will now be discussed. After the targeting system102is mounted to the bow100, the vertical and horizontal lateral adjustments on the attachment arm200are used by the operator to position the fixed sighting mark110to the arrow's approximate trajectory220over the baseline distance (e.g., 20 yards, 40 yards, etc.), which may relate to the type of bow100, the type of arrow106and other factors. The operator moves to the targeting distance desired for the baseline distance for the fixed sighting mark110. The operator performs one or more test shots at the baseline distance, aiming by centering the fixed sighting mark110and laser sighting reticle112. The operator may then adjust the vertical and horizontal lateral position of the targeting system102to correct for any targeting discrepancy between the fixed sighting mark110and the cluster of arrow impact points on the target218.
The operator may then activate power to the targeting system102followed by depressing the trigger702located near the arrow rest104to activate the ranging module500and the associated collimated laser sighting reticle112indicating the pointing direction of the ranging module500. The operator may then again draw the bow string206and aim at the target218using the fixed sighting mark110and laser sighting reticle112(as well as peep sight204) at the distance previously used to calibrate the fixed sighting mark110. If the fixed sighting mark110or laser sighting reticle112is not sufficiently accurate, the operator may adjust the pitch and yaw adjustments on the attachment arm200to move the fixed sighting mark110and/or laser sighting reticle112in the direction of the targeting error. The operator may then repeat the above-discussed steps as required until there is sufficient alignment between the fixed sighting mark110and laser sighting reticle112for the baseline distance. Once the ranging module500is nominally aligned to the fixed sighting mark110, the operator will enter a target-distance calibration mode, such as by using the user interface elements700. The collimated laser sighting reticle112will deactivate and the operator will move to a longer distance than the baseline distance (for example, if the baseline distance was twenty yards, the operator may move back from the target218to a distance of 30 yards from the target218). The operator will then depress the trigger702near the arrow rest104to obtain the range, draw the bow string206and aim at the target218using the fixed sighting mark110and laser sighting reticle112. When the operator releases the trigger702, a variable sighting mark114is displayed (typically below the fixed sighting mark110on level ground) with an offset distance determined by the processor based on at least the distance to target218. The offset distance is associated with the compensated targeting axis210, and may additionally be based on default parameters (such as an average or standard draw weight, arrow weight, or other ballistic characteristics). The operator may then fire one or more arrows at target218using the variable sighting mark114, note the actual impact point, and measure the discrepancy either directly at the target218or by counting a number of displayed dots between the variable sighting mark114and the actual impact location. Additionally, or alternatively, the operator may physically measure the miss distance in proximity to the target218. If the displayed sight point was adequate, the operator may then enter normal sighting operation using the user menu and controls, as discussed above. If the sighting point was in error, the operator may enter the error distance and/or the number of displayed dots, along with the direction of the error (above or below), using the menu controls. FIG.10presents a flowchart illustrating the operation of a method performed by a processor. While the various procedures and methods have been discussed throughout, general steps of the method will now be described. In embodiments of the invention, a computerized method is utilized for performing the discussed steps. In other embodiments, a non-transitory computer readable medium has a computer program thereon. The computer program instructs at least one processing element to perform the discussed steps. As discussed herein, the processor may be the microcontroller discussed below, or another processor associated with the targeting system102.
The steps of the described method may additionally or alternatively be performed by more than one processing element. FIG.10generally illustrates a method of calibrating and using the targeting system102after calibration. In Step1000, the processor acquires an indication that the operator (user) desires to determine a distance to the target218. The operator may provide such an input by operating the trigger702(engaging or releasing trigger702), selecting a use mode (e.g., calibration, hunting, etc.) from a menu utilizing the user interface elements700, or by initially powering the targeting system102. In Step1002, the processor instructs the projector600to project a laser sighting reticle112onto the target sighting window108and the light array610to output a fixed sighting mark110onto the target sighting window108. The laser sighting reticle112is presented on the target sighting window108and appears around the fixed sighting mark110(as discussed above) such that the operator can use both elements to ensure that the operator's line of sight208is aligned with the ranging module transmit axis212when target sighting window108is viewed from a perspective corresponding to eye position202. The processor may additionally instruct the projector600to continue projecting the laser sighting reticle112for a certain duration. For example, the projector may continue projecting the laser sighting reticle112throughout the following steps, cease after the range indication has been received, cease after a certain time interval, cease upon a selection of a different mode by the operator, cease upon powering down the targeting system102, or cease at another time. In Step1004, the processor presents a request to the operator to use the fixed sighting mark110and the laser sighting reticle112to align to the target218. The processor also instructs the ranging module500to output a beam towards the target218and receive a reflection of the beam from target218. The processor may determine, or cause the ranging module500to determine, the distance from the bow100to the target218. This may be performed upon an indication that the operator has aligned the fixed sighting mark110and the laser sighting reticle112to the target218. It should be appreciated that the initial distance may be the above-discussed baseline distance. It should be appreciated that the line of sight208and the first compensated targeting axis210are generally aligned with a ranging module transmit axis212of the ranging module500when the fixed sighting mark110and the laser sighting reticle112are aligned to the target218. In Step1006, the processor inquires of the operator whether the arrow106struck the target218. This inquiry may be shown on the alphanumeric display116, via an audible sound, via a secondary display (such as a smart phone or other external computing device), or directly on the target sighting window108. The operator may then respond to the inquiry by various methods. For example, the operator may respond by selecting the appropriate user interface element700, speaking an audible command, entering information on the secondary display, or by another input method. If the operator provides an indication that the arrow106did not strike the target218to a sufficient accuracy, in Step1008, the processor will present a request that the operator manually adjust the attachment arm200of the targeting system102so as to align the fixed sighting mark110and/or laser sighting reticle112with the correct targeting axis at the baseline distance.
In some embodiments, the processor will present, to the operator via the alphanumeric display116, an adjustment instruction for the operator to perform on an attachment arm200that is securing the targeting system102to the bow100. The operator may input a relative impact location for the arrow106relative to the target218and the processor may calculate the appropriate adjustments to be made to orient bow100to strike target218. For example, based upon the supplied information, the processor may control the display to present instructions that the operator should move the yaw adjustment two full rotations clockwise and move the azimuth adjustment one full rotation clockwise. Other embodiments of the attachment arm200may indicate to the user how far to slide the dovetail assembly in a particular direction to achieve the same yaw and azimuth adjustments. The processor may then receive, from the operator, an indication that the adjustment instruction has been performed. Upon the operator indicating that the adjustment is complete, the processor will then return to Step1000and begin the calibration process anew. If the operator provides an indication that the arrow106did strike the target218to a sufficient accuracy, in Step1010, the processor will present a request to the operator to move back by a predetermined distance. Alternatively, Steps1010-1024may occur during use of the targeting system102after calibration. After acquiring, from the operator, a selection that the arrow106struck the target218at the first distance by aligning a fixed sighting mark110and the laser sighting reticle112presented on the target sighting window108, the processor may determine additional user-defined sighting marks900. The processor will do so by presenting, to the operator via the display, a request that the operator move to a first predetermined distance from the target218. As discussed above with reference toFIG.9, each predetermined distance may be of a fixed length (e.g., 10 yards or 10 meters, 20 yards or 20 meters, etc.). The predetermined distance is used such that in future shooting scenarios, the processor can interpolate between, or extrapolate beyond, these user-defined sighting marks900to determine the variable sighting mark114for a wide range of distances. In Step1012, the processor acquires an indication that the operator would like to determine a distance, similarly as in Step1000. This indication is typically associated with the operator moving back approximately the interval distance. In Step1014, the processor instructs the projector600to show the laser sighting reticle112, such that the operator can know that the target218is being measured by the ranging module500(and not another object in the proximity of, or behind, the target218). In Step1016, the processor determines or acquires, from the ranging module500, a distance to the target218. In Step1018, the processor determines a desired targeting orientation of the bow100(e.g., inclination, rotation, etc.) for the arrow106to strike the target218. In Step1020, the processor instructs the projector600to project, or the light array610to output onto the target sighting window108, a variable sighting mark114on the target sighting window108that is indicative of the desired targeting orientation of the bow100to strike the target218based at least partially on the determined distance to the target218.
It should be appreciated that the variable sighting mark114will typically be below the fixed sighting mark110if the target is level to the operator and aligned vertically therewith (e.g., located along a vertical axis passing through the fixed sighting mark110, not illustrated). In embodiments in which Step1016occurs during use of the targeting system102after calibration, the variable sighting mark114, the location of which is based at least partially on the determined distance to the target218, is presented on the target sighting window108and is utilized by the user to strike target218. In Step1022, the processor instructs the display to present another query to the operator, regarding whether the arrow106struck the target218using the afore-mentioned variable sighting mark114. The operator will then provide a response, such as discussed above. If the operator provides an indication that the arrow106did not strike the target218to a sufficient accuracy, in Step1024, the processor may present another query to the operator as to the miss magnitude and direction (e.g., upward or downward from the target218). The magnitude and direction of the miss may be entered by the operator using the user interface elements700located atop the alphanumeric display116, or via another method (such as entry into a wirelessly connected electronic device such as a smart phone). If the operator provides an indication that the arrow106did strike the target218to a sufficient accuracy, in Step1026, the processor will present an additional inquiry to the operator regarding whether the operator desires to set additional user-defined sighting marks900. Upon acquiring, from the operator, an indication that the arrow106struck the target218at the second distance by aligning the variable sighting mark114on the target sighting window108, the variable sighting mark114associated with that distance will be used as a future reference point both by the processor (in determining the appropriate compensated sighting mark for future calculations) and/or by the operator (in viewing the variable sighting mark114in relation to the user-defined sighting mark, as illustrated inFIG.9). The processor will therefore save the variable sighting mark114(or information related thereto) as a first user-defined sighting mark associated with the second compensated targeting axis210and the second distance. If the operator indicates, in response to the inquiry, that additional user-defined sighting marks900are desired, the processor will return to Step1010such that another user-defined sighting mark can be saved. In this manner, multiple user-defined sighting marks900can be determined and saved. Based upon at least two user-defined sighting marks900, the processor may be able to determine certain characteristics of the bow100and the arrow106, such as exit speed, arrow weight, arrow drag, and other characteristics. These known characteristics may be used by the processor in determining the compensated targeting axis210for the future user-defined sighting marks900and variable sighting marks114. As such, the determinations may become more accurate through the iteration of cycles of the above-discussed steps. It should be appreciated that in repeating the steps, the processor may present, to the operator via the display, a request that the operator move to a second interval from the target218. The processor may then determine, from the ranging module500, a third distance indication of a third distance to the target218.
The processor may then determine a third desired targeting inclination for the arrow106to strike the target218at the third distance, and instruct the projector600to project the variable sighting mark114on the target sighting window108that is indicative of the third desired targeting inclination. Upon an indication from the operator that the arrow106struck the target218at the third distance by aligning the variable sighting mark114on the target sighting window108, associated with the third desired targeting inclination, the processor will save the variable sighting mark114as a second user-defined sighting mark associated with the third compensated targeting axis and the third distance. The processor will also save the second user-defined sighting mark to the first user profile. If the operator indicates, in response to the inquiry, that additional user-defined sighting marks900are not desired, the processor will move to Step1028. In Step1028, the processor saves said first user-defined sighting mark (and any other determined user-defined sighting marks900) to a first user profile for the operator. The first user profile includes the user-defined sighting marks900and other information. In some embodiments, the processor may receive, from the operator, an indication that the operator has changed an arrow106parameter. Arrow106parameters affect the flight characteristics of the arrow106, such that changing a parameter will mean (to some extent) that the user-defined sighting marks900are no longer accurate for future firings. The parameter may be changing a bow setting, changing to a second arrow106that is different from the first arrow106, changing the bow100, changing to a second operator, or other changes. Changing a parameter may be appropriate for various reasons. The changing of the parameter may occur at the end of the calibration process, so as to perform a second calibration that is stored in the memory of the targeting system102. For example, this may be done if two different types of arrows are likely to be used in the future. The operator will perform two calibrations, one with each type of arrow, and save a user profile respective to each. Separate from the calibration process, the processor may present, to the operator via the display, an option to select either the first user profile or the second user profile based upon the parameter being utilized. The processor may also instruct the projector600to project the first user-defined sighting mark900and the second user-defined sighting mark900on the target sighting window108. The processor may then instruct the projector600to project the variable sighting mark114on the target sighting window108between the first user-defined sighting mark900and the second user-defined sighting mark900, such as illustrated inFIG.9. The variable sighting mark114is therefore indicative that the distance to the target218is between the second distance and the third distance (an interpolation sketch follows this paragraph).
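Since the disclosure notes the distance-to-mark relationship is not linear (arrow drop grows roughly with the square of range under gravity), a hedged sketch of one plausible interpolation follows: interpolating in distance-squared space between calibrated marks. The calibration table values are assumptions, not data from the disclosure.

```python
# Sketch: place the variable sighting mark between two user-defined
# marks by quadratic-in-distance interpolation (assumed calibration data).

# (distance in yards, vertical mark offset in display units):
calibration = [(20, 0.0), (30, 4.1), (40, 9.5), (50, 16.8)]

def mark_offset(distance_yd: float) -> float:
    """Interpolate on distance squared to mimic gravity-driven drop."""
    for (d0, y0), (d1, y1) in zip(calibration, calibration[1:]):
        if d0 <= distance_yd <= d1:
            t = (distance_yd**2 - d0**2) / (d1**2 - d0**2)
            return y0 + t * (y1 - y0)
    raise ValueError("range outside the calibrated span")

print(f"{mark_offset(35):.2f} display units below the fixed mark")
```

A real implementation could instead fit exit speed and drag from the calibration points, as the disclosure suggests, and compute the offset ballistically.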
The program code may be stored in one or more device-readable storage media, an example of which is the memory of the targeting system102. FIG.11shows an exemplary targeting system102electrical block diagram. It should be appreciated that, like other figures discussed herein, the block diagram is only exemplary to aid in the understanding by the reader. The targeting system102includes a processor1100(which may itself be, or may be associated with, the above-discussed processor) supporting a mix of serial buses and programmable logic standards. Both the light array610and the projector engine616are driven by a current-controlled LED driver1102under the control of general-purpose IOs and PWM outputs for brightness control. A trans-reflective LCD display1106(associated with the alphanumeric display116) may contain a sub-processor to reduce main processor1100loading and communications requirements. An ambient light sensor1108measures the light levels of the target scene to allow adaptive brightness control of the targeting LEDs and the activation of the display backlight under darker conditions, as discussed above. The ranging module1110includes a laser driver1112and a single mode or pulsed laser diode1114(all associated with the beam source508discussed above), as well as a receiver1116(associated with the beam receptor512). In embodiments, a portion of processor1100and a memory of the targeting system102may be located within ranging module1110. Processor1100may determine a range (distance) to a target based on a calculated delay between a transmission of a coded burst and the reception of a reflected transmission, and subsequent correlation of the received signal against a stored transmit signature corresponding to the transmitted signal. The laser diode1114offers a precise measurement beam with a divergence under a minimum threshold (for example, under 1 milli-radian). Bias supply1118provides a regulated high voltage output controlled by the microcontroller1100based on inputs of the system noise floor as measured by the processor1100and a temperature sensor. Solid-state gyro1120(being the above-discussed inclinometer) provides bow inclination information, which is used to calculate the required elevation offset based on calculations for arrow drop when combined with target range. Accelerometer1122is used to monitor bow rotational dynamics during a shot, which can be used to detect incorrect firing technique of the operator and to detect release of the arrow106. Magnetometer1124functions as a digital compass and, in conjunction with the gyro1120and the measured distance from the ranging module500, can provide heading, distance, and inclination to a target218. This information may be combined with the capability to transmit the data to a GPS-enabled smart phone using communication element1126, which operates using any of various wireless standards (such as BLUETOOTH or the low-power ANT wireless standard). The communication element also allows the logging or forwarding of the location of the target218to an external system (for example, to mark the target location on a map for later inspection by the operator). Serial flash1128can be used to store user programmed parameters, software downloads and the storage of a history of operation for later review. In various embodiments of the present invention, “short range” pin functionality is provided to enable better aiming by the archer at close range.
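Before turning to the short-range behavior, the delay-correlation ranging described above can be sketched numerically. The sample rate, code length, and noise level below are illustrative assumptions only; the document does not specify them:

```python
import numpy as np

C = 3.0e8     # speed of light, m/s
FS = 200e6    # assumed receiver sample rate, Hz

def estimate_range(received, transmit_signature):
    """Correlate the received signal against the stored transmit signature and
    convert the peak lag to a target range (half the round-trip path)."""
    corr = np.correlate(received, transmit_signature, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(transmit_signature) - 1)  # delay, samples
    return C * (lag / FS) / 2.0

# Synthetic coded burst delayed by 40 samples (200 ns round trip, ~30 m range).
code = np.random.default_rng(0).choice([-1.0, 1.0], size=64)   # stand-in burst code
rx = np.concatenate([np.zeros(40), code, np.zeros(100)])
rx += 0.1 * np.random.default_rng(1).standard_normal(rx.size)  # receiver noise
print(estimate_range(rx, code))   # approximately 30 m
```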
When an archer draws back a bow and anchors the string at whatever anchor point he or she chooses (typically near the corner of the mouth or on the jaw line), the eye is approximately 3.5 inches above the shaft of the arrow. As illustrated inFIG.12, at ranges over about 15 yards, the angle between that arrow pointing to the target and the eye aiming through the sight to the target is so small (approaching 0 degrees) that parallax is essentially eliminated. But, as shown inFIG.13, at ranges under 15 yards, the illustrated angle increases. Initially the increase is not dramatic, and parallax still does not enter the picture. However, at ranges less than 12 yards, the “pin” needed to accurately shoot a short range target reverses the sighting trend and begins moving lower as the range approaches 0. This is due to parallax. As the range decreases, the offset between the eye looking through the sight to the target and the arrow pointed towards the target becomes a significant factor in determining the correct pin location (because the angle is increasing). As a result, with an extreme decrease in range, for example down to 2 or 3 yards, the pin used to take such a shot is roughly the equivalent of a 50-60 yard pin. At ranges under 12 yards, use of the 20 yard pin will result in the arrow impacting below the intended bullseye. The shorter the range gets, the more this condition worsens. In embodiments of the present invention, a targeting system may provide short range pins with a digital sight to provide the correct targeting pin(s) for short ranges while reducing negative impacts of conventional targeting systems. Thus, the processor may determine the location on the target sighting window using a first configuration for ranges above a first threshold (e.g., about 15 yards, about 10-15 yards, etc.) and a second configuration for ranges below the first threshold to account for parallax caused by an eye position of the operator. Although there is a wide range of conventional techniques for adjusting single pins, a battery of pins individually, or a battery of individual pins all at once (to speed in setup), enhancements to the information presented on a display may enable a user to improve his aiming and striking accuracy. Information presented on a display before, immediately after, and post shot (for analysis) may come from a variety of sensors, such as an accelerometer, a gyroscope, a compass, a barometer, a GPS receiver, an ambient light sensor, a wireless transmitter, or a variety of other sensors. The display may include an archery sight (sunlight readable) having a backlight and/or transflective coatings.
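As a rough illustration of the two-configuration logic above, the sketch below adds a parallax term only below the threshold range. The 3.5 inch eye offset comes from the discussion above; the threshold, sign convention, and combination with the drop model are assumptions for illustration, not the claimed implementation:

```python
import math

EYE_OFFSET_IN = 3.5      # eye height above the arrow shaft, from the text above
SHORT_RANGE_YD = 15.0    # assumed first threshold for the second configuration

def parallax_angle_mrad(range_yd):
    """Angle between the arrow axis and the eye-to-target sight line."""
    return math.atan2(EYE_OFFSET_IN, range_yd * 36.0) * 1000.0  # milliradians

def pin_offset_mrad(range_yd, drop_offset_fn):
    """Combine the usual ballistic drop offset (first configuration) with a
    parallax correction at short range (second configuration)."""
    offset = drop_offset_fn(range_yd)
    if range_yd < SHORT_RANGE_YD:
        offset += parallax_angle_mrad(range_yd)  # pin moves lower as range shrinks
    return offset

# At 3 yd the parallax term alone is ~32 mrad, consistent with the observation
# that a 2-3 yd shot needs roughly the equivalent of a 50-60 yd pin.
print(parallax_angle_mrad(3.0))
```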
The information presented on the display includes, but is not limited to:

a. Electronic level
b. Level (roll) at time of shot
c. Pitch
d. Yaw
e. Shot detection
f. Shot counter
g. Arrow Speed
h. Wind speed and direction relative to direction of fire
i. GPS position
j. Altitude (to adjust arrow drag profile)
k. Barometric Pressure
l. Light readings
m. Angle compensated range to target
n. Compass heading
o. Confirmation of data transfer to another device
p. Adjustment of pin brightness
q. Selection of bow profiles
r. Review of calibrated ranges and fixed pins
s. Compliance certifications
t. Device serial number identification
u. Software version
v. Time of Day
w. Sunlight/Twilight Times
x. Mapping information
y. Navigational information

An ambient light sensor embedded within the internal electronic structure of an archery sight, along with the associated light pipe that allows external light to enter the internal structure and allows metering of ambient light by the sensor, can be used to automatically adjust the brightness of the pins presented to the user for aiming, the laser range finding reticle, the backlight, or the presentation of information inside the housing of the sight. Conventional devices and displays utilize a manual rheostat used to dial (up or down) the input voltage to the driver illuminating the pin. The use of a liquid crystal display (LCD), light emitting diode (organic or otherwise), digital light projection, or any other variant of display enables changing of the color of a targeting pin to nearly any color hue desired by the user. Conventional sights utilize a physical pin having a predetermined (preset) color that cannot be easily changed. In some instances, changing the color of a pin may require replacing the LED or the fiber optic illuminating the pin or replacing the entire pin with a pin having another color. Other conventional sights enable color selection by changing the color input to the fiber optic. However, these conventional sights do not provide color selected virtual pins with automatically adjusting brightness as implemented by embodiments of the present invention. The use of a display on an archery sight with electronic pins enables use of one or more profiles with the sight. A profile may include a flight of the arrow and accessories used on the arrow or bow. The flight of the arrow may be affected by several factors, such as a draw weight of the bow, arrow spine, arrow length, or arrow weight. The accessories used on the arrow or bow may include string silencers or lighted nocks. A memory device may include each of these combinations of parameters that result in a specific arrow profile, and the display and storage device enable a user to review and change the calibrated ranges associated with each profile. Changing the ranges may include adding new ranges or recalibrating existing ranges. The sight may also include a “Sighting in Procedure” that enables electronically calibrating a sight. Conventional sights typically require a user to guess how far to manually move pins up/down when a point of impact does not correspond to a desired point (target). In embodiments of the present invention, the user may provide an input indicating how far (up or down) the point of impact is from the desired point (target) (in inches or mm), and the processor may adjust the pin based on stored information. In embodiments, an LED pin and aiming reticle may be used to enable the user to understand the torque he applies to the bow before releasing the shot.
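A brief sketch may make the sighting-in adjustment concrete before the torque indication is described further. A miss of d at range R subtends roughly d/R radians, so a reported miss distance can be converted directly into an angular pin correction; the unit choices and function name are assumptions:

```python
def pin_correction_mrad(miss_inches, range_yards):
    """Angular pin correction implied by a reported point-of-impact miss."""
    miss_m = miss_inches * 0.0254
    range_m = range_yards * 0.9144
    return (miss_m / range_m) * 1000.0   # small-angle approximation, milliradians

# Arrow struck 4 inches low at 20 yards: move the virtual pin ~5.6 mrad.
print(f"{pin_correction_mrad(4.0, 20.0):.1f} mrad")
```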
As an example of that torque indication, upon successful completion of setting up the bowsight, the archer may see a small LED inside of a “heads up” reticle. If, at a later point, the small LED is not positioned directly inside the circle as the archer is at full draw, the processor may determine that the offset is caused by a left or right hand pressure that is different from a “nominal” hand pressure, which is calibrated when the sight is initially set up. This use of a static pin or virtual pin inside of a projected reticle (or projected Heads Up Display) may enable a user to eliminate or reduce hand torque prior to releasing the arrow and thereby improve accuracy. Although systems and methods for targeting displays have been disclosed in terms of specific structural features and acts, it is to be understood that the appended claims are not to be limited to the specific features and acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claimed devices and techniques, and it is noted that equivalents may be employed and substitutions made herein without departing from the scope of the technology as recited in the claims.
11859948 | DETAILED DESCRIPTION Embodiments will now be described in detail with reference to the accompanying drawings, in which like reference numerals refer to like elements. The embodiments may have different forms and should not be construed as limiting the scope of the disclosure. Accordingly, the embodiments are described, by referring to the accompanying drawings, to explain various aspects of the disclosure. As used herein, the term “and/or” may include any and all combinations of one or more of the associated items. The disclosure will now be described more fully with reference to the accompanying drawings, in which example embodiments of the disclosure are shown. The disclosure may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that the disclosure will be easily understood by those skilled in the art, and the disclosure will be defined by the scope of the appended claims. The terminology used herein is for the purpose of describing the embodiments only and is not intended to limit the scope of example embodiments. As used herein, the singular forms are intended to include the plural forms, unless the context clearly indicates otherwise. The terms “comprise” and/or “comprising” may indicate the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Although the terms first, second, etc. may be used herein to describe various elements or components, these elements or components should not be limited by these terms. These terms may be used only to distinguish one element or component from another element or component. FIG.1illustrates an arming device and an operating device according to an embodiment. In an embodiment, a remote weapon system100may include an operating device120, a joystick122, and an arming device110. The arming device110may include an imaging device130and a range finder135. The range finder135may be embodied separately from the imaging device130or may be integrated into the imaging device130. The arming device110may take various forms. For example, the arming device110may include all types of devices capable of firing bullets, shells, or the like. According to an embodiment, the arming device110may include a support170on which a weapon is mounted, a trigger solenoid160configured to trigger the weapon, an ammunition supply device180that supplies ammunition to the weapon and loads the ammunition, the imaging device130that observes daytime and night-time targets and measures ranges, an image driver140that drives the imaging device130, an elevation driver142that elevates the support170on which a firearm150is mounted, and a rotation driver144that rotates the arming device110. The imaging device130may refer to a device that captures front images and measures ranges according to an operation of the arming device110, and transmits, to the operating device120, image signals received from a TV camera131, an IR camera133, and the range finder135. Also, the imaging device130may transmit image signals received from an image capturing element or an image sensor of the TV camera131or the IR camera133and range measurement values measured by the range finder135. A TV camera and an IR camera may be modified into or replaced with various types of other elements for capturing images.
An aiming point of the range finder135may be aligned with a center of an image of the TV camera131and the IR camera133. Therefore, if a target is located at the center of the image captured by the TV camera131and the IR camera133, it means that the target is matched to the aiming point of the range finder135and is matched to a center of a laser beam used by the range finder135. A gyro sensor may be mounted on the arming device110to measure angular velocities in roll, pitch, and yaw axis directions entering a remote weapon and to perform 2-axis stabilization control in the rotation (yaw) and pitch directions for external disturbances through a control device. Stabilization control is performed for the arming device110in the rotation (yaw) and pitch directions to keep the firearm150in a preset direction on the basis of angular velocity values of the roll, pitch, and yaw axis directions measured by the gyro sensor. The operating device120may be embodied in the form of a terminal including a display, a memory, and a processor. The operating device120may receive from the arming device110daytime and night-time observation images, range measurement value information, and state information about the arming device110. Although not described, the processor may include a central processing unit (CPU), a microprocessor, or the like that performs respective functions described later in reference toFIG.2. The operating device120may store rotation information, such as yaw and pitch of a particular area acquired by driving the elevation driver142and the rotation driver144, position values of the imaging device130, field of view (FOV) of images, the range measurement values, and the like, which are received from the arming device110. The operating device120may also display a target or objects through the display, may operate and control the arming device110and the joystick122, and may include a tracking device therein. An operator may manipulate the joystick122to drive the elevation driver142and the rotation driver144of the arming device110. According to an embodiment, the remote weapon system100rotates the arming device110or drives the arming device110in yaw and pitch directions to position the target on a center of a line of sight and then selects the tracking device included in the imaging device130in a tracking mode to lock on the target. In this case, the line of sight may be changed by a tracking gate. FIG.2illustrates an internal structure of an operating device200according to an embodiment. The operating device200may include a plurality of components, such as a tracking image confirmer210, a correlation determiner220, a hit rate determiner230, a range measurement value determiner240, and a replacer250, that may be implemented by the processor as described above in reference toFIG.1. Further, it is noted that at least one of these components may be embodied as various numbers of hardware, software and/or firmware structures that execute respective functions described below. For example, at least one of these components may use a direct circuit structure, such as a memory, an internal processor, a logic circuit, a look-up table, etc. that may execute the respective functions through controls of the processor described above. Also, at least one of these components may be specifically embodied by a module, a program, or a part of code, which contains one or more executable instructions for performing the respective functions, and executed by the processor.
If a range measurement mode of a range finder is set to a continuous measurement mode, the operating device200may continuously measure ranges in preset cycles. In this case, the tracking image confirmer210confirms a tracking image at each firing time point of a laser beam transmitted by the range finder. According to an embodiment, an imaging device (130ofFIG.1) tracks a target, and cycles in which the imaging device captures images may be different from cycles in which the range finder measures ranges. Referring toFIG.5, the imaging device tracks a target and captures an image in shorter cycles (e.g., C1501, C2502, C3503, C4504, C5505, C6506, and C7507) than a preset time T520of the range finder. The tracking image confirmer210confirms a tracking image at respective firing time points, for example, L1511, L2512, L3513, and L4514, of laser beams transmitted by the range finder. According to an embodiment, the range finder performs a range measurement at each preset time T520and transmits the measured range measurement value to the operating device200, regardless of whether the imaging device tracks the target. Since the operating device200receives the range measurement value measured from the range finder regardless of whether the imaging device tracks the target, it may be difficult for the operating device200to determine whether the received range measurement value is true or false. To solve this, the correlation determiner220determines that the target is normally locked on if an image correlation value of the target locked on by a tracking gate in the tracking image exceeds a threshold value. The threshold value may be input by an operator or may use a preset value. If a two-dimensional correlation value falls below the threshold value (for example, if the lock-on of the target is missed, if the position of the target deviates from the aiming point of the range finder, or if obstructions appear in front of the target), the correlation determiner220determines the range measurement value measured by the range finder as a wrong measurement value or a false value. The correlation determiner220may use a correlation determination technique including the sum of absolute differences (SAD), the sum of squared differences (SSD), the normalized cross correlation (NCC), and the like to determine whether an image correlation value exceeds a threshold value. The correlation determiner220tracks the position having the highest correlation value as the target, on the basis of correlation values calculated by using the correlation determination technique. Here, even if a position has the highest correlation value in the search area, when that highest correlation value is smaller than or equal to the threshold value, the correlation determiner220determines the range measurement value measured by the range finder as a false value, to enhance the tracking performance. The greatest value of the correlation value may be 1, and, for example, the threshold value may be greater than or equal to 0.3, but may be set to any value between 0 and 1. The SAD and the SSD are defined as

SAD = \sum_{x,y} \left| f(x,y) - t(x,y) \right| \quad (1)

and

SSD = \sum_{x,y} \left[ f(x,y) - t(x,y) \right]^2, \quad (2)

respectively, and the NCC is defined as

NCC = \frac{1}{n} \sum_{x,y} \frac{\left(f(x,y) - \bar{f}\right)\left(t(x,y) - \bar{t}\right)}{\sigma_f \sigma_t}. \quad (3)

Here, definitions of respective variables are as follows.

n: the number of pixels
f(x,y): (x,y) pixels of a sub-image that is part of a source image, where the source image refers to an image that is being captured by the imaging device.
t(x,y): (x,y) pixels of a template image, where the template image refers to an image displayed on the tracking gate.
\bar{f}: average value of sub-image pixels
\bar{t}: average value of template image pixels
σf: standard deviation of sub-image pixels
σt: standard deviation of template image pixels

According to an embodiment, the correlation determiner220may calculate a moving path and a moving speed of the target and predict a range measurement value in a predetermined cycle to update the range measurement value, on the basis of a moving path and a moving speed of a mobile platform on which an imaging device or a remote weapon is mounted, inclined angles of the mobile platform in roll, pitch, and yaw directions, a rotation or elevation angle of the remote weapon, and a range measurement value measured in a previous cycle, if the two-dimensional image correlation value is less than the threshold value. The hit rate determiner230determines whether the laser beam from the range finder hits the target determined as being normally locked on by the correlation determiner220, by using a center value of the tracking image and a center value of the tracking gate. Here, a range measurement value of the range finder may be returned if the signal level of the laser beam that is transmitted from the range finder, hits the target, reflects from the target, and is incident upon a receiving optical system of the range finder is greater than or equal to a minimum detectable signal level of a detector installed in the range finder. Equation (4) expresses this condition as follows.

P_r = \frac{P_0 \times T_{opt} \times D_{rx}^2 \times \rho \times T_{atm}}{4R^2} \times \left(1 - e^{-2S^2/(\alpha R)^2}\right) \ge MDS \quad (4)

Here, definitions of respective variables are as follows.

Pr: laser beam reflecting signal incident on receiving optical system
P0: laser beam output signal
Topt: transmitting and receiving optical transmittance
Drx: receiving optical system diameter
ρ: target reflectance
Tatm: atmospheric transmittance
S: target size
α: laser beam diameter
R: target range
MDS: detector minimum detectable signal level

Referring to Equation (4), a range-measurable value of the range finder may be affected by environmental factors such as the target reflectance ρ, the atmospheric transmittance Tatm, and the like. For example, if the laser beam hits 20% or more of the surface area of the target, the hit rate determiner230may exclude a wrong measurement due to a background behind the target and determine that the laser beam properly hit the target determined as being normally locked on by the correlation determiner220. If the hit rate determiner230determines that the laser beam hits the target by at least a predetermined threshold, the hit rate determiner230determines a corresponding range measurement value as a true value, or otherwise, determines the corresponding measurement value as a false value. According to an embodiment, the hit rate determiner230may determine whether a range measurement value is true or false, in consideration of a ratio of an area of the laser beam hitting the target, a size of the tracking gate, and an error value from a center of a camera image to a center of the tracking gate. An embodiment of calculating a hit rate at which a laser beam from a range finder hits a target will now be described with reference toFIGS.3and4. A reference axis of a range finder may be commonly physically aligned with a TV camera, an IR camera, or a charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) sensor mounted in an imaging device.
Specifically, a center of a laser beam from the range finder is aligned to be matched to a center of an image of the TV camera and the IR camera. InFIG.3, the tracking gate310of a TV camera300or IR camera of an imaging device has a pixel number a1*a2. If the image captured by the TV camera300or IR camera is 1280×960 pixels, and the horizontal×vertical field of view of the TV camera300is 4.0°×3.0°, the resolution per pixel of the TV camera300or IR camera is 0.003125°/pixel. If the size of the laser beam diameter320is 1 mrad×1 mrad, the pixel number b1*b2of the laser beam diameter320is calculated as 18 pixels×18 pixels. As an example shown inFIG.3, if a laser beam hits a target displayed in the tracking gate310, a hit rate of the laser beam may be calculated as follows. First, an offset between a center point310aof the tracking gate310and a center point320aof the laser beam may be calculated on the basis of pixel number c1*c2, i.e., 9 pixels*5 pixels. In this case, the hit rate of the laser beam is calculated as follows.

H = \frac{\left(\frac{a_1+b_1}{2}-c_1\right)\times\left(\frac{a_2+b_2}{2}-c_2\right)}{b_1\times b_2}\times 100\% = \frac{\left(\frac{26+18}{2}-9\right)\times\left(\frac{20+18}{2}-5\right)}{18\times 18}\times 100\% \quad (5)

An operation410of calculating a hit rate H at which a laser beam from a range finder hits a target, by the hit rate determiner230, is illustrated inFIG.4. In operation S420, the hit rate determiner230determines whether the value ((a1+b1)/2−c1) is greater than 0. If not greater, the hit rate determiner230calculates the hit rate H as H=0% in operation S470, and, if greater, determines whether the value ((a2+b2)/2−c2) is greater than 0 in operation S430. If not greater, the hit rate determiner230calculates the hit rate H as H=0% in operation S470, and, if greater, calculates the value ((a1+b1)/2−c1)*((a2+b2)/2−c2)/(b1*b2)*100% of Equation (5) in operation S440. In this case, if the value of the hit rate H is greater than 100% in operation S450, the hit rate determiner230calculates the hit rate H as H=100% in operation S460, and otherwise, calculates the hit rate H as H=H% in operation S462. According to an embodiment, if the value of the hit rate H or an area where the laser beam overlaps the tracking gate is greater than or equal to a preset value, the operating device200may display a corresponding target as a range-measurable shape. Alternatively, if the value of the hit rate H or the area where the laser beam overlaps the tracking gate is less than the preset value, the operating device200may classify and display the corresponding target as a range-immeasurable shape. If the hit rate determiner230determines that the laser beam from the range finder hits the target normally locked on, the range measurement value determiner240determines the range measurement value as a true value and determines other range measurement values as wrong measurement values. If the correlation determiner220determines a range measurement value as being false in the first stage, the range measurement value determiner240immediately determines the corresponding range measurement value as a false value or a wrong measurement value. If the correlation determiner220determines the range measurement value as being true in the first stage, and, in the second stage, the hit rate determiner230determines that the laser beam from the range finder hits the target normally locked on, the range measurement value determiner240finally determines the range measurement value as being true. Otherwise, the range measurement value determiner240determines the range measurement value as being false.
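The two stages can be tied together in a compact sketch: the NCC of Equation (3) screens the lock-on, and the overlap formula of Equation (5) screens the beam hit. numpy is assumed available, and the code is an illustration under the document's example values, not the claimed implementation:

```python
import numpy as np

def ncc(sub, template):
    """Normalized cross correlation of Equation (3) for one candidate position."""
    f, t = sub.astype(float).ravel(), template.astype(float).ravel()
    return np.sum((f - f.mean()) * (t - t.mean())) / (f.size * f.std() * t.std())

def hit_rate(a1, a2, b1, b2, c1, c2):
    """Equation (5): overlap of the b1*b2-pixel beam with the a1*a2-pixel
    tracking gate whose centers are offset by (c1, c2) pixels."""
    u = (a1 + b1) / 2 - c1
    v = (a2 + b2) / 2 - c2
    if u <= 0 or v <= 0:
        return 0.0                                   # operations S420/S430: no overlap
    return min(u * v / (b1 * b2) * 100.0, 100.0)     # operations S440-S462

tmpl = np.arange(520).reshape(20, 26)    # toy template (tracking gate image)
print(ncc(tmpl, tmpl))                   # identical images -> NCC of 1.0

# Worked example from FIG.3: 26x20 gate, 18x18 beam, centers offset by (9, 5).
print(hit_rate(26, 20, 18, 18, 9, 5))    # ~56.2 %
```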
If the range measurement value is determined as being false through the range measurement value determiner240, the replacer250subsequently replaces the range measurement value determined as being false with a range measurement value that is determined as being true. An example of replacing a range measurement value that is incorrectly measured will now be described with reference toFIGS.5and7. If a range finder measures a range to a target at the firing time point L1511, an operating device aims at the target in operation S710. In operation S712, the operating device adjusts a size of a tracking gate to a size of the target. The operating device may manually or automatically adjust the size of the tracking gate to the size of the target in a tracking image received from an imaging device. Also, if the operating device performs a continuous measurement mode, the size of the tracking gate may be automatically adjusted on the basis of a field of view of the imaging device, horizontal and vertical resolution of an image sensor used in the imaging device, and horizontal and vertical pixel number of the tracking gate. If the imaging device starts to track the target in operation S714, the operating device may select a mode for measuring a range to the target in operation S716. The operating device may provide an interface for selecting the mode to measure the range and may provide an interface for selecting the continuous measurement mode. If an operator does not select the continuous measurement mode in operation S720, and a range measurement value acquired by measuring a range in operation S770is not greater than 0 in operation S772, the range finder outputs a range-immeasurable message in operation S776, and, if the range measurement value is greater than 0 in operation S772, the range finder updates the range measurement value in operation S774. If the operator selects the continuous measurement mode in operation S720, and a range measurement value measured in operation S722is greater than 0 in operation S724, the operating device determines whether a two-dimensional correlation value of a lock-on target exceeds a preset threshold value Th1in operations S730and S732. If the two-dimensional correlation value of the lock-on target exceeds the preset threshold value Th1, the operating device calculates, in operation S740, a hit rate H of the laser beam on the target whose correlation value exceeds the threshold value Th1. If the hit rate H is greater than or equal to a preset threshold value Th_hin operation S742, the operating device updates the range measurement value in operation S750. However, if the hit rate H is less than the preset threshold value Th_hin operation S742, the operating device applies the range measurement value from a previous cycle in operation S726. In operation S760, the operating device determines whether to end the continuous measurement. If the operating device determines to continue the continuous measurements, the operating device starts the range measurement (operation S722) again and repetitively performs the determination as to whether the range measurement value is true or false and a replacement of the range measurement value. However, if the operating device determines to end the continuous measurements, the continuous measurement ends in operation S780.
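A sketch of this continuous-measurement loop under assumed names may help: a range update is accepted only if both tests pass, and otherwise the previous cycle's value is carried forward, which is the role of the replacer250. The thresholds use the document's example values, and the per-cycle measurements are invented for illustration:

```python
TH1 = 0.3     # correlation threshold (settable between 0 and 1 per the text)
TH_H = 20.0   # hit-rate threshold, percent (the 20% example above)

def validate_cycle(measured_range, corr_value, hit_rate_pct, previous_range):
    """One cycle of the FIG.7 flow: return the range value to use this cycle."""
    if measured_range <= 0:
        return previous_range       # range-immeasurable this cycle
    if corr_value <= TH1:
        return previous_range       # stage 1 failed: lock-on lost, value is false
    if hit_rate_pct < TH_H:
        return previous_range       # stage 2 failed: beam missed, value is false
    return measured_range           # both stages passed: true value, update

# Cycles analogous to firing points L1..L4 of FIG.5: only the first cycle
# passes both stages, so its value is carried through the three false cycles.
rng = None
for r, corr, h in [(812.0, 0.8, 55.0), (815.0, 0.7, 5.0),
                   (818.0, 0.6, 0.0), (901.0, 0.1, 60.0)]:
    rng = validate_cycle(r, corr, h, rng)
    print(rng)
```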
Referring toFIG.5, the range finder measures a range to a target at the firing time point L1511, a correlation and a hit rate of the laser beam are calculated, and, if the range measurement value satisfies both the correlation and the hit rate, the range finder determines the range measurement value as being true511a. According to an embodiment, the range finder may remeasure a range to the target at the next firing time point L2512. If the range measurement value satisfies the correlation of the target but does not satisfy the hit rate, the range finder may determine the range measurement value as being false512aand replace it with the range measurement value measured at the previous firing time point L1511to update the range measurement value in S512. Furthermore, the range finder may also remeasure a range to the target at the firing time point L3513. If the range measurement value satisfies the correlation of the target but does not satisfy the hit rate, the range finder may determine the range measurement value as being false513aand replace it with the range measurement value used at the previous firing time point L2512to update the range measurement value in S513. Furthermore, the range finder may remeasure a range to the target at the firing time point L4514. For example, if obstructions pass in front of the target and thus the range measurement value does not satisfy the correlation of the target, the range finder may immediately determine the range measurement value as being false514a, regardless of the determination of the hit rate, and replace it with the range measurement value used at the previous firing time point L3513to update the range measurement value in S514. FIG.6illustrates a flowchart of a method of remotely controlling an arming device by an operating device according to an embodiment. In operation S610, the operating device receives a range measurement value measured by a range finder and an image captured by an imaging device and confirms a tracking image through a tracking image confirmer at each firing time point of a laser beam transmitted by the range finder. In operation S620, if an image correlation value of a target locked on by a tracking gate in the tracking image exceeds a threshold value, the operating device determines through a correlation determiner that the target is normally locked on. In operation S630, if the target is normally locked on, a hit rate determiner determines whether the laser beam from the range finder hits the target determined as being normally locked on by the correlation determiner, by using a center value of the tracking image and a center value of the tracking gate. In operation S640, if the laser beam from the range finder hits the target normally locked on, the operating device determines a range measurement value measured by the range finder as a true value of the target and determines other range measurement values as wrong measurement values or false values through a range measurement value determiner. In addition, the embodiments of the disclosure may include computer instructions stored in a non-transitory computer readable medium for performing the method of remotely controlling the arming device by the operating device.
The non-transitory computer-readable medium is not a medium that stores data only for a short time, such as a register, a cache, a memory, or the like, but rather a medium that stores data semi-permanently and is readable by a machine. A specific example of the non-transitory computer-readable medium may include a compact disc (CD), a digital versatile disc (DVD), a hard disk, a Blu-ray disk, a universal serial bus (USB), a memory card, or a ROM. However, the embodiments are not limited thereto. According to an embodiment, a remote weapon system has an effect of enhancing reliability of range measurement values in a remote weapon device that is mounted on a mobile platform, a mobile vehicle, a mobile robot, or a mobile aircraft to perform range measurements with a laser range finder while tracking a static or moving target. According to one or more embodiments, an operating device has an effect of enhancing reliability of range measurement values by not using wrong measurement values that occur when measuring ranges under internal or external disturbances in a remote weapon device. According to one or more embodiments, an effect of enhancing reliability of range measurement values may be acquired by not using wrong measurement values if a range finder incorrectly measures ranges when obstructions appear in front of a target tracked by a remote weapon device. It should be understood that the embodiments described herein should be considered as examples that explain aspects of the disclosure and do not limit the scope of the disclosure. Descriptions of features or aspects within each embodiment should be considered as available for other similar features or aspects in other embodiments. While one or more embodiments have been described with reference to the accompanying drawings, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the following claims.
11859949 | DETAILED DESCRIPTION In one embodiment of the system of the present disclosure, a Radio Frequency (RF) Orthogonal Interferometry (also referred to as Orthogonal Interferometer) (OI) illuminator or transmitter is located at some position from the weapon system (e.g., at 0 to 100 km) and an RF receiver is mounted on an asset and receives the OI waveforms (distinguishable waveforms referenced to respective phase centers) to determine azimuth and elevation and to receive range information from an RF communications link in order to guide the asset to a target. In some embodiments, the azimuth and elevation information has an accuracy of about 100 to 300 μrads depending on the transmitter configuration. In some cases, the system range information has an accuracy of about +/−20 to 40 meters depending on various system operating parameters. In certain embodiments, the asset is given the target's location prior to launch or via RF or other communications link after launch within the RF/OI frame of reference. The asset in one example has on-board processing capability and calculates the trajectory for the target intercept using on-board guidance laws and an on-board processor. The approach to local domain guidance control of the present disclosure allows the user to deploy an RF/OI illumination system anywhere in the world given the portability of the system (e.g., it fits on a small utility trailer), the system's range >100 km, and the system's accuracy. This system's performance is similar in some respects to GPS systems, but has the added benefit of jam resistance due to features such as the use of custom coding of the RF/OI waveform, the illuminator's signal strength, the deployment geometry, and the antenna configurations. Unlike the GPS navigation waveforms, which are published, the RF/OI illumination system would not be public. The system operator could select frequency, Pulse Repetition Interval (PRI), pulse duration, and other parameters. For example, the control of the waveform properties including pulse width, frequency, and/or frequency hopping is used by the illuminator to mitigate jamming. Assuming a 100 nanosecond pulse, frequency hopping with varying PRI could be utilized in a code format loaded prior to launch or during flight. In addition, the rearward looking antenna on the projectile provides receiver isolation from any jammers forward of or below the projectile. The combination of waveform control and antenna spatial selectivity provides counter measure immunity or mitigation. The RF/OI illumination system is also difficult to detect. As an example, ground based jammers have the additional burden of needing direct line of sight to the RF/OI illuminator, and the curvature of the earth makes detecting its presence difficult. Referring toFIG.1A, a diagram of one embodiment of the system of the present disclosure is shown. More specifically, at least one asset115is launched from a launch area104and the at least one asset115is directed at a target100some distance away118from the launch area104. In some cases, the distance118is about 200 km. After launch, the asset115(e.g., munition, projectile, etc.) travels along a trajectory106toward the target100. A circular error probable (CEP-50)102is defined as a circular area having a radius that encompasses where 50% of the assets land. CEP-50 is a common measure of accuracy for ballistics. In certain embodiments of the system of the present disclosure, the CEP-50102is about 30 m.
In some cases, the CEP-50102is limited by the performance of the air frame, its limited control authority, the asset's ability to perform high G maneuvers, and the like. Still referring toFIG.1A, a radio frequency (RF)/Orthogonal Interferometry (OI) illuminator108is used to guide the one or more assets to the target. In one embodiment, the RF array comprises three active electronically scanned array (AESA) panels109, where an AESA is one type of phased array antenna that is computer-controlled. In an AESA, the RF waves may be electronically steered to point in different directions without physically moving the antenna, by leveraging the many antenna elements in the array. In some embodiments of the system of the present disclosure, the array panels can also move. In one embodiment, the RF array is compact, with dimensions110of about 1.5 m×1.5 m×0.75 m. The AESA panels109are typically located proximate each other with some separation. The number of panels can vary depending upon the desired accuracy and redundancy. In some embodiments, the RF array108guides (and tracks, if equipped with a fire control system) the one or more assets115along the trajectory106with accuracy of about ±5 m range and ±10 m azimuth and elevation112. In certain embodiments, the RF array uses orthogonal interferometry (OI) methods to project a reference frame, or a projected grid, which is analogous to a polar coordinate azimuth and elevation for the three dimensional space. The polar coordinates can be mapped to standard grid coordinates (latitude and longitude). In one example, the RF/OI illuminator system108produces a reference frame that is aligned using a north finding device such as a gyro, or the like, such that the one or more projectiles or assets115do not require separate north finding capabilities. In this case a single north finding device can be leveraged for multiple assets, such as a swarm. The north finding device is intended to obtain a reference point for further processing. This also tempers the need for precise alignment of the assets (center mass aiming) and thus minimizes operator processing time and resources. In certain embodiments, the RF/OI system can provide 10°, 20°, or 30° fields of engagement. In some embodiments, the system provides for adjustable accuracy/guidance precision based, in part, on the RF/OI transmit power, antenna spacing, and deployment angle, where the cross range accuracy is equal to angular resolution times range. Thus, the present system operates in GPS-denied environments with minimal likelihood of being jammed or spoofed. Additionally, the system of the present disclosure provides a means to precisely measure and subsequently correct trajectory variations due to the varying energetics and the cross wind impact of each of the one or more projectiles by maintaining the desired trajectory using the RF/OI system array as a stable and precise frame of reference for long range position and projectile guidance. This technique reduces the complexity and the cost of the control actuator system (CAS) by simplifying the components needed on the projectiles. The control actuation system in one example provides fins or canards with controllers that enable changes to the flight of the asset. In some cases, an RF receiver and RF apertures are present on each round.
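Since cross range accuracy equals angular resolution times range, a short sketch can make the scaling concrete. The numbers below are the document's stated examples (0.45 mrad for the RF/OI system, a roughly 1.5 degree beam for conventional radar), not new measurements:

```python
import math

def cross_range_error_m(angular_resolution_rad, range_m):
    """Cross range accuracy = angular resolution x range."""
    return angular_resolution_rad * range_m

cases = [
    ("RF/OI, 0.45 mrad at 100 km", 0.45e-3, 100e3),
    ("RF/OI, 0.45 mrad at 50 km", 0.45e-3, 50e3),
    ("radar, 1.5 deg beam at 100 km", math.radians(1.5), 100e3),
]
for label, sigma, rng in cases:
    print(f"{label}: {cross_range_error_m(sigma, rng):.0f} m")
# RF/OI: 45 m and ~22 m; the ~1.5 deg radar beam: ~2600 m, matching the
# comparison drawn later in this description.
```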
In some cases, by using the RF/OI system, no azimuth aiming is required and minimal elevation adjustment is needed for each projectile, thus allowing the flight navigation system to make the course corrections accounting for the range differential due to energetics and aiming errors. The projectile in one example is a small rocket or artillery round having a warhead, a fuse, a control actuation system, a guidance and navigation system, and a rocket engine. The guidance and navigation section in one embodiment includes a rear facing antenna/aperture, an RF receiver, a control actuation system, and a short range guidance system. The short range guidance system can include at least one detector such as a semi-active laser seeker or imaging system. Alternatively, the short range guidance system can be an inertial measurement unit that provides orientation and enables the asset to continue its trajectory to the target. In one embodiment of the system of the present disclosure, the RF/OI system108“hands off” the positioning and guidance of the one or more projectiles at a certain hand-off point114. Hand off refers to a transition point from the use of the RF/OI guidance to a secondary form of guidance, to increase the accuracy of the projectile. In some cases, the hand-off point114is about 6 km to about 10 km from the target100along the flight path. In some cases, the hand-off point114is located a distance above a plane116within which the target is located. In some cases the distance116is about two km to about three km above the plane. In some cases, the target is on land. In some other cases the target is on the surface of water. The hand-off can be accomplished as a timed event starting from launch or the hand-off can be event driven. In certain embodiments, an event driven hand-off may be when a short range guidance system (e.g., a semi-active laser or image seeker) detects the target and initiates terminal guidance. The navigation approach of the present disclosure can be adapted for airborne targets, such as UAVs, in certain embodiments of the present disclosure. In some cases, the tracking system (e.g., EO/IR or RF radar) on the ground provides target location updates to the airframe/weapon. The fire control system tracks the UAV, providing azimuth, elevation, and range information in the RF/OI reference frame, which is uplinked to the guided projectile/weapon to complete the guidance loop. In one embodiment, the uplink can be accomplished by either an EO/IR or RF modality. For an application of artillery firing as a grid pattern, to provide for maximum effect on a designated area, the system utilizes the RF/OI illuminator to fly about 95% of the flight path, until Line of Sight (LOS) limitations, due in part to the multipath limitation of the RF/OI system, necessitate a handoff to a terminal guidance system or to flight ballistics. The RF/OI illuminator can be used to mitigate the wind and the energetics variability of launch that can affect the trajectory of a munition. The RF/OI system is used to determine a navigation correction, in part, to put the munition on the correct path to the target. In one embodiment, a prelaunch initiation would determine the impact point of the LOS limitation threshold, and then the control features would be trimmed and flight ballistics would be used for the last one to three km. The short ballistic flight incurs very little additional error since the heavy projectile (e.g., 30 to 65 lb.)
reaches the ground in three to ten seconds, and it is hard for cross winds to blow it off course in that limited time. FIG.1Bdepicts a diagram of one embodiment of the system of the present disclosure. More specifically, in this figure the RF/OI system is co-located with the launch point104for the one or more projectiles (one flight path106is shown). In some cases, the RF/OI system is located well behind the launch point104to provide protection for the RF array. In some cases, the RF/OI system can be located a distance118from the target having a known CEP-50102. In certain embodiments, the distance118is about 100 km and the CEP-50102is about 30 m. In contrast, a conventional radar system has range limitations for two-way radar, and may need to be forward deployed, thus placing the radar in front of the launch area and endangering the equipment by subjecting it to crossfire and/or direct targeting by enemy forces. As seen inFIG.1B, the RF/OI system produces an RF reference frame120. The munition trajectory106is located within that reference frame120. The reference frame does not require active scanning and thus provides for simplified flight control management. The reference frame also provides for tracking of multiple rounds at the same time by projecting a grid in the air as a reference frame. The transition point114, e.g., when the projectile begins a glide slope to the target, is also shown. In certain embodiments, RF communication links on each round allow for programming the trajectory during flight for each round. In some cases, the guidance for the asset begins at the moment of firing or early in the flight trajectory. With the present system, no pre-firing program or precise aiming of the weapon system is needed. Instead, guidance can be handled directly from a mission computer. Still referring toFIG.1B, the line of sight (LOS)122is limited over the distance118due to the curvature of the earth. In one embodiment, the distance above the plane of the target116for the LOS122is about 800 m. In certain embodiments, the distance above the plane of the target116for the base of the RF reference frame is about 1400 m, thus making the transition point114important for grid targeting. In some cases, a magnetometer inertial measurement unit (IMU) is also used to supplement the guidance of the one or more projectiles. The LOS122prevents the weapon from seeing the RF/OI illuminator below the horizon. In addition, the RF/OI receiver's waveform is controlled to mitigate multipath due to the earth that would otherwise influence the accuracy of the position measurement. Waveforms allow multipath mitigation and allow the receiver to post-process the impact of multipath out of the position results. These techniques yield a safe zone of navigation that corresponds to a slant angle of about one degree126from the RF/OI illuminator, or a height restriction116, which is range dependent. FIG.2AandFIG.2Bcompare the path lengths and system components of a conventional interferometer (CI)200and an Orthogonal Interferometer (OI)202for a notional two-dimensional case. For a CI measurement, a transmitter204illuminates the target206and the phase of the returns at two separate receivers208a,208bprovides a differential path length difference (Δϕ), shown as216aand216b, that leads to a target angle estimate of θ210. In the case of OI202, two phase centers212a,212beach transmit orthogonal transmissions which are individually decorrelated on respective receptions.
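A toy numerical sketch may clarify the decorrelation idea: two phase centers transmit nearly orthogonal codes, and a receiver correlates the composite return against each code to recover the per-phase-center complex gains (and hence phases). The random binary codes and gain values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
code_a = rng.choice([-1.0, 1.0], size=256)   # waveform keyed to phase center A
code_b = rng.choice([-1.0, 1.0], size=256)   # waveform keyed to phase center B

# Composite reception: each code arrives with its own complex path gain/phase.
gain_a = 0.9 * np.exp(1j * 0.4)
gain_b = 0.8 * np.exp(1j * 1.1)
received = gain_a * code_a + gain_b * code_b

# Decorrelation: matched filtering against each code suppresses the cross term
# because the codes are nearly orthogonal, leaving per-phase-center estimates.
est_a = received @ code_a / code_a.size      # ~gain_a plus small code leakage
est_b = received @ code_b / code_b.size      # ~gain_b plus small code leakage
print(np.angle(est_a), np.angle(est_b))      # recovered phases, ~0.4 and ~1.1
```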
The fundamental concept behind the orthogonal interferometer is the use of at least two coherent transmit/receive antennas212a,212bthat transmit nearly orthogonal coded waveforms. For example, the orthogonal transmission from212atravels to the target206and returns to both transmit/receive antennas212aand212b; this is shown by path218a. Additionally, an orthogonal transmission from212btravels to the target206and returns to both212aand212b, shown by path218b. On reception, the separation of the signals is achieved by decoding against a particular code and exploiting the cross-correlation suppression of the orthogonal coded waveforms. Orthogonal coding in this sense can entail some combination of time, frequency and/or code modulation, as long as the receiver can perform a decorrelation and form an estimate of the received signal keyed to a particular transmit phase center. As depicted, the CI case200has a common transmit path and distinct receive paths216a,216b, while the OI case202has distinct transmit and receive paths218a,218bat each receiver212a,212b. Decoding OI achieves a double path length dependency, which provides twice the target angle210sensitivity as compared to CI with an equivalent SNR. The phase difference relationship of an interferometer is defined as

\Delta\phi = K_\phi \frac{2\pi D}{\lambda}\sin(\theta); \quad K_\phi = 1\ (CI),\ 2\ (OI)

where D is the interferometer baseline (array phase center separation), λ is the nominal operating wavelength, and Kϕ represents the phase gain factor that depends on path length. This expression highlights the physical advantage of a system with an electrically large baseline (D/λ) in that it yields a greater Δϕ for the same target offset θ; the geometric “gain” of the larger interferometric baseline yields a larger Δϕ relative to the SNR-dependent phase estimation noise σ²Δϕ and provides a more precise measurement of θ. In many signal processing applications the localized performance of an estimator can be bounded by the Cramer-Rao Lower Bound (CRLB). This bound on the θ estimation error for a CI radar or an OI radar is

\sigma_\theta^{CI,OI} = \frac{\lambda}{K_\phi\, 2\pi D\,\sqrt{SNR}}; \quad K_\phi = 1\ (CI),\ 2\ (OI)

Note that for the same interferometer baseline (D) and same SNR the OI angle accuracy is a factor of 2 better than the CI angle accuracy. FIG.2Cdepicts the reduction in angle error with an OI compared to the CI with equivalent SNR; the OI radar achieves twice the precision (or the effective baseline) as compared to the CI radar.FIG.2Ccompares a CI case, D=50λ224, and two OI cases, D=50λ226and D=100λ228, against the ambitious angular precision goal σθ=25 μrad230. It should also be noted that, with respect to precision, a factor of two improvement in λ/D is worth a factor of four improvement in SNR. This increase in the local precision of the angular estimate of θ due to an increased D/λ comes at the cost of an increased chance of an ambiguous θ estimate. Angle ambiguity is a fundamental tradeoff that must be resolved for the potential of this increased estimator precision to have a real world benefit. There are a range of techniques used to suppress interferometer ambiguity. Depending on the particular application, a combination of these techniques (discussed briefly herein) can provide effective angle disambiguation. For interferometer baselines with D>>λ, Δϕ can greatly exceed 2π, so the determination of angle-of-arrival using the phase difference,

\sin(\theta) = \frac{\lambda(\Delta\phi + 2\pi N)}{4\pi D},

will be ambiguous by N 2π wraps, where N is the ambiguity number. FIG.2DandFIG.2Edepict a typical product of a real beam pattern232and an electrically large interferometric ambiguity234.
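Before turning to the lobe structure ofFIG.2D, the relations above admit a quick numeric check. The sketch below evaluates the CRLB for the CI and OI cases at an assumed wavelength, baseline, and SNR; the specific values are illustrative, not system specifications:

```python
import math

def phase_difference(D, lam, theta_rad, k_phi):
    """Delta-phi = K_phi * (2*pi*D/lam) * sin(theta)."""
    return k_phi * 2.0 * math.pi * (D / lam) * math.sin(theta_rad)

def crlb_angle(D, lam, snr_db, k_phi):
    """CRLB: sigma_theta = lam / (K_phi * 2*pi * D * sqrt(SNR))."""
    snr = 10.0 ** (snr_db / 10.0)
    return lam / (k_phi * 2.0 * math.pi * D * math.sqrt(snr))

lam = 0.05                 # assumed 6 GHz-class wavelength, m
D = 100 * lam              # the D = 100 lambda case of FIG.2C
snr_db = 20.0              # the 20 dB SNR mentioned later in this description
print(crlb_angle(D, lam, snr_db, 1))   # CI: ~159 microradians
print(crlb_angle(D, lam, snr_db, 2))   # OI: ~80 microradians, twice as precise
```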
Note that there are many closely spaced λ/D lobes within the main lobe, all reflecting the same Δϕ (modulo 2π) measurement. Two important points should be taken from the “zoom” portion shown inFIG.2E. First, σθCI,OI, the angular precision corresponding to the local radius of a λ/D lobe234trace, is much finer than the physical beam pattern. Second, trying to disambiguate these closely spaced lobes based on a model of the amplitude difference from the main lobe's much broader response will require very high SNR and a highly consistent signal model that is unlikely to be available in a tactical system. Still referring toFIG.2D, the236trace represents a prior probability that would be part of a recursive tracking filter. The CRLB is the radius of the local lobe. Trace232represents the array beam pattern and trace234represents the interferometer lobes. The figure shows that large interferometer baselines (D=100λ) gain precision with increased ambiguity. Another approach to ambiguity mitigation for the OI-tracer application would exploit the high prior information on the projectile trajectory, which provides the opportunity to incorporate accurate kinematic models. In this case, the236trace can be interpreted as a prior estimate in a non-linear estimation/tracking formulation where a specific λ/D lobe's probability is updated via a Bayesian recursion and the local covariance is updated via a Kalman Filter. A physical example of exploiting prior information would involve an OI radar with λ/D=1/100, or 1 meter at 100 m range, which is still extremely coarse as compared to the “close-in” CEP of the projectile. For a projectile guidance application, where all the projectiles are cooperative and there are well timed targets, this approach would be naturally integrated into a tracking filter that can incorporate the aero-ballistic modeling. A final approach to ambiguity suppression involves multiple measurements at distinct λ/D values forming multiple interferometric baselines. For each available λ/D baseline, the relationship among feasible ambiguity numbers scales (by λ/D), but since the true target angle θ is independent of λ/D, the unwrapped 0th lobe experiences no shift.

\sin(\theta) = \frac{\lambda_1(\Delta\phi_1 + 2\pi N_1)}{4\pi D_1} \quad \text{and} \quad \sin(\theta) = \frac{\lambda_2(\Delta\phi_2 + 2\pi N_2)}{4\pi D_2}

FIG.2Edepicts (for θ=0) the interaction of two lobe spacings whose product yields a substantial reduction in lobe amplitude. Ambiguity can be suppressed by combining different λ/D measurements. This lobe-wise product will only admit a θ ambiguity where the two lobe spacings overlap closely; in the combination of λ/D=1/125238and λ/D=1/100240, or the 125/100 case, the first significant overlap242occurs at the 5th λ/D=1/125 lobe and the 4th λ/D=1/100 lobe. Hence, there is another ambiguity suppression approach that involves the projectile priors and the interferometer design. In sum, achieving very high precision angle and trajectory estimates via large baseline interferometry incurs the additional complexity of angle ambiguity. Successful mitigation of the ambiguity challenge in an operational system requires substantial integration of the interferometer, array, and aero-ballistic modeling, the details each depending on the particular system configuration under consideration. In certain embodiments of the system of the present disclosure, the RF system via an orthogonal interferometry (OI) reference frame operates at a frequency of about 5-10 GHz and has a signal-to-noise ratio (SNR) of about 20 dB. In some embodiments, the antenna gain is about 15-20 dB.
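As a numeric check of the dual-baseline combination just described, the sketch below searches for the first coincidence of integer lobe multiples for two spacings. With the 1/125 and 1/100 spacings it reproduces the 5th/4th-lobe overlap noted above; the tolerance and search depth are arbitrary assumptions:

```python
def first_overlap(spacing_a, spacing_b, tol=1e-9, n_max=50):
    """Return the first (m, n), m, n >= 1, with m*spacing_a ~= n*spacing_b,
    i.e., the first angle where lobes of both baselines coincide."""
    for m in range(1, n_max):
        for n in range(1, n_max):
            if abs(m * spacing_a - n * spacing_b) < tol:
                return m, n
    return None

# 5 * (1/125) = 4 * (1/100) = 0.04: the 5th lobe of the 1/125 spacing aligns
# with the 4th lobe of the 1/100 spacing, as in FIG.2E.
print(first_overlap(1 / 125, 1 / 100))   # (5, 4)
```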
In some cases, the baseline is about 1.5 m with an angular precision of less than 1 mrad. In some cases, the angular accuracy is about 0.45 mrad. This accuracy is in contrast to conventional radar systems, which have an angular accuracy of about one to two degrees. Conventional radar systems are also limited by bandwidth. Additionally, radar has a cross range accuracy at 100 km of about 2.5 km (1.5° beam width) as compared to a 45 m cross range accuracy for the RF/OI system disclosed herein. At 50 km, the present system has 22 m accuracy (a worked check of these figures is given below). This accuracy provides for accurate hand-off positioning. The present system provides actual location within GPS norms. In contrast, conventional radar systems produce a beam that is too broad to implement an angle transfer as described herein. In some cases, the power requirement for the system ranges from about 100-200 W. The power needed is much lower than for a conventional radar system (e.g. 100 kW). Additionally, the RF/OI system is preferred due to its inherent jamming resistance as compared to radar systems. In some embodiments, the projectiles have rear looking antennas for use with the RF/OI system. In some cases, the RF/OI illuminator can control multiple weapon batteries or UAVs against multiple targets. The RF/OI reference frame is analogous to a localized GPS, where several weapons platforms, air vehicles, and weapons can use the same RF/OI reference frame for navigation. In some cases, the pulse width for the system of the present disclosure is about 1.7 μsec, which encapsulates the RF/OI waveform. Referring to FIG. 3, a flow chart 300 of some of the functional elements for one embodiment of the system of the present disclosure is shown. More specifically, in this embodiment, a fire control system initiates fire commands for multiple assets 302. In this case, the assets are artillery rounds (AR) or other munitions. In one example the ARs are then loaded and fired 304. The ARs are powered up after launch 306, or they may be active prior to launch to enable transfer of mission and target data. In some embodiments, the ARs have a rear-facing RF detector. In one example, the ARs have a communications module for receiving and/or transmitting information to a fire control system. In the example, the ARs have an on-board processor, memory, and/or additional detectors for use in guidance of the ARs to a target. In some cases, the RF/OI illuminator powers up after the launch of the ARs 308, or it may be already powered up if already in use. The ARs receive unique target information and waveform mission codes from RF communication links, or the like 310. The RF detectors on the ARs collect the RF/OI waveform data from the RF/OI illuminator and determine the azimuth and the elevation, along with the range information from each asset to the target 312. The ARs calculate target navigation waypoints 314 as each navigates to the target 316. In some embodiments, multiple rounds are coordinated in one RF/OI reference frame. In some cases a full battery of Howitzers, or the like, is used and each round has a customized trajectory for the particular target type or for masking the round's location. In some cases this is limited to the weapons control authority and its ability to fly an azimuth arc from the point of launch, thereby disguising its launch point from enemy counter fire radar. In certain embodiments, the RF/OI reference frame is extended to about 100 km and provides location to within about 100 m.
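The worked check referenced above, assuming a small-angle approximation (cross range ≈ range × angle); this sketch is not part of the original disclosure:

```python
import math

def cross_range(range_m, angle_rad):
    """Small-angle cross-range extent subtended at a given range."""
    return range_m * angle_rad

beamwidth = math.radians(1.5)        # conventional radar beam width
oi_accuracy = 0.45e-3                # RF/OI angular accuracy in radians

print(cross_range(100e3, beamwidth))    # ~2618 m: the ~2.5 km figure at 100 km
print(cross_range(100e3, oi_accuracy))  # 45.0 m at 100 km
print(cross_range(50e3, oi_accuracy))   # 22.5 m at 50 km, quoted as ~22 m
```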
In some cases the reference frame is extended to about 50 km and provides location to within about 50 m. The system utilizes one-way illumination with rear-looking antennas on the projectiles. The system of the present disclosure has RF jam hardening. In some cases the round may be programmable during the initial flight path, which can reduce the time to fire. By equipping the RF/OI reference frame with a high quality north seeker, the system allows for "on the go" alignment for all of the rounds. No azimuth aiming is required with the RF/OI reference frame, and only minimal elevation adjustment is needed to account for a range differential. The RF/OI can be designed to cover various fields of engagement. In some cases, the field of engagement may be 10, 20 or 30 degrees. The RF/OI method requires only minimal electronics costs embedded into each round, such as an RF receiver and RF apertures. In certain cases, the system transitions the guidance for the multiple rounds to a glide slope at about 6-10 km from the target. In some cases, the detonation for each AR is signaled by a fire control system. In some cases, detonation is signaled at a certain distance from the target. In some cases, detonation is signaled at a certain time, or the round may be equipped with a Height of Burst (HOB) sensor. In this application, where the rounds are distributed in a grid pattern guided by the OI, the HOB is a standard fuse that measures its distance to ground at the end of the flight (last 15 seconds) to generate an air burst or a ground burst, depending on the target. In one embodiment of the system of the present disclosure, an RF/OI precision guidance system is used to provide for counter-swarm use, where a two-way RF/OI reference frame is used for flight management with a 10 to 20 round "in flight" capability. In some cases, this can be done with a continuous rate of fire of about 1-2 Hz. In certain embodiments, the RF/OI is also used for terminal guidance with a 200 μrad guidance loop. In yet another embodiment of the system of the present disclosure, an RF/OI guidance system is used to provide for area suppression, where the RF/OI reference frame is used for flight management with an "in flight" capability of 6 to 10 rounds and the RF/OI system is used for terminal guidance with a 1 mrad guidance loop for area suppression. In some cases, the RF/OI system has 5 GHz of frequency diversity at 100 steps—50 MHz each. This maintains a 200 MHz ADC sampling rate at a low cost (a numeric sketch of this step plan is given below). The RF/OI system has accuracy at a distance of 5 km of about ±5 m in range, and ±1 mrad in azimuth and elevation. In certain embodiments, the RF system provides top cover flight management for about 85%-90% of the range to target and then transitions to a glide slope for the remaining 5 to 20 seconds prior to impact, depending on the initial target range and LOS limitation. For sequential round control, the munitions receive the RF identification code for unique control of each round. The munition receives range codes for range calibration and determines the range, per 1000 m, to an accuracy of about ±3 m in the RF reference frame. The munition receives the target OI position in the reference frame and its range from the munition. The munition confirms all instructions with fire control. Referring to FIG. 4, a diagram of one embodiment of the use of orthogonal interferometry to track a projectile and target in a partial reference frame is shown.
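The minimal sketch referenced above enumerates the frequency-diversity plan; the 100-step and 50 MHz values come from this passage, while the 5 GHz start frequency is an assumption chosen to sit at the lower edge of the stated 5-10 GHz operating band:

```python
# Frequency-diversity plan: 100 steps of 50 MHz each spans 5 GHz of diversity.
start_hz = 5.0e9          # assumed lower edge of the hopping band
step_hz = 50e6            # 50 MHz per step
num_steps = 100

hops = [start_hz + k * step_hz for k in range(num_steps)]
span = hops[-1] + step_hz - hops[0]
print(f"total diversity span: {span / 1e9:.1f} GHz")   # 5.0 GHz

# Each 50 MHz step fits within the Nyquist bandwidth of a 200 MHz ADC,
# which is the low-cost digitizer rate noted above.
adc_rate = 200e6
assert step_hz <= adc_rate / 2
```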
More specifically, a projectile or munition 402 is fired from a launch area 400 and is tracked via an RF/OI reference frame 404 (partial view). The partial reference frame 404 is shown for convenience but normally resembles the reference frame shown in FIG. 1B. There are continuous updates for the current target location, which account for the OI frame structure motion, a moving target, and vehicle movement. Delta (δ) is determined by knowing the current location of the projectile versus the target angular location, which is decomposed into azimuth and elevation vectors. The bandwidth of one embodiment of the system is 200 Hz. The projectile OI coordinate 406 and the target OI coordinate 408 are shown. In one embodiment, the OI lines/bins provide 11° by 7° FOV coverage at about 17.4 bins per degree (roughly 1 mrad per bin), thus 11° × 17.4 ≈ 191 bins by 7° × 17.4 ≈ 122 bins (a worked version of this bin computation is sketched below). The number of bins can be adjusted, or larger or smaller FOVs may be used, to provide the needed precision for the given application. These systems can be coupled together to provide additional FOV coverage with a simple temporal and frequency method and in theory provide for 360 degrees of coverage. Referring to FIG. 5, a perspective view of the projectile 500 is shown that employs the RF/OI processing for navigation and guidance to the target. The projectile 500 can be a missile, rocket, artillery round or similar guided munition. The projectile has a front portion 505 that typically houses the warhead and fuze elements, such that the fuze detonates the warhead at the appropriate point for the desired result. On the rear or tail portion of the projectile 510 is an optional rocket engine that can be deployed to provide thrust to extend the range of the projectile and can be used to guide the projectile. In one example, the projectile is launched without a rocket engine, such as from a launch platform that achieves a certain altitude, and is guided to the target. Examples of launch platforms include anti-tank guns, mortars, howitzers, field guns and railguns. The projectiles from the launch platforms may or may not have a rocket engine. Referring again to FIG. 5, the midsection tends to house the electronics, communications, and guidance/navigation systems. A rear facing antenna 525 is typically used to obtain the RF/OI waveforms for the reference frame that enable determination of the azimuth and elevation with respect to the illumination system. In one example, the processing involving firmware/software is performed on one or more processors that execute software residing on memory that is coupled to the processors. While labels are placed on certain items for descriptive purposes, the processing may all be done on a single circuit card having the processing technology. In this example an RF receiver 530 is coupled to the antenna 525. The RF receiver 530 has a downconversion stage to process the analog inputs from the antenna and may include mixer(s), filter(s) and low noise amplifier(s) to process the analog signals. The downconverted signals are input to an analog-to-digital converter (ADC) to provide digital information that is then processed by one or more processing units, such as a digital signal processor. A short range guidance section 540 is used when the projectile reaches a hand-off point near the terminal end of the trajectory, near the target area. The short range guidance section 540 in one example is a SAL seeker that receives a signal such as a reflected laser signal from the target.
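The bin computation referenced above, as a minimal sketch; the ~1 mrad bin size is inferred from the 17.4 bins/degree factor (1° ≈ 17.45 mrad) and is not stated explicitly in the original:

```python
import math

# Convert an angular field of view into ~1 mrad OI bins.
def bins_for_fov(fov_deg, bin_mrad=1.0):
    return int(math.radians(fov_deg) * 1000 / bin_mrad)

print(bins_for_fov(11))  # -> 191 bins across the 11 degree axis
print(bins_for_fov(7))   # -> 122 bins across the 7 degree axis
```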
Another example is an imaging section that uses a camera to view the target area and compares the captured image to stored images to identify the target. In yet a further example, since the projectile is close to the target and was tracking to the target, an inertial measurement unit (IMU) can be used to keep the projectile in a proper orientation and path to the target. A guidance, navigation and control section 550 is the digital processing section; it is coupled to memory containing various instructions and routines and controls certain operations of the projectile. The signal processing of the OI includes decoding against a particular code and exploiting the cross-correlation suppression of the orthogonal coded waveforms (a simplified decode sketch is given below). The azimuth and elevation data is obtained from the decoding. The RF communications, such as the mission data and range data, are also processed by the digital signal processor. Guidance information from the short range guidance section 540 is processed and control instructions are generated to direct the projectile to the target. A control actuation system (CAS) 560 receives guidance controls and instructions to manipulate fins and canards (not shown) to steer the projectile. If the projectile has a rocket engine, that can also be employed to assist in reaching the target. It will be appreciated from the above that portions of the invention may be implemented as computer software, which may be supplied on a storage medium or via a transmission medium. It is to be further understood that, because some of the constituent system components and method steps depicted in the accompanying Figures can be implemented in software, the actual connections between the system components (or the process steps) may differ depending upon the manner in which the present invention is programmed. Given the teachings of the present invention provided herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present invention. It is to be understood that the present invention can be implemented in various forms of hardware, software, firmware, special purpose processes, or a combination thereof. In one embodiment, the present invention can be implemented in software as an application program tangibly embodied on a computer readable program storage device. The application program can be uploaded to, and executed by, a machine comprising any suitable architecture. The computer readable medium as described herein can be a data storage device or unit, such as a magnetic disk, a magneto-optical disk, an optical disk, or a flash drive. Further, it will be appreciated that the term "memory" herein is intended to include various types of suitable data storage media, whether permanent or temporary, such as transitory electronic memories, non-transitory computer-readable medium and/or computer-writable medium. While various embodiments of the present invention have been described in detail, it is apparent that various modifications and alterations of those embodiments will occur to and be readily apparent to those skilled in the art. However, it is to be expressly understood that such modifications and alterations are within the scope and spirit of the present invention, as set forth in the appended claims. Further, the invention(s) described herein is capable of other embodiments and of being practiced or of being carried out in various other related ways.
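The decode sketch referenced above; it is illustrative only, using generic Walsh-type orthogonal sequences rather than the specific waveforms of the disclosure:

```python
import numpy as np

# Two orthogonal spreading codes, one per transmit phase center.
# Walsh rows are exactly orthogonal; real waveforms only approximate this.
code_a = np.array([+1, +1, +1, +1, -1, -1, -1, -1], dtype=float)
code_b = np.array([+1, -1, +1, -1, +1, -1, +1, -1], dtype=float)

# Received sample: a superposition of both transmissions with different
# complex amplitudes (the phases carry the interferometric information).
amp_a, amp_b = 0.9 * np.exp(1j * 0.3), 0.7 * np.exp(1j * 1.1)
received = amp_a * code_a + amp_b * code_b

# Decoding against a particular code suppresses the cross term and
# recovers the amplitude/phase keyed to that transmit phase center.
est_a = received @ code_a / len(code_a)
est_b = received @ code_b / len(code_b)
print(np.round(est_a, 3), np.round(est_b, 3))  # ~amp_a and ~amp_b
```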
In addition, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having,” and variations thereof herein, is meant to encompass the items listed thereafter and equivalents thereof as well as additional items while only the terms “consisting of” and “consisting only of” are to be construed in a limitative sense. The foregoing description of the embodiments of the present disclosure has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the present disclosure to the precise form disclosed. Many modifications and variations are possible in light of this disclosure. It is intended that the scope of the present disclosure be limited not by this detailed description, but rather by the claims appended hereto. A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the scope of the disclosure. Although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. While the principles of the disclosure have been described herein, it is to be understood by those skilled in the art that this description is made only by way of example and not as a limitation as to the scope of the disclosure. Other embodiments are contemplated within the scope of the present disclosure in addition to the exemplary embodiments shown and described herein. Modifications and substitutions by one of ordinary skill in the art are considered to be within the scope of the present disclosure. | 36,573 |
11859950 | A person of ordinary skill in the art will appreciate that elements of the figures above are illustrated for simplicity and clarity, and are not necessarily drawn to scale. The dimensions of some elements in the figures may have been exaggerated relative to other elements to help understanding of the present teachings. Furthermore, a particular order in which certain elements, parts, components, modules, steps, actions, events and/or processes are described or illustrated may not actually be required. A person of ordinary skill in the art will appreciate that, for the purpose of simplicity and clarity of illustration, some commonly known and well-understood elements that are useful and/or necessary in a commercially feasible embodiment may not be depicted in order to provide a clear view of various embodiments in accordance with the present teachings. SUMMARY Pursuant to the various embodiments, the present disclosure provides a binocular non-lethal dazzling device. In particular, the disclosed binocular non-lethal dazzling device comprises a pair of substantially cylindrical optical housings. Each of the optical housings includes an eye piece, which can include a first lens, a mechanical focal element, such as a system of prisms, and an objective lens. The optical housings are coupled by an articulated bridge. The articulated bridge includes a focus knob, which is coupled to the focal elements of the substantially cylindrical optical housings. Operatively coupled to the articulated bridge is a dazzling module. The dazzling module comprises a laser drive circuit, at least one activation method such as a pushbutton, and a dazzling laser. The pushbutton is operatively coupled to the laser drive circuit, and causes the laser drive circuit to generate a suitable laser drive power, which is used to activate the dazzling laser. The dazzling laser is adapted to produce a dazzling laser beam which will dazzle a hostile actor without causing irreversible retinal disorder. In an additional embodiment, the dazzling module of the disclosed binocular non-lethal dazzling device further includes a first power programming circuit that is coupled to the pushbutton, which, when activated, causes the laser drive circuit to be programmed to produce a first predetermined laser power level. In addition, the dazzling module could include a second pushbutton that would activate a second power programming circuit that would cause the laser drive circuit to be programmed to produce a second predetermined laser power level. For example, the first predetermined laser power level could correspond to a low (short-range, i.e., less than 50 yards) laser power level, while the second predetermined laser power level could correspond to a high (long-range, i.e., more than 50 yards) laser power level. In one embodiment, the dazzling module could be coupled to the top or bottom of the articulated bridge. In such an embodiment, the articulated bridge could comprise a pair of hinges, with one coupled to each of the substantially cylindrical optical housings, along with a platform section, which the dazzling module could sit upon. In a separate embodiment, the dazzling module could be integrated into the articulated bridge. In such an embodiment, the articulated bridge could comprise a single uni-directional hinge disposed between the first substantially cylindrical optical housing and the second substantially cylindrical optical housing.
In addition, the dazzling module itself could comprise a substantially cylindrical section disposed with its centerline substantially beneath the unidirectional hinge. In an additional embodiment, the dazzling module could further comprise a processor coupled to the pushbutton and the laser drive circuit. When the pushbutton is pressed, the processor programs the laser drive circuit to produce an appropriate power level. The power level could be set 1) at a predetermined level based on the pushbutton, 2) at a level based on the setting of the focus knob, 3) at a level based on a rangefinder, or 4) at a level based on the setting of the focus knob and confirmed with a distance based on the rangefinder. With regards to the first possibility, i.e., the power level of the laser being set to a predetermined level based on the pushbutton, multiple pushbuttons could be used, with each resulting in a different power level being generated. With regards to the second possibility, i.e., the power level being based on one or more settings of the focus knob, the different focus settings would be mapped to different ranges, and a power appropriate to the range would be selected. Finally, with regards to the power being set based on a range reported by a range finder, or in combination with the focus knob and rangefinder, such an embodiment is explained in more detail below. In a rangefinder embodiment, the dazzling module would further include a rangefinder transmitter and a rangefinder receiver coupled to the processor. When activated, the rangefinder transmitter would generate a beam and the receiver would monitor reflections from that beam, which would allow the rangefinder receiver to generate a signal (analog or digital) that was proportional to the range from the dazzling module and report that range to the processor. The processor would then program the laser drive circuit to generate a laser drive power that was appropriate for the reported range. When used in combination, the binocular would be manually focused and the rangefinder would then be activated to confirm a match of the distance to the target as focused with the distance measured by the electronic rangefinder, within a predetermined tolerance. As an example, if the focused range to a target is 20 yards, within a 30% focus tolerance, the focal range would be 14 to 26 yards. The distance confirmation would be considered successful if the electronic range finder reported an actual distance value between 14 and 26 yards, resulting in a power adjustment at 20 yards of 0.2993 milliwatts using a 532 nm (green) 1.0 milliradian beam divergence laser, per the guidelines from ANSI Z136.1 (a generic version of this eye-safe power computation is sketched below). DETAILED DESCRIPTION Turning to the Figures and to FIG. 1 in particular, the underside of a non-lethal dazzling device 1 constructed in accordance with this disclosure is depicted. A first housing 10 is joined to a second housing 20 via a bridge 30. The embodiment of FIG. 1 does not incorporate a focal adjustment within the bridge 30; however, it does incorporate a low-powered dazzling laser 40. For example, the dazzling laser 40 can be a Class 3R laser with a power output of, for example, 2.5 milliwatts (mW). A Class 3R laser will generally not cause irreversible retinal disorder during a momentary exposure of less than 0.25 seconds at distances greater than 40 feet, which is within the aversion response; i.e., where a person turns away or blinks to avoid bright light.
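The sketch referenced above is a generic top-hat beam model with an assumed corneal irradiance limit; it illustrates how allowed power grows with spot area at range, but it does not reproduce the specific 0.2993 mW figure quoted above, which follows from the detailed ANSI Z136.1 methodology. The exit diameter and MPE constant are assumptions:

```python
import math

# Eye-safe power budget vs. range for a diverging beam: allowed output
# power scales with the beam's spot area at the target. MPE_W_PER_CM2 is
# a placeholder assumption; actual limits come from ANSI Z136.1 for the
# wavelength and exposure duration in question.
MPE_W_PER_CM2 = 2.5e-3          # assumed limit for a 0.25 s visible exposure

def max_safe_power_w(range_m, divergence_rad=1e-3, exit_diam_m=2e-3):
    spot_diam_cm = (exit_diam_m + range_m * divergence_rad) * 100.0
    spot_area_cm2 = math.pi * spot_diam_cm ** 2 / 4.0
    return MPE_W_PER_CM2 * spot_area_cm2

for yards in (10, 20, 50):
    r = yards * 0.9144          # yards -> meters
    print(f"{yards:>3} yd: {max_safe_power_w(r) * 1e3:.3f} mW max")
```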
While a Class 3R laser will not cause retinal disorder, it generally can serve as a distraction, glare, or flashblind hazard. Each of the housing parts 10, 20 contains an eyepiece 16, 17 and an objective 18, 19. The eyepiece 16, 17 is disposed closest to the user's eye, while the objective lens 18, 19 collects light and brings it into focus for the user. Objective lens 18, 19 may be specially coated to reduce received laser energy that may have been reflected from the target. As the embodiment of FIG. 1 is intended to be simple and low-cost, no mirror or inversion system is used, and no mechanism of focusing the image is provided. The primary component of the low power laser is a laser diode 45. Such a laser diode can have, for example, a maximum power output of 2.5 mW, a wavelength of 670 nm (nanometers), which would make it a red laser, and be adapted to operate on application of approximately 3V. Diodes with such specifications are readily available; in addition, lasers with similar specifications can be readily substituted. To aid in quickly and easily finding the target, the laser spot size may be expanded from the standard pencil dot size to a larger diameter of about 4 inches. The laser spot size may be controlled to be directly proportional or inversely proportional to the distance to the target. In an embodiment where the laser spot size is inversely proportional to the distance to the target, say the spot size at 10 yards to the target equals 4 inches in diameter, while at 20 yards to the target the spot size equals 3 inches, and at 30 yards to the target the spot size equals 2 inches. On the other hand, in an embodiment where the laser spot size is directly proportional to the distance to the target, say the spot size at 10 yards to the target equals 2 inches in diameter, while at 20 yards to the target the spot size equals 3 inches, and at 30 yards to the target the spot size equals 4 inches. The laser at the target may also be rectangular or any other shape that ensures coverage on a face-sized target, or other predetermined target groups. Other support circuitry is required as well, such as, for example, one or more batteries, a voltage regulator, a capacitor to handle current surges, and a current limiting resistor. However, other circuit configurations can be used to equal effect. In addition, the dazzling laser 40 includes a trigger. The trigger can be, for example, a simple pushbutton switch disposed in a position accessible to the user. Generally, on activating the switch, the laser diode 45 is coupled to the power source (not shown) and laser light is generated and directed down the center axis 50 of the bridge 30. For example, the trigger can be disposed on top of the bridge 30 so that it is easily accessible to a user's fingers when naturally gripping the dazzling device 1. The embodiment of FIG. 1 is intended as a simple-to-use, low cost non-lethal dazzling device 1. A user simply picks up the dazzling device 1, aims it at a hostile target's face by looking down the ocular housings 10, 20 through the eyepieces 16, 17, and activates the dazzling laser 40, resulting in the hostile actor being stunned and temporarily neutralized. Turning to FIG. 2, a more complicated embodiment of the disclosed non-lethal dazzling device is disclosed. In particular, the embodiment of FIG. 2 allows the user to focus on the hostile target's face while simultaneously adjusting the power of the laser. FIG. 2 depicts a sectional illustration of the underside of a non-lethal dazzling device 100.
A first housing 110 is coupled to a second housing 120 by a bridge 130. Bridge 130 may be jointed, fixed, or releasably raised above, inline with, or below the optics. Each of the housing elements 110, 120 includes an eyepiece 116, 117, an objective 118, 119, and an axially displaceable focusing element 121, 122. In addition, both housings 110, 120 may include identical prism systems 108 for image inversion. Prism systems 108 may be specially coated to reduce received laser energy that may have reflected from the target. The segment 130 includes a dazzling laser 140. The dazzling laser can optionally be a low-power laser, such as a Class 3R laser diode with a power output of 1.00 mW, a wavelength of 650 nm (making it a red laser), and adapted to operate off of approximately 5V. Such a laser diode is readily available, and provides sufficient power for a reasonable range of 25 yards, while not providing sufficient power to cause irreversible retinal disorder to a hostile target's eyes at distances greater than 11 yards. However, given that the output power is adjustable, a higher power output laser can be safely used as long as care is taken to ensure that only a safe power level for a particular range is used. For example, a 250 mW laser having a wavelength of 532 nm (making it a green laser) and adapted to operate off of 5V can be used. Such laser diodes are readily available, and provide the advantage of a far greater range, exceeding 500 yards. However, an adjustment mechanism must be used to ensure that the power level that is directed at a hostile target's eyes is low enough to not cause irreversible retinal disorder. In this case, a knob 152 is coupled to the focal components of the optical housings 110, 120, i.e., the prism systems 108 and the focus elements 121, 122, using any of the methods known in the prior art, such as transmission rods, etc. In addition, the knob 152 is also coupled to a power adjustment for the laser 140. The power adjustment can be, for example, as simple as a potentiometer, a voltage input to a microcontroller, etc. The power adjustment of the laser is calibrated so that at all distances, when an image is in focus, the power of the laser 140 operable on the hostile target is insufficient to cause irreversible retinal disorder to the target's eyes. Other components are required for the laser 140 to operate properly. In particular, a power source, such as batteries, and support circuitry, including voltage regulators, current sources, transistors, capacitors, and resistors, can be required as well. As with the embodiment of FIG. 1, a push-button switch can be used to activate the laser, and the switch can be mounted on top of the jointed segment 130. The embodiment of FIG. 2 is intended to provide a longer range non-lethal dazzling device 100. In particular, the dazzling device 100 of FIG. 2 can be operated by a user aiming the device at a hostile target's face, bringing the same into focus, and activating the dazzling laser 140, resulting in the hostile actor being stunned and temporarily neutralized. To aid in the clear identification of the device to friendly team members, housings 110 and 120 may be painted, molded or otherwise coated in bright or distinctive colors such as blaze orange. Objectives 118 and 119 may be oversized to obscure the identifying marking or color housing from the hostile actor located in front of the device.
Housings 110 and 120 may also be flared, expanded, or otherwise modified near objectives 118 and 119 to further mask the bold housing from front view, while still being identifiable from a side view. Turning to FIGS. 3 and 3a, a still more complicated embodiment of the disclosed non-lethal dazzling device is disclosed. In particular, the embodiment of FIG. 3 integrates electronic circuitry to perform a number of functions. First, the embodiment of FIG. 3 integrates a range finder. A range finder is a laser-based device that typically operates in a non-visible spectrum, such as infrared. The range finder incorporates a transmitter, i.e., a laser diode, and a receiver, such as a silicon avalanche photodiode or an InGaAs PIN avalanche photodiode (collectively referred to hereafter as a receiver). The output of the receiver is coupled to a microcontroller or microprocessor (collectively referred to hereafter as "processor"), which then adjusts the power level of a coupled dazzling laser using, for example, a digitally controlled potentiometer, pulse width modulation, delta modulation, the manipulation of aperture size, lens adjustments potentially including beam spreading, polarization plates, an algorithm for rapidly enabling and disabling the laser (other than PWM or DM), bias control and other methods known in the art. In a further electronics-based embodiment, a confirmation can be required prior to activation; i.e., the user would have to go through range finding and activation stages as set forth below to ensure that a hostile target's eyes were not exposed to a power level sufficient to cause irreversible retinal disorder to the target's eyes. In addition, another feature that can be incorporated is the use of facial recognition functions that can inhibit the dazzle effect unless a person's head or face is recognized. The facial recognition could be enhanced by electronically placing a box or other highlight around the potential target(s) in a display for the user. The facial recognition could be further enhanced with electronic muzzle flash location and highlighting. The functions could also include biometric measurements, such as verifying pupil-to-pupil distance, or verifying that the target silhouette size matches, within a predetermined tolerance (say 15%), the distance reported by the range finder. As an alternative embodiment to rangefinder 260, the target silhouette size can be compared electronically to a table of silhouette sizes at known distances to determine the range to the target. In addition, as discussed herein, video recording can also be incorporated and stored on the onboard flash memory 267 or external flash memory (not shown). In addition, the target area size can be appreciably increased, and the need for aiming accuracy decreased, by incorporating laser scanning methods known in the art. In an embodiment, a refraction element is moved in front of the emitter. In another embodiment, galvanometers or electric motors can move a diffraction grating, lensing, or the laser diode with relation to a diffraction grating, mirrors, prisms or other methods known in the art to allow the laser to scan a larger target area. In another embodiment, the need for aiming accuracy can be further reduced by electronically designating a target with a lower power laser or an electronic highlight displayed to the user, and steering the laser to the optimum target location using the aforementioned beam steering in a "fire and forget" process.
In another embodiment, multiple emitters arranged in a grid-like pattern are mounted on a substrate and emit simultaneously or sequentially to increase the targeting beam area at the target. An example construction incorporates 10000 laser emitters mounted in a 100×100 pattern that would cover an area of two feet by two feet at 20 yards. The example construction could have center lasers mounted at 90 degrees to the substrate and outermost lasers mounted at a +1.035 degree offset to the center lasers. Lasers approaching the center would be progressively less offset than +1.035 degrees until parallel to a centerline at 90 degrees to the substrate. The mounting angle may be mixed or reversed from the above arrangement to allow for a variety of manufacturing techniques. In another embodiment, one or more lasers may be used with a light pipe that diverges into several exit apertures. An example construction incorporates one or more adjustable-power-output 532 nm lasers emitting into one or more light pipe(s) with 5000 exit points. The exits would have exit angles formed into a grid-like pattern to provide coverage of 2 feet by 2 feet at 20 yards. A photolithographic process may have the devices angled in random locations, while a machined base may have regular angles as determined by standard machining processes. The substrate could also be edge emitting, wherein the lasers are mounted on the substrate edge providing the necessary offset. This wide aiming angle would make the device suitable for non-steady platforms such as drones or other vehicles in motion. The embodiment of FIGS. 3 and 3a is similar to the embodiment of FIG. 2, except that its dazzling and optical functions are now electronically controlled. In particular, FIG. 3 depicts a non-lethal dazzling device 200. A first housing 210 is coupled to a second housing 220 by a bridge 230. Each of the housing elements 210, 220 includes an eyepiece 216, 217 and an objective 218, 219. Other optical elements may be included as described with regards to the embodiment of FIG. 2, or as known in the prior art. Within the bridge 230, a range finder 260 is disposed. As discussed above, the rangefinder includes a transmitter, which is generally a laser diode that is adapted to produce non-visible light, such as infrared. It is anticipated that other methods known in the art of range finding will be suitable, including passive autofocus, phase detection, and contrast detection. In addition, the rangefinder may include a receiver, and other components as is known in the art. In addition, the bridge 230 incorporates a dazzling laser 240. The dazzling laser 240 generally will have a power output of tens or more milliwatts, which would generally make the device banned by treaty. However, as disclosed herein, the power adjustment circuit will ensure that the power level that the target is exposed to is low enough so that no irreversible retinal disorder will be caused to the target's eyes. In an embodiment, range finder 260 is mounted to determine the range to the target. Range finder 260 may also be oriented toward the target, and a second range finder (not shown) may be oriented toward the user to ensure the user is holding the device in the correct orientation, with 216 and 217 toward the user and 218 and 219 toward the target.
In an embodiment, the forward facing range finder would need to detect a range greater than the longest arm's length of about four feet, and the rear range finder would need to detect a range less than 1 foot, to ensure the dazzler is in the correct orientation to prevent self-dazzling of the operator. Both the dazzling laser 240 and the rangefinder 260 are coupled to a processor 265. The processor 265 requires certain support circuitry, including RAM 266 and FLASH 267. It should be noted that other types of storage, such as magnetic RAM, may be viable in the future, and the specific type of short-term and long-term memory that is utilized is not intended as a limitation of the disclosure unless it is expressly claimed. The processor 265 is coupled to a power adjustment circuit 245, which controls the power level of the dazzling laser 240. In addition, the processor 265 is coupled to a photosensor 270, to record video of the image that the user would observe from one of the optical lenses, such as the eyepiece 216 of the left housing 210. The video display may also be used for electronic target designation, where a box or other highlight could be placed on electronically recognized targets using image recognition techniques known in the art, such as muzzle flash, firearms, or other suitable targets or conditions. The user may scroll through the highlighted targets by touching trigger 280 for a second predetermined time period, or may scroll using an additional target selector control, similar to 280, such as a joystick, spin-wheel, or the like. It is anticipated that targeting may also be completely under device software or remote control using wireless communication methods such as 5G or similar protocols known in the art. A similar mechanism would allow a video display 275 from photosensor 270 to be shown to one of the optical lenses, such as the eyepiece 216 of the left housing 210. A focus dial 252 is disposed in the jointed bridge 230 as well; the method of operation of the focus can be similar to that of FIG. 2, or can operate in any other way known in the art, including entirely digitally, thus minimizing potential effects to the operator should reflective surfaces be targeted. Finally, the processor 265 is coupled to a trigger 280, which can be, as previously described, a push button switch disposed on the top of the bridge 230 where a user's fingers would naturally be disposed when handling the non-lethal dazzling device 200. In addition, a battery 284 provides power to the electronic components. In operation, a user would pick up the non-lethal dazzling device 200 and aim the device 200 at a hostile target (not shown). Once the hostile target's face was in focus (after electronic focus or using the focus dial 252), the user would press the trigger 280 a first time, activating the transmitter (not shown) of the range finder 260. The receiver (not shown) of the rangefinder 260 would report a range to the processor 265. The processor 265 would then update the display 275 so as to notify the user that the dazzling laser is going to be activated. This will allow the user to ensure that the hostile target is still at approximately the same range as when s/he activated the rangefinder, and that no targets are closer than the hostile target, and therefore in danger of suffering irreversible retinal disorder.
If the user presses the trigger 280 a second time within some predetermined amount of time, such as 5.0 seconds, the processor activates the dazzling laser 240 after programming the power adjustment circuit 245, so as to ensure that the equivalent power disposed on the hostile target's eyes is at a level that will dazzle the hostile target without causing irreversible retinal disorder to the hostile target's eyes (a simplified sketch of this two-press sequence is given below). The above process may also be software controlled, whereby the processor inhibits the dazzling laser until range is confirmed by the processor, and the laser is turned on at eye safe power levels as soon as the processor confirms distance at the first button press. Additional embodiments may inhibit the laser until a beam steering mechanism can be electronically confirmed to be optimally on target, to say 0.1 inches at 100 yards. Other embodiments include multiple single button presses or the pressing of a number of buttons, say 5, in a predetermined sequence. The potentially high-power output of the dazzling laser 240 allows the device 200 to be used at long ranges, such as more than 100 yards. In addition, the high power output of the dazzling laser 240 can also be useful if countermeasures, such as special glasses, are used, or if the environment contains smoke or dust that would affect the received power level. In such a case, a high power override can be incorporated, allowing the user to manually set the power level by, for example, holding the trigger 280 while adjusting the focus dial 252. In an additional embodiment, the built-in optics, electronics, and/or video processing may autodetect faces, muzzle flashes, weapons or the like and provide the user with electronic highlighting around the target using a display. The electronics may also detect the presence of countermeasures or airborne contaminants and automatically adjust the power, frequency, frequency hopping, beam steering, or other beam properties to a predetermined different, but still eye safe, profile for the current environment or countermeasures. Turning to FIGS. 4-7, the underside of an additional non-lethal dazzling device 600 constructed in accordance with this disclosure is depicted. A first housing 610 is joined to a second housing 620 by a bridge 630. The bridge 630 incorporates a focal mechanism that can be constructed similar to those that were disclosed with previous embodiments, and which can be controlled by knob 652. As depicted, bridge 630 is jointed, so that the first housing 610 and second housing 620 can be collapsed into a smaller space as depicted in FIG. 7. Both housing elements 610 and 620 include elements similar to those shown in the embodiment of FIG. 2, including an eyepiece 616, 617, an objective 618, 619, focal elements (not shown), and prism systems (also not shown) if image inversion is required. Mounted on top of the bridge 630 is a dazzling module 640. The dazzling module 640 includes a power switch 626, a first button 642, and a second button 643. The power switch 626 turns the dazzling module 640 "on" or "off." As explained below, the first button 642 activates the dazzling module 640 in low power mode, while the second button 643 activates the dazzling module 640 in high power mode. The dazzling module further includes a dazzling laser 650. This particular embodiment could employ a wavelength of 532 nm, making it a green laser, with a power output of 4.9 mW.
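The sketch referenced above is a minimal rendering of the range-then-fire confirmation; the class, callback names, and structure are illustrative assumptions rather than the disclosure's implementation:

```python
import time

CONFIRM_WINDOW_S = 5.0   # second press must follow within this window

class TwoPressTrigger:
    """Sketch of the two-press confirmation described above (assumed design)."""
    def __init__(self, rangefinder, laser):
        self.rangefinder = rangefinder   # callable returning range in meters
        self.laser = laser               # object exposing fire(range_m)
        self._armed_at = None
        self._range_m = None

    def press(self):
        now = time.monotonic()
        if self._armed_at is None or now - self._armed_at > CONFIRM_WINDOW_S:
            # First press (or a stale arm): measure range and arm the device.
            self._range_m = self.rangefinder()
            self._armed_at = now
        else:
            # Second press inside the window: fire at eye-safe power for range.
            self.laser.fire(self._range_m)
            self._armed_at = None
```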
The drive circuit of the laser is adapted to limit the actual power output of the laser so that the effective safe dazzling range of the laser would be limited to 50 yards when activated in low power mode (the first button 642), and more than 150 yards when activated in high power mode (the second button 643). Typically, the way that a user would utilize the non-lethal dazzling device 600 disclosed in FIGS. 4-7 would be to point the non-lethal dazzling device 600 at the hostile actor and use the focal knob 652 to acquire the hostile actor's face. Once the hostile actor's face is in focus, the user would then press either the first button 642, if the user is less than 50 yards away, or the second button 643 if the user is more than 200 yards away. The distances and power levels are example distances, and it is anticipated that dazzlers would be made with ranges for typical structures such as churches or shopping malls. Turning to FIG. 8, a simplified schematic block diagram for the embodiment of dazzling module 640 disclosed in FIGS. 4-7 is illustrated. A switch 626 couples a battery 718 to a power circuit 702, which provides power to the remaining components of the dazzling module. The power circuit 702 can be implemented by a variety of means known in the art, such as a switching power supply or a simple linear supply circuit. A first pushbutton switch 642 serves to couple a low power drive circuit 704 to Laser Drive 710, while a second pushbutton switch 643 couples a high power drive circuit 706 to the Laser Drive 710. The low power drive circuit 704 programs the Laser Drive 710 to limit the power to the laser 650, while the high power drive circuit 706 allows the Laser Drive 710 to provide the maximum permissible power to the laser 650 (a sketch of this two-button selection is given below). The low power drive circuit 704 and high power drive circuit 706 may provide analog inputs or digital inputs to the Laser Drive 710, whose operation is similarly bound only by the prior art. Finally, the Laser Drive 710 powers the laser 650, which will produce an appropriate intensity beam. FIGS. 9 and 10 discuss an embodiment that is similar to that disclosed in FIGS. 3 and 3a. In particular, FIG. 9 depicts a non-lethal dazzling device 800 that includes a first housing element 810 and a second housing element 820. The first housing element 810 is coupled to the second housing element 820 by a bridge 830. As depicted, the bridge 830 is jointed and incorporates an articulating hinge, which allows the non-lethal dazzling device 800 to be compressed into a smaller form factor for storage. The first housing element 810 includes an eyepiece 816 and an objective 818. The second housing element 820 includes an eyepiece 817 and an objective 819. The first housing element 810 also includes a diopter focus 853, and the second housing element 820 includes a diopter focus 854. Other optical elements can be included as described with regards to the other embodiments disclosed herein, or as known in the prior art. The bridge 830 includes a range finder 860. The rangefinder 860 can be similar to that disclosed with regards to the embodiment of FIGS. 3 and 3a. The bridge 830 also includes a dazzling laser 840. The laser 840 can have, for example, a power output of 4.9 mW with a wavelength of 532 nm, making it a green laser. It is anticipated that, to minimize the profile of the complete device, laser 840, rangefinder 860, photosensor 270 and all other associated components could be contained inside the first housing element 810 or the second housing element 820, with the corresponding image displayed to the user on video display 275.
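The sketch referenced above renders the FIG. 8 selection logic in a few lines; the numeric ceilings are illustrative placeholders, not drive levels specified in the disclosure:

```python
# Two-button selection: each button programs the laser drive to a
# different ceiling. Power values below are assumed for illustration.
LOW_POWER_LIMIT_MW = 1.0    # assumed ceiling for the short-range button 642
HIGH_POWER_LIMIT_MW = 4.9   # assumed ceiling for the long-range button 643

def laser_drive_setpoint(low_button: bool, high_button: bool) -> float:
    """Return the drive ceiling in mW, or 0.0 when neither button is pressed."""
    if low_button:
        return LOW_POWER_LIMIT_MW
    if high_button:
        return HIGH_POWER_LIMIT_MW
    return 0.0

print(laser_drive_setpoint(True, False))   # 1.0 -> low power mode
print(laser_drive_setpoint(False, True))   # 4.9 -> high power mode
```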
Turning to FIG. 10, a simplified schematic diagram of a circuit for use with the non-lethal dazzling device 800 disclosed in FIG. 9 is illustrated. A battery 918 provides power to a power regulator 902, which provides power to a power saving processor 904, which keeps the dazzler in a very low power mode (say five microwatts) until activated, yielding a typical lithium battery cell standby life of more than five years. Lower standby power modes can be achieved through the use of isolating electronics, mechanical switching, or the use of relays or other similar mechanisms. The power processor 904 accepts inputs from a laser trigger 880, a range finder trigger 878, and a video recorder trigger 882. The power processor 904 is coupled to the main processor 965. The main processor 965 can incorporate its own storage, including random access memory for computations and short-term storage, and FLASH memory for long term storage. The main processor 965 can also incorporate its own support circuitry. However, given the ability to record video, at least some external memory 966 will be required. The external memory 966 can include FLASH memory, magnetic RAM, or other types of storage. The main processor 965 further controls a laser power control circuit 970. The laser power control circuit 970 can be programmed via analog inputs generated by the main processor 965, or via digital commands. The laser power control circuit 970 controls a number of laser drive circuits (there are two illustrated). In the illustrated embodiment, the laser power control circuit 970 controls two laser drive circuits: a first laser drive circuit 974, which drives a first laser 975, and a second laser drive circuit 978, which drives a second laser 979. For example, the first laser drive circuit 974 and first laser 975 may be adapted for close range dazzling, while the second laser drive circuit 978 and second laser 979 may be adapted for longer range dazzling. Alternately, second laser 979 may be adapted for transmission and range finding the target in conjunction with range finding receiver 984. Alternately, second laser 979 may be adapted to supplement first laser 975 by being offset by a typical interpupil distance, or may provide a more divergent or less divergent beam than first laser 975. The main processor 965 is also connected to a range finder receiver 984, which functions as discussed previously with other embodiments. The main processor 965 can automatically program the laser drive circuit based on input from the focus 977, the rangefinder 984, or a combination thereof. As discussed above, the rangefinder receiver 984 could report a range of an object, and the main processor 965 could set the power via the laser power control 970 appropriately. Alternatively, the main processor 965 could monitor the setting of the focus 977 and use that as the primary means to program the laser power control 970. In such a case, the video processing circuitry 982 could implement a Gaussian Filter, or other mechanism known in the art, to ensure that the object being aimed at is actually in focus; this would prevent accidental or intentional irreversible retinal disorder. The main processor 965 also controls a video recording circuit, which can comprise a camera 980 as well as video processing circuitry 982. Camera 980 and video processing circuitry 982 may also be used to detect rapid movement of the dazzling device using well known video processing techniques, as sketched below.
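The sketch referenced above gates the laser on focus quality and frame-to-frame stability. It uses a Laplacian sharpness measure as a stand-in for the Gaussian-filter focus check named in the text; the thresholds and function names are assumptions:

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def sharpness(frame: np.ndarray) -> float:
    """Mean squared Laplacian response: higher means a sharper image."""
    h, w = frame.shape
    acc = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc += (frame[y-1:y+2, x-1:x+2] * LAPLACIAN).sum() ** 2
    return acc / ((h - 2) * (w - 2))

def laser_enabled(prev, curr, focus_thresh=50.0, motion_thresh=10.0):
    """Enable only when the aimed image is in focus and the scene is stable.

    Thresholds are illustrative assumptions, not calibrated values."""
    in_focus = sharpness(curr) > focus_thresh
    stable = np.abs(curr - prev).mean() < motion_thresh
    return in_focus and stable
```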
The video processing circuitry 982 and main processor 965 would reduce or turn off laser driver 974 until the dazzler stabilized and range finder 984 could report a stable distance to processor 965. It should be noted that digital cameras and image processing are well known in the art at this point, and any suitable prior art mechanism can be used. The video processing circuitry 982 can also be used to detect when the non-lethal dazzling device 800 is quickly moved; for example, a user may have focused on a hostile actor 200 yards away, and then suddenly turned to her left to focus on a potential hostile actor 10 yards away. If the laser were maintained at the same laser drive power level, this would result in a greater intensity laser spot at a distance to target of 10 yards than at 200 yards, which could damage the potential hostile actor's eyes, so the video processing circuitry 982 could act to disable the laser until a proper range is calculated using the mechanisms discussed herein. Additionally, any suitable inertial sensor, such as an electronic compass, accelerometer, electronic gyroscope or the like, could be used and incorporated into device safety switches 986, thus preventing the operation of the dazzling laser unless the rangefinder reading and power level were correct. The main processor also couples to input/output port 983, which can be used to access recorded video or to program the non-lethal dazzling device 800 with software updates, settings, etc. The port may operate in a wired fashion (say USB, JTAG, RS488) or wirelessly (say Wi-Fi, 5G, Bluetooth or inductive coupling). Similarly, the main processor 965 monitors an anti-theft device 990, which, when active, will cause the processor 965 to prevent any functioning of the non-lethal dazzling device 800. Anti-theft device 990 may use any of the anti-theft features known in the art. For example, anti-theft device 990 may allow a remote device, such as a smartphone or a server, to send a signal to the anti-theft device 990 over a wireless network that would disable the non-lethal dazzling device 800. Alternatively, anti-theft device 990 may only operate if it detects a signal or response from a second device, such as a base station or an RFID device. Alternatively, anti-theft device 990 may utilize geo-fencing; i.e., it will only operate if it is placed in a particular bounded area or areas. Anti-theft device 990 can make use of various biometric authentication mechanisms, such as a finger print reader, voice recognition, face recognition, etc. It should be noted that various other means known in the art can also be used by the anti-theft device 990. In addition, the main processor 965 monitors one or more device safety switches 986, such as housing interlocks used to turn off the laser and associated circuitry if any user service covers are opened. In practice, a user would pick up the non-lethal dazzling device 800 and aim the device 800 at a hostile target (not shown). The user would then use the center focus dial 852 or the diopter focus dials 853, 854 to bring the hostile target's face into focus. The user then presses the range finder trigger 878, activating the transmitter of the range finder 979. The range finder receiver 984 would then report a range to the main processor 965. The main processor 965 may notify the user that the dazzling laser is going to be activated by, for example, flashing an LED, or activating an audible chirp using a speaker (not shown).
This will allow the user to ensure that the hostile target is still at approximately the same range as when s/he activated the rangefinder, and that no targets are closer than the hostile target, and therefore in danger of suffering irreversible retinal disorder. The user would then press the laser trigger 880 to activate the non-lethal dazzling device 800. The main processor 965 then programs the laser power control 970 and activates the appropriate laser drive circuit and the appropriate laser. In one embodiment, the main processor 965 selects the laser drive circuit and laser based on the range information received by the range finder receiver 984. In another embodiment, focusing operations are fully automatic using well known automatic focus techniques. The potentially high-power output of the non-lethal dazzling device 800 allows the device 800 to be used at long ranges, such as more than 100 yards. In addition, the high power output of the non-lethal dazzling device 800 can also be useful if countermeasures, such as dark sunglasses, are used, or if the environment contains smoke or dust that would affect the received power level. The previously disclosed non-lethal dazzling device embodiments are targeted to military and law-enforcement personnel, as well as other trained users. In particular, the previously disclosed embodiments are designed to be used at range by trained users that are able to target a hostile actor's face. However, the principles of a non-lethal dazzling device can also be applied to a device intended for use by the general populace. The advantages of such a device are readily apparent. In particular, a general-purpose device could be used by a person in a typical self-defense situation, i.e., when unexpectedly confronted by a hostile actor. The non-lethal dazzling device disclosed in FIG. 11 is one potential embodiment of a personal non-lethal dazzling device 1000. In particular, the personal non-lethal dazzling device 1000 incorporates a back housing 1002 and a front housing 1004. The front housing 1004 is slideably coupled to the back housing 1002, so that the front housing 1004 can slide away from the back housing as depicted in FIG. 11a. When the front housing 1004 is slid away from the back housing 1002, an internal panel 1010 is exposed. When the personal non-lethal dazzling device 1000 is in its most compact form, it could be sized to be the same size as a credit card when laid flat, and of the same thickness as 2-4 typical credit cards laid on top of one another. This will allow the personal non-lethal dazzling device 1000 to be stored in a pocket, common wallet or money clip, so that the personal non-lethal dazzling device 1000 can be concealed from view until needed. The front panel 1004 includes a trigger 1006, which in this case is a simple button. The front panel also includes a lanyard hole 1008. Turning to the user panel 1010, the user panel includes an aiming aid 1014, which is disposed above a laser array 1016. In an embodiment, aiming aid 1014 is a simple cutout window. In other embodiments, the aiming aid may be a lens, electronic viewfinder, camera, or other targeting aids known in the art. The laser array 1016 could comprise an array of a number of separate lasers, such as forty-nine, although a different number of lasers could also be used. In such a case, each of the lasers could be, for example, a class 1 laser, or a class IIa laser operating at 532 nm with a power output of less than 1 milliwatt each.
Alternatively, a single higher power laser along with a lensing system, such as a beam-spreader, light pipes or other beam expanding techniques discussed herein, could be used as the laser array 1016. In such a case, a 532 nm laser with a power output of 4.9 milliwatts or greater could be used, along with a suitable beam-spreader technique. The user panel 1010 also includes a proximity sensor 1012. The proximity sensor 1012 can be, for example, an infrared or ultrasonic proximity sensor. The proximity sensor 1012 is primarily intended to prevent operation of the personal non-lethal dazzling device 1000 when a person is within close proximity to the device. For example, the proximity sensor 1012 may inhibit operation when any object is detected within 0.5 meters of the proximity sensor. Proximity sensor 1012 may also be duplicated on the back side and operate in conjunction with the front side proximity sensor to prevent operator self-dazzling. These distance limits (say a minimum of four feet to the target in front and a maximum of one foot to the user at the rear) can help prevent a user from dazzling him or herself with the personal non-lethal dazzling device 1000, or from operating the personal non-lethal dazzling device 1000 in circumstances where it could cause irreversible retinal disorder. Turning to FIG. 12, an exemplary simplified circuit diagram that implements the personal non-lethal dazzling device 1000 is depicted. In particular, the circuit includes a battery 1064. The battery 1064 is sized to allow for a reasonable number of uses, such as, for example, 500 uses, and will have suitable durability, such as a ten-year life. The battery 1064 could be replaceable or permanent. A slider switch 1062 is coupled between the battery 1064 and a power circuit 1052. The power circuit 1052 is adapted to provide conditioned power to the remaining components of the circuit, and can operate using any of the ways known in the art, such as via a linear regulator or a switching power supply. A pushbutton 1006 operates to activate the personal non-lethal dazzling device 1000. The proximity sensor 1012 acts as a switch, disabling the device when an object is detected in close proximity. Finally, a laser drive 1054 powers a laser array 1016. The laser drive 1054 can operate as previously disclosed herein. In an alternative embodiment, proximity sensor 1012 may be a range finder as previously disclosed herein, providing ranging information to adjust the output of the dazzling laser. In operation, a user will take the personal non-lethal dazzling device 1000 out of storage, i.e., out of the user's purse, wallet, money clip, pocket, etc., and slide the front housing 1004 away from the back housing 1002. The user will then use the aiming aid 1014 to target the hostile actor. The proximity detector 1012 will allow operation of the personal non-lethal dazzling device 1000 as long as no object is within 0.5 meters of the proximity detector 1012 in the direction the proximity detector 1012 is facing (a simplified sketch of this interlock logic is given below). Once the hostile actor's face is targeted, the user will use the trigger 1006 to activate the laser array 1016, which will either dazzle or at least warn the hostile actor, depending on the range from the hostile actor to the activated device. The foregoing description of the disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or to limit the disclosure to the precise form disclosed.
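The interlock sketch referenced above; the rear threshold and the callback are assumptions (the front 0.5 m figure comes from the passage, the rear check mirrors the dual-sensor orientation idea described earlier):

```python
# Interlock: the trigger only fires the array when the front proximity
# sensor sees no object within 0.5 m and the rear sensor confirms the
# user is close behind the device (correct orientation).
FRONT_MIN_M = 0.5    # inhibit if anything is closer than this in front
REAR_MAX_M = 0.3     # roughly one foot: user should be close behind (assumed)

def trigger_pressed(front_range_m, rear_range_m, fire_laser):
    """Fire only when orientation and clearance checks pass.

    Range arguments are meters, or None when the sensor sees nothing."""
    front_clear = front_range_m is None or front_range_m > FRONT_MIN_M
    rear_user = rear_range_m is not None and rear_range_m < REAR_MAX_M
    if front_clear and rear_user:
        fire_laser()
        return True
    return False
```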
The description was selected to best explain the principles of the present teachings and practical application of these principles to enable others skilled in the art to best utilize the disclosure in various embodiments and various modifications as are suited to the particular use contemplated. It should be recognized that the words “a” or “an” are intended to include both the singular and the plural. Conversely, any reference to plural elements shall, where appropriate, include the singular. It is intended that the scope of the disclosure not be limited by the specification, but be defined by the claims set forth below. It should also be noted that a variety of the features discussed herein may be combined with other features discussed herein. In addition, although narrow claims may be presented below, it should be recognized that the scope of this invention is much broader than presented by the claim(s). It is intended that broader claims will be submitted in one or more applications that claim the benefit of priority from this application. Insofar as the description above and the accompanying drawings disclose additional subject matter that is not within the scope of the claim or claims below, the additional inventions are not dedicated to the public and the right to file one or more applications to claim such additional inventions is reserved. | 45,883 |
11859951 | DETAILED DESCRIPTION OF THE INVENTION Referring now to the figures of the drawing in detail and first, in particular, toFIG.1thereof, there is shown a basic configuration of a first exemplary embodiment of an electronic irritation device according to the invention. The irritation device10comprises a plurality (here: three) of electronic irritation signal modules12. Each of these irritation signal modules12contains a plurality of emitters16, preferably at least one optical emitter16and at least one acoustic emitter16. The optical emitters16contain electronic illuminants such as, for example, LEDs, LED arrays, laser diodes or laser arrays, and the acoustic emitters16contain electronic sound generators such as piezo sound transducers, for example. In addition, each of these irritation signal modules12contains a control device18, which is connected to the emitters16in a wired or wireless manner for the purpose of controlling the emitters16. As illustrated inFIG.1, each of the irritation signal modules12is substantially cylindrical in shape and they are stacked one above another. In addition, each two adjacent irritation signal modules12are connected to one another via a connection element32, which may be mechanical or electromagnetic, for example, such that all the irritation signal modules12are coupled to one another and form a unit that can be deployed as a common projectile. As illustrated inFIG.1, the irritation signal modules12are each provided with at least one (e.g. mechanical and/or electrical) unlocking mechanism34, by which one of the connection elements32can be unlocked, such that the irritation signal modules12can be released from one another and thus distributed spatially. In the exemplary embodiment inFIG.1, moreover, a mechanical expansion mechanism36comprising a spring composed of metal or plastic is provided between each two irritation signal modules12. These springs are tensioned in the initial state of the irritation device10, in which the irritation signal modules12are coupled to one another via the connection elements32. If the connection elements32are unlocked and the irritation signal modules12are thus released from one another, then the expansion mechanisms36force the irritation signal modules12apart and thereby assist and accelerate the spatial distribution thereof. Referring now toFIG.2, there is shown a more detailed construction of an irritation signal module12fromFIG.1. The components of the irritation signal module12are arranged in/on a module housing14. The optical/acoustic emitters16are positioned, for example, on the cylinder circumference of the module housing14. They are controlled by the control device18, preferably via an interposed driver20, in order to set in particular the amplitudes, frequencies, phases and signal patterns of the optical/acoustic irritation pulses emitted. The control device18is additionally connected to an activation switch22and/or a disengaging mechanism23, which can be actuated before the irritation device is launched, for example. Moreover, the control device18preferably contains a timer24. Furthermore, the irritation signal module12comprises a distance sensor25(for example, electromagnetic, acoustic or optical) for detecting a distance between the irritation signal module12and an object, a position sensor26for detecting a position of the irritation signal module12(for example, by means of a GPS or GNSS system) and/or an acceleration sensor27for detecting launching or impact of the irritation signal module12.
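The emitter parameters set by the control device18via the driver20can be pictured as a small data structure. The following is a minimal sketch under stated assumptions: the field names, units and numeric values are hypothetical, and the disclosure does not prescribe any particular software interface.

```python
# Minimal sketch of how a control device 18 might parameterize the irritation
# pulse trains handed to the driver 20. All field names and values here are
# hypothetical illustrations; the disclosure only states that amplitudes,
# frequencies, phases and signal patterns are set.
from dataclasses import dataclass

@dataclass
class PulsePattern:
    amplitude: float      # 0.0..1.0 relative drive level
    frequency_hz: float   # pulse repetition frequency
    phase_deg: float      # phase offset relative to other modules
    pattern: str          # e.g. "strobe", "sweep", "burst"

def program_driver(optical: PulsePattern, acoustic: PulsePattern) -> None:
    # In a real module this would write registers on the driver hardware;
    # here we only show the shape of the interface.
    for kind, p in (("optical", optical), ("acoustic", acoustic)):
        print(f"{kind}: {p.pattern} at {p.frequency_hz} Hz, "
              f"amplitude {p.amplitude:.2f}, phase {p.phase_deg} deg")

program_driver(
    optical=PulsePattern(amplitude=1.0, frequency_hz=12.0, phase_deg=0.0,
                         pattern="strobe"),
    acoustic=PulsePattern(amplitude=0.8, frequency_hz=4.0, phase_deg=90.0,
                          pattern="burst"),
)
```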
In the exemplary embodiment inFIG.2, the irritation signal module12, in an optional addition, contains a communication device28. The sensors25,26,27and the communication device28are likewise connected to the control device18. The activation switch22, the disengaging mechanism23, the sensors25,26,27and the communication device28serve as an activation mechanism for activating the control device18, so that the latter, directly upon the activation or, with the aid of the timer24, a predetermined time after the activation, actuates the unlocking mechanism34for unlocking the connection element32in order to release the irritation signal modules12from one another. In this regard, the irritation signal modules12can be released from one another, for example, a predetermined time duration after an actuation of the activation switch22or of the disengaging mechanism23, upon the object distance detected by the distance sensor25falling below a predetermined limit value, upon a predetermined position being reached by the irritation device10, or a predetermined time duration after launching or impact of the irritation device10. The object distance detected by the distance sensor25can additionally be used by the control device18to adapt the optical or acoustic irritation signals emitted by the emitters16to the object distance. By way of example, the brightness of light pulses can be adapted to the object distance. The communication device28can be used for receiving an activation signal from a remote control. In addition, the communication device28can be used for the communication of the control devices18of the irritation signal modules12of the irritation device10with one another. In this regard, for example, an activation effected at one irritation signal module12(e.g., by way of an actuation of the disengaging mechanism23) can be communicated to the other irritation signal modules12or the control devices18thereof, such that the unlocking mechanisms34of all the irritation signal modules12can be actuated synchronously. In this way, moreover, the irritation signals of the emitters16of the various irritation signal modules12can be coordinated with one another. By way of example, the light pulses or sound pulses can be emitted synchronously or in a well-defined pattern. As illustrated inFIG.2, the irritation signal module12is preferably also equipped with an energy storage device30, preferably a rechargeable energy store, for supplying energy to the electronic components of the irritation signal module12. Moreover, the irritation signal module12is optionally also provided with a self-destruction device48. If third parties gain possession of an irritation signal module12and there is a risk of the irritation signal module12being used against its original operator, then the self-destruction device48can be activated by remote control via the communication device28in order to destroy the irritation signal module12, in particular the control device18thereof. FIG.3shows the basic construction of a second exemplary embodiment of an electronic irritation device of the invention. Identical or functionally corresponding components are provided with the same reference numerals as inFIG.1. In the exemplary embodiment inFIG.3, a plurality (here: three) of substantially cylindrical irritation signal modules12are accommodated in a substantially spherical or cylindrical housing38. Accommodation in a common housing38has the result that the irritation signal modules12are coupled to one another and a unit serving as a projectile is formed.
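The release criteria and the distance-adaptive signaling described above in connection withFIG.2lend themselves to a compact illustration. The following is a minimal sketch only; the thresholds, delays and brightness-scaling rule are hypothetical, since the disclosure leaves these values open.

```python
# Minimal sketch of the release logic of the control device 18. The criteria
# mirror those listed in the text (switch plus timer, distance limit, target
# position, launch or impact plus a delay), but every numeric value and
# function name is a hypothetical illustration.
from typing import Optional

RELEASE_DELAY_S = 1.5     # hypothetical delay after switch actuation
DISTANCE_LIMIT_M = 10.0   # hypothetical object-distance limit value
IMPACT_DELAY_S = 0.2      # hypothetical delay after detected launch/impact

def should_release(t_since_activation_s: Optional[float],
                   object_distance_m: Optional[float],
                   at_target_position: bool,
                   t_since_impact_s: Optional[float]) -> bool:
    """Return True when any of the disclosed release criteria is met."""
    if t_since_activation_s is not None and t_since_activation_s >= RELEASE_DELAY_S:
        return True
    if object_distance_m is not None and object_distance_m < DISTANCE_LIMIT_M:
        return True
    if at_target_position:
        return True
    if t_since_impact_s is not None and t_since_impact_s >= IMPACT_DELAY_S:
        return True
    return False

def brightness_for_distance(object_distance_m: float) -> float:
    # Hypothetical adaptation rule: drive harder at greater object distances,
    # clamped to the emitter's 0..1 relative range.
    return max(0.1, min(1.0, object_distance_m / DISTANCE_LIMIT_M))

# Example: the distance sensor reads 4 m -> release, and scale brightness down.
assert should_release(None, 4.0, False, None)
print(f"relative brightness: {brightness_for_distance(4.0):.2f}")
```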
In the exemplary embodiment inFIG.3, the housing38is composed of a plurality of housing segments40in the form of housing shells. The housing segments40are connected to one another by connection elements42, which may be mechanical or electromagnetic, for example. In the region of the connection elements42, moreover, unlocking mechanisms44(for example, mechanical or electrical) are provided, by which the connection elements42can be unlocked and the housing segments40are thereby released from one another, such that the housing38opens and frees the irritation signal modules12. In order to assist or accelerate the spatial distribution of the freed irritation signal modules12, the irritation device10optionally contains an expansion mechanism46, for example in the form of a gas cartridge. In the exemplary embodiment inFIG.3, the control device18of one of the irritation signal modules12is used as a master controller. This master controller detects an activation by an activation mechanism and then controls the unlocking mechanisms44for unlocking the connection elements42. The activation mechanism can be part of the irritation signal module12with the master controller18(seeFIG.2above) or can be provided separately in/on the housing38of the irritation device10. Moreover, instead of a master controller among the control devices18of the irritation signal modules12, a separate controller can be provided in the irritation device10. Otherwise, the second exemplary embodiment inFIG.3corresponds to the first exemplary embodiment inFIG.1. In particular, the irritation signal modules12of the irritation device10inFIG.3can also be configured in accordance withFIG.2. In a further exemplary embodiment, as an embodiment variant of the second exemplary embodiment, a housing38can be used which can be destroyed by a mechanism in order to free the irritation signal modules12. In a further exemplary embodiment, as a further embodiment variant of the second exemplary embodiment, a housing38can be used which breaks up in the event of impact and thus frees the irritation signal modules12. For this purpose, the housing38is shaped from a brittle material, for example. The following is a summary list of reference numerals and the corresponding structure used in the above description of the invention:
10 Electronic irritation device
12 Electronic irritation signal modules
14 Module housing
16 Emitter
18 Control device
20 Driver
22 Activation switch
23 Disengaging mechanism
24 Timer
25 Distance sensor
26 Position sensor
27 Acceleration sensor
28 Communication device
30 Energy store
32 Connection element
34 Unlocking mechanism
36 Expansion mechanism
38 Housing
40 Housing segments
42 Connection elements
44 Unlocking mechanism
46 Expansion mechanism
48 Self-destruction device
| 9,824 |
11859952 | DETAILED DESCRIPTION In the following detailed description, references are made to the accompanying drawings that form a part hereof and that show, by way of illustration, specific embodiments or examples. It must be understood that the disclosed embodiments are merely illustrative of the concepts and technologies disclosed herein. The concepts and technologies disclosed herein may be embodied in various and alternative forms, and/or in various combinations of the embodiments disclosed herein. The word “illustrative,” as used in the specification, is used expansively to refer to embodiments that serve as an illustration, specimen, model, sample, or pattern. Additionally, it should be understood that the drawings are not necessarily to scale, and that some features may be exaggerated or minimized to show details of particular components. In other instances, well-known components, systems, materials or methods have not been described in detail in order to avoid obscuring the present disclosure. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure. Referring now to the drawings, in which like numerals represent like elements throughout the several figures, aspects of an armored plate assembly will be described. Referring first toFIGS.1A-1B, an armored plate assembly100is shown, according to an illustrative embodiment of the concepts and technologies disclosed herein. It should be understood that the shape of the armored plate assembly100as illustrated inFIGS.1A-1Bis merely illustrative of one contemplated embodiment, as other shapes, configurations, dimensions (absolute and/or relative), and/or sizes (absolute and/or relative) are possible and are contemplated. As such, it should be understood that this example polygonal shape is illustrative, and therefore should not be construed as being limiting in any way. The armored plate assembly100is illustrated as a polygonal structure having a top edge102, a bottom edge104, a first side edge106, a second side edge108, a first angled edge110, and a second angled edge112. It should be understood that the polygonal shape illustrated inFIGS.1A-1Bis purely illustrative, as other polygonal and/or non-polygonal shapes having any number of edges are possible and are contemplated, as are other shapes such as circular, elliptical, irregular, curved, etc. As such, the illustrated embodiment is illustrative and should not be construed as being limiting in any way. The armored plate assembly100also can have a first surface114(illustrated as the surface facing the viewing plane inFIG.1A), which can be referred to herein as a “front side,” “front surface,” or “strike face.” The armored plate assembly100also can have a second surface116, which can be on the opposite side of the armored plate assembly100relative to the first surface114, and which can be referred to herein as a “back side,” “back surface,” or “body face.” The second surface116is not visible inFIG.1A, but is visible as the surface facing the viewing plane inFIG.1B. A line A-A is also illustrated inFIG.1A, and a cross-sectional view of the armored plate assembly100viewed along line A-A will be illustrated and described in more detail below with reference toFIG.3. 
It should be understood that the illustrated embodiment of the armored plate assembly100shown inFIGS.1A-1Bis illustrative and should not be construed as being limiting in any way. Turning now toFIG.2, an exploded view of an armored plate assembly100will be illustrated and described, according to an illustrative embodiment of the concepts and technologies disclosed herein. As shown inFIG.2, the illustrated embodiment of the armored plate assembly100can be provided in some embodiments by the assembly of multiple components and/or materials. In particular,FIG.2illustrates an example embodiment of the armored plate assembly100, where the armored plate assembly100can be formed by an armor plate subassembly and a coating of a material, as will be explained in more detail herein. The armor plate subassembly can include a base plate200, a gap layer202, and a secondary plate204. This armor plate subassembly can be coated by a coating206, which in the illustrated embodiment can include a layer of polyurea or another material. As shown inFIG.2, the secondary plate204can have fold lines208. These fold lines may or may not be visible, and are shown for purposes of illustrating and describing embodiments of the concepts and technologies disclosed herein. As such, it should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. According to various embodiments of the concepts and technologies disclosed herein, the base plate200can be configured to stop a bullet or other projectile. Thus, in some embodiments, the base plate200can correspond to a plate for body armor, a vehicle, or the like. As is generally understood, the base plate200can be formed, in some embodiments, from steel. In some other embodiments, the base plate200can be formed from other metals, alloys, ceramics, polymers, composite materials, combinations thereof, or the like. In some embodiments, the base plate200can be formed from a member of the SSAB® HARDOX® family of steels (e.g., the SSAB® HARDOX®600brand steel); other abrasion-resistant steels (e.g., AR600 steel, AR500 steel, etc.); and/or other military-rated and/or non-military-rated ballistic steel. According to various embodiments, the base plate200can have various thicknesses. In some embodiments, the thickness can be in a range from 4.5 mm to 7 mm. In the illustrated embodiment, the base plate200is a steel plate having a thickness of 5 mm. Because other thicknesses are possible and are contemplated, it should be understood that the above-listed example steels and thicknesses are illustrative, and therefore should not be construed as being limiting in any way. In some embodiments of the concepts and technologies disclosed herein, the base plate200also can be finished and/or coated. For example, a corrosion-preventative treatment and/or coating can be applied to the base plate200and/or the base plate200can be sanded and/or smoothed for various applications. It should be understood that these examples are illustrative, and therefore should not be construed as being limiting in any way. The gap layer202can be provided in some embodiments by a continuous and/or non-continuous layer of material. In one contemplated embodiment, the gap layer202can be provided by a layer of natural cork having a thickness in a range from 1 mm to 7 mm, though other materials and/or other thicknesses are possible and are contemplated.
In the illustrated embodiment, the gap layer202can be formed from natural cork having a thickness of 0.25 inches (˜6.35 mm). Because other thicknesses are possible and are contemplated, it should be understood that this is only one contemplated embodiment and therefore should not be construed as being limiting in any way. In some embodiments, for example, the gap layer202can be formed from one or more polymers, woods, epoxies, ribbing, honeycomb structures, foams (e.g., formed from polymers, metals, or ceramics, etc.), combinations thereof, or the like, without departing from the scope of this disclosure. In some embodiments, the gap layer202can be configured to create a distance between the secondary plate204(if included as shown in the embodiment illustrated inFIG.2) and the base plate200, to contain spalling, and/or to prevent the spalling from easily exiting the armored plate assembly100. Because the gap layer202can accomplish and/or fill other functions and/or provide other benefits, it should be understood that these example functions/benefits are illustrative, and therefore should not be construed as being limiting in any way. According to some embodiments of the concepts and technologies disclosed herein, the secondary plate204can correspond to a layer of KEVLAR® brand material, other aramids, and/or other materials such as metals, polymers, fiber/resin composites, combinations thereof, or the like. In one contemplated embodiment, the secondary plate204can include only one layer of KEVLAR® brand material. Thus, unless more than one layer of material is specifically recited, the secondary plate204can include only one layer of KEVLAR® brand material or another material. In some embodiments, the secondary plate204(if included) can be configured to help arrest or otherwise stop projectile fragments (e.g., spalling) from exiting the armored plate assembly100. In particular, when the projectile engages the base plate200, fragments of the projectile may tend to be “sprayed” in a radial direction spreading parallel or substantially parallel to the engaged surface of the base plate200, where the spalling may spread radially from the impact site of the projectile. In some other instances, fragments and/or the projectile itself may spread in other directions (e.g., angularly, spherically, etc.). Because the projectile may ricochet and/or fragment in almost any direction, it should be understood that the above examples are illustrative, and therefore should not be construed as being limiting in any way. In some embodiments, the secondary plate204(if included) can be wrapped around the base plate200and the gap layer202, thereby containing the spalling and/or ricocheting or deflected projectile in the gap layer202and preventing secondary injury and/or damage from the projectile that engaged the base plate200. Because the secondary plate204can accomplish and/or fill other functions and/or provide other benefits, it should be understood that these example functions/benefits are illustrative, and therefore should not be construed as being limiting in any way. In some embodiments, the coating206can be applied to seal the armor plate subassembly of the base plate200, the gap layer202(if included), and the secondary plate204(if included), and/or to provide additional protection from spalling. In some embodiments, the coating206can correspond to a layer of polyurea or other material.
According to embodiments of the concepts and technologies disclosed herein, the coating206can correspond to a substantially continuous coating around the armor plate subassembly, wherein the substantially continuous coating can have a thickness in a range from approximately one sixteenth of an inch to about one quarter of an inch (i.e., about 1.5875 mm to about 6.35 mm) of polyurea. In the illustrated embodiment, the coating206can correspond to a substantially continuous coating of polyurea that can have a non-uniform thickness that can range from approximately 0.07 inches (˜1.9 mm) to about 0.15 inches (˜3.8 mm). It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. In some embodiments, the coating206also can be included to provide physical protection of the secondary plate204(e.g., if the secondary plate204is provided by aramid fibers, the coating206may cover and protect exposed aramid fibers). It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. In some embodiments, the coating206also can be used as a decoration layer (e.g., with labeling, brand names, colors, etc.). Because the coating206can accomplish and/or fill other functions and/or provide other benefits, it should be understood that these example functions/benefits are illustrative, and therefore should not be construed as being limiting in any way. According to various embodiments of the concepts and technologies disclosed herein, the base plate200can be joined to the gap layer202by tape, adhesives, mechanical fasteners, combinations thereof, or the like. The secondary plate204can be attached to the base plate200and/or the gap layer202by tape, adhesives, mechanical fasteners, combinations thereof, or the like. In some other embodiments, the secondary plate204can be formed as a sleeve and/or as a blank with fold lines208(as shown inFIG.2), and the secondary plate204can enwrap or encompass the base plate200and/or the gap layer202. In particular, in some embodiments the secondary plate204can be folded along one or more fold lines208to wrap the secondary plate204around the gap layer202and base plate200. This will be more apparent with reference toFIG.3below. At any rate, it should be understood that the illustrated embodiment is illustrative, and therefore should not be construed as being limiting in any way. Turning now toFIG.3, a cross-sectional view of the armored plate assembly100as illustrated and described with reference toFIGS.1A-2is illustrated, according to an illustrative embodiment of the concepts and technologies disclosed herein. As noted above,FIG.3illustrates a cross-sectional view of the armored plate assembly100as viewed along the line A-A illustrated inFIG.1. As can be seen inFIG.3, the base plate200can be attached to or can border the gap layer202and the secondary plate204can wrap the gap layer202and the base plate200. If the gap layer202is provided by a material (e.g., cork), for example instead of an air space, the gap layer202can be attached to the base plate200. If the gap layer202corresponds to an air gap or chamber, the gap layer202can border the base plate200. It should be understood that these examples are illustrative, and therefore should not be construed as being limiting in any way.
A layer of the coating206is also visible inFIG.3, where the coating206wraps or encompasses, in some embodiments, at least a portion of the armor plate subassembly formed from the base plate200and the gap layer202. In some other embodiments, the coating206can coat the entire subassembly of the base plate200and the gap layer202. The coating206can serve multiple purposes. In addition to providing additional strength and/or protection against spalling, the coating206also can protect against corrosion and/or other physical damage during shipping, storage, and/or use. Because the coating206can cover any amount of the armor plate subassembly, and because the coating206can provide additional and/or alternative benefits, it should be understood that the above examples are illustrative, and therefore should not be construed as being limiting in any way. Turning now toFIG.4, aspects of a method400for forming one embodiment of an armored plate assembly100will be described in detail, according to an illustrative embodiment of the concepts and technologies disclosed herein. It should be understood that the operations of the method400disclosed herein are not necessarily presented in any particular order and that performance of some or all of the operations in an alternative order(s) is possible and is contemplated. The operations of the method400have been presented in the demonstrated order for ease of description and illustration. Operations of the method400may be added, omitted, and/or performed simultaneously, without departing from the scope of the concepts and technologies disclosed herein. In some embodiments of the concepts and technologies disclosed herein, the operations of the method400illustrated and described herein can be performed by a computer, for example a control module for an armored plate assembly fabrication machine. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. The method400can begin at operation402. At operation402, a base plate200can be obtained. As noted above, the base plate200can be formed from a suitable material such as a metal or metal alloy such as steel. As noted above with reference toFIGS.1A-3, the base plate200can be formed from other materials in various embodiments. Because the base plate200can be formed from other materials as illustrated and described herein, it should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation402, the method400can proceed to operation404. At operation404, a secondary plate204can be obtained. As explained above, the secondary plate204can be formed from a suitable material such as a metal or metal alloy, a polymer, a resin, an epoxy, an aramid fiber, and/or other materials. In one contemplated embodiment, the secondary plate204can be formed from a layer of a KEVLAR® brand material. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation404, the method400can proceed to operation406. At operation406, the base plate200can be assembled with a gap layer202and the secondary plate204. In some embodiments, the gap layer202can correspond to an air chamber and in some other embodiments, the gap layer202can correspond to a layer of material such as a ceramic, a wood (e.g., cork), a polymer, combinations thereof, or the like. 
Thus, the gap layer202can be formed by disposing one or more spacers between the secondary plate204and the base plate200in some embodiments, or by locating a material such as cork between the secondary plate204and the base plate200in some other embodiments. After encompassing or enwrapping the base plate200and the gap layer202with the secondary plate204, an armor plate subassembly that includes the base plate200, the gap layer202, and the secondary plate204can be obtained. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation406, the method400can proceed to operation408. At operation408, the coating206can be applied to the armor plate subassembly obtained in operation406. As explained above, in some embodiments, the coating206can correspond to polyurea, which can be sprayed onto or otherwise disposed on the armor plate subassembly obtained in operation406. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation408, the method400can proceed to operation410. The method400can end at operation410. Turning now toFIGS.5A-5E, some additional aspects of an armored plate assembly100will be illustrated and described, according to another illustrative embodiment of the concepts and technologies disclosed herein. In particular,FIG.5Aillustrates the assembly of a base plate200with a gap layer202(e.g., without including a secondary plate204). In the embodiment illustrated inFIG.5A, the base plate200can be formed from steel or another material (e.g., including the materials set forth with respect to the base plate200illustrated and described above with reference toFIGS.1A-3), and the gap layer202can be formed from natural cork or another material (e.g., including the materials set forth with respect to the gap layer202illustrated and described above with reference toFIGS.1A-3). In some other embodiments of the concepts and technologies disclosed herein, the gap layer202of the embodiment shown inFIG.5Acan be formed from materials illustrated and described with reference to the secondary plate204of the embodiment shown inFIGS.1A-3, though this is not a preferred embodiment. Thus, for purposes of the claims, the recitation “gap layer” excludes steel, aramids such as KEVLAR® brand materials, or the like unless specifically recited in the claims. In various embodiments of the concepts and technologies disclosed herein, the base plate200can be attached to and/or otherwise assembled with the gap layer202, and then additional structures can be attached to this assembly, as will be shown inFIG.5B. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. Turning now toFIG.5B, one or more containment structures or layers of material (hereinafter referred to as a “containment layer”)500can be applied to the assembled base plate200and gap layer202. In some embodiments, the containment layer500can correspond to multiple pieces of KEVLAR® brand tape or other materials that can wrap around the edges of the assembly of the base plate200to the gap layer202. In some other embodiments, the containment layer500can correspond to one or more pieces of plastic, rubber, or other materials. It should be understood that these example embodiments are illustrative, and therefore should not be construed as being limiting in any way.
As shown inFIG.5C, an armored plate subassembly502in accordance with the embodiment illustrated inFIGS.5A-5Bcan be obtained by applying the containment layer500to the assembled base plate200and gap layer202. A coating of polyurea or other embodiment of the coating206can be applied to the armored plate subassembly502(which as noted above can include the base plate200, the gap layer202, and the containment layer(s)500), in some embodiments. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. Turning toFIG.5D, a portion of a cross-sectional view of the armored plate assembly100formed using the methodology schematically illustrated inFIGS.5A-5Ccan be seen, as viewed along the line B-B shown inFIG.5C. As shown inFIG.5D, the containment layer500can wrap around some, a portion of, and/or all of the assembled base plate200and gap layer202, and the entire resulting armored plate subassembly502(of the base plate200, gap layer202, and containment layer500) can be coated by the coating206. In the illustrated embodiment, the containment layer500is illustrated as wrapping around edges of the armored plate subassembly502(where the edges can correspond to edges of the base plate200and the gap layer202). It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. As shown inFIG.5E, another embodiment of the armored plate assembly100can include an assembly of the base plate200to the gap layer202, with a cork layer or other material acting as an edge structure (referred to herein as an “edge structure”)504, and a containment layer500as illustrated and described above, which collectively can form another subassembly, which can be coated by polyurea or other embodiment of the coating206. According to various embodiments, the edge structure504can be formed from cork, metal, a KEVLAR® brand material, a rubber, a plastic or other polymer (e.g., manmade rubber, nitrile rubber, or other materials) or other materials. In some embodiments, the edge structure504can be included to reinforce edges of the armor plate subassembly (of the base plate200and the secondary plate204), to contain spall, and/or for other purposes. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. Turning now toFIG.6, aspects of a method600for forming an armored plate assembly100will be described in detail, according to another illustrative embodiment of the concepts and technologies disclosed herein. It should be understood that the operations of the method600disclosed herein are not necessarily presented in any particular order and that performance of some or all of the operations in an alternative order(s) is possible and is contemplated. The operations of the method600have been presented in the demonstrated order for ease of description and illustration. Operations of the method600may be added, omitted, and/or performed simultaneously, without departing from the scope of the concepts and technologies disclosed herein. In some embodiments of the concepts and technologies disclosed herein, the operations of the method600illustrated and described herein can be performed by a computer, for example a control module for an armored plate assembly fabrication machine. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. The method600can begin at operation602.
At operation602, a base plate200can be obtained. As noted above, the base plate200can be formed from a suitable material such as a metal or metal alloy such as steel. As noted above with reference toFIGS.1A-5E, the base plate200can be formed from other materials in various embodiments. Because the base plate200can be formed from other materials as illustrated and described herein, it should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation602, the method600can proceed to operation604. At operation604, a gap layer202can be obtained. As explained above, the gap layer202can correspond to a layer of material such as a ceramic, a wood (e.g., cork), a polymer, a resin, an epoxy, and/or other materials. According to various embodiments of the concepts and technologies disclosed herein, the gap layer202can be provided by a layer of natural cork. It should be understood that these example materials are illustrative, and therefore should not be construed as being limiting in any way. From operation604, the method600can proceed to operation606. At operation606, the base plate200can be assembled with the gap layer202and, in some embodiments, one or more edge structures504, and this subassembly (including or excluding the edge structures504) can be further assembled with one or more containment layer(s)500. In some embodiments, the containment layer(s)500can correspond to a layer of KEVLAR® brand tape, etc., which can be used to reinforce or strengthen the edges of the armor plate subassembly and/or for other reasons. In some other embodiments, as noted above, the containment layer500can include and/or can be provided by a layer or portion of rubber, polymer, and/or other materials such as a nitrile rubber. In some embodiments, the containment layer500can be wrapped around the edges of the base plate200and the gap layer202(and, optionally, the edge structure504), as shown inFIGS.5A-5E, for example. After encompassing or enwrapping some, a portion of, and/or all of the base plate200and the gap layer202(and optionally the edge structure504) with the containment layer500, an armored plate subassembly502that includes the base plate200, the gap layer202, the edge structure504(if included), and the containment layer500can be obtained. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation606, the method600can proceed to operation608. At operation608, the coating206can be applied to the armored plate subassembly502obtained in operation606. As explained above, in some embodiments, the coating206can correspond to polyurea, which can be sprayed onto the armored plate subassembly502obtained in operation606. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation608, the method600can proceed to operation610. The method600can end at operation610. Turning now toFIGS.7A-7D, some additional aspects of an armored plate assembly100will be illustrated and described, according to another illustrative embodiment of the concepts and technologies disclosed herein. In particular,FIG.7Aillustrates the assembly of a base plate200with a gap layer202(e.g., without a secondary plate204). 
In the embodiment illustrated inFIG.7A, the base plate200can be formed from steel or another material (e.g., including the materials set forth with respect to the base plate200illustrated and described above with reference toFIGS.1A-6), and the gap layer202can be formed from natural cork or another material (e.g., including the materials set forth with respect to the gap layer202illustrated and described above with reference toFIGS.1A-6). In various embodiments of the concepts and technologies disclosed herein, a subassembly can be obtained by assembling the base plate200to the gap layer202. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. As shown inFIG.7B, one or more containment structures or layers of material (hereinafter referred to as a “containment structure”)700can be applied to an armor plate subassembly that can be formed by the assembly of the base plate200to the gap layer202. It should be understood that in some embodiments, the containment structure700can correspond to one or more pieces of natural or synthetic rubbers or other polymers, and/or other materials that can be located along edges of the assembly of the base plate200to the gap layer202to reinforce these edges and/or to attempt to prevent spall from exiting the armored plate assembly100. In the illustrated embodiment, the functionality of the containment structure700can be provided by one or more pieces of 70 durometer nitrile rubber (“NBR-70”) or other materials. In the illustrated embodiment, the containment structure700can have a thickness of approximately 0.25 inches (˜6.35 mm). Because other materials and/or thicknesses of materials for the containment structures700are possible and are contemplated, it should be understood that this example embodiment is illustrative, and therefore should not be construed as being limiting in any way. As shown inFIG.7B, the subassembly formed by assembling the base plate200to the gap layer202can have a facing surface702, a rear surface (not visible inFIG.7B), and one or more periphery edges704A-B (hereinafter collectively and/or generically referred to as “edges704”). It can be appreciated that the additional two edges704of the armor plate subassembly are not visible in the view illustrated inFIG.7B. According to various embodiments of the concepts and technologies disclosed herein, the containment structure(s)700can be applied to the armor plate subassembly such that the containment structure(s)700can be located at or near, or can engage, the edges704. In some embodiments, the containment structure(s)700can be configured to reinforce or strengthen the edges704. In some other embodiments, such as the embodiment shown inFIG.7B, the containment structure(s)700can be included to prevent (or at least reduce) penetration of spall from within the armored plate assembly100to outside the armored plate assembly100. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. According to various embodiments of the concepts and technologies disclosed herein, the containment structure(s)700can be joined to the edges704of the assembled base plate200and gap layer202using an adhesive such as a glue or epoxy. In some embodiments, the glue can include a urethane-based glue and/or other types of adhesives.
In the illustrated embodiment, the containment structure(s)700can be joined to the edges704of the assembled base plate200and gap layer202using an adhesive referred to as ADTHANE 1800. Because other adhesives and/or mechanical fasteners are possible and are contemplated, it should be understood that this example embodiment is illustrative, and therefore should not be construed as being limiting in any way. As shown inFIG.7C, an armored plate subassembly706can be obtained by applying the containment structure(s)700to the edges704of the assembled base plate200and gap layer202, as illustrated and described above with reference toFIG.7B. As shown inFIG.7C, the containment structure700can cover one or more and/or all edges704of the assembled base plate200and gap layer202, though only two edges704are visible in the perspective view shown inFIG.7C. A coating206(e.g., a layer of polyurea or other embodiment of the coating206) can be applied to the armored plate subassembly706, in some embodiments, thereby encompassing the base plate200, the gap layer202, and the containment structure(s)700. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. Turning toFIG.7D, a portion of a cross-sectional view of the armored plate assembly100formed using the methodology schematically illustrated inFIGS.7A-7Cis illustrated, as viewed along view line C-C illustrated inFIG.7C. As shown inFIG.7D, the containment structure(s)700can run along or next to the edges704of the assembled base plate200and gap layer202, and the entire armored plate subassembly706(including the containment structure(s)700) can be coated by the coating206. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. Turning now toFIG.8, aspects of a method800for forming an armored plate assembly100will be described in detail, according to another illustrative embodiment of the concepts and technologies disclosed herein. It should be understood that the operations of the method800disclosed herein are not necessarily presented in any particular order and that performance of some or all of the operations in an alternative order(s) is possible and is contemplated. The operations of the method800have been presented in the demonstrated order for ease of description and illustration. Operations of the method800may be added, omitted, and/or performed simultaneously, without departing from the scope of the concepts and technologies disclosed herein. In some embodiments of the concepts and technologies disclosed herein, the operations of the method800illustrated and described herein can be performed by a computer, for example a control module for an armored plate assembly fabrication machine. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. The method800can begin at operation802. At operation802, a base plate200can be obtained. As noted above, the base plate200can be formed from a suitable material such as a metal or metal alloy such as steel. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation802, the method800can proceed to operation804. At operation804, a gap layer202can be obtained. 
As explained above, the gap layer202can be formed from a suitable material such as a polymer, a resin, an epoxy, wood, natural or manmade cork, steel wool, fibers, and/or other materials. In the embodiment shown inFIGS.7A-7D, the gap layer202can be formed from natural cork. It should be understood that these example materials are illustrative, and therefore should not be construed as being limiting in any way. From operation804, the method800can proceed to operation806. At operation806, the base plate200can be assembled with the gap layer202and, in some embodiments, one or more containment structure(s)700. In some embodiments, the containment structure700can correspond to a layer or piece of plastic, rubber, nitrile, a polymer, KEVLAR® brand tape, other materials, combinations thereof, or the like, which can be used to attempt to prevent (or at least reduce) penetration of spall from within the armored plate assembly100to outside of the armored plate assembly100as illustrated and described herein. In some embodiments, the containment structure(s)700can be located next to the edges704of the assembled base plate200and gap layer202, as shown inFIGS.7A-7D, for example. After locating the containment structure(s)700along edges704of the assembled base plate200and gap layer202, an armored plate subassembly706that includes the base plate200, the gap layer202, and the containment structure(s)700can be obtained. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation806, the method800can proceed to operation808. At operation808, the coating206can be applied to the armored plate subassembly706. As explained above, the functionality of the coating206can be provided in various embodiments by a coating of polyurea, which can be sprayed onto the armored plate subassembly706. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation808, the method800can proceed to operation810. The method800can end at operation810. Turning now toFIGS.9A-9E, some additional aspects of an armored plate assembly100will be illustrated and described, according to yet another illustrative embodiment of the concepts and technologies disclosed herein. In particular,FIG.9Aillustrates how a gap layer202can be assembled with a containment structure900. As can be appreciated with reference toFIG.9A, the outer edges or outer perimeter (“outer edges”)902of the gap layer202can be configured and dimensioned to substantially match the inner edges or inner perimeter (“inner edges”)904of the containment structure900, though this is not necessarily the case. It should be understood that in various embodiments of the concepts and technologies disclosed herein, the containment structure900can have a thickness that is substantially similar to a thickness of the gap layer202, though this is not necessarily the case. As such, it should be understood that the illustrated example is illustrative, and therefore should not be construed as being limiting in any way. In some embodiments of the concepts and technologies disclosed herein, the dimensions of a perimeter of the inner edges904can be slightly smaller than the dimensions of the perimeter of the outer edges902. In such embodiments, the gap layer202and the containment structure900can be held together (after assembly) by a force generated between the outer edges902and the inner edges904.
Because other structures, chemicals, and/or materials can be used to hold the gap layer202and the containment structure900together (e.g., adhesives, staples, etc.), it should be understood that this example embodiment is illustrative, and therefore should not be construed as being limiting in any way. According to various embodiments of the concepts and technologies disclosed herein, the containment structure900can correspond to one or more pieces of natural or synthetic rubbers or other polymers, and/or other materials that can be located along edges of the assembly of the gap layer202. In the illustrated embodiment, the functionality of the containment structure900can be provided by a substantially continuous gasket formed from 70 durometer nitrile rubber (“NBR-70”) or other materials. In the illustrated embodiment, the containment structure900can have a thickness (the dimension from the visible plane of the containment structure900inFIG.9Ato the rear plane (not visible inFIG.9A)) of approximately 0.25 inches (˜6.35 mm). Because other materials and/or thicknesses of materials for the containment structures900are possible and are contemplated, it should be understood that this example embodiment is illustrative, and therefore should not be construed as being limiting in any way. In the illustrated embodiment ofFIG.9A, the gap layer202can be formed from a natural or synthetic cork, steel wool, polymers, and/or other materials. In the embodiment ofFIG.9A, the gap layer202is formed from natural cork. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. As shown inFIG.9B, a spall containment subassembly906is illustrated, according to an example embodiment of the concepts and technologies disclosed herein. As shown inFIGS.9A-9B, the spall containment subassembly906can be formed by assembling the gap layer202with the containment structure900. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. Turning now toFIG.9C, additional aspects of the armored plate assembly100will be illustrated and described in more detail, according to one example embodiment of the concepts and technologies disclosed herein. As shown inFIG.9C, the spall containment subassembly906can be assembled with a base plate200. As noted above, the base plate200can be formed from a suitable material such as a metal or a metal alloy such as steel. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. According to various embodiments of the concepts and technologies disclosed herein, the spall containment subassembly906can be joined to the base plate200using an adhesive such as a glue or epoxy. In some embodiments, the glue can include a urethane-based glue and/or other types of adhesives. In the illustrated embodiment, the spall containment subassembly906can be joined to the base plate200using an adhesive referred to as ADTHANE 1800. Because other adhesives and/or mechanical fasteners are possible and are contemplated, it should be understood that this example embodiment is illustrative, and therefore should not be construed as being limiting in any way. As shown inFIG.9D, the armored plate assembly100can be obtained by applying a coating206(e.g., a layer of polyurea or other embodiment of the coating206as illustrated and described herein) to the assembled spall containment subassembly906and base plate200.
According to various embodiments of the concepts and technologies disclosed herein, the coating206can encompass some or the entirety of the assembled base plate200and some or all of the spall containment subassembly906. In the illustrated embodiment, the coating206can coat substantially all of the exposed exterior of the assembled base plate200and spall containment subassembly906. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. Turning toFIG.9E, a portion of a cross-sectional view of the armored plate assembly100formed using the methodology schematically illustrated inFIGS.9A-9Dis illustrated, as viewed along view line D-D illustrated inFIG.9D. As shown inFIG.9E, the containment structure900can be located in the plane of the gap layer202. Although the containment structure900and the gap layer202are shown as having substantially the same thickness t, it should be understood that this is not necessarily the case in all embodiments. As shown inFIG.9E, the entire exterior surface of the assembled base plate200and spall containment subassembly906can be coated by the coating206. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. Embodiments of the armored plate assembly100as shown inFIG.9Ehave been tested and determined to reduce or even prevent edge failure of the armored plate assembly100that results from spalling or other causes when engaged by projectiles fired from firearms chambered in .223 Remington, 5.56 NATO, .308 Winchester Magnum, 9 mm, and other calibers. In some embodiments, the armored plate assembly100has been proven to prevent edge failure of the armored plate assembly100that results from spalling as long as projectiles engaging the armored plate assembly100are at least two inches from the edge. As such, embodiments of the concepts and technologies disclosed herein have been determined to reduce or even prevent spalling from escaping the armored plate assembly100. It should be understood that these example embodiments are illustrative, and therefore should not be construed as being limiting in any way. Turning now toFIG.10, aspects of a method1000for forming an armored plate assembly100will be described in detail, according to another illustrative embodiment of the concepts and technologies disclosed herein. It should be understood that the operations of the method1000disclosed herein are not necessarily presented in any particular order and that performance of some or all of the operations in an alternative order(s) is possible and is contemplated. The operations of the method1000have been presented in the demonstrated order for ease of description and illustration. Operations of the method1000may be added, omitted, and/or performed simultaneously, without departing from the scope of the concepts and technologies disclosed herein. In some embodiments of the concepts and technologies disclosed herein, the operations of the method1000illustrated and described herein can be performed by a computer, for example a control module for an armored plate assembly fabrication machine. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. The method1000can begin at operation1002. At operation1002, a base plate200can be obtained. As noted above, the base plate200can be formed from a suitable material such as a metal or metal alloy such as steel.
It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation1002, the method1000can proceed to operation1004. At operation1004, a spall containment subassembly906can be obtained. As explained above, the spall containment subassembly906can include the gap layer202and a containment structure900. According to various embodiments of the concepts and technologies disclosed herein, the gap layer202can be formed from a suitable material such as a polymer, a resin, an epoxy, wood, natural or manmade cork, steel wool, fibers, and/or other materials, and the containment structure900can be formed from plastic, rubber, nitrile (e.g., nitrile rubber), a polymer, other materials, combinations thereof, or the like. In the embodiment illustrated inFIGS.9A-9E, the spall containment subassembly906can be formed from a gap layer202formed from natural cork and a containment structure900formed from nitrile rubber. It should be understood that these example materials are illustrative, and therefore should not be construed as being limiting in any way. From operation1004, the method1000can proceed to operation1006. At operation1006, the base plate200can be assembled with the spall containment subassembly906. In some embodiments, the spall containment subassembly906can be glued to the base plate200and/or otherwise connected or attached to the base plate200. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation1006, the method1000can proceed to operation1008. At operation1008, the coating206can be applied to the assembled base plate200and spall containment subassembly906. As explained above, the functionality of the coating206can be provided in various embodiments by a coating of polyurea, which can be sprayed onto the exposed exterior surfaces of the assembled base plate200and the spall containment subassembly906. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation1008, the method1000can proceed to operation1010. The method1000can end at operation1010. According to various embodiments disclosed hereinabove, while the coating206(e.g., formed from polyurea) is mostly illustrated as being a separate component from the other components of the armored plate assembly100(e.g., the base plate200, the gap layer202, the secondary plate204, the containment layers500, the edge structures504, the containment structures700, etc.), it should be understood that some of these components of the armored plate assembly100can be formed from polyurea. In particular, in some embodiments of the concepts and technologies disclosed herein, the secondary plate204can be formed from polyurea. In one contemplated embodiment, this approach can obviate the need for KEVLAR® because a thicker layer of polyurea can serve a dual purpose (e.g., to function as a secondary plate204and to function as the coating206). It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. In some embodiments of the concepts and technologies disclosed herein, a spacer layer can extend over the edges of the armor plate subassembly. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way.
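As noted for each of the methods above, the operations can be performed by a computer such as a control module for an armored plate assembly fabrication machine. A minimal sketch of how such a module might sequence the operations of the method1000follows; the step functions are hypothetical placeholders, since the disclosure does not specify any software interface.

```python
# Minimal sketch of a fabrication control module sequencing the operations of
# method 1000 (operations 1002-1008). All step functions are hypothetical
# placeholders for the corresponding machine actions.
from typing import Callable, List

def obtain_base_plate() -> None:
    print("operation 1002: obtain base plate 200")

def obtain_spall_containment_subassembly() -> None:
    print("operation 1004: obtain spall containment subassembly 906")

def assemble_plate_and_subassembly() -> None:
    print("operation 1006: assemble base plate 200 with subassembly 906")

def apply_coating() -> None:
    print("operation 1008: spray polyurea coating 206")

# Method 1000 expressed as an ordered sequence of operations.
METHOD_1000: List[Callable[[], None]] = [
    obtain_base_plate,
    obtain_spall_containment_subassembly,
    assemble_plate_and_subassembly,
    apply_coating,
]

def run(method: List[Callable[[], None]]) -> None:
    for step in method:
        step()

run(METHOD_1000)
```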
Based on the foregoing, it can be appreciated that an armored plate assembly has been disclosed herein. Although the subject matter presented herein has been described with respect to various structural features and/or methodological and transformative acts for forming the armored plate assembly and/or the various features thereof, it is to be understood that the concepts and technologies disclosed herein are not necessarily limited to the specific features or acts described herein. Rather, the specific features and acts are disclosed as example forms of implementing the concepts and technologies disclosed herein. The subject matter described above is provided by way of illustration only and should not be construed as limiting. Various modifications and changes may be made to the subject matter described herein without following the example embodiments and applications illustrated and described, and without departing from the true spirit and scope of the embodiments of the concepts and technologies disclosed herein. | 49,654 |
11859953 | As discussed above, there are numerous disadvantages associated with existing apparatus and methods for engaging underwater targets. These range from the limited range of some existing munitions used for such purposes, to the limited accuracy of existing munitions, to the significant expense associated with existing munitions. In general, there exists no relatively inexpensive, rapidly deployable, and yet long-range and accurate, munition, or related assembly or methodology, for engaging or generally interacting with underwater objects (e.g. targets). According to the present invention, it has been realised that the problems associated with existing approaches can be overcome in a subtle but effective and powerful manner. In particular, the present invention provides a munition. The munition comprises an explosive charge and a fuze. The munition is adapted to be launched, into the air. Significantly, the munition is adapted to be launched from a gun barrel. This means that the munition typically (and practically likely) includes, or is at least used in conjunction with, a propelling explosive, and is capable of being explosively propelled and withstanding such explosive propulsion. This is in contrast with, for example, a depth charge, or torpedo. Being launched from a gun barrel, this is also in contrast with a mortar bomb. The munition is adapted to be launched and then enter a body of water, typically within which body of water a target or object to be engaged would be located. The fuze of the munition is adapted to trigger the explosive charge of the munition under water, for example in accordance with pre-set criteria. The use of a gun barrel also ensures a high degree of accuracy in terms of ranging and general targeting. The invention is subtle but powerful. The invention is subtle because it perhaps takes advantage of some existing technologies, in the form of firing a munition from a gun barrel. This means that the range of the munition would be hundreds of metres, or even kilometres, overcoming range problems associated with existing apparatus or methodology. At the same time, the munition will typically be a projectile, therefore being unpropelled and/or including no form of self-propulsion. This means that the munition is relatively simple and inexpensive. Altogether then, this means that the munition according to example embodiments can be used to accurately, cheaply, effectively, and generally efficiently engage with targets located at quite some distance from an assembly (e.g. a platform, vessel, vehicle, and so on, or a related gun) that launches the projectile. Also, the use of a munition that is capable of being launched from a gun barrel means that multiple munitions can be launched very quickly in succession from the same gun barrel, or in succession and/or in parallel from multiple gun barrels, optionally from different assemblies, or optionally being targeted onto or into the same location/vicinity of the same body of water. Again then, target engagement efficiency and effectiveness may be increased, in a relatively simple manner. FIG.1schematically depicts an assembly in accordance with an example embodiment. In this example, the assembly comprises a vessel2located on a body of water4. The vessel comprises a gun6having a gun barrel8. In another example, the assembly need not include a particular vehicle, and could simply comprise a gun. The munition10is shown as being explosively launched into the air.
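To give a feel for why gun launch yields ranges of hundreds of metres to kilometres, a simple vacuum-ballistics estimate can be used. This is a deliberate simplification that ignores aerodynamic drag (real ranges will be shorter), and the muzzle velocity and elevation below are illustrative assumptions, not values from the disclosure:

import math

def vacuum_range(muzzle_velocity_mps, elevation_deg, g=9.81):
    # Idealised flat-ground range with no drag: R = v^2 * sin(2*theta) / g
    theta = math.radians(elevation_deg)
    return muzzle_velocity_mps ** 2 * math.sin(2 * theta) / g

# e.g. a 300 m/s launch at 20 degrees of elevation:
print(round(vacuum_range(300.0, 20.0)))  # ~5898 m, i.e. several kilometres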
As discussed above, this gives the munition10significant range, and accuracy at range. Prior to being launched into the air, the munition10(or more specifically its fuze) might be programmed in some way. The programming might take place within the gun6, within the barrel8, or even within a particular range after launch of the munition10, for example by a wireless transmission or similar. The programming might be undertaken to implement or change particular fuze criteria, for example to trigger explosive within the munition10in accordance with particular criteria. This will be explained in more detail below. Typically, in order to achieve this programming, the munition10will comprise a programmable fuze. That is, the fuze is able to be configured. As is typical for munitions fired from a gun barrel, the munition will typically be arranged to be launched from a smooth bore gun barrel. Optionally, the munition may be fin-stabilised. Alternatively, the munition may be arranged to be launched from a rifled bore. The exact configuration will be dependent on the required application. As discussed throughout, care will need to be taken to ensure that the combination of munition properties (e.g. size, weight, shape and so on) and launch specifications (e.g. explosive propulsion) is such that the munition10does not explode on launch. This might require particular care to be given to the explosive resistance of the munition10, or at least constituent parts located within the munition, typically associated with initiating an explosion of the munition10. Such concepts will be known or derivable from munitions technologies typically involved in gun-based launching. FIG.2shows the munition as it is directed to and is about to enter the body of water4. Having been explosively launched from a gun barrel8, the munition10will enter the body of water4with significant speed. In a practical implementation, care will need to be taken to ensure that the combination of munition properties (e.g. size, weight, shape and so on) and impact speed with the water4is such that the munition10does not explode on impact. This might require particular care to be given to the impact resistance of the munition10, or at least constituent parts located within the munition, typically associated with initiating an explosion of the munition10. In one example, a simple but effective feature which may assist in this regard is the head or tip20of the munition being ogive-shaped or roundly-shaped or tapering, in accordance with the typical shape of munitions. Again, this is in contrast with a depth charge or similar. However, this may not be sufficient in isolation, or even in combination with structural impact-resistant features of a munition, to prevent explosion of the munition10on impact with the water, or to prevent damage to the munition that would stop it working satisfactorily under the water4. FIG.3shows that in addition to, or alternatively to, an impact resistant or accommodating structure of the munition10, the munition10may be provided with a deployable configuration that is arranged, when deployed, to slow the munition10in the air before entry into the water4. In order to successfully engage with an underwater target as described herein, the speed of descent of the munition down through the water4to the target may be less important than the speed of delivery of the munition from the gun to the location at/above the target. In other words, the munition10does not need to enter the water4at a particularly high velocity.
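The point that a high water-entry velocity is unnecessary, and that slowing the munition in the air before entry is attractive, can be illustrated with a simple terminal-velocity estimate: steady descent speed scales inversely with the square root of drag area, so even a modest drag device sharply reduces entry speed. The mass and drag figures below are illustrative assumptions only:

import math

def terminal_velocity(mass_kg, drag_coeff, area_m2, air_density=1.225, g=9.81):
    # Steady-state descent speed where drag balances weight:
    # v_t = sqrt(2*m*g / (rho * Cd * A))
    return math.sqrt(2 * mass_kg * g / (air_density * drag_coeff * area_m2))

# e.g. a 10 kg munition, bare body versus a 1 m^2 drag device:
print(round(terminal_velocity(10.0, 0.3, 0.01)))  # bare: ~231 m/s
print(round(terminal_velocity(10.0, 1.5, 1.0)))   # slowed: ~10 m/s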
Therefore, deceleration of the munition10prior to entering the water4is acceptable, and may actually be desirable. That is, slowing the munition10prior to entering the water4may be far simpler or easier to achieve than designing the munition to withstand high speed impact with the water4. This is because such a design might mean that the cost of the munition is excessive, or that the weight of the munition is excessive, or that the space within the munition for important explosive material is reduced. In other words, some form of air brake might be advantageous. FIG.3shows that, in one example, the deployable configuration could comprise a parachute30. The parachute could be deployed after a certain time from launch of the munition10, or could, with appropriate sensing or similar, be deployed upon sensing a particular proximity to the water4. In another example, a similar munition32is shown. However, this similar munition32comprises a different deployable configuration in the form of one or more deployable wings or fins34. These deployable wings or fins34may be deployed in the same manner as the parachute30previously described. The wings or fins34might optionally provide a degree of auto rotation to slow or further slow the munition32. As discussed above, it is desirable for the munition to reach the location of the target object, or its surrounding area, quickly and effectively, while at the same time being relatively inexpensive and having maximum effectiveness. It is therefore desirable not to pack the munition with complicated or advanced guiding or directionality mechanisms, which might be used to control the directionality of the descent of the munition. However, in some examples the fins and/or wings34previously described may be controllable to provide directional control of the descent of the munition32, for example via a movable control surface provided in or by the fins or wings. Such control is typically not to be used during projectile-like flight of the munition32, for example immediately after launch, but instead might be used for a degree of tuning control of the descent of the projectile into the body of water. This might improve engagement accuracy and effectiveness with a target located in the body of water4. However, as alluded to above, in other examples the munition according to example embodiments may be free of such directional (descent) control, to ensure that the cost and complexity of the munition is minimised, and such that any related cost or space budget is taken up with more core aspects, such as volume of explosive. After entering the body of water, the munition may be arranged to retract or dispose of the deployable configuration, so that the deployable configuration does not slow (or slow to too great an extent) the descent of the munition toward the target. For similar reasons, the munition might be free of any such deployable configuration, such that there is no impact on descent in the water. Descent through the water may need to be as fast as possible (e.g. to avoid the object moving to avoid the munition). After entering the body of water, the munition will descend within the body of water. The fuze within the munition is adapted to trigger the explosive charge within the munition in the water (that is under the water surface). This triggering can be achieved in one of a number of different ways.FIGS.4to6give typical examples.
FIG.4shows that the fuze may be adapted to trigger40the explosive within the munition10in order to successfully and effectively engage an underwater target42. This might be achieved by triggering the explosive charge after a particular time44, for example measured from launch from the gun barrel described above and/or from entry into the water4. This time period will typically equate to a particular depth46within the water4(e.g. based on expected or calculated rate of descent). Alternatively, the triggering40may occur at the particular depth46in combination with or irrespective of the timing44. For example, an alternative or additional approach might involve the direct detection of depth (via one or more sensors or similar). Depth may be detected based on time, as above, or perhaps based on water pressure under the surface, the salinity of the water, the temperature of the water, or even at a predetermined speed-of-sound in the water. All of these may be indicative of depth within the water, for example which may be known in advance from mapping of the area, and/or sensed by the munition10via one or more sensors when descending through the water. Of course, the fuze may also be adapted to trigger the explosive charge upon impact with the target42. However, it may be safer to employ some form of depth-activation, so that the munition explodes at/near the depth of the target, avoiding possible unintentional explosions at or near objects that are not targets. As above, the fuze may be programmed with such criteria, or related criteria necessary for the fuze to trigger the explosive as and when intended. FIG.5shows a different adaptation for triggering40an explosive charge of the munition10under the water, this time upon magnetic detection50of a target magnetic signature52. In a crude sense, the target magnetic signature could simply be the detection of anything magnetic, indicating the presence of a magnetic or magnetisable structure. For instance, once a detected magnetic field strength is above a relevant threshold, the munition10might explode. In a more sophisticated manner, it may be known or derivable in advance what the expected magnetic signature52of the particular target42might be, might look like, or might approximate to. This might equate to field strength, or field lines, or changes therein. In this example, the munition10might not be triggered40to explode until the magnetic detection50detects a very particular magnetic signature52, and not simply any magnetic field or change therein. WhileFIG.5discusses the use of magnetic fields, much the same principle may be used to detect electric field signatures.FIG.6shows another example of triggering. In this example, the triggering40of the explosive charge in the munition10is undertaken based on the detection of pressure waves in the water4, thereby implementing a sonar-like system60. The system may be implemented in one of a number of different ways. In one example, the munition10may be arranged to detect a pressure wave62emanating from a target object42. This could be a sonar pulse62originating from the object42, or simply detection of sound generated by the object42, or could instead be a reflection62of a sonar pulse64originating from the munition10. That is, the projectile10may not only detect pressure waves, but may emit pressure waves.
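Several of the triggering criteria just described (time44, depth46inferred from pressure, and crude versus signature-matched detection) reduce to simple computations. A minimal sketch, assuming hydrostatic conditions and illustrative thresholds; the function names and numbers are assumptions, not taken from the disclosure:

def depth_from_pressure(gauge_pressure_pa, water_density=1025.0, g=9.81):
    # Hydrostatic relation P = rho*g*h, so h = P / (rho*g);
    # 1025 kg/m^3 approximates seawater density.
    return gauge_pressure_pa / (water_density * g)

def crude_trigger(field_strength, threshold):
    # Crude mode: trigger on any reading above a threshold.
    return field_strength >= threshold

def signature_match(samples, template, min_similarity=0.9):
    # Sophisticated mode: normalised correlation of a window of sensor
    # samples against an expected target signature.
    dot = sum(a * b for a, b in zip(samples, template))
    norm = (sum(a * a for a in samples) * sum(b * b for b in template)) ** 0.5
    return norm > 0 and dot / norm >= min_similarity

# e.g. ~201 kPa of gauge pressure corresponds to roughly 20 m of seawater:
print(round(depth_from_pressure(201000), 1))  # ~20.0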
As with the magnetic field examples given above, the explosive charge may be triggered40when a target sonar signature is detected60, and this could be when any pressure wave is detected, or more likely when a pressure wave above a certain threshold is detected, or when a particular pressure wave or a series of pressure waves is detected which is indicative of the presence of a particular target42. In general, the munition may be able to detect or infer entry into the water, or making contact with the water. This might be useful in initiating or priming fuze activity, for example starting a timer, taking a base or initial reading of pressure, salinity, temperature, and so on (or any relevant criteria), or anything which may assist in the subsequent use of the fuze to trigger the explosive. This sensing or inference could be via an environmental sensor or similar that is (already) present in order to perform another function, for example those discussed or alluded to above. Alternatively, the sensing or inference could be via a dedicated sensor, for example a dedicated impact or water/moisture sensor, or temperature sensor, pressure sensor, salinity sensor, and so on. In general terms, the munition may be able to detect or infer entry into the water, or making contact with the water, for safety reasons, where some (e.g. explosive) function is prevented prior to water contact/entry. As discussed above, a main principle discussed herein is that the munition is adapted to be launched, into the air, from a gun barrel. This gives good range, good targeting accuracy, and good engagement speed, all at relatively low cost. To this extent, the munition may be described as, or form part of, an artillery shell.FIG.7shows such an artillery shell70. The artillery shell70comprises a munition10according to any embodiment described herein. The munition10will typically comprise a fuze72(likely a programmable fuze, as discussed above), which is adapted to trigger an explosive charge74also located within the munition. The artillery shell70will also comprise a primer76and an explosive propellant78which may be cased (as shown) or bagged. A casing80might also be provided, to hold the munition10, explosive78, and primer76. In another example, and typical in munitions, the fuze could be located in the nose of the munition (e.g. as opposed to behind the nose as shown inFIG.7). It is envisaged that a practical presentation of the invention would take the form of the artillery shell ofFIG.7, or something similar to that depiction, as opposed to a munition in isolation. In any event, as discussed above, the munition according to the present invention is capable of withstanding explosive propulsion-based launch from a gun barrel, in contrast with for instance a depth charge or torpedo. The munition and/or artillery shell (which could be the same thing) will typically have a diameter of 200 mm or less, in contrast with depth charges. The gun barrel-munition/artillery shell assembly typically will be such that the munition has a range of well over 100 metres, typically over 1000 metres, and quite possibly in excess of 20 to 30 kilometres. Again, this is in contrast with a depth charge and a mortar bomb. Balanced with the ranging and target accuracy that launching from a gun barrel gives, the munition will be projectile-like, that is, not including any self-propulsion, in contrast with a torpedo or similar.
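Returning to the priming behaviour described above (detect water entry, then start a timer and take baseline readings before the fuze is permitted to trigger), this is naturally expressed as a small state machine. The structure and field names below are assumptions for illustration; the disclosure does not specify an implementation:

class FuzeArming:
    # Safety behaviour: the explosive function is prevented prior to
    # water contact/entry; arming begins only at entry.
    def __init__(self):
        self.armed = False
        self.entry_time = None
        self.baseline = {}

    def on_sensor_update(self, now_s, readings):
        # 'readings' might hold pressure, salinity, temperature, etc.
        if not self.armed and readings.get("water_contact"):
            self.armed = True
            self.entry_time = now_s
            self.baseline = dict(readings)  # base/initial readings

    def elapsed_since_entry(self, now_s):
        return None if self.entry_time is None else now_s - self.entry_time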
To summarise, then, the approach described above allows for relatively cheap, accurate, rapid, effective and efficient engagement of underwater targets at a significant range. One or more assemblies can be used to launch one or more munitions with such range and effectiveness, in contrast with the launching of depth charges, helicopters including such depth charges, or multiple torpedoes. FIG.8schematically depicts general principles associated with the method of launching a munition according to an example embodiment. As discussed above, the munition comprises an explosive charge, and a fuze. The munition is adapted to be launched, into the air, from a gun barrel, and enter a body of water. The fuze is adapted to trigger the explosive charge under the water. Accordingly, the method comprises launching the munition into the air, from a gun barrel90. The launch is configured such that the munition is launched into the body of water92, such that, as discussed above, the fuze may then be adapted to trigger the explosive charge under the water92. In the embodiments discussed above, a munition has been described and detailed. The munition includes an explosive charge. However, in accordance with alternative embodiments, many of the principles discussed above can still be taken advantage of, but without using a projectile including an explosive charge. That is, the above principles can be used to ensure that a projectile can be launched from a gun barrel and into a body of water, where the projectile is then arranged to interact or engage with an object in the water, but without necessarily including an explosive charge to disable or damage that object. In particular, the present invention additionally provides a reconnaissance projectile. The reconnaissance projectile is adapted to be launched, into the air, from a gun barrel, and then into contact with a body of water (onto the water surface, or to descend below the surface). Again then, the projectile may be launched to a long range, with a high degree of accuracy, relatively cheaply and quickly. The reconnaissance projectile is arranged to initiate a reconnaissance function when in contact with the body of water (which includes when impacting the water, when on the body of water, or, as above, typically when located under the surface of the water). The reconnaissance function could be anything of particular use in relation to the particular application, but would typically comprise emission and/or detection of a pressure wave in the body of water, in a manner similar to that discussed above in relation toFIG.6. FIG.9shows a reconnaissance projectile100in accordance with an example embodiment. The reconnaissance projectile100comprises a sensor102. The sensor may be used to detect when the projectile100has come into contact with a body of water, and/or provide some other sensing functionality, for example one or more of the sensing or initiation criteria described above in relation to the munition. For example, the sensor102may be arranged to detect a particular passage of time, or a particular pressure change, or particular depth, and so on. The reconnaissance projectile100also comprises a transceiver104, in this example. The transceiver may be arranged to emit and/or detect pressure waves in the body of water. The sensor102may initiate or process transmission or detection of the waves by the transceiver104. The sensor102might, instead or additionally, be or comprise a processor for implementing one or more of these functions.
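The emit/detect function of the transceiver104can be sketched as a basic active-sonar loop: emit a pulse, listen for a reflection, and convert the round-trip time into a range. The speed of sound and the transceiver method names below are assumptions for illustration:

SPEED_OF_SOUND_WATER_MPS = 1500.0  # nominal value for seawater

def ping_range(transceiver, timeout_s=2.0):
    # Active mode: emit a pulse and wait for its reflection.
    transceiver.emit_pulse()
    round_trip_s = transceiver.wait_for_echo(timeout_s)
    if round_trip_s is None:
        return None  # no echo detected within the timeout
    return SPEED_OF_SOUND_WATER_MPS * round_trip_s / 2.0  # one-way distance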
Of course, it will be appreciated that the reconnaissance projectile may take one of a number of different forms, similar or different to that shown inFIG.9.FIG.9is shown simply as a way of schematically depicting what such a projectile100might look like. Much as with the munition described above, the reconnaissance projectile100might be used or fired or launched in isolation in some way. However, it is likely that the projectile, being explosively propelled, might take the form of, or form part of, an artillery shell110. The artillery shell110might comprise much the same primer112, explosive114and casing116as is already described above in relation to the arrangement ofFIG.7. Referring back toFIG.9, a difference here is that the artillery shell110comprises a non-explosive projectile100, as opposed to an explosive-carrying munition. As might now be understood, it will be appreciated that some embodiments described above might be a combination of both the explosive concept and the reconnaissance concept. For instance, it will be appreciated that the embodiments ofFIGS.5and6, at least, already have a degree of in-built reconnaissance, assisting in the initiation of the explosive charge. It will be appreciated that the above explosive-recon examples could be used in isolation or combination. For instance, a reconnaissance projectile may be launched into a body of water in order to perform a reconnaissance function in relation to a target. That reconnaissance projectile may be provided with a transmitter for transmitting reconnaissance information back to the assembly from which the projectile was launched. This reconnaissance information or data may then be used in the programming of subsequently fired or launched explosive munitions according to example embodiments. Indeed, a volley of projectiles may be launched toward an underwater target in accordance with an example embodiment. One or more of those projectiles may be a munition as described herein, and one or more of those projectiles may be a reconnaissance projectile as described herein. The munitions projectile and the reconnaissance projectile may be arranged to communicate with one another. This means that, for instance, a first-fired reconnaissance projectile may enter the body of water and detect or otherwise determine the presence of a target, whereas a subsequently fired munitions projectile, which may be in the air or in the body of water at the same time as a reconnaissance projectile, may receive reconnaissance information from a reconnaissance projectile and use this in the initiation (or otherwise) of the explosive charge of the munitions projectile. This may mean that the munitions projectile does not need to carry sophisticated (or as sophisticated) transmission or sensing equipment, which could reduce overall cost or system complexity. Alternatively, the reconnaissance projectile described above could actually be a munitions projectile, for example one of those shown in relation toFIGS.5and6. One or more munitions projectiles may be arranged to perform a reconnaissance functionality, but not necessarily initiate the explosive charge. Any acquired information on the target may be used to initiate the explosive charge of subsequently launched munitions projectiles. Or, one or more reconnaissance projectiles may be arranged to perform an explosive function, but not necessarily use the reconnaissance function. FIG.10shows a projectile120with reconnaissance functionality122,124entering the body of water4in the vicinity of the target42.
Reconnaissance functionality122,124might include emission122and/or detection124of pressure waves. As discussed previously, the reconnaissance functionality122,124may be completely independent of any explosive charge that the munition120is, or is not, provided with. That is, the projectile120might have explosive capability, reconnaissance functionality, or a combination of both. Different projectiles120launched into the water may have different combinations of such explosive/reconnaissance functionality. Details of the explosive, fuze and general structure of the munition will vary depending on the required application. For example, the explosive charge could be a cartridged or bagged charge. The casing could be reactive. The choice of explosive might depend on how the system is to be used, for example on whether the munition is to be got near the target, or simply close enough. In the former, an explosive yielding a high bubble effect might be useful. In the latter, simply the level of blast might be more important. As alluded to earlier in the disclosure, the invention also relates to very closely related concepts, but in submunition or sub-projectile form, as in a munition or projectile carried by and then expelled from another (carrier) projectile. This is because further advantages can be achieved, by applying all of the above principles, but in an assembly where the munition or reconnaissance projectile is more particularly a submunition of a munition assembly, or a reconnaissance sub-projectile of a reconnaissance projectile assembly. The submunition or reconnaissance sub-projectile is the object for which controlled entry into, and functionality in, the water is achieved, whereas a carrier of the assembly is simply a tool to get the submunition or reconnaissance sub-projectile to, or proximate to, a target location. One of the main advantages is that the assembly as a whole, and particularly an outer carrier for carrying the submunition or sub-projectile, can be well or better configured for launch from a gun, with the range and accuracy that such a configuration brings. For example, the assembly or the carrier can be bullet-shaped, ogive-shaped or roundly-shaped or tapering, in accordance with the typical shape of munitions. However, and at the same time, the submunition or sub-projectile can then have any desired shape, since the submunition or reconnaissance sub-projectile does not need to be configured for being fired from a gun. This means that the submunition or reconnaissance sub-projectile can then be more easily and readily configured for controlled descent toward and into the water, reducing or preventing damage that might otherwise occur if the munition was fired directly into the water. Whereas expulsion of the submunition or reconnaissance sub-projectile from its carrier could be achieved underwater, greater benefits are achieved by expulsion in the air, since delicate submunition or reconnaissance sub-projectile components are then not subjected to the force of entry into the water from a natural ballistic, gun-launched, trajectory. Also, the submunition or reconnaissance sub-projectile will be traveling more slowly than a 'conventional' munition, and therefore the water entry shock loading should be reduced accordingly. FIG.11shows a munition assembly130, arranged to be launched from a gun, much as with the munition of previous examples. The assembly130comprises a carrier132for a submunition134.
A nose of the carrier132is ogive-shaped or roundly-shaped or tapering, for greater aerodynamic performance. The carrier132comprises (which includes defines) a cavity in which the submunition134is located. The cavity retains and protects the submunition134, and so shields the submunition134during launch and flight conditions of the assembly130. The assembly130may be launched and generally handled much as with the munition of earlier examples. However, in previous examples, controlled descent of the entire launched projectile, in the form of the (single-bodied) munition, is implemented. In the present examples, the submunition is expelled from its carrier, and controlled descent of the submunition is implemented, in the same manner as with the munition of previous examples. Again, then, the advantage of the present examples is that the munition assembly can be tailored for launch and flight conditions, and the submunition can be tailored for descent and target engagement. The two-body approach allows for tailoring of a two-part problem. FIG.12shows that the submunition134, initially carried by the carrier132in the cavity, is arranged to be controllably expelled from the carrier. This might be achieved by use of a fuze and an expulsion charge, for example a carrier fuze154and a carrier expulsion charge. The carrier fuze154may operate on a timer, triggering the carrier expulsion charge to expel the submunition at or proximate to a target location, for example above a location of a target. As with the fuze of the (sub)munition, the carrier fuze may be programmed with a particular timing, or any other set of conditions, for example location-based activation, environmental sensing-based activation, and so on. The submunition134is expelled via a rear end of the carrier132. This is advantageous, as this might better ensure the maintenance of a predictable ballistic trajectory of the submunition134or carrier132, or prevent the carrier132from impacting upon the submunition134. As above, it is the submunition134for which slow, controlled descent is desirable, and so leaving the carrier132via a rear end allows for much more design and functional control, in implementing this. The submunition may be arranged to be expelled from a rear end of the carrier via a closure140. The closure might generally close or seal off the submunition134within the carrier132. This might be useful for handling or safety reasons, or assist in shielding the submunition from launch and flight conditions. The closure140is arranged to be opened before or during expulsion of the submunition134. This could be an active opening, for example via a controlled electronic or pneumatic switch or opening mechanism. However, it is likely to be simpler for this opening to be relatively passive or responsive, in that the closure140is arranged to open, for example via a shearing action, due to pressure of the expulsion charge on the opening, either directly, or indirectly via contact with the submunition134itself. As with the munition of previous examples, the submunition134comprises a deployable configuration142that is arranged, when deployed, to slow the submunition134in the air, after expulsion from the carrier132, and before entry into the water. The deployment could be active, for example based on sensing of air flow or submunition release, and an electrical or mechanical system actively deploying the configuration142. However, a more passive, automatic deployment may be simpler to implement, and more reliable.
For example,FIG.12shows that wings or fins142might automatically deploy, to provide a degree of auto rotation to slow or further slow the submunition134during its descent. The wings or fins142could be spring loaded, in a compressed or closed state, when in the carrier132, and then automatically uncompress or open when expulsion is implemented. Alternatively, the act of air flow during or after expulsion may force the wings or fins142to deploy. FIG.13shows that the submunition134functions largely as the munition10of previous examples, descending toward and eventually onto or into the body of water4, for engagement with a target. A submunition fuze is then adapted to trigger a submunition explosive charge, under water. FIG.14shows a more detailed view of the munition assembly130. The munition assembly130is arranged to be launched from a gun. The assembly130comprises a carrier132for a submunition134. The carrier comprises a cavity150in which the submunition134is located. The carrier132may be, or may form, a (carrier) shell. The submunition134, carried by the carrier132in the cavity150, is arranged to be controllably expelled from the carrier132. The carrier132comprises a carrier expulsion charge152and a carrier fuze154, the charge152being located in-between the submunition134and the fuze154. The fuze is typically located in a nose of the assembly130or carrier132. The carrier fuze154is adapted to trigger the carrier expulsion charge152to controllably expel the submunition134from the carrier132, via the closure140at the rear of the carrier132. The submunition134comprises wings or fins142, arranged to auto-deploy upon expulsion, so as to slow down the descent of the submunition toward and into the water. Such a deployable configuration is typically located at a rear (in terms of eventual descent direction) end of the submunition, to maintain descent stability. The submunition comprises a submunition (main) explosive charge156, and a submunition fuze158. The submunition fuze158is typically located at a rear (in terms of eventual descent direction) end of the submunition134, to reduce the risk of damage to any sensitive components, during impact with the water. The munition assembly130is adapted to be launched, into the air, from a gun barrel, where the submunition134is then arranged to be controllably expelled from the carrier132and enter a body of water, and the submunition fuze158is adapted to trigger the submunition explosive charge156under water. Again, descent of the submunition, and activation of its fuze, may be implemented as described above in relation to the munition embodiments. All of the principles described in relation to the submunition apply equally to a reconnaissance sub-projectile carried by a carrier of a reconnaissance projectile assembly. That is, the reconnaissance sub-projectile has the benefits of being carried and deployed like the submunition as described above, but also with the reconnaissance functionality, as described above. Any of the projectiles described herein, including munitions, submunitions, or reconnaissance projectiles or sub-projectiles, may be arranged to communicate with, or transmit to, other objects.
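The conditions under which such a communication signal can be transmitted (enumerated in the next paragraph) behave like a configurable set of predicates over the projectile's sensed state. A minimal sketch, assuming a hypothetical state object; the field names are illustrative, not from the disclosure:

TRANSMIT_CONDITIONS = {
    "timer":    lambda s: s.time_since_water_entry_s >= s.programmed_delay_s,
    "sonar":    lambda s: s.sonar_signature_matched,
    "magnetic": lambda s: s.magnetic_signature_matched,
    "depth":    lambda s: s.depth_m >= s.programmed_depth_m,
    "impact":   lambda s: s.impact_detected,
}

def should_transmit(state, enabled_conditions):
    # Transmit if any programmed (enabled) condition is met.
    return any(TRANSMIT_CONDITIONS[name](state) for name in enabled_conditions)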
For example, munitions, submunitions, or reconnaissance projectiles or sub-projectiles, may be arranged to transmit a communication signal, external to and away from the submunition after entering the water, and optionally after a predetermined time period after entering the water; upon detection of a target sonar signature; upon detection of a target magnetic signature; upon detection of a target electric field signature; at a predetermined pressure under the water surface; at a predetermined depth under the water surface; at a predetermined salinity of water; at a predetermined temperature of water; at a predetermined speed-of-sound in water; or upon impact with a target under the water surface. The communication with, or transmission to, could be in relation to a remote weapon or platform, which could engage with the target depending on the communication or transmission. For instance, a submunition or reconnaissance sub-projectile may provide a warning shot, or a detection function, in advance of a more escalated engagement from the remote weapon or platform (e.g. a submarine, or torpedoes from a submarine). Although a few preferred embodiments have been shown and described, it will be appreciated by those skilled in the art that various changes and modifications might be made without departing from the scope of the invention, as defined in the appended claims. Attention is directed to all papers and documents which are filed concurrently with or previous to this specification in connection with this application and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features. The invention is not restricted to the details of the foregoing embodiment(s). The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed. | 37,371 |
11859954 | DETAILED DESCRIPTION Reference will now be made to the example embodiments of the present general inventive concept, examples of which are illustrated in the accompanying drawings and illustrations. The example embodiments are described herein in order to explain the present general inventive concept by referring to the figures. The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the structures and fabrication techniques described herein. Accordingly, various changes, modifications, and equivalents of the structures and fabrication techniques described herein will be suggested to those of ordinary skill in the art. The progression of fabrication operations described is merely an example, however, and the sequence and type of operations are not limited to those set forth herein and may be changed as is known in the art, with the exception of operations necessarily occurring in a certain order. Also, description of well-known functions and constructions may be simplified and/or omitted for increased clarity and conciseness. Note that spatially relative terms, such as “up,” “down,” “right,” “left,” “beneath,” “below,” “lower,” “above,” “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. Spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over or rotated, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the exemplary term “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. According to various example embodiments of the present general inventive concept, a firearm ammunition projectile is provided with ventilation ports to allow air to pass therethrough during flight of the round. Various example embodiments provide an ammunition projectile with holes drilled, or otherwise formed, between the side walls of the projectile jacket and the interior of the hollow point of a hollow point round to reduce wind turbulence at the nose of the projectile, thus improving the flight characteristics of the round. FIG.1Aillustrates a perspective view of a conventional firearm cartridge having a hollow point, andFIG.1Billustrates a front end view of the cartridge ofFIG.1A. The conventional cartridge10has a jacketed projectile12disposed in a casing14, and terminates in a hollow point tip16. The hollow point tip16may be formed by any of a host of different methods, such as by bending back a portion of the jacket into the hollow space of the hollow point tip16, which results in the “pocket” formed in the hollow point tip16. As previously discussed, air encountered during flight moves into the closed pocket to cause turbulence and affect flight characteristics. FIG.2Aillustrates a cross section of a hollow point projectile before the formation of ventilation ports, andFIG.2Billustrates a close-up view of the tip of the hollow point projectile ofFIG.2Awith ventilation ports being formed according to an example embodiment of the present general inventive concept.
In the example embodiment illustrated inFIG.2A, an initial form of the projectile20is formed by seating a core22in a jacket24, and forming the jacket24into the desired shape. This may include forming the base of the projectile20into, for example, a flat or boat tail configuration, and may also include tapering the leading end of the jacket24into a tapered ogive portion of the cartridge20. Various example embodiments may provide a core22that is formed of lead, pressed powders, etc., and a substantially cylindrical jacket24that is formed of one or more metals, such as copper, or even synthetic alloys, the jacket24being harder than the core22. As illustrated inFIG.2A, the jacket24extends past the forward end of the core22to form a hollow space26, which may be referred to herein as an open space26, in the forward end of the projectile20that characterizes the projectile as a hollow point round. InFIG.2B, a drill28is used to form a plurality of through holes that may be referred to herein as ventilation ports30that allow air to pass into the open space26and out through a side of the jacket24during flight of the projectile20. Various different example embodiments may include various different numbers of ventilation ports30in various different configurations and angles. While it is possible to form example embodiments with a single ventilation port30, such a configuration may have the unintended consequence of further affecting flight trajectory. As such, various example embodiments of the present general inventive concept may provide a plurality of ventilation ports30which are arranged equidistantly around a longitudinal axis of the projectile20.FIGS.3A-3Cillustrate front end views of firearm projectiles formed according to various different example embodiments of the present general inventive concept. As illustrated inFIG.3A, a pair of ventilation ports30are provided on opposite sides from one another, inFIG.3Bthree ventilation ports30are provided that are equidistant from one another around the center axis of the projectile20, and inFIG.3Cfour ventilation ports30are provided that are equidistant from one another around the center axis. It is understood that various other quantities of ventilation ports30may be formed without departing from the scope of the present general inventive concept. Also, different configurations may be employed, such as two pairs of ventilation ports30in which a first pair are fairly close to one another on one side of the open space26, and a second pair is formed on the opposite side of the open space26in a mirrored arrangement relative to the first pair. In various example embodiments, any even number of ventilation ports30may be arranged so as to be symmetrical about a longitudinal axis of the projectile20. Also, while the ventilation ports30are illustrated inFIG.2Bas being formed by a drill bit, it is understood that various other methods or tools may be used to form the ventilation ports30without departing from the scope of the present general inventive concept. For example, various embodiments may employ a mechanical punch, a laser, etc., to form the ventilation ports through the jacket walls, and in some embodiments the bullet core itself. As illustrated inFIG.2B, the ventilation ports30may be formed at an angle relative to a longitudinal axis of the projectile20such that the ventilation ports30are angled back from a forward end of the projectile20.
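Equidistant spacing of n ventilation ports30about the longitudinal axis means adjacent ports are separated by 360/n degrees. A purely illustrative sketch:

def port_angles(n_ports, offset_deg=0.0):
    # Centre angles, in degrees, for n ports spaced equidistantly
    # about the longitudinal axis.
    return [(offset_deg + i * 360.0 / n_ports) % 360.0 for i in range(n_ports)]

print(port_angles(2))  # [0.0, 180.0], an opposite pair as in FIG. 3A
print(port_angles(3))  # [0.0, 120.0, 240.0], as in FIG. 3B
print(port_angles(4))  # [0.0, 90.0, 180.0, 270.0], as in FIG. 3C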
Various example embodiments of the present general inventive concept may provide a host of differently angled ventilation ports30. Additionally, different ventilation ports30in the same projectile20may be formed at different angles, but it may be beneficial to have symmetrical arrangements about the longitudinal axis of the projectile20for an improved flight path of the projectile20. Also, while the example embodiment illustrated inFIG.2Bshows the ventilation ports30starting at a first opening on an inner surface of the jacket24forming the open space26, and ending at a second opening on an outer surface of the jacket24, various other example embodiments may be formed with ventilation ports that pass at least partially through the core22. Various other example embodiments may even have the first opening of one or more of the ventilation ports30located on the core22, with the second opening formed on the outer surface of the jacket24, and the second opening may be formed on a back half of the jacket24. Also, while the open space26illustrated inFIG.2Bis generally formed by folding back a forward end portion of the jacket24, a host of differently configured hollow points, such as that illustrated herein inFIG.5, may be utilized without departing from the scope of the present general inventive concept. FIG.4Aillustrates a projectile32formed according to an example embodiment of the present general inventive concept being disposed in a cartridge casing34, andFIG.4Billustrates a perspective view of the firearm cartridge36being formed inFIG.4A. As illustrated inFIGS.4A-4B, the resulting hollow point cartridge36has ventilation ports30to allow more aerodynamic flight of the projectile32.FIG.5illustrates air flow through the hollow point of a projectile formed according to an example embodiment of the present general inventive concept. As illustrated inFIG.5, air flowing into the hollow space26of the hollow point projectile32is vented through the ventilation ports30, decreasing turbulence and wind resistance encountered by the projectile32during flight. FIG.7is a perspective view of an alternate embodiment of a ventilated projectile200having a plurality of rib cuts202defined in the outer surface of the jacket204. The rib cuts202are designed to facilitate the expansion of the projectile200upon impact with a target. The rib cuts202begin proximate the forward end206of the jacket204and extend backwards toward the base206. The rib cuts202are spaced equally apart about the outer circumference of the jacket204. In preferred embodiments, the ventilation ports30are defined such that the ports do not intersect with the rib cuts202. In alternative embodiments, the ventilation ports may be defined to intersect with the rib cuts202. FIG.8is a magnified cross-section view of a portion of an alternate embodiment of a ventilated projectile300. The ventilated projectile300has ventilation ports which extend from the open space26to an outer surface of the jacket24. Preferably, the ventilation ports30extend through at least a portion of the core22. The depth that the ventilation ports30extend into and through the core22may be dependent on the desired ballistic performance of the projectile. Larger ventilation ports30which extend further into the core22may be useful in large caliber projectiles. FIG.6illustrates a method of forming a ventilated projectile for use in a firearm ammunition cartridge according to an example embodiment of the present general inventive concept.
It is understood that the flow chart illustrating this method is simply one example embodiment of the present general inventive concept, and various other example embodiments may include more or fewer operations, and which may be performed in different orders and with various different components without departing from the scope of the present general inventive concept. In operation100, a cylindrical copper jacket is provided that has a closed rearward end and an open forward end. In operation110, a bullet core is disposed inside the jacket, and the core is seated in the bottom of the jacket. In operation120, the jacket, along with the core seated inside, is shaped such that the bottom or base has the desired form, and the forward end is tapered to define the ogive portion of the projectile. In operation130, a plurality of ventilation ports are formed in the forward end of the projectile so that air entering the hollow point of the projectile during flight may be vented out of the side of the jacket. Various example embodiments of the present general inventive concept may provide a projectile for use in a firearm ammunition cartridge, the projectile including a core, a jacket in which the core is disposed, the jacket having a closed rearward end and an open forward end, the forward end tapering inwardly toward a longitudinal centerline of the jacket to define an ogive portion of the projectile, and extending past a forward end of the core to form an open space inside the jacket between the forward end of the core and the forward end of the jacket, and a plurality of ventilation ports formed proximate the forward end of the jacket, each of the ventilation ports having a first opening on an inner surface of the jacket defining the open space, and a second opening on an outer surface of the jacket. The plurality of ventilation ports may be spaced equidistantly from one another about the longitudinal centerline of the jacket. The ventilation ports may each have a longitudinal axis that angles back from the longitudinal centerline of the jacket. The first openings of the ventilation ports may be formed adjacent the forward end of the core. The ventilation ports may pass through a portion of the core. The core may be formed with material softer than the jacket. An outer surface of the jacket adjacent the forward end of the jacket may be continuous. The outer surface of the jacket adjacent the forward end may include a plurality of rib cuts extending back from the forward end to facilitate expansion of the jacket upon impact of the projectile. The ventilation ports may be arranged so as to not intersect the rib cuts. The jacket may be comprised of copper. 
Various example embodiments of the present general inventive concept may provide a method of forming a projectile for use in a firearm ammunition cartridge, the method including providing a jacket having a closed rearward end and an open forward end, disposing a core inside the jacket, tapering the forward end of the jacket inwardly toward a longitudinal centerline of the jacket to define an ogive portion of the projectile such that the forward end of the jacket extends past a forward end of the core to form an open space inside the jacket between the forward end of the core and the forward end of the jacket, and forming a plurality of ventilation ports in the forward end of the jacket, each of the ventilation ports having a first opening on an inner surface of the jacket defining the open space, and a second opening on an outer surface of the jacket. The method may further include forming the plurality of ventilation ports so as to be spaced equidistantly from one another about the longitudinal centerline of the jacket. The method may further include forming the plurality of ventilation ports with a punch, drill, or laser. The method may further include forming the ventilation ports to each have a longitudinal axis that angles back from the longitudinal centerline of the jacket. The method may further include forming the first openings of the ventilation ports to be adjacent the forward end of the core. The method may further include forming the ventilation ports to pass through a portion of the core. The method may further include forming a plurality of rib cuts extending back from the forward end to facilitate expansion of the jacket upon impact of the projectile. The method may further include forming the ventilation ports so as to not intersect the rib cuts. Various example embodiments of the present general inventive concept may provide a projectile for use in a firearm ammunition cartridge, the projectile including a core, a jacket in which the core is disposed, the jacket having a closed rearward end and an open forward end, the forward end tapering inwardly toward a longitudinal centerline of the jacket to define an ogive portion of the projectile, and extending past a forward end of the core to form an open space inside the jacket between the forward end of the core and the forward end of the jacket, and a plurality of ventilation ports formed proximate the forward end of the jacket, each of the ventilation ports having a first opening on a surface defining the open space, and a second opening on an outer surface of the jacket. The first opening of each of the ventilation ports may be formed on the forward end of the core. Numerous variations, modifications, and additional embodiments are possible, and accordingly, all such variations, modifications, and embodiments are to be regarded as being within the spirit and scope of the present general inventive concept. For example, regardless of the content of any portion of this application, unless clearly specified to the contrary, there is no requirement for the inclusion in any claim herein or of any application claiming priority hereto of any particular described or illustrated activity or element, any particular sequence of such activities, or any particular interrelationship of such elements. Moreover, any activity can be repeated, any activity can be performed by multiple entities, and/or any element can be duplicated. 
It is noted that the simplified diagrams and drawings included in the present application do not illustrate all the various connections and assemblies of the various components; however, those skilled in the art will understand how to implement such connections and assemblies, based on the illustrated components, figures, and descriptions provided herein, using sound engineering judgment. Numerous variations, modifications, and additional embodiments are possible, and, accordingly, all such variations, modifications, and embodiments are to be regarded as being within the spirit and scope of the present general inventive concept. While the present general inventive concept has been illustrated by description of several example embodiments, and while the illustrative embodiments have been described in detail, it is not the intention of the applicant to restrict or in any way limit the scope of the general inventive concept to such descriptions and illustrations. Instead, the descriptions, drawings, and claims herein are to be regarded as illustrative in nature, and not as restrictive, and additional embodiments will readily appear to those skilled in the art upon reading the above description and drawings. Additional modifications will readily appear to those skilled in the art. Accordingly, departures may be made from such details without departing from the spirit or scope of applicant's general inventive concept. | 18,047 |
11859955 | DETAILED DESCRIPTION OF THE INVENTION Before explaining the present invention in detail, it is to be understood that the invention is not limited in its application to the details of the construction and arrangement of parts illustrated in the accompanying drawings. The invention is capable of other embodiments, as depicted in the different FIGURES described above, and of being practiced or carried out in a variety of ways. It is to be understood that the phraseology and terminology employed herein is for the purpose of description and not of limitation. FIG.1illustrates the layout of the ceramic bullet. As shown inFIG.1, the entire bullet includes: 1. a solid ceramic body part; and 2. a metal base cap part. The solid ceramic body of the bullet is made of ceramic material. The composition of the ceramic material depends on the size of the ceramic bullet and its application. For the solid ceramic body, the selection of the compound and related properties can likewise vary based on the size and the application. The solid ceramic body is made by various processes and is finally machined to obtain the required tolerance. The shape of the tip can also vary based on the requirements of design and application. The metal base cap part (also referred to as a metallic cap) is the distal part of the bullet. The cap part is made of metal so that the thrust from the trigger acts directly on this metal cap. This is a solid ceramic bullet with a metallic cap at the other end, where the trigger force is applied. When the proposed ceramic bullet strikes any object, the metal part is driven forward with force against the object. The main object of the present invention is to provide a ceramic bullet that is safer and can easily enter the target object. Further, the ceramic bullet is made of ceramic material that is composited/coated either with a sleeping drug or a poison, according to the requirement for the target object. The coating of sleeping drug or poison releases sedatives either to make the object unconscious or to kill the object with less impact force. The ceramic bullet does not contain any explosive material, therefore causing minimal infection when it strikes the body and making the bullet easily transportable. While the invention has been described with respect to the given embodiment, it will be appreciated that many variations, modifications and other applications of the invention may be made. However, it is to be expressly understood that such modifications and adaptations are within the scope of the present invention, as set forth in the following claims. | 2,598 |
11859956 | Similar numbers refer to similar parts throughout the drawings. DETAILED DESCRIPTION A precision guidance munition assembly (PGMA), also referred to as a precision guidance kit (PGK) in the art, in accordance with the present disclosure is shown generally at10. As shown inFIG.1, the PGMA10is operatively coupled with a munition body12, which may also be referred to as a projectile, to create a guided projectile14. In one example, the PGMA10is coupled to the munition body12via a threaded connection; however, the PGMA10may be coupled to the munition body12in any suitable manner. In one example, such as the APKWS precision guidance kit, the PGMA is coupled between the munition body and a front end assembly, thereby turning a projectile into a precision guided projectile. FIG.1depicts that the munition body12includes a front end16and an opposite tail or rear end18defining a longitudinal direction therebetween. The munition body12includes a first annular edge20(FIG.1A), which, in one particular embodiment, is a leading edge on the munition body12such that the first annular edge20is a leading annular edge that is positioned at the front end16of the munition body12. The munition body12defines a cylindrical cavity22(FIG.1A) extending rearward from the first annular edge20longitudinally centrally along a center of the munition body12. The munition body12is formed from material, such as metal, that is structurally sufficient to carry an explosive charge configured to detonate or explode at, or near, a target24(FIG.3). The munition body12may include tail fins (not shown) which help stabilize the munition body12during flight. FIG.1Adepicts that the PGMA10, which may also be referred to as a despun assembly, includes, in one example, a fuze setter26, a canard assembly28having one or more canards28a,28b, a control actuation system (CAS)30, a guidance, navigation and control (GNC) section32having at least one guiding sensor32a, such as a global positioning system (GPS), at least one antenna32b, a magnetometer32c, a microelectromechanical systems (MEMS) gyroscope32d, an MEMS accelerometer32e, and a rotation sensor32f, at least one bearing34, a battery36, at least one non-transitory computer-readable storage medium38, and at least one processor or microprocessor40. Although the GNC section32has been described inFIG.1Aas having particular sensors, it should be noted that in other examples the GNC section32may include other sensors, including, but not limited to, laser guided sensors, electro-optical sensors, imaging sensors, inertial navigation systems (INS), inertial measurement units (IMU), timing sensors, or any other suitable sensors. In one example, the GNC section32may include an electro-optical and/or imaging sensor positioned on a forward portion of the PGMA10. In another example, there may be multiple sensors employed such that the guided projectile14can operate in a GPS-denied environment and achieve highly accurate targeting. The projectile, in one example, has multiple sensors and switches from one sensor to another during flight. For example, the projectile can employ GPS while it is available but then switch to another sensor for greater accuracy or if the GPS signal is unreliable or no longer available. For example, it may switch to an imaging sensor to home in on a precise target.
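The in-flight sensor switching described above can be sketched as a simple preference-ordered fallback. The sensor names, quality metric, and threshold below are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical sketch of preference-ordered sensor fallback: prefer GPS
# while its fix is usable, otherwise fall back to other guiding sensors
# (e.g., an imaging sensor near the target). All names and thresholds
# are assumptions made for illustration.

from dataclasses import dataclass

@dataclass
class SensorStatus:
    name: str
    available: bool
    quality: float  # 0.0 (unusable) .. 1.0 (ideal); assumed metric

def select_guidance_sensor(statuses, min_quality=0.5):
    """Return the first acceptable sensor in preference order."""
    preference = ["gps", "imaging", "inertial"]  # assumed ordering
    by_name = {s.name: s for s in statuses}
    for name in preference:
        s = by_name.get(name)
        if s is not None and s.available and s.quality >= min_quality:
            return s.name
    return "inertial"  # assumed dead-reckoning fallback

if __name__ == "__main__":
    statuses = [
        SensorStatus("gps", available=False, quality=0.0),   # GPS-denied
        SensorStatus("imaging", available=True, quality=0.9),
        SensorStatus("inertial", available=True, quality=0.7),
    ]
    print(select_guidance_sensor(statuses))  # -> "imaging"
```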
The at least one computer-readable storage medium38includes instructions encoded thereon that, when executed by the at least one processor40carried by the PGMA10, implement operations to aid in guidance, navigation and control (GNC) of the guided projectile14. The PGMA10includes a nose or front end42and an opposite tail or rear end44. When the PGMA10is coupled to the munition body12, a longitudinal axis X1 extends centrally from the rear end18of the munition body to the front end42of the PGMA10.FIG.1Adepicts one embodiment of the PGMA10as generally cone-shaped, the cone shape defining the nose42of the PGMA10. The one or more canards28a,28bof the canard assembly28are controlled via the CAS30. The PGMA10further includes a forward tip46and a second annular edge48. In one embodiment, the second annular edge48is a trailing annular edge48positioned rearward from the tip46. The second annular edge48is oriented centrally around the longitudinal axis X1. The second annular edge48on the canard PGMA10is positioned forwardly from the first edge20on the munition body12. The PGMA10further includes a central cylindrical extension50that extends rearward and is received within the cylindrical cavity22via a threaded connection. The second annular edge48is shaped and sized complementary to the first annular edge20. In one particular embodiment, a gap52is defined between the annular edge48and the leading edge20. The gap52may be an annular gap surrounding the extension50that is void and free of any objects so as to effectuate the free rotation of the PGMA10relative to the munition body12. FIG.2depicts an embodiment of the precision guidance munition assembly, wherein the PGMA10includes at least one lift canard28aextending radially outward from an exterior surface54relative to the longitudinal axis X1. The at least one lift canard28ais pivotably connected to a portion of the PGMA10via the CAS30such that the lift canard28apivots relative to the exterior surface54of the PGMA10about a pivot axis X2. In one particular embodiment, the pivot axis X2 of the lift canard28aintersects the longitudinal axis X1. In one particular embodiment, a second lift canard28ais located diametrically opposite the at least one lift canard28a, which could also be referred to as a first lift canard28a. The second lift canard28ais structurally similar to the first lift canard28asuch that it pivots about the pivot axis X2. The PGMA10can control the pivoting movement of each lift canard28avia the CAS30. The first and second lift canards28acooperate to control the lift of the guided projectile14while it is in motion after being fired from a launch assembly56(FIG.3). The PGMA10may further include at least one roll canard28bextending radially outward from the exterior surface54relative to the longitudinal axis X1. In one example, the at least one roll canard28bis pivotably connected to a portion of the PGMA10via the CAS30such that the roll canard28bpivots relative to the exterior surface54of the PGMA10about a pivot axis X3. In one particular embodiment, the pivot axis X3 of the roll canard28bintersects the longitudinal axis X1. In one particular embodiment, a second roll canard28bis located diametrically opposite the at least one roll canard28b, which could also be referred to as a first roll canard28b. The second roll canard28bis structurally similar to the first roll canard28bsuch that it pivots about the pivot axis X3. The PGMA10can control the pivoting movement of each roll canard28bvia the CAS30.
The first and second roll canards28bcooperate to control the roll of the guided projectile14while it is in motion after being fired from the launch assembly56(FIG.3). While the launch assembly56is shown as a ground vehicle in this example, the launch assembly may also be on vehicles that are air-borne assets or maritime assets. The air-borne assets, for example, include planes, helicopters and drones. The canards28a,28bon the canard assembly28are moveable in order to guide or direct the guided projectile14during its flight in order to steer the guided projectile14relative to the target24on the ground. Due to the complex dynamics of the flight of the guided projectile14, moving the at least one canard28a,28b, causes the impact point of the guided projectile14to move in different directions and different distances relative to the target24depending on the time of flight of the guided projectile14. Thus, movement of the at least one canard28a,28b, in one direction at a first time will result in a movement of the impact point of the guided projectile14relative to the target24, and movement of the at least one canard28a,28b, in the same direction at a later second time will result in a different movement of the impact point of the guided projectile14relative to the target24(i.e., the impact point of the guided projectile14could move in a different direction and a different distance relative to the target24). To properly account for this complex behavior, the guided projectile14utilizes maneuver envelopes32gto optimally determine the canard commands that guide the guided projectile14to the target24. FIG.3depicts the operation of the PGMA10when it is coupled to the munition body12forming the guided projectile14. As shown inFIG.3, the guided projectile14is fired from the launch assembly56elevated at a quadrant elevation towards the target24located at an estimated or nominal distance58from the launch assembly56. Guided projectiles14are typically limited in how much they can maneuver. Thus, the maneuver authority of the guided projectile14is a factor in launching the guided projectile14. The present disclosure provides a system and device to optimize the maneuvering of the guided projectile14based on its maneuver authority as determined by one of a plurality of maneuver envelopes32gstored in the memory38. Once the maneuver authority of the guided projectile14is known, a correction can be made by deflecting one or more of the canards28a,28b, to precisely guide the guided projectile14towards its intended target24. When the guided projectile14is launched from the launch assembly56or gun tube, the amount that the canards28a,28b, can move to steer the guided projectile14is based, at least in part, on the maneuver authority. The maneuver authority is a function of time of flight, launch speed and quadrant elevation. The maneuver envelopes account for the maneuver authority at each respective time interval to optimize steering commands that drive the canards28a,28bin order to guide the guided projectile14towards the intended target24. The guided projectile14employs one or more guiding sensors to assist in guiding the projectile to the target. In one example, the GNC section32employs GPS which uses satellites59that can provide precision data such as location, timing, speed and the like. The guided projectile14performs a corrective maneuver by adjusting one or more canards28a,28b, to adjust the predicted impact range or cross-range as needed to guide the guided projectile14towards the target24.
In accordance with one aspect of the present disclosure, the range or cross-range correction maneuver (or both) begins early in flight of the guided projectile14. In one example, a maneuver envelope32gis generated for each quadrant elevation and launch velocity with which the launch assembly56may fire the guided projectile14. For example, and not meant as a limitation, the maneuver envelopes32gmay be generated by an offline computer for any set of launch conditions, including, but not limited to, different launch speeds and quadrant elevations. The maneuver envelopes32gmay then be stored in or uploaded to the PGMA10, or a single maneuver envelope32grepresenting a particular planned launch condition can be loaded into the guided projectile14prior to launch. Each one of the maneuver envelopes32gmay be generated through a computer simulation model. In one implementation, a system utilizes a seven degree-of-freedom (DOF) model to generate maneuver envelopes32gfor given quadrant elevations and launch speeds. In one example, the plurality of maneuver envelopes32gmay be loaded into the at least one non-transitory computer-readable storage medium38and executed by the at least one processor40based on the known quadrant elevation at which the launch assembly56is positioned and the launch velocity of the guided projectile14. In another example, a single maneuver envelope32grepresenting a particular launch condition may be loaded before launch of the guided projectile14. For example, and not meant as a limitation, the maneuver envelope32gmay be loaded into the PGMA10before launch of the guided projectile14. After launch, the processor40executes the instructions stored on the storage medium38in order to refer to the associated maneuver envelope32gfor that quadrant elevation and launch speed at which the guided projectile14was launched. The guided projectile14utilizes various logic to predict the nominal impact point of the guided projectile14relative to the intended target24. Then, canard logic or corrective maneuver logic uses the maneuver envelope32gto determine the canard command to steer the guided projectile14in the range direction or cross-range direction. Stated otherwise, the canard logic moves the at least one canard28a,28b, in response to a signal from the at least one processor40provided by the maneuver envelope32g. The maneuver envelopes32gmay also be referred to as “maneuverability tables,” control maps, or control effectiveness map(s). The maneuver envelopes32gin this example are tables that provide the amount of ground maneuver per second at the maximum canard deflection. The maneuver envelope32gspecifies the range and cross-range translation on the ground that a one-second maximum canard deflection (at roll=phi) at time T will produce. The maneuver envelope provides delta range (dR) and delta cross range (dXR) as a function of mission time (T) and canard roll angle, phi. Stated as an equation, the maneuver envelope is (dR, dXR)=Control_Map(T, phi), wherein, if the canard assembly is at roll angle phi and commands maximum deflection at time T, it will cause the guided projectile14to change range by dR and cross range by dXR for each second. The details and features of the control maps depend on the launch angle and speed of the guided projectile14. The use of such control maps addresses the large variation of projectile dynamics and allows greater efficiency and control authority. Some exemplary maneuver envelopes32gare detailed inFIG.4AthroughFIG.6C.
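The relation (dR, dXR)=Control_Map(T, phi) is, in effect, a two-dimensional lookup table. A minimal sketch follows, assuming an invented grid and invented table values purely for illustration; a real envelope would come from the offline seven-DOF simulation described above, and the nearest-neighbor lookup and reversal-time scan are illustrative choices, not the patent's implementation:

```python
# Sketch of (dR, dXR) = Control_Map(T, phi) as a lookup table. The grid
# and table values are invented; in practice they would be generated
# offline per launch condition (quadrant elevation, launch speed).

from bisect import bisect_left

TIMES = [0.0, 20.0, 40.0, 60.0]     # mission time T, seconds (assumed grid)
PHIS = [0.0, 90.0, 180.0, 270.0]    # canard roll angle phi, degrees

# CONTROL_MAP[i][j] -> (dR, dXR): meters of ground translation produced by
# one second of maximum canard deflection at TIMES[i], roll angle PHIS[j].
# Early rows have dR <= 0 everywhere, mimicking a pre-reversal regime.
CONTROL_MAP = [
    [(-25.0, 0.0), (0.0, 100.0), (-5.0, 0.0), (0.0, -100.0)],
    [(-20.0, 0.0), (0.0,  60.0), (-5.0, 0.0), (0.0,  -60.0)],
    [(-10.0, 0.0), (0.0,  20.0), ( 6.0, 0.0), (0.0,  -20.0)],
    [( -6.0, 0.0), (0.0,  40.0), ( 8.0, 0.0), (0.0,  -40.0)],
]

def nearest(grid, value):
    """Index of the grid point closest to value."""
    i = bisect_left(grid, value)
    if i == 0:
        return 0
    if i == len(grid):
        return len(grid) - 1
    return i if grid[i] - value < value - grid[i - 1] else i - 1

def control_map(t, phi):
    """(dR, dXR) for a one-second max deflection at time t, roll angle phi."""
    return CONTROL_MAP[nearest(TIMES, t)][nearest(PHIS, phi % 360.0)]

def control_reversal_time():
    """First grid time at which some roll angle can extend range (dR > 0)."""
    for i, t in enumerate(TIMES):
        if any(dr > 0.0 for dr, _ in CONTROL_MAP[i]):
            return t
    return None

print(control_map(22.0, 95.0))      # -> (0.0, 60.0)
print(control_reversal_time())      # -> 40.0
```

Scanning the table for the first time at which any roll angle yields a positive dR is one simple way to locate a control reversal time of the kind discussed below with reference toFIG.4A.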
The maneuver envelope examples show some of the features, variations, and complexities that need to be accounted for in order to optimally use the limited control authority of the guided projectile14. Other features for different launch conditions are also represented by the maneuver envelopes. FIG.4AandFIG.4Bdepict range and cross range values from one maneuver envelope32gfrom the plurality of maneuver envelopes32g.FIG.4Adepicts an example control map32gwhere the X-axis represents the range in meters and the Y-axis represents the time in seconds. Line60represents the no-maneuver nominal guided projectile14range with canards set to zero deflection. The range maneuver authority62is a function of time. The range maneuver authority62includes a maximum64and a minimum66per second as a function of time. For example, at about twenty seconds, the maneuver authority range per second is from about minus twenty to about minus five for a maximum canard deflection command. Both the minimum and maximum of the maneuver authority range62are below the nominal range line60, which means that any movement by the at least one canard28a,28b, will result in guiding or steering the guided projectile14in a manner that will shorten the distance of the guided projectile14from its predicted target impact. Stated otherwise, during the early portions of the flight, all movements of the at least one canard28a,28b, will shorten the range of the guided projectile14for this maneuver envelope32g, which is dependent on quadrant elevation of the launch assembly56and launch speed. It is only after a certain period of time, when the range maximum64extends above and beyond the line60, that the range of the guided projectile14can be extended. In this particular case, the period of time is about thirty-five seconds, shown at68, at which the maximum64of the maneuver authority range62exceeds the nominal range line60. It is to be understood that details of the control authority are described by the maneuver envelope32g. The time in which the maximum64of the maneuver authority range62can increase range is shown generally after68. Thus, with reference to the first maneuver envelope32g, the guided projectile14would need to wait until after thirty-five seconds in order to deflect the at least one canard28a,28b, in a manner that would result in an increase in the range from the nominal range line60. Stated otherwise, a deflection or movement of the at least one canard28a,28b, occurring before the control reversal time68will decrease the range of the guided projectile14and the same movement of the at least one canard28a,28b, occurring after time68will result in an increase in range. FIG.4Bdepicts a maneuver envelope32gpertaining to the cross-range (i.e., meters or feet) maneuverability versus time (i.e., seconds). The cross-range maneuverability, according to the maneuver envelope32g, is greatest early in flight. In one embodiment, as shown inFIG.4Bnear time equals zero or T=0, the roll canards28bcan maneuver the guided projectile14approximately one hundred meters per second either to the left or to the right of the target24. As time in flight increases, the ability of the roll canards28bto adjust the cross-range units decreases until the flight of the guided projectile14reaches its apogee70at about fifty seconds. Then, as the guided projectile14begins its downward trajectory, the roll canards28bagain increase in their ability to maneuver the guided projectile14within a cross-range maneuver authority72.
Stated otherwise, the cross-range maneuver authority72extends between a rightmost cross-range74and a leftmost cross-range76, wherein the maneuver authority of the cross-range is at its lowest near the apogee70. Furthermore, for the maneuver envelope32g, the greatest maneuver authority range72of the cross-range occurs at periods or intervals of time that are before the apogee70in the early part of flight. FIG.5Adepicts another maneuver envelope32gshowing range per unit time (i.e., meters per second) versus time in seconds. The simulation model for this maneuver envelope32grefers to a guided projectile14fired from launch assembly56at a quadrant elevation of 1200 mil. Notably, this is a high quadrant elevation, wherein high quadrant elevations refer to those quadrant elevations above 800 mil (45°). As a result of the high quadrant elevation, the maneuver envelope has features that must be considered in order to generate a canard command. With continued reference toFIG.5A, the high quadrant elevation of 1200 mil results in a control reversal at a time of thirty-one seconds, denoted as68inFIG.5A. The control reversal time68occurring at approximately thirty-one seconds is indicative of the fact that a similar movement of the lift canard28awill affect the direction in which the guided projectile14moves towards or away from the target24, dependent on whether the movement occurs before or after the control reversal time68. Furthermore, in some instances, the control reversal time68is congruent with or after the apogee70and, in other situations, such as identified by maneuver envelope32g, the control reversal time may be before the apogee70of the flight of the guided projectile14. FIG.5Bdepicts the cross-range maneuverability versus time function of the maneuver envelope32g. Similar to the range function identified inFIG.5A, the cross-range maneuver per unit time versus time function of the maneuver envelope32gindicates that the maneuverability is greatest early in the flight (i.e., where the time equals twenty-five seconds or less). Then, as time progresses, the cross-range maneuverability fluctuates depending upon the time in flight of the guided projectile14. It should be noted that the lines shown inFIG.5AandFIG.5Bshow the maneuvers for different roll angles of the precision guidance munition assembly10, where each line represents a specific roll angle. This shows that the required roll angle to obtain a maneuver in a specific direction (e.g., range, cross-range or a combination of range and cross-range) changes as a function of time. The maneuver envelope32gallows the correct roll angle to be selected given the direction of the desired maneuver. FIG.6Adepicts a maneuver envelope32gwith a plurality of command rings78that are defined by a three-dimensional combination of range maneuverability and cross-range maneuverability as a function of time. Each ring78defines the potential movement in both range and cross-range. Once the miss distance of the projectile is obtained, a roll angle can be chosen which will lessen the miss distance. The roll angle is determined by the position on the command rings78that defines the direction of the required maneuver in range, cross-range, or a combination of range and cross-range. The distance of a point on the ring from the origin (the 0,0 location on the axes) represents the maneuver distance on the ground per unit command time. The location of the point on the ring is related to the direction of the maneuver on the ground relative to the target24.
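The ring selection just described can be sketched as a simple search: given the predicted miss components and one command ring mapping roll angle phi to ground translation (dR, dXR), pick the phi whose translation most reduces the residual miss. The ring values and the search itself are illustrative assumptions, not the patent's logic:

```python
import math

# Sketch: choosing a canard roll angle from one command ring. Each ring
# entry maps a roll angle phi (degrees) to the ground translation
# (dR, dXR) produced by one second of maximum deflection at that time.
# Pick the phi whose translation best reduces the predicted miss vector.
# All values are invented for illustration.

def best_roll_angle(ring, miss_range, miss_cross):
    """ring: dict {phi_degrees: (dR, dXR)}. Miss components in meters."""
    best_phi, best_residual = None, float("inf")
    for phi, (dr, dxr) in ring.items():
        # Residual miss after applying this one-second maneuver.
        residual = math.hypot(miss_range - dr, miss_cross - dxr)
        if residual < best_residual:
            best_phi, best_residual = phi, residual
    return best_phi

ring = {0: (10.0, 0.0), 90: (0.0, 40.0), 180: (-10.0, 0.0), 270: (0.0, -40.0)}
print(best_roll_angle(ring, miss_range=-8.0, miss_cross=2.0))  # -> 180
```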
The total maneuver authority is defined by the full set of command rings80and can be computed by summing over all rings. The three-dimensional maneuver envelope32gindicates that at an early flight time, such as time equals twenty seconds or less, the cross-range maneuverability is greater than what it is later in time and may vary from about minus forty units to about forty units. Stated otherwise, the cross-range maneuver authority72generally decreases with slight fluctuations or blips of increases as time in flight increases. Further, early on in the flight, for this high quadrant elevation, the range maneuverability will generally be less than zero, which refers to the fact that the guided movement of the at least one canard28a,28bon the precision guidance munition assembly10will result in a range correction maneuver that will always shorten the impact distance of the guided projectile14from the target24if control is attempted early in flight. It is only after a specific time68, which, in a particular example, occurs around thirty seconds, that movements of the at least one canard28a,28b, will result in a positive directional movement of the guided projectile14relative to the target24on the ground. The apogee70of the guided projectile14impacts the maneuver envelope32gby reducing the control authority, which occurs around fifty seconds as indicated at70and is best shown inFIG.6B. At the apogee70, there is low dynamic pressure acting on the guided projectile14. Stated otherwise, maneuverability and control are low at the apogee70. Thus, the shape of the plot of the maneuver envelope32gnarrows to a throat at the apogee70.FIG.6Cshows a portion of the maneuver envelope32gsubsequent to the apogee70. After the apogee70, the command rings get larger as the projectile speed increases. At80, where time equals seventy seconds, the command rings begin to shrink because the projectile is approaching the target and thus the time for making maneuver commands is getting small. FIG.7is a plot of an optimization function that evaluates the ability of the guided projectile14to maneuver in a specific direction on the ground as a function of time in the flight. This defines the alignment correlation value, which may also be referred to as a match ratio versus time. This alignment correlation is a function of time and direction. Thus, for example, the alignment correlation could refer to a maneuver to extend range and shift cross range to the left when viewed from behind the guided projectile14. An alignment correlation value close to one, which, in one example, may be anything greater than approximately 0.85, indicates that a maneuver is possible, while a low alignment correlation value, which, in one example, may be anything less than approximately 0.85, indicates a limited ability to maneuver. For example, when the alignment correlation value is less than 0.85, a range increase maneuver might not be possible early in flight. Specifically,FIG.7is a dot product of normalized vectors that the precision guidance munition assembly10utilizes in order to optimize when to make corrections, and the PGMA10will not attempt to maneuver in a direction when the alignment correlation value is low. Such cases may occur at apogee or at times when control reversals occur. The use of the dot product optimizes the time when the at least one canard28a,28b, moves to effectuate the corrective maneuver.
This optimization can consist of preventing or inhibiting the maneuver if the correlation is low and waiting until the correlation becomes large enough (greater than 0.85). By doing so, control actions that waste control energy by attempting to steer in a direction that has a low correlation are prevented. The dot product evaluates whether the command that results in the movement of the at least one canard28a,28b, to effectuate the maneuver is effective. For example, if the cross range is correct (i.e., on target) and the range is determined to be incorrect (i.e., off target), then the dot product will ensure that the range maneuver occurs at a point where range control can be effective. The dot product enables the guided projectile14to ensure that a maneuver in one direction (such as cross-range) will not come at the expense of maneuverability in the other direction (such as range). The system is encoded with threshold logic to indicate that if the match ratio of the dot product falls below a certain threshold, a corrective maneuver may not occur. For example, as indicated inFIG.7, at around fifty-nine seconds, the dot product of the match ratio falls to zero (off the page). The dot product threshold is typically around 0.85, but whenever the dot product value falls below 0.85, the logic in the precision guidance munition assembly10determines that a corrective maneuver should not be performed at that time. In accordance with one aspect of the present disclosure, the dot product representation enables the system to generate optimal control commands that take into account the maneuverability dead zones of the guided projectile14. The maneuverability dead zones are regions with low alignment correlation or low maneuverability. A command generator picks the command that has the highest alignment correlation. In one particular example for illustrative purposes, the alignment correlation must be greater than 0.85 for the guided projectile14to engage canard28a,28b, deflections. If the correlation is less than 0.85, the guided projectile14is unable to maneuver in its desired direction without sacrificing some other maneuver authority or causing undesired effects in the orthogonal direction. The control logic associated with generating the optimal command for the control map avoids canard28a,28b, deflections during the maneuver dead zones. In accordance with one aspect of the present disclosure, the precision guidance munition assembly10optimally uses the control or maneuver authority that the guided projectile14has based on the predetermined maneuver envelope32g. This is important because a guided projectile14is typically limited in its maneuver authority, unlike a missile that can be actively guided and steered when powered from its self-carried propulsion device. The system and device of the precision guidance munition assembly10enable an improved correction of range and cross-range of the guided projectile14after being fired from launch assembly56. The improvement is a result of more efficient use of the limited control authority. Various inventive concepts may be embodied as one or more methods, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
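A minimal sketch of this dot-product gating follows, assuming simple two-dimensional (range, cross-range) vectors and the approximately 0.85 threshold mentioned above; the vector values and function names are invented for illustration:

```python
import math

# Sketch of dot-product gating: normalize the desired maneuver direction
# and the achievable maneuver direction, take their dot product (the
# "match ratio"), and inhibit the command when it falls below ~0.85.
# Vectors are (range, cross-range) in meters; values are illustrative.

def alignment_correlation(desired, achievable):
    """Dot product of the normalized 2-D maneuver vectors."""
    dmag = math.hypot(*desired)
    amag = math.hypot(*achievable)
    if dmag == 0.0 or amag == 0.0:
        return 0.0  # no meaningful direction -> treat as a dead zone
    return (desired[0] * achievable[0] + desired[1] * achievable[1]) / (dmag * amag)

def should_maneuver(desired, achievable, threshold=0.85):
    return alignment_correlation(desired, achievable) >= threshold

# Desired: extend range 30 m. First case: achievable maneuver is mostly
# range, well aligned; second case: achievable maneuver points the wrong way.
print(should_maneuver((30.0, 0.0), (12.0, 2.0)))   # -> True  (corr ~0.99)
print(should_maneuver((30.0, 0.0), (-5.0, 20.0)))  # -> False (corr < 0)
```

Gating of this kind is one possible realization of the threshold logic described above; it simply declines to spend control authority when the achievable maneuver direction is poorly aligned with the needed one.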
For example,FIG.8depicts one exemplary method in accordance with the present disclosure. The method ofFIG.8is shown generally at800. Method800may include selecting a maneuver envelope32gthat describes a control authority of the guided projectile14, which is shown generally at802. The method800may include predicting an impact point of the guided projectile14relative to a target24, which is shown generally at804. The method800may include determining a miss distance error based on the predicted impact point relative to the target24, which is shown generally at806. The method800may include determining a maneuver command based on the maneuver envelope32g, which is shown generally at808. The method800may include optimally applying the maneuver command to move the at least one canard28a,28b, on the canard assembly28at an optimal time based, at least in part, on the maneuver envelope32g, which is shown generally at810. Further, this exemplary method or other exemplary methods may additionally include steps or processes that may include wherein the selecting the maneuver envelope32gthat describes the control authority of the guided projectile14is accomplished by selecting the maneuver envelope32gfrom a plurality of maneuver envelopes32gstored in the at least one non-transitory computer-readable storage medium38. This exemplary method or other exemplary methods may additionally include steps or processes that may include wherein the selecting the maneuver envelope32gthat describes the control authority of the guided projectile14is accomplished by uploading a predetermined maneuver envelope32gto the at least one non-transitory computer-readable storage medium38prior to firing the guided projectile14. This exemplary method or other exemplary methods may additionally include steps or processes that include optimally applying the maneuver command to move the at least one canard28a,28b, on the canard assembly28at an optimal time based, at least in part, on the maneuver envelope32gand the method further comprises producing a dot product for a match ratio versus time, and evaluating whether the maneuver command that results in the movement of the at least one canard28a,28b, to effectuate steering the guided projectile14is effective. This exemplary method or other exemplary methods additionally include steps or processes that include preventing movement of the at least one canard28a,28b, when threshold logic determines that the match ratio of the dot product falls below a certain threshold. This exemplary method or other exemplary methods additionally include steps or processes that include timing, via a timer carried by the precision guidance munition assembly10, a time in flight, and wherein optimally applying the maneuver command to move the at least one canard28a,28b, on the canard assembly28at an optimal time based, at least in part, on the maneuver envelope32gis accomplished by indicating, via a plurality of command rings at different time intervals, range maneuverability and cross-range maneuverability of the guided projectile14at a respective time interval. While various inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein.
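For illustration only, steps802-810of method800can be composed into a loop along the following lines. The one-ring envelope, the impact predictor, and the helper functions are hypothetical stand-ins, not the patent's implementation:

```python
import math

# Compact local stand-ins for the ring search and dot-product gating
# sketched earlier (both assumptions, not the patent's logic).

def _best_phi(ring, miss):
    """Roll angle whose (dR, dXR) most reduces the residual miss."""
    return min(ring, key=lambda p: math.hypot(miss[0] - ring[p][0],
                                              miss[1] - ring[p][1]))

def _corr(a, b):
    """Dot product of normalized 2-D vectors (the match ratio)."""
    na, nb = math.hypot(*a), math.hypot(*b)
    return 0.0 if na == 0 or nb == 0 else (a[0]*b[0] + a[1]*b[1]) / (na * nb)

def run_method_800(ring, predict_impact, target, times, threshold=0.85):
    commands = []
    for t in times:
        impact = predict_impact(t)                               # step 804
        miss = (target[0] - impact[0], target[1] - impact[1])    # step 806
        phi = _best_phi(ring, miss)                              # step 808
        if _corr(miss, ring[phi]) >= threshold:                  # step 810
            commands.append((t, phi))                            # gated apply
    return commands

# Invented one-second command ring and a fake impact predictor whose
# prediction drifts short of the target and slightly right (step 802 is
# represented by simply handing in this pre-selected envelope ring).
ring = {0: (6.0, 0.0), 90: (0.0, 20.0), 180: (-6.0, 0.0), 270: (0.0, -20.0)}
predict = lambda t: (5000.0 - 2.0 * t, 15.0)
print(run_method_800(ring, predict, target=(4900.0, 0.0), times=[30, 40, 50]))
```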
More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, munition assembly, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, munition assemblies, and/or methods, if such features, systems, articles, materials, munition assemblies, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure. The above-described embodiments can be implemented in any of numerous ways. For example, embodiments of technology disclosed herein may be implemented using hardware, software, or a combination thereof. When implemented in software, the software code or instructions can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. Furthermore, the instructions or software code can be stored in at least one non-transitory computer-readable storage medium38. Also, a computer utilized to execute the software code or instructions via its processors may have one or more input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards and pointing devices, such as mice, touch pads, and digitizing tablets. As another example, a computer may receive input information through speech recognition or in another audible format. Such computers or smartphones may be interconnected by one or more networks in any suitable form, including a local area network or a wide area network, such as an enterprise network, an intelligent network (IN), or the Internet. Such networks may be based on any suitable technology and may operate according to any suitable protocol and may include wireless networks, wired networks or fiber optic networks. The various methods or processes outlined herein may be coded as software/instructions that are executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine.
In this respect, various inventive concepts may be embodied as a computer-readable storage medium (or multiple computer-readable storage media) (e.g., a computer memory, one or more floppy discs, compact discs, optical discs, magnetic tapes, flash memories, USB flash drives, SD cards, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other non-transitory medium or tangible computer storage medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement the various embodiments of the disclosure discussed above. The computer-readable medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present disclosure as discussed above. The term “loaded” as used herein refers to any type of uploading via software or loading via any computer-readable storage medium. The terms “program” or “software” or “instructions” are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of embodiments as discussed above. Additionally, it should be appreciated that according to one aspect, one or more computer programs that when executed perform methods of the present disclosure need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present disclosure. Computer-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments. Also, data structures may be stored in computer-readable media in any suitable form. For simplicity of illustration, data structures may be shown to have fields that are related through location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a computer-readable medium that convey the relationship between the fields. However, any suitable mechanism may be used to establish a relationship between information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish a relationship between data elements. All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms. “Guided projectile” or guided projectile14refers to any launched projectile such as rockets, mortars, missiles, cannon shells, shells, bullets and the like that are configured to have in-flight guidance.
“Launch Assembly” or launch assembly56, as used herein, refers to rifle or rifled barrels, machine gun barrels, shotgun barrels, howitzer barrels, cannon barrels, naval gun barrels, mortar tubes, rocket launcher tubes, grenade launcher tubes, pistol barrels, revolver barrels, chokes for any of the aforementioned barrels, and tubes for similar weapons systems, or any other launching device that imparts a spin to a munition round or other round launched therefrom. “Precision guided munition assembly,” as used herein, should be understood to be a precision guidance kit, precision guidance system, a precision guidance kit system, or other name used for a guided projectile. “Quadrant elevation”, as used herein, refers to the angle between the horizontal plane and the axis of the bore when the weapon is laid. The quadrant elevation is the algebraic sum of the elevation, angle of site, and complementary angle of site. In some embodiments, the munition body12is a rocket that employs a precision guidance munition assembly10that is coupled to the rocket and thus becomes a guided projectile14. “Logic”, as used herein, includes but is not limited to hardware, firmware, software and/or combinations of each to perform a function(s) or an action(s), and/or to cause a function or action from another logic, method, and/or system. For example, based on a desired application or needs, logic may include a software controlled microprocessor, discrete logic like a processor (e.g., microprocessor), an application specific integrated circuit (ASIC), a programmed logic device, a memory device containing instructions, an electric device having a memory, or the like. Logic may include one or more gates, combinations of gates, or other circuit components. Logic may also be fully embodied as software. Where multiple logics are described, it may be possible to incorporate the multiple logics into one physical logic. Similarly, where a single logic is described, it may be possible to distribute that single logic between multiple physical logics. Furthermore, the logic(s) presented herein for accomplishing various methods of this system may be directed towards improvements in existing computer-centric or internet-centric technology that may not have previous analog versions. The logic(s) may provide specific functionality directly related to structure that addresses and resolves some problems identified herein. The logic(s) may also provide significantly more advantages to solve these problems by providing an exemplary inventive concept as specific logic structure and concordant functionality of the method and system. Furthermore, the logic(s) may also provide specific computer implemented rules that improve on existing technological processes. The logic(s) provided herein extends well beyond merely gathering data, analyzing the information, and displaying the results. Further, portions or all of the present disclosure may rely on underlying equations that are derived from the specific arrangement of the equipment or components as recited herein. Thus, portions of the present disclosure as it relates to the specific arrangement of the components are not directed to abstract ideas. Furthermore, the present disclosure and the appended claims present teachings that involve more than performance of well-understood, routine, and conventional activities previously known to the industry. 
In some of the methods or processes of the present disclosure, which may incorporate some aspects of natural phenomena, the process or method steps are additional features that are new and useful. The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.” The phrase “and/or,” as used herein in the specification and in the claims (if at all), should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising,” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc. As used herein in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e. “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the claims, shall have its ordinary meaning as used in the field of patent law. As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified.
Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc. In the claims, as well as in the specification above, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures. An embodiment is an implementation or example of the present disclosure. Reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” “one particular embodiment,” “an exemplary embodiment,” or “other embodiments,” or the like, means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the invention. The various appearances “an embodiment,” “one embodiment,” “some embodiments,” “one particular embodiment,” “an exemplary embodiment,” or “other embodiments,” or the like, are not necessarily all referring to the same embodiments. If this specification states a component, feature, structure, or characteristic “may”, “might”, or “could” be included, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to “a” or “an” element, that does not mean there is only one of the element. If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional element. Additionally, the method of performing the present disclosure may occur in a sequence different than those described herein. Accordingly, no sequence of the method should be read as a limitation unless explicitly stated. It is recognizable that performing some of the steps of the method in a different order could achieve a similar result. In the foregoing description, certain terms have been used for brevity, clearness, and understanding. No unnecessary limitations are to be implied therefrom beyond the requirement of the prior art because such terms are used for descriptive purposes and are intended to be broadly construed. Moreover, the description and illustration of various embodiments of the disclosure are examples and the disclosure is not limited to the exact details shown or described. | 47,619 |
11859957 | DETAILED DESCRIPTION The following discussion is directed to various exemplary embodiments. However, one skilled in the art will understand that the examples disclosed herein have broad application, and that the discussion of any embodiment is meant only to be exemplary of that embodiment, and not intended to suggest that the scope of the disclosure, including the claims, is limited to that embodiment. Certain terms are used throughout the following description and claims to refer to particular features or components. As one skilled in the art will appreciate, different persons may refer to the same feature or component by different names. This document does not intend to distinguish between components or features that differ in name but not function. The drawing figures are not necessarily to scale. Certain features and components herein may be shown exaggerated in scale or in somewhat schematic form and some details of conventional elements may not be shown in the interest of clarity and conciseness. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . ” Also, the term “couple” or “couples” is intended to mean either an indirect or direct connection. Thus, if a first device couples to a second device, that connection may be through a direct connection, or through an indirect connection via other devices, components, and connections. In addition, as used herein, the terms “axial” and “axially” generally mean along or parallel to a central axis (e.g., central axis of a body or a port), while the terms “radial” and “radially” generally mean perpendicular to the central axis. For instance, an axial distance refers to a distance measured along or parallel to the central axis, and a radial distance means a distance measured perpendicular to the central axis. Any reference to up or down in the description and the claims is made for purposes of clarity, with “up”, “upper”, “upwardly”, “uphole”, or “upstream” meaning toward the surface of the borehole and with “down”, “lower”, “downwardly”, “downhole”, or “downstream” meaning toward the terminal end of the borehole, regardless of the borehole orientation. Further, the term “fluid,” as used herein, is intended to encompass both fluids and gases. Referring now toFIG.1, a perforating gun or completion system10for completing a wellbore4extending into a subterranean formation6is shown. In the embodiment ofFIG.1, wellbore4is a cased wellbore including a casing string12secured to an inner surface8of the wellbore4using cement (not shown). In some embodiments, casing string12generally includes a plurality of tubular segments coupled together via a plurality of casing collars. Completion system10includes a surface assembly11positioned at a wellsite13of system10, and a tool string20deployable into wellbore4from a surface5using surface assembly11. Surface assembly11may comprise any suitable surface equipment for drilling, completing, and/or operating the well, and may include, in some embodiments, derricks, structures, pumps, electrical/mechanical well control components, etc. Tool string20of completion system10may be suspended within wellbore4from a wireline22that is extendable from surface assembly11.
Wireline22comprises an armored cable and includes at least one electrical conductor for transmitting power and electrical signals between tool string20and a control system or firing panel of surface assembly11positioned at the surface5. In some embodiments, system10may further include suitable surface equipment for drilling, completing, and/or operating completion system10and may include, for example, derricks, structures, pumps, electrical/mechanical well control components, etc. Tool string20is generally configured to perforate casing string12to provide for fluid communication between formation6and wellbore4at predetermined locations to allow for the subsequent hydraulic fracturing of formation6at the predetermined locations. In this embodiment, tool string20has a central or longitudinal axis25and generally includes a cable head24, a casing collar locator (CCL)26, a direct connect sub28, a pair of perforating guns or tools100A,100B, a reusable tandem sub200, a plug-shoot firing head (PSFH)40, a setting tool50, and a downhole or frac plug60. In other embodiments, the configuration of tool string20may vary from that shown inFIG.1. For example, in other embodiments, tool string20may include a fishing neck, weight bars, a release tool, and/or a safety sub selectably restricting electrical communication to one or more components of tool string20. Cable head24is the uppermost component of tool string20and includes an electrical connector for providing electrical signal and power communication between the wireline22and the other components (CCL26, perforating guns100A,100B, tandem sub200, PSFH40, setting tool50, etc.) of tool string20. CCL26is coupled to a lower end of the cable head24and is generally configured to transmit an electrical signal to the surface via wireline22when CCL26passes through a casing collar of casing string12, where the transmitted signal may be recorded at surface assembly11as a collar kick to determine the position of tool string20within wellbore4by correlating the recorded collar kick with an open hole log. The direct connect sub28is coupled to a lower end of CCL26and is generally configured to provide a connection between the CCL26and the portion of tool string20including perforating guns100A,100B and associated tools, such as the setting tool50and downhole plug60. A first or upper perforating gun100A of tool string20is coupled to direct connect sub28while a second or lower perforating gun100B of string20is coupled to tandem sub200which is positioned between the pair of perforating guns100A,100B. Perforating guns100A,100B are generally configured to perforate casing string12and provide for fluid communication between formation6and wellbore4. As will be described further herein, tandem sub200is configured to electrically connect perforating guns100A,100B while also providing pressure isolation between perforating guns100A,100B. Perforating guns100A,100B may be configured similarly to each other. Particularly, each perforating gun100A,100B includes a plurality of shaped charges that may be detonated by one or more electrical signals conveyed by the wireline22from the firing panel of surface assembly11to produce one or more explosive jets directed against casing string12. Each perforating gun100A,100B may comprise a wide variety of sizes such as, for example, 2¾″, 3⅛″, or 3⅜″, wherein the above listed size designations correspond to an outer diameter of the perforating gun100A,100B. PSFH40of tool string20is coupled to a lower end of the lower perforating gun100B. 
PSFH40couples the lower perforating gun100B of the tool string20to the setting tool50and downhole plug60and is generally configured to pass a signal from the wireline22to the setting tool50of tool string20. In this embodiment, PSFH40also includes electrical components to fire the setting tool50of tool string20. In this embodiment, tool string20further includes setting tool50and downhole plug60, where setting tool50is coupled to a lower end of PSFH40and is generally configured to set or install downhole plug60within casing string12to fluidically isolate desired segments of the wellbore4. Once downhole plug60has been set by setting tool50, an outer surface of downhole plug60seals against an inner surface of casing string12to restrict fluid communication through wellbore4across downhole plug60. Downhole plug60of tool string20may be any suitable downhole or frac plug known in the art while still complying with the principles disclosed herein. Referring toFIGS.2-5, embodiments of the perforating guns100A,100B, and tandem sub200of the tool string20ofFIG.1are shown inFIGS.2-4. In the embodiment ofFIGS.2-5, each perforating gun100A,100B generally includes an outer sleeve or housing102and a charge tube assembly120positionable within the outer housing102. The outer housing102of each perforating gun100A,100B includes a first or upper end103, a second or lower end105opposite upper end103, and a central bore or passage104within which charge tube assembly120is received. A generally cylindrical inner surface106defined by central passage104may include a releasable or threaded connector108at each longitudinal end103,105of outer housing102. In some embodiments, a generally cylindrical outer surface of the outer housing102may include a plurality of circumferentially and axially spaced recesses or scallops110to assist with the firing of perforating gun100A,100B; however, in other embodiments, outer housing102may not include scallops110. For example, in other embodiments, outer housing102may comprise a plurality of annular openings or rings to permit shaped charges of perforating guns100A,100B therethrough regardless of the relative angular orientation between the shaped charge and the outer housing102. The charge tube assembly120of each perforating gun100A,100B generally includes a cylindrical charge tube122, a first or upper endplate130, and a second or lower endplate140. The upper endplate130is coupled to a first or upper end124of charge tube122while the lower endplate140is coupled to a second or lower end126of the charge tube122opposite the upper end124. A plurality of circumferentially and axially spaced shaped charges150(only one of which is shown inFIGS.2-5) are positioned in the charge tube122of each charge tube assembly120. Particularly, each shaped charge150has an outer end oriented towards one of the scallops110of the outer housing102, and an inner end oriented towards the central axis of the perforating gun100A,100B. The charge tube122is configured to couple with and house each shaped charge150and orient the outer end of each shaped charge150towards one of the scallops110. Additionally, each perforating gun100A,100B includes det or detonating cord160which extends through the charge tube122of the perforating gun100A,100B.
Each shaped charge150is configured to initiate an explosion and emit an explosive charge from the outer end thereof and through one of the scallops110of outer housing102in response to receiving a ballistic signal from the det cord160extending through the charge tube122to which the shaped charge150is coupled. Particularly, the det cord160contacts or is otherwise ballistically coupled to the inner end of each shaped charge150. In this configuration, det cord160of each perforating gun100A,100B may communicate a ballistic signal to each of the shaped charges150of the perforating gun100A,100B. Each perforating gun100A,100B additionally includes a pair of electrical signal conductors or cables162,164(shown inFIG.4) which extend through the charge tube122of the perforating gun100A,100B. A first electrical cable162of the pair of electrical cables162,164may be electrically connected to charge tube122and may facilitate the electrical grounding of one or more components of tool string20, as will be discussed further herein. Additionally, the upper endplate130of the charge tube assembly120of each perforating gun100A,100B comprises an upper electrical connector132that is electrically connected or otherwise in signal communication with a second electrical cable164of the perforating gun100A,100B. The upper electrical connector132may comprise a longitudinally translatable contact pin134that is biased outwardly from upper endplate130by a biasing member136. The lower endplate140of the charge tube assembly120of each perforating gun100A,100B similarly comprises a lower electrical connector142that is electrically connected or otherwise in signal communication with the second electrical cable164of the perforating gun100A,100B. The lower electrical connector142may comprise a longitudinally translatable contact pin144that is biased outwardly from lower endplate140by a biasing member146. In this configuration, an electrical signal may be passed between the upper electrical connector132and the lower electrical connector142via second electrical cable164. First electrical cable162may also be referred to herein as a ground cable162while second electrical cable164may also be referred to herein as a through-wire cable164. The through-wire cable164of each perforating gun100A,100B may be in signal communication with an addressable switch (not shown inFIGS.2-5) configured to selectably detonate or initiate a detonator166of the perforating gun100A,100B which is ballistically coupled to det cord160. Detonator166may be positioned within the charge tube122of the perforating gun100A,100B and may be electrically connected to the switch of the perforating gun100A,100B via a pair of electrical leads168extending therebetween. Detonator166of each perforating gun100A,100B may be selectably detonated by surface assembly11. For example, surface assembly11may transmit a first firing signal addressed to the switch of lower perforating gun100B through wireline22to upper perforating gun100A. The first firing signal may pass through the upper perforating gun100A (via through-wire cable164of upper perforating gun100A) and tandem sub200, entering lower perforating gun100B. The first firing signal may be communicated to the addressable switch of lower perforating gun100B via the through-wire cable164of lower perforating gun100B. Being addressed to the lower perforating gun100B, the switch of gun100B may detonate the detonator166thereof in response to receiving the first firing signal.
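The addressed firing described above (and the similarly addressed second firing signal described in the following paragraph) can be made concrete with a short sketch. The following Python model is hypothetical and not part of the patent: it assumes a simple address-matching rule in which every switch along the through-wire path receives each firing signal but only the switch whose address matches initiates its detonator; the class and function names are illustrative only.

# Hypothetical model (not from the patent) of addressable-switch firing:
# each firing signal is conveyed along the through-wire path to every
# switch, and only the switch whose address matches the signal initiates
# its detonator.

class AddressableSwitch:
    def __init__(self, address):
        self.address = address
        self.fired = False

    def receive(self, signal_address):
        # Initiate the detonator only if the signal carries this address.
        if signal_address == self.address and not self.fired:
            self.fired = True
            print(f"switch {self.address}: detonator initiated")
        else:
            print(f"switch {self.address}: signal passed through")

# Tool string order from the surface: upper gun 100A, then lower gun 100B.
switches = [AddressableSwitch("100A"), AddressableSwitch("100B")]

def send_firing_signal(address):
    # The wireline conveys the signal to every switch on the path.
    for switch in switches:
        switch.receive(address)

send_firing_signal("100B")  # first firing signal: lower gun fires first
send_firing_signal("100A")  # second firing signal: upper gun fires next

Firing the lower gun first and addressing each signal individually is what allows both guns to share a single through-wire path across tandem sub200without being detonated together.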
Similarly, following the actuation of lower perforating gun100B, surface assembly11may transmit a second firing signal addressed to the switch of upper perforating gun100A through wireline22to upper perforating gun100A. The second firing signal may be communicated to the addressable switch of upper perforating gun100A via the through-wire cable164of gun100A. Being addressed to the upper perforating gun100A, the switch of gun100A may detonate the detonator166thereof in response to receiving the second firing signal. Referring toFIGS.4-6, tandem sub200of tool string20is generally configured to communicate electrical signals therethrough and between the pair of perforating guns100A,100B. Additionally, tandem sub200is configured to provide a pressure bulkhead whereby upper perforating gun100A is isolated from pressure within lower perforating gun100B and vice versa. In other words, pressure within central passage104of the outer housing102of lower perforating gun100B is not communicated and does not act upon the central passage104of the outer housing102of upper perforating gun100A and vice versa. In this manner, the pressure generated within lower perforating gun100B following the detonation of the shaped charges150thereof may not be transferred to the components (e.g., the addressable switch, detonator166, shaped charges150) of the upper perforating gun100A. In this embodiment, tandem sub200of tool string20has a central or longitudinal axis205(concentric with central axis25of tool string20) and generally includes a cylindrical outer housing202and a molded pass-thru assembly240. Outer housing202may be integrally or monolithically formed and may comprise a metallic material such as alloy steel, mild steel, etc. The outer housing202of tandem sub200includes a first or upper end204, a second or lower end206opposite upper end204, a central bore or passage208defined by a generally cylindrical inner surface210extending between ends204,206, and a generally cylindrical outer surface212extending between ends204,206. As shown particularly inFIG.6, outer surface212of outer housing202includes a pair of releasable or threaded connectors214positioned at the ends204,206thereof and a pair of annular seal assemblies216positioned axially between the releasable connectors214. The releasable connector214positioned at the upper end204of outer housing202is configured to releasably or threadably connect to the releasable connector108positioned at the lower end105of the outer housing102of upper perforating gun100A while the releasable connector214positioned at the lower end206of outer housing202is configured to releasably or threadably connect to the releasable connector108positioned at the upper end103of the outer housing102of lower perforating gun100B. In other embodiments, outer housing202may couple to perforating guns100A,100B via mechanisms other than releasable connectors214. Additionally, a first or upper seal assembly216of the pair of seal assemblies216is configured to sealingly engage the inner surface106of the outer housing102of upper perforating gun100A while a second or lower seal assembly216of the pair of seal assemblies216is configured to sealingly engage the inner surface106of the outer housing102of lower perforating gun100B upon assembly of the tandem sub200with the perforating guns100A,100B. Seal assemblies216may each comprise a pair of O-rings positioned in grooves formed in the outer surface212of outer housing202; however, in other embodiments, the configuration of seal assemblies216may vary.
The inner surface210of outer housing202includes a pair of radially extending annular outer shoulders or faces218, and a pair of receptacles220extending axially from the outer shoulders218. The pair of receptacles220may each comprise one or more surface features or protrusions221configured to increase an area of receptacles220along the portions of receptacles220along which protrusions221extend. Protrusions221may comprise one or more annular ridges or splines and thus may also be referred to herein as ridges221; however, in other embodiments, the configuration of protrusions221may vary. For instance, in other embodiments, protrusions221may comprise hemispherical, conical, and/or frustoconical projections or dimples which extend radially inwards towards central axis205. The central passage208of outer housing202comprises a pass-thru passage222extending from a first or upper outer face218to an opposing second or lower outer face218. Additionally, the central passage208comprises a pair of outer recesses224extending between outer faces218and the upper and lower ends204,206of outer housing202. In other embodiments, outer housing202may not include receptacles220and/or protrusions221, and instead, pass-thru passage222may extend entirely between outer faces218. Pass-thru assembly240of tandem sub200generally comprises an electrical conductor or pass-thru242and a molded insulator260in which the electrical conductor242is positioned. In this embodiment, electrical conductor242comprises a cylindrical signal bar and thus may also be referred to herein as signal bar242. As shown particularly inFIG.6, signal bar242comprises a pair of opposing longitudinal ends244and a generally cylindrical outer surface246extending between longitudinal ends244. Signal bar242may be integrally or monolithically formed and may comprise an electrically conductive material such as, for example, brass. Signal bar242has a longitudinal length245extending between the longitudinal ends244. Each longitudinal end244of signal bar242may be spaced inwardly (towards the center of tandem sub200) from the outer faces218of outer housing202such that an axially extending gap is formed between each longitudinal end244and the outer faces218. Signal bar242is rigid entirely across the longitudinal length245and each end244is not biased axially outwards from passage222by a biasing member. In other words, pass-thru assembly240does not include a biasing member for biasing any component or feature of pass-thru assembly240, including signal bar242. Additionally, the longitudinal length245of signal bar242is greater than half the longitudinal length of the passage222. A conical recess or receptacle248may be formed in each longitudinal end244of signal bar242such that each conical receptacle248extends concentrically with central axis205. The outer surface246of signal bar242may comprise one or more surface features or protrusions250configured to increase an area of outer surface246along the portions of outer surface246along which protrusions250extend. Protrusions250may comprise one or more annular ridges or splines and thus may also be referred to herein as ridges250; however, in other embodiments, the configuration of protrusions250may vary. For instance, in other embodiments, protrusions250may comprise hemispherical, conical, and/or frustoconical projections or dimples which extend radially outwards from central axis205.
As shown particularly inFIG.4, molded insulator260of pass-thru assembly240may comprise a pair of opposed longitudinal ends262and may entirely fill an annulus226of the central passage208of outer housing202formed between the outer surface246of signal bar242and the inner surface210of outer housing202. The molded insulator260may be integrally or monolithically formed and may comprise an electrically insulating material. In some embodiments, molded insulator260may comprise a polymeric material such as Polyether ether ketone (PEEK), Polyetherimide (PEI), etc.; however, molded insulator260may comprise various electrically insulating materials. In this manner, molded insulator260may electrically insulate signal bar242from outer housing202which may comprise an electrically conductive material in some embodiments. The longitudinal ends262of molded insulator260may be positioned at the interfaces between receptacles220and outer faces218of outer housing202. Molded insulator260may have a maximum length265extending between longitudinal ends262which is greater than the maximum length245of signal bar242. Molded insulator260may adhere to both the inner surface210of outer housing202and the outer surface246of signal bar242thereby coupling or affixing signal bar242to outer housing202whereby relative axial and rotational movement between signal bar242and outer housing202may be restricted. Molded insulator260may be annular and comprise a central passage264defined by a generally cylindrical inner surface266extending between longitudinal ends262and a generally cylindrical outer surface268also extending between ends262. Signal bar242may be received within the central passage264of molded insulator260. The inner surface266of molded insulator260may sealingly engage and adhere to the outer surface246of signal bar242while the outer surface268of molded insulator260may sealingly engage and be adhered to the inner surface210of outer housing202, thereby restricting fluid communication and isolating pressure across annulus226. The inner surface266of molded insulator260may comprise one or more surface features or inner protrusions270configured to increase an area of inner surface266along the portions of inner surface266along which inner protrusions270extend. Similarly, the outer surface268of molded insulator260may comprise one or more surface features or outer protrusions272configured to increase an area of outer surface268along the portions of outer surface268along which outer protrusions272extend. Protrusions270,272may comprise one or more annular ridges or splines and thus may also be referred to herein as ridges270,272; however, in other embodiments, the configuration of protrusions270,272may vary. For instance, in other embodiments, protrusions270,272may each comprise hemispherical, conical, and/or frustoconical projections or dimples which extend radially with respect to central axis205. Outer protrusions272of molded insulator260may interlockingly engage with protrusions221of outer housing202to enhance the degree or quality of coupling between molded insulator260and outer housing202. Similarly, inner protrusions270of molded insulator260may interlockingly engage with the protrusions250of signal bar242to enhance the degree or quality of coupling between molded insulator260and signal bar242.
In this manner, protrusions270,272may allow tandem sub200to be operated in relatively more extreme applications (applying a greater differential pressure across pass-thru assembly240) while maintaining a seal and pressure isolation between the upper end204and lower end206of outer housing202. However, in other embodiments, outer housing202may not include protrusions221, signal bar242may not include protrusions250, and molded insulator260may not include protrusions270,272; instead, adhesion between molded insulator260and the outer housing202and signal bar242formed during the formation of molded insulator260may maintain the coupling between molded insulator260and both outer housing202and signal bar242. A frustoconical recess or receptacle274may be formed in each longitudinal end262of molded insulator260such that each frustoconical receptacle274extends concentrically with central axis205. The longitudinal ends244of signal bar242may be positioned at inner ends of frustoconical receptacles274. In other words, each longitudinal end244of signal bar242is positioned in a corresponding frustoconical receptacle274of molded insulator260. In some embodiments, the conical receptacles248of signal bar242are flush with the frustoconical receptacles274of molded insulator260. An axial gap is formed between the longitudinal ends244of signal bar242and the longitudinal ends262of molded insulator260with the longitudinal ends244of signal bar242being recessed within the frustoconical receptacles274of molded insulator260. In this configuration, there is no outward projection or pin of signal bar242extending from one of the frustoconical receptacles274that may be inadvertently damaged or broken off during operation of tandem sub200. Following the assembly of tandem sub200with perforating guns100A,100B, the contact pin144of the lower electrical connector142of upper perforating gun100A may be received in the conical receptacle248positioned at an upper longitudinal end244of signal bar242, thereby establishing electrical contact and signal communication between upper perforating gun100A and tandem sub200. Similarly, contact pin134of the upper electrical connector132of lower perforating gun100B may be received in the conical receptacle248positioned at a lower longitudinal end244of signal bar242, thereby establishing electrical contact and signal communication between lower perforating gun100B and tandem sub200. The conical shape of frustoconical receptacles274of molded insulator260and conical receptacles248of signal bar242may guide contact pins134,144into aligned engagement with conical receptacles248. Referring toFIGS.6,7, an exemplary method for producing tandem sub200is shown therein. Beginning atFIG.7, following the fabrication of outer housing202and a cylindrical rod242′ (shown inFIG.7) from which signal bar242will be formed, cylindrical rod242′ may be held centrally within the central passage208of outer housing202by a fixture of a mold assembly (not shown inFIGS.6,7). With cylindrical rod242′ positioned centrally in outer housing202, a pair of endcaps (not shown inFIGS.6-8) of the mold assembly may be inserted into the outer recesses224of outer housing202such that the endcaps sealingly engage the outer faces218of housing202. With the endcaps so positioned, mold material (e.g., a polymeric material in some embodiments) may be injected (via a port formed in one of the endcaps) into the annulus226until the entire annulus226has been filled with the mold material.
A molded member260′ (shown inFIG.7) may be formed in annulus226following curing of the mold material, the molded insulator260being formed from the molded member260′. Particularly, longitudinal ends of both the cylindrical rod242′ (now sealably adhered to the outer housing202by molded member260′) and molded member260′ may be machined to form conical recesses248in cylindrical rod242′ and frustoconical recesses274in molded member260′, thereby forming signal bar242and molded insulator260, respectively.FIGS.6,7represent one method for producing the tandem sub200shown inFIG.6, and in other embodiments other manufacturing methods may be used for producing tandem sub200. As described above, tandem sub200provides an electrical pass-thru for signal communication between perforating guns100A,100B via the pass-thru assembly240thereof. Instead of relying on separate and distinct sealing elements, such as elastomeric O-rings, to seal the ends of tandem sub200, sub200may utilize the sealable adhesion produced between molded insulator260and the outer housing202and signal bar242to seal the ends of tandem sub200and to isolate pressure thereacross. Given that pass-thru assembly240does not rely on fragile elastomeric sealing elements (e.g., one or more O-rings, etc.) which must be refurbished or replaced after each use of tandem sub200, tandem sub200may be reused an indefinite number of times without needing to replace pass-thru assembly240. Additionally, by comprising only a single signal bar242that may be encased in a monolithically formed and relatively thick molded insulator260that extends continuously across the entire length245of signal bar242, pass-thru assembly240may reduce the risk of electrical leakage between signal bar242and outer housing202relative to other electrical contact assemblies which do not include a monolithic insulator. In some embodiments, a ratio of an outer diameter of molded insulator260to an outer diameter of signal bar242may be five or greater. Further, given that pass-thru assembly240may comprise only a single monolithically formed signal bar242, the electrical connection formed between perforating guns100A,100B via tandem sub200includes only two electrical contact points—the points of contact between contact pins144,134of perforating guns100A,100B, respectively, and the longitudinal ends244of signal bar242. Thus, pass-thru assembly240may have a lesser number of electrical contact points, which are susceptible to failure during operation, relative to other assemblies which rely on a plurality of components to provide electrical continuity. Referring toFIGS.8,9, another embodiment of a tandem sub300is shown. Tandem sub300may be used in conjunction with or in lieu of the tandem sub200shown inFIGS.4-7. Additionally, tandem sub300may include features in common with tandem sub200shown inFIGS.4-7, and shared features are labeled similarly. Tandem sub300generally includes an outer housing202′, pass-thru assembly240, and a pair of blast washers302and retaining rings310. Blast washers302may each be disc shaped and include an annular inner face304and an annular outer face306. Blast washers302may act as sacrificial elements configured to absorb the impact of the detonation of a perforating gun (e.g., one of perforating guns100A,100B) positioned adjacent thereto while protecting the inner surface210′ of outer housing202′.
In some embodiments, blast washers302may comprise a hardened material whereas in other embodiments blast washers302may comprise a material having similar properties to the material comprising outer housing202′. When assembled with outer housing202′, the inner face304of blast washers302may contact and cover the outer faces218of outer housing202′ and potentially at least a portion of the longitudinal ends262of molded insulator260, thereby protecting outer faces218and the longitudinal ends262of molded insulator260from the impact of the detonation of a perforating gun positioned adjacent tandem sub300. By protecting outer faces218of outer housing202′ and the longitudinal ends262of molded insulator260, blast washers302may reduce or eliminate a potential need to resurface (e.g., grind, machine, and/or polish, etc.) outer faces218and/or the longitudinal ends262of molded insulator260after one or more uses of tandem sub300, thereby minimizing the costs for operating tandem sub300. The outer housing202′ of tandem sub300may be similar in configuration to the outer housing202shown inFIGS.4-7except that the inner surface210′ of outer housing202′ may comprise a pair of annular recesses or lips228formed therein proximal the ends204,206of outer housing202′. Retaining rings310may be snapped into lips228of outer housing202′ to secure blast washers302to outer housing202′ via contact between retaining rings310and the outer faces of blast washers302. In some embodiments, retaining rings310may comprise radially expandable C-rings. Referring toFIGS.10,11, another embodiment of a tandem sub400is shown which may be used in conjunction with or in lieu of the tandem sub200shown inFIGS.4-7. Additionally, tandem sub400may include features in common with tandem sub200shown inFIGS.4-7, and shared features are labeled similarly. Tandem sub400generally includes an outer housing402, a pair of pressure bulkheads or pass-thru assemblies410, and an electrical connector440extending between the pair of pressure bulkheads410. Housing402is similar in configuration to the housing202of tandem sub200shown inFIGS.4-7, and thus will not be described in detail. In this embodiment, each pressure bulkhead410comprises an electrically insulating outer retainer412and an inner electrical connector420. Retainer412may be overmolded onto the electrical connector420. In some embodiments, retainer412may comprise a plastic material while electrical connector420comprises a metallic, electrically conductive material. As shown particularly inFIG.11, retainer412of each pressure bulkhead410comprises a receptacle414which guides and receives the contact pin134,144of one of the electrical connectors132,142. Additionally, a generally cylindrical outer surface of the retainer412of each pressure bulkhead410comprises a releasable or threaded coupler416. In this embodiment, protrusions221of outer housing402comprise internal threads which threadably couple with the threaded couplers416of pressure bulkheads410to retain pressure bulkheads410within the central passage208of outer housing402. Further, a pair of annular seals or O-rings418are positioned within corresponding grooves formed on the outer surface of each retainer412. O-rings418seal against the inner surface210of outer housing402to provide bi-directional pressure isolation between upper perforating gun100A and lower perforating gun100B.
Also, as shown particularly inFIG.11, the electrical connector420of each pressure bulkhead410comprises a first or inner end comprising a contact pin422and a second or outer end comprising a conical recess or receptacle424. Contact pin422projects outwardly from retainer412while conical receptacle424is recessed within retainer412such that the receptacle414of retainer412extends towards the conical receptacle424of electrical connector420. The contact pin422of each pressure bulkhead410contacts electrical connector440to form an electrical connection between pressure bulkheads410. In this embodiment, electrical connector440comprises a biasing element or coil spring; however, in other embodiments, electrical connector440may comprise other types of electrical connectors including electrically conductive tubes, rods, cables, etc. When tandem sub400is assembled with perforating guns100A,100B, the contact pins134,144of electrical connectors132,142are received in the conical receptacles424of the electrical connectors420of pressure bulkheads410to establish an electrical connection between upper perforating gun100A and lower perforating gun100B. Thus, tandem sub400may provide electrical connectivity between perforating guns100A,100B while also isolating or preventing the transmission of pressure from upper perforating gun100A to lower perforating gun100B and from lower perforating gun100B to upper perforating gun100A. Referring toFIGS.12,13, another embodiment of a tandem sub450is shown which may be used in conjunction with or in lieu of the tandem sub200shown inFIGS.4-7. Tandem sub450may include features in common with tandem sub200shown inFIGS.4-7and tandem sub400shown inFIGS.10,11, and shared features are labeled similarly. Tandem sub450is configured to electrically connect and provide bi-directional pressure isolation between perforating guns445A,445B. Perforating guns445A,445B are similar to the perforating guns100A,100B, respectively, shown inFIGS.3,4, except that electrical connectors132,142each comprise receptacles446rather than pin connectors134,144, respectively. Tandem sub450generally includes an outer housing452, a pair of pressure bulkheads or pass-thru assemblies460, a pair of bulkhead retainers480, and an electrical connector490extending between the pair of pressure bulkheads460. Housing452is similar in configuration to the housing202of tandem sub200shown inFIGS.4-7, and thus will not be described in detail. In this embodiment, each pressure bulkhead460comprises an outer electrical insulator462and an inner electrical connector470. Insulator462may be overmolded onto the electrical connector470. In some embodiments, insulator462may comprise a plastic material while electrical connector470comprises a metallic, electrically conductive material. In other embodiments, insulator462may comprise an electrically insulating coating applied onto electrical connector470. For example, insulator462may comprise a non-conductive metallic material which is coated onto electrical connector470via an anodizing process. As shown particularly inFIG.13, a pair of annular seals or O-rings464are positioned within corresponding grooves formed on the outer surface of the insulator462of each pressure bulkhead460. O-rings464seal against the inner surface210of outer housing452to provide bi-directional pressure isolation between an upper perforating gun445A and a lower perforating gun445B.
Also, as shown particularly inFIG.13, the electrical connector470of each pressure bulkhead460comprises a first or inner end comprising a conical recess or receptacle472and a second or outer end comprising a contact pin474. Contact pin474projects outwardly from insulator462while conical receptacle472is recessed within insulator462. The contact pins474of pressure bulkheads460may be received in the receptacles446of the electrical connectors142,132of perforating guns445A,445B, respectively, to form an electrical connection between tandem sub450and perforating guns445A,445B. Additionally, opposing terminal ends of the electrical connector490are received in the conical receptacles472of the electrical connectors470of pressure bulkheads460to establish an electrical connection between the pair of pressure bulkheads460. In this embodiment, electrical connector490comprises a biasing element or coil spring; however, in other embodiments, electrical connector490may comprise other types of electrical connectors including electrically conductive tubes, rods, cables, etc. The bulkhead retainers480are configured to retain pressure bulkheads460within their respective receptacles220of outer housing452. In this embodiment, each bulkhead retainer480comprises an outer surface including a releasable or threaded connector482. Additionally, in this embodiment, protrusions221of outer housing452comprise internal threads which threadably couple with the threaded connectors482of the bulkhead retainers480to retain or capture pressure bulkheads460. When tandem sub450is assembled with perforating guns445A,445B, an electrical connection is established between perforating guns445A,445B via the pressure bulkheads460and electrical connector490. Additionally, pressure isolation is provided between perforating guns445A,445B via the sealing engagement between pressure bulkheads460and outer housing452whereby the transmission of pressure from upper perforating gun445A to lower perforating gun445B and from lower perforating gun445B to upper perforating gun445A is prevented. Referring toFIG.14, another embodiment of a tandem sub500is shown which may be used in conjunction with or in lieu of the tandem sub200shown inFIGS.4-7. Tandem sub500may include features in common with tandem sub200shown inFIGS.4-7and tandem sub450shown inFIGS.12,13, and shared features are labeled similarly. Tandem sub500generally includes an outer housing502, a pressure bulkhead or pass-thru assembly530, and a single bulkhead retainer480. Outer housing502of tandem sub500includes features in common with the outer housing202shown inFIGS.4-7, and shared features are labeled similarly. Outer housing502includes a central bore or passage504defined by a generally cylindrical inner surface506extending between opposing ends of outer housing502. In this embodiment, the inner surface506of outer housing502includes a first or outer receptacle508and a second or inner receptacle510each positioned proximal a first or upper end of the outer housing502which connects to upper perforating gun100A. Outer receptacle508of inner surface506has a greater diameter than a diameter of inner receptacle510and is separated from inner receptacle510by an annular shoulder formed on inner surface506. Additionally, inner receptacle510has a larger diameter than a diameter of the segment or portion of inner surface506which extends directly from inner receptacle510.
An annular second or inner shoulder512is positioned between inner receptacle510and the segment of inner surface506which extends directly from inner receptacle510. Pressure bulkhead530of tandem sub500includes an outer electrical insulator532and an inner electrical conductor or connector540. A first or upper pair of annular seals or O-rings534are positioned within corresponding grooves formed on an outer surface of the insulator532proximal a first or upper end thereof while a second or lower pair of annular seals or O-rings536are positioned within corresponding grooves formed on the outer surface of the insulator532proximal a second or lower end thereof. Upper O-rings534have a greater diameter than a diameter of lower O-rings536. Each pair of O-rings534,536sealingly engage the inner surface506of outer housing502. Insulator532may be overmolded onto the electrical connector540. In some embodiments, insulator532may comprise a plastic material while electrical connector540comprises a metallic, electrically conductive material. In other embodiments, insulator532may comprise an electrically insulating coating applied onto electrical connector540. For example, insulator532may comprise a non-conductive metallic material which is coated onto electrical connector540via an anodizing process. The electrical connector540of pressure bulkhead530comprises a pair of conical recesses or receptacles542positioned at opposing terminal ends thereof. In this embodiment, a first or upper end531of pressure bulkhead530is greater in outer diameter than an outer diameter of a second or lower end533of pressure bulkhead530opposite upper end531. The upper end531of pressure bulkhead530is received within the inner receptacle510of outer housing502while the lower end533of pressure bulkhead530is received within the segment of the inner surface506of outer housing502which extends directly from inner receptacle510. Bulkhead retainer480is received in the outer receptacle508and the threaded connector482thereof threadably connects to threads221of outer housing502. Pressure bulkhead530may be slidably inserted into the central passage504of outer housing502. Following the coupling of bulkhead retainer480with outer housing502, relative movement between pressure bulkhead530and outer housing502is restricted via engagement between the upper end531of pressure bulkhead530and both the bulkhead retainer480and the inner shoulder512of outer housing502. Following coupling of tandem sub500with perforating guns100A,100B, guns100A,100B may be electrically connected via contact between the contact pins134,144of electrical connectors132,142, respectively, and the conical receptacles542of pressure bulkhead530. Referring toFIG.15, another embodiment of a tandem sub550is shown which may be used in conjunction with or in lieu of the tandem sub200shown inFIGS.4-7. Tandem sub550may include features in common with tandem sub200shown inFIGS.4-7, and shared features are labeled similarly. Tandem sub550generally includes an outer housing552and a pressure bulkhead or pass-thru assembly570. As with the tandem subs described above, including tandem sub200shown inFIGS.4-7, tandem sub550is generally configured to electrically connect upper and lower perforating guns together while isolating pressure within the lower perforating gun from the upper perforating gun, and isolating pressure within the upper perforating gun from the lower perforating gun.
In this embodiment, tandem sub550is configured to electrically connect and provide bi-directional pressure isolation between perforating guns545A,545B. Perforating guns545A,545B are similar to the perforating guns100A,100B, respectively, shown inFIGS.3,4, except that an outer housing546of each perforating gun545A,545B comprises a first or upper pin connector547and a second or lower box connector548opposite the upper pin connector547. Thus, instead of including a housing102which comprises box threaded connectors108at each end thereof with a tandem sub threadably connected between outer housings102(shown inFIGS.3,4), the outer housings546of perforating guns545A,545B may be threadably connected directly together in a box-by-pin configuration. In this configuration, tandem sub550is sandwiched between the outer housings546of perforating guns545A,545B instead of being threadably connected with either or both of perforating guns545A,545B, as will be discussed further herein. Allowing for the direct connection of lower perforating gun545B with upper perforating gun545A may allow for a reduction of the overall axial length of the assembled perforating guns545A,545B, thereby increasing the ease with which a tool string comprising perforating guns545A,545B may be deployed through a wellbore. Outer housing552includes a central bore or passage554defined by a generally cylindrical inner surface556extending between opposing ends of outer housing552, and a generally cylindrical outer surface558also extending between the opposing ends of outer housing552. In this embodiment, the outer surface558of outer housing552comprises an annular shoulder560, a first or upper pair of annular seals or O-rings562, and a second or lower pair of annular seals or O-rings564. In this embodiment, shoulder560is positioned between the upper O-rings562and the lower O-rings564whereby a diameter of the upper O-rings562is greater than a diameter of the lower O-rings564. Pass-thru assembly570of tandem sub550generally comprises an electrical conductor or pass-thru572and a molded insulator574in which the electrical conductor572is positioned. In this embodiment, electrical conductor572comprises a cylindrical signal bar and thus may also be referred to herein as signal bar572. Signal bar572may be integrally or monolithically formed and may comprise an electrically conductive material such as, for example, brass. A conical recess or receptacle576may be formed in each longitudinal end of signal bar572. A generally cylindrical outer surface of signal bar572may comprise one or more surface features or protrusions578. Protrusions578may comprise one or more annular ridges or splines and thus may also be referred to herein as ridges578; however, in other embodiments, the configuration of protrusions578may vary. For instance, in other embodiments, protrusions578may comprise hemispherical, conical, and/or frustoconical projections or dimples which extend radially outwards. Molded insulator574of pass-thru assembly570may entirely fill and thereby seal an annulus566of the central passage554of outer housing552formed between the outer surface of signal bar572and the inner surface556of outer housing552. The molded insulator574may be integrally or monolithically formed and may comprise an electrically insulating material. In some embodiments, molded insulator574may comprise a polymeric material such as Polyether ether ketone (PEEK), Polyetherimide (PEI), etc.; however, molded insulator574may comprise various electrically insulating materials.
Molded insulator574may adhere to both the inner surface556of outer housing552and the outer surface of signal bar572thereby coupling or affixing signal bar572to outer housing552whereby relative axial and rotational movement between signal bar572and outer housing552may be restricted. Additionally, ridges578of signal bar572may interlock with corresponding ridges of molded insulator574formed during the molding process to lock the molded insulator574with signal bar572. Further, a plurality of protrusions or ridges568formed on the inner surface556of housing552may interlock with corresponding ridges of molded insulator574formed during the molding process to lock molded insulator574with housing552. In this manner, molded insulator574may electrically insulate signal bar572from outer housing552while also preventing the communication of pressure from within lower perforating gun545B to upper perforating gun545A and from within upper perforating gun545A to lower perforating gun545B. In this embodiment, signal bar572may be in its completed or fully machined state when molded insulator574is formed thereon, such that no additional machining must be performed following the forming of molded insulator574onto signal bar572. This is in contrast to the signal bar242shown inFIGS.4-7which is machined following the forming of molded member260′. Tandem sub550may be assembled with perforating guns545A,545B by inserting a lower end of tandem sub550into an upper end of lower perforating gun545B. The housing546of upper perforating gun545A may then be threadably connected to the housing546of lower perforating gun545B whereby signal bar572establishes electrical connectivity with the electrical connectors142,132of perforating guns545A,545B, respectively. In this assembled configuration, an upper end of the outer housing552of tandem sub550contacts an annular inner shoulder549of the housing546of upper perforating gun545A while an upper terminal end of the housing546of the lower perforating gun545B contacts the shoulder560of outer housing552, thereby restricting relative axial movement between tandem sub550and both perforating guns545A,545B. Referring toFIG.16, another embodiment of a tandem sub600is shown which may be used in conjunction with or in lieu of the tandem sub200shown inFIGS.4-7. Tandem sub600may include features in common with tandem sub200shown inFIGS.4-7, and shared features are labeled similarly. Tandem sub600generally includes an outer housing602, a single pressure bulkhead410, an addressable switch620, and a detonator630. As with the tandem subs described above, including tandem sub200shown inFIGS.4-7, tandem sub600is generally configured to electrically connect upper and lower perforating guns together while isolating pressure within the lower perforating gun from the upper perforating gun, and isolating pressure within the upper perforating gun from the lower perforating gun. In this embodiment, tandem sub600is configured to electrically connect and provide bi-directional pressure isolation between perforating guns585A,585B. Perforating guns585A,585B are similar to the perforating guns100A,100B, respectively, shown inFIGS.3,4, except that the detonator and addressable switch associated with each perforating gun585A,585B is positioned within a tandem sub600associated with the particular perforating gun585A,585B.
For example, the tandem sub600shown inFIG.16is associated with upper perforating gun585A and thus addressable switch620is configured to detonate the shaped charges of upper perforating gun585A via detonator630and a det cord586which extends through a lower endplate588of upper perforating gun585A. Lower perforating gun585B may be configured similarly to upper perforating gun585A and thus another detonator and addressable switch positioned downhole from lower perforating gun585B (e.g., within another tandem sub600) may be associated with lower perforating gun585B. The outer housing602of tandem sub600comprises a central passage604defined by a generally cylindrical inner surface606extending between opposed ends of outer housing602. The pressure bulkhead410sealingly engages the inner surface606of outer housing602to prevent the communication of pressure from lower perforating gun585B to upper perforating gun585A, and from upper perforating gun585A to lower perforating gun585B. In this embodiment, a chassis615of tandem sub600may be received within the central passage604of outer housing602and houses both addressable switch620and detonator630. Chassis615includes an outer surface comprising a releasable or threaded connector616which is configured to threadably connect with internal threads221of outer housing602to secure both addressable switch620and detonator630within the central passage604of outer housing602. The addressable switch620received in chassis615is electrically connected to detonator630via a first electrical cable or wire622. Additionally, in this embodiment, addressable switch620is electrically connected to upper perforating gun585A via a second electrical cable or wire624which connects to an electrical connector (e.g., a pin-and-socket style connector) formed in the lower endplate588of upper perforating gun585A. Further, addressable switch620comprises an electrical connector (e.g., a pin-and-socket style connector) which electrically connects with pressure bulkhead410to provide an electrical connection between addressable switch620and the lower perforating gun585B. Referring toFIG.17, another embodiment of a tandem sub650is shown which may be used in conjunction with or in lieu of the tandem sub200shown inFIGS.4-7. Tandem sub650may include features in common with tandem sub200shown inFIGS.4-7, and shared features are labeled similarly. Tandem sub650generally includes an outer housing652and a pressure bulkhead or pass-thru assembly670. Outer housing652generally includes a first end654, a second end656opposite first end654, and a central bore or passage658defined by a generally cylindrical inner surface660. In this embodiment, each end654,656of outer housing652may comprise an annular endface that is generally planar. The pass-thru assembly670of tandem sub650generally comprises an inner electrical connector or signal bar assembly672and a generally cylindrical outer insulator690. In this embodiment, signal bar assembly672comprises a pair of signal bars672A,672B which are coupled together at a threaded coupling674formed therebetween when tandem sub650is assembled. Each signal bar672A,672B comprises an outer annular endplate676having a conical recess or receptacle678formed therein and configured to receive one of the contact pins134,144of electrical connectors132,142, respectively. Each endplate676is positioned external the central passage658of outer housing652and has a diameter that is greater than a maximum inner diameter of the inner surface660of outer housing652.
Endplates676are each electrically insulated from outer housing652by a pair of electrically insulating, disc shaped gaskets or pads680which are positioned axially between the endplates676and the ends654,656of outer housing652. In other embodiments, endplates676may each be partially coated or overmolded by an electrically insulating material (excluding conical receptacles678) to electrically insulate endplates676from outer housing652without needing to use insulating pads680. The portions of signal bars672A,672B positioned within the central passage658of outer housing652are electrically insulated from housing652by the cylindrical insulator690which is overmolded or otherwise coated onto (e.g., anodized, etc.) signal bars672A,672B. In other embodiments, signal bars672A,672B may not include insulator690and instead a radial gap formed between the outer surfaces of signal bars672A,672B and the inner surface660of outer housing652may ensure that signal bars672A,672B are not electrically connected to outer housing652. In this embodiment, tandem sub650may be assembled by inserting signal bars672A,672B into central passage658at ends654,656, respectively, with an insulating pad680positioned adjacent each endplate676. Signal bars672A,672B may be threadably connected together to form threaded connection674such that signal bars672A,672B are secured together thereby forming signal bar assembly672which is axially locked to the outer housing652of tandem sub650. In this configuration, tandem sub650may be assembled with perforating guns100A,100B (partially shown inFIG.17) whereby tandem sub650may provide an electrical connection between guns100A,100B while preventing the communication of pressure from upper perforating gun100A to lower perforating gun100B and from lower perforating gun100B to upper perforating gun100A. Referring toFIG.18, another embodiment of a tandem sub700is shown which may be used in conjunction with or in lieu of the tandem sub200shown inFIGS.4-7. Tandem sub700may include features in common with tandem sub200shown inFIGS.4-7, and shared features are labeled similarly. Tandem sub700generally includes an outer housing702and a pressure bulkhead or pass-thru assembly720. Outer housing702generally includes a first end704, a second end706opposite first end704, and a central bore or passage708defined by a generally cylindrical inner surface710. In this embodiment, each end704,706of outer housing702may comprise an annular endface that is generally planar. The pass-thru assembly720of tandem sub700generally comprises an electrical connector or signal bar722. Signal bar722comprises a first end which includes a conical recess or receptacle724and a second end, opposite the first end, which comprises an annular endplate726. The pin connector144of the electrical connector142of upper perforating gun100A may be received within the conical receptacle724of signal bar722to establish electrical communication with upper perforating gun100A. Endplate726is positioned external the central passage708of outer housing702and is covered by an electrical insulator728except for an outer opening730positioned at a center of endplate726and facing the pin connector134of the electrical connector132of lower perforating gun100B. Insulator728may be overmolded or otherwise coated (e.g., anodized, etc.) onto endplate726to thereby electrically insulate endplate726from outer housing702.
In other embodiments, an O-ring, an annular or disc shaped gasket, or other electrically insulating member may be used to insulate endplate726from outer housing702in lieu of insulator728. To assemble tandem sub700, the signal bar722may be inserted through the central passage708of outer housing702such that endplate726is positioned directly adjacent the second end706of outer housing702. In this position, a plurality of fasteners732may be extended through apertures formed in endplate726and threadably connected to receptacles712formed in the second end706of outer housing702, thereby retaining signal bar722to outer housing702such that relative movement therebetween is restricted. Perforating guns100A,100B may then be coupled with tandem sub700to establish an electrical connection therebetween via signal bar722. Additionally, the insulator728covering endplate726may be clamped against an annular seal or O-ring714positioned in a groove formed on the second end706of outer housing702to prevent the communication of pressure from lower perforating gun100B to upper perforating gun100A, and from upper perforating gun100A to lower perforating gun100B. Referring toFIG.19, another embodiment of a tandem sub750is shown which may be used in conjunction with or in lieu of the tandem sub200shown inFIGS.4-7. Tandem sub750may include features in common with tandem sub200shown inFIGS.4-7and the tandem sub700shown inFIG.18, and shared features are labeled similarly. Particularly, tandem sub750is similar to tandem sub700shown inFIG.18except that a pressure bulkhead or pass-thru assembly752of tandem sub750only includes endplate726and an associated insulator728′, and does not include the signal bar722. In this embodiment, tandem sub750is configured to electrically connect and provide bi-directional pressure isolation between an upper perforating gun745A (partially shown inFIG.19) and the lower perforating gun100B. Upper perforating gun745A is similar to upper perforating gun100A except an electrical connector746coupled to lower endplate140(not shown inFIG.19) of upper perforating gun745A has an extended pin connector748which extends entirely through the central passage708of the outer housing702of tandem sub750and contacts the conductive endplate726via an inner opening754formed in the insulator728′. Referring toFIG.20, another embodiment of a tandem sub760is shown which may be used in conjunction with or in lieu of the tandem sub200shown inFIGS.4-7. Tandem sub760may include features in common with tandem sub200shown inFIGS.4-7, and shared features are labeled similarly. Tandem sub760generally includes an outer housing762, a pressure bulkhead or pass-thru assembly780, and a retainer790. Outer housing762generally includes a first end764, a second end766opposite first end764, and a central bore or passage768defined by a generally cylindrical inner surface770. In this embodiment, each end764,766of outer housing762may comprise an annular endface that is generally planar. In this embodiment, pass-thru assembly780generally includes an electrical connector or signal bar782which is covered by a generally cylindrical outer insulator784with the exception of a pair of conical recesses or receptacles783formed in opposing ends of signal bar782. Conical receptacles783are configured to receive the pin connectors144,134of the electrical connectors142,132of perforating guns100A,100B, respectively, whereby perforating guns100A,100B may be electrically connected via signal bar782.
In this embodiment, outer insulator784may be overmolded or otherwise coated (e.g., anodized, etc.) onto an outer surface of signal bar782such that signal bar782is electrically insulated from outer housing762. An outer surface of outer insulator784comprises a shoulder785which engages a corresponding shoulder772of outer housing762. Additionally, a pair of annular seal assemblies or O-rings786are positioned in corresponding grooves formed on the outer surface of outer insulator784and which sealingly engage an inner surface of the retainer790. Retainer790comprises a releasable or threaded connector792formed on an outer surface thereof which threadably connects to internal threads221of outer housing762. An end of the outer insulator784may contact a shoulder794formed on the inner surface of retainer790which may, in concert with the engagement between shoulder785and shoulder772of outer housing762, retain pass-thru assembly780to outer housing762. Additionally, the outer surface of retainer790sealingly engages an annular seal or O-ring774positioned in a groove formed on the second end766of outer housing762. The sealing engagement provided by O-rings786of pass-thru assembly780and by O-ring774of outer housing762may bi-directionally pressure isolate the perforating guns100A,100B from each other such that pressure cannot be communicated across tandem sub760. Referring toFIG.21, another embodiment of a tandem sub800is shown which may be used in conjunction with or in lieu of the tandem sub200shown inFIGS.4-7. Tandem sub800may include features in common with tandem sub200shown inFIGS.4-7and tandem sub650shown inFIG.17, and shared features are labeled similarly. Tandem sub800generally includes an outer housing802. Outer housing802is similar to the outer housing652of the tandem sub650shown inFIG.17; however, outer housing802is formed of or comprises an electrically insulating material. In some embodiments, an electrically conductive coating806may be applied to an outer surface804of outer housing802extending between ends654,656. Coating806may provide a path for an electrical ground for components coupled downhole from tandem sub800. Additionally, instead of electrically connecting and providing pressure isolation between perforating guns100A,100B as with tandem sub650shown inFIG.17, tandem sub800is configured to electrically connect and provide bi-directional pressure isolation between an upper perforating gun790A (partially shown inFIG.21) and a lower perforating gun790B (also partially shown inFIG.21). Perforating guns790A,790B are similar to perforating guns100A,100B, respectively, except that electrical connectors792,796coupled to endplates140,130(not shown inFIG.21), respectively, comprise extended pin connectors794,798, respectively, which extend through the central passage658of outer housing802. In this configuration, terminal ends of pin connectors794,798contact each other within central passage658of outer housing802to form an electrical connection between perforating guns790A,790B. Given that outer housing802comprises an electrically insulating material, contact pins794,798are not electrically connected to outer housing802. However, in other embodiments, outer housing802may comprise an electrically conductive material (and thus may not include conductive coating806) and contact pins794,798may each be covered by an electrically insulating material that is overmolded or coated onto (e.g., anodized, etc.) an outer surface of each contact pin794,798.
In still other embodiments, the inner surface660of outer housing802may be covered by an electrically insulating material. The insulating material may be molded or coated (e.g., anodized, etc.) onto the inner surface660of outer housing802to thereby electrically insulate contact pins794,798from outer housing802. Additionally, in this embodiment, a pair of annular seals or O-rings795,799are positioned in grooves formed on the outer surfaces of contact pins794,798, respectively. O-rings795,799seal against the inner surface660of outer housing802to thereby prevent the communication of pressure across tandem sub800. In this manner, pressure in lower perforating gun790B may be isolated or prevented from being communicated to upper perforating gun790A, and pressure in upper perforating gun790A may be isolated or prevented from being communicated to lower perforating gun790B. Referring toFIG.22, another embodiment of a tandem sub820is shown which may be used in conjunction with or in lieu of the tandem sub200shown inFIGS.4-7. Tandem sub820may include features in common with tandem sub200shown inFIGS.4-7, tandem sub550shown inFIG.15, and tandem sub750shown inFIG.19, and shared features are labeled similarly. Tandem sub820generally includes an outer housing822and the conductive endplate726partially covered by insulator728′ and secured to outer housing822by fasteners732. Outer housing822generally includes a first end824, a second end826opposite first end824, and a central bore or passage828defined by a generally cylindrical inner surface830extending between ends824,826. As with tandem sub750shown inFIG.19, outer housing822includes receptacles712for receiving fasteners732and O-ring714for sealing against the insulator728′. Tandem sub820is generally configured to provide an electrical connection and bi-directional pressure isolation between a first or upper perforating gun840A and lower perforating gun545B (similar to the lower perforating gun545B shown inFIG.15). Upper perforating gun840A is similar to the upper perforating gun545A shown inFIG.15except an electrical connector842coupled to lower endplate140(not shown inFIG.22) of upper perforating gun840A has an extended pin connector844which extends entirely through the central passage828of the outer housing822and contacts the conductive endplate726via the inner opening754formed in the insulator728′. In this manner, an electrical connection is provided between perforating guns840A,545B via conductive endplate726while pressure in lower perforating gun545B is isolated or prevented from being communicated to upper perforating gun840A, and pressure in upper perforating gun840A is isolated or prevented from being communicated to lower perforating gun545B. While exemplary embodiments have been shown and described, modifications thereof can be made by one skilled in the art without departing from the scope or teachings herein. The embodiments described herein are exemplary only and are not limiting. Many variations and modifications of the systems, apparatus, and processes described herein are possible and are within the scope of the disclosure presented herein. As an example, while contact pins134,144of the electrical connectors132,142described above are shown as conical and receivable within a corresponding conical receptacle, in other embodiments, contact pins134,144(as well as other contact pins described above) may comprise planar or flat endfaces which contact corresponding planar or flat endfaces to establish an electrical connection therebetween.
The relative dimensions of various parts, the materials from which the various parts are made, and other parameters can be varied. Accordingly, the scope of protection is not limited to the embodiments described herein, but is only limited by the claims that follow, the scope of which shall include all equivalents of the subject matter of the claims. Unless expressly stated otherwise, the steps in a method claim may be performed in any order. The recitation of identifiers such as (a), (b), (c) or (1), (2), (3) before steps in a method claim is not intended to and does not specify a particular order to the steps, but rather is used to simplify subsequent reference to such steps. | 69,516 |
11859958 | DETAILED DESCRIPTION OF THE INVENTION While the making and using of various embodiments of the present invention are discussed in detail below, it should be appreciated that the present invention provides many applicable inventive concepts that can be embodied in a wide variety of specific contexts. The specific embodiments discussed herein are merely illustrative of specific ways to make and use the invention and do not delimit the scope of the invention. To facilitate the understanding of this invention, a number of terms are defined below. Terms defined herein have meanings as commonly understood by a person of ordinary skill in the areas relevant to the present invention. Terms such as “a”, “an” and “the” are not intended to refer to only a singular entity, but include the general class of which a specific example may be used for illustration. The terminology herein is used to describe specific embodiments of the invention, but their usage does not delimit the invention, except as outlined in the claims. In operation, the present invention provides a powder compaction device comprising a loading platform positioned above a lower platform; a drive motor connected to the loading platform; a compaction rod operably extending from the drive motor through the loading platform, wherein the compaction rod comprises a metering region adjacent to a loading region extending to a compaction end; a first funnel-shaped device positioned below the loading platform, wherein the first funnel-shaped device comprises a first funnel-shaped area extending to a first funnel aperture, wherein the first funnel aperture aligns to allow the metering region of the compaction rod to pass through the first funnel aperture; an ammunition cartridge fixture positioned below the first funnel-shaped device, wherein the ammunition cartridge fixture comprises a second funnel-shaped area extending to a second funnel aperture that connects to an ammunition cartridge shaped void adapted to receive an ammunition cartridge, wherein the second funnel aperture aligns with the first funnel aperture to allow the loading region of the compaction rod to pass through the second funnel aperture and the compaction end into the ammunition cartridge shaped void; one or more metering reliefs positioned in the metering region of the compaction rod, wherein each of the one or more reliefs has a powder metering volume; a powder reservoir comprising a powder housing connected to a powder gate operably connected to a transport conduit in communication with the first funnel-shaped area to transport a powder from the powder housing to the first funnel-shaped area; a compaction controller in communication with the drive motor and one or more first sensors to control the vertical movement of the compaction rod and to control the force applied to the compaction rod end, thereby controlling the compaction of the powder at the compaction end; a powder metering controller in communication with the powder gate and one or more second sensors to control the amount of the powder delivered to the first funnel-shaped area; and a loading controller in communication with the drive motor to control the vertical movement of the metering region of the compaction rod, wherein the loading controller positions the metering region and the one or more metering reliefs above the first funnel aperture to allow the powder into the one or more metering reliefs to load the powder, wherein the loading controller releases the powder by moving the metering region
and the one or more metering reliefs through the first funnel aperture to allow the powder to release from the one or more metering reliefs and into the second funnel-shaped area of the ammunition cartridge fixture and through the second funnel aperture. The present invention provides a method of powder compaction in an ammunition cartridge comprising the steps of: providing a powder compaction device comprising a loading platform positioned above a lower platform; a drive motor connected to the loading platform; a compaction rod operably extending from the drive motor through the loading platform, wherein the compaction rod comprises a metering region adjacent to a loading region extending to a compaction end; a first funnel-shaped device positioned below the loading platform, wherein the first funnel-shaped device comprises a first funnel-shaped area extending to a first funnel aperture, wherein the first funnel aperture aligns to allow the metering region of the compaction rod to pass through the first funnel aperture; an ammunition cartridge fixture positioned below the first funnel-shaped device, wherein the ammunition cartridge fixture comprises a second funnel-shaped area extending to a second funnel aperture that connects to an ammunition cartridge shaped void adapted to receive an ammunition cartridge, wherein the second funnel aperture aligns with the first funnel aperture to allow the loading region of the compaction rod to pass through the second funnel aperture and the compaction end into the ammunition cartridge shaped void; one or more metering reliefs positioned in the metering region of the compaction rod, wherein each of the one or more reliefs has a powder metering volume; a powder reservoir comprising a powder housing connected to a powder gate operably connected to a transport conduit in communication with the first funnel-shaped area to transport a powder from the powder housing to the first funnel-shaped area; a compaction controller in communication with the drive motor and one or more first sensors to control the vertical movement of the compaction rod and to control the force applied to the compaction rod end, thereby controlling the compaction of the powder at the compaction end; a powder metering controller in communication with the powder gate and one or more second sensors to control the amount of the powder delivered to the first funnel-shaped area; and a loading controller in communication with the drive motor to control the vertical movement of the metering region of the compaction rod, wherein the loading controller positions the metering region and the one or more metering reliefs above the first funnel aperture to allow the powder into the one or more metering reliefs to load the powder, wherein the loading controller releases the powder by moving the metering region and the one or more metering reliefs through the first funnel aperture to allow the powder to release from the one or more metering reliefs and into the second funnel-shaped area of the ammunition cartridge fixture and through the second funnel aperture; positioning an ammunition cartridge in the ammunition cartridge shaped void; moving the metering region into the first funnel-shaped area above the first funnel aperture; releasing a first powder load into the first funnel-shaped area; filling the one or more reliefs with the powder; moving the metering region through the first funnel aperture to release the powder from the one or more reliefs into the second funnel-shaped area; allowing the
powder to pass through the second funnel aperture into the ammunition cartridge; moving the compaction end into the ammunition cartridge to compress the powder; compressing the powder with the compaction end; removing the compaction end from the ammunition cartridge and the second funnel aperture; and removing the ammunition cartridge from the ammunition cartridge shaped void. FIG.1is a perspective view that depicts one embodiment of the powder loading, metering and compaction device of the present invention. The compaction device10includes a frame12which may be constructed of polymer, plastic, metal or any other desirable rigid material. The frame12includes a platform14that is supported by one or more risers16aand16b. The one or more risers16aand16bmay be constructed of polymer, plastic, metal or any other desirable rigid material and may be of any height necessary for the operation of the compaction device10. A drive device17is connected to the platform14. The drive device17includes a vertical tube18housing a movable compaction rod22. The vertical tube18extends from the platform14to a drive motor20that moves the compaction rod22. Although the drive motor20is depicted at the top of the vertical tube18, it may be positioned at any location allowing activation of the compaction rod22with the desired degree of movement. The drive motor20may be a pneumatic or electric motor that is gear, belt, chain or directly driven to actuate the compaction rod22. The platform14includes a compaction rod aperture (not shown) positioned in communication with the vertical tube18to allow passage of the compaction rod22through the platform14. The compaction rod22extends through the compaction rod aperture (not shown) and is positioned in the vertical tube18in operable communication with the drive motor20which moves the compaction rod22toward and away from the platform14. A holding platform24is aligned with and in communication with the compaction rod aperture (not shown). The holding platform24slidably accepts an ammunition cartridge fixture26. The ammunition cartridge fixture26is slidably secured in the adaptor platform24to align the compaction rod aperture (not shown) and the compaction rod22with the ammunition cartridge fixture26. The ammunition cartridge fixture26includes a funnel-shaped opening28with a funnel aperture (not shown) connected to an interior chamber (not shown) within the ammunition cartridge fixture26. The funnel aperture (not shown) and compaction rod aperture (not shown) are aligned to allow the compaction rod22to enter the interior chamber (not shown) of the ammunition cartridge fixture26. The drive motor20may be manually controlled or automatically controlled. The drive motor20includes one or more sensors to measure, record, transmit, store, or report one or more physical measurements. For example, the one or more sensors may be force and/or distance sensors that measure the force applied to the compaction rod, the force exerted by the motor, the compression force applied at the tip of the compaction rod, the distance the compaction rod moves, etc. The data from the sensors may be stored, reported and/or used to control the operation of the drive motor. For example, the sensor may record the force applied to the powder and, when a specific compression force (e.g., 5-5000 psi) is reached, the motor will reverse direction to move the compaction rod in the opposite direction.
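By way of illustration only, the force-threshold behavior just described may be sketched in Python. The functions read_force(), step_down() and step_up() are hypothetical stand-ins for the force sensor and drive motor interfaces (they do not appear in this disclosure), and the target force is merely one example value from the 5-5000 psi range noted above.

    # Illustrative sketch only; read_force(), step_down() and step_up() are
    # hypothetical interfaces to the force sensor and drive motor.
    TARGET_FORCE_PSI = 500.0   # example setpoint within the cited 5-5000 psi range
    MAX_TRAVEL_STEPS = 2000    # safety limit on rod travel

    def compact(read_force, step_down, step_up):
        """Advance the compaction rod until the target force is reached, then reverse."""
        steps = 0
        while read_force() < TARGET_FORCE_PSI and steps < MAX_TRAVEL_STEPS:
            step_down()            # move the compaction rod toward the powder
            steps += 1
        for _ in range(steps):     # reverse direction once the setpoint is reached
            step_up()              # withdraw the compaction rod

In a staged, layered loading sequence of the kind described below, a routine of this sort would simply be invoked once per layer, with a per-layer setpoint.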
The specific parameters (distance or force curve) may vary and depend on the specific powders, caliber, compaction rod diameter or tip profile being used. FIG.2is a perspective view that depicts one embodiment of the powder loading, metering and compaction device of the present invention. The compaction device10includes a frame12which may be constructed of polymer, plastic, metal or any other desirable rigid material. The frame12includes a platform14that is supported by one or more risers16aand16b. The one or more risers16aand16bmay be constructed of polymer, plastic, metal or any other desirable rigid material and may be of any height necessary for the operation of the compaction device10. A drive device17is connected to the platform14. The drive device17includes a vertical tube18housing, a drive motor20and a movable compaction rod22. The vertical tube18extends from the platform14to the drive motor20to move the compaction rod22. Although the drive motor20is depicted at the top of the vertical tube18, it may be positioned at any location allowing activation and movement of the compaction rod22to the desired degree of movement. The drive motor20may be a pneumatic or electric motor that is gear, belt, chain or directly driven to actuate the compaction rod22. The platform14includes a compaction rod aperture21positioned in communication with the vertical tube18to allow passage of the compaction rod22through the platform14. The compaction rod22extends through the compaction rod aperture21and is positioned in the vertical tube18in operable communication with the drive motor20which moves the compaction rod22toward and away from the platform14. A first funnel-shaped device23for housing powder is positioned below the platform14. A first funnel aperture25is positioned in the first funnel-shaped device23and aligned with the compaction rod aperture21to allow the compaction rod22to pass through the compaction rod aperture21and through the first funnel aperture25. A holding platform24is aligned with and in communication with the compaction rod aperture21and the first funnel aperture25. The holding platform24accepts an ammunition cartridge fixture26. The ammunition cartridge fixture26includes a funnel-shaped opening28with a funnel aperture32extending into an interior chamber30. The funnel aperture32aligns with the first funnel aperture25and the compaction rod aperture21to accommodate the movement of the compaction rod22into the interior chamber30. The ammunition cartridge fixture26may be constructed of polymer, plastic, metal or any other desirable rigid material. The interior chamber30of the ammunition cartridge fixture26has the profile of the ammunition cartridge being loaded such that the interior chamber30mimics the shape of an ammunition cartridge chamber. The ammunition cartridge fixture26supports the ammunition cartridge on all sides as it is supported in a chamber of the corresponding rifle. The ammunition cartridge being loaded may be any ammunition cartridge caliber. For example, loading a 7.62 mm ammunition cartridge requires an interior chamber30with a profile that mates to the 7.62 mm ammunition cartridge. The ammunition cartridge fixture26is aligned and positioned below the first funnel-shaped device23. The ammunition cartridge fixture26includes a funnel-shaped opening28positioned adjacently above and in communication with the interior chamber30through the funnel aperture32.
The funnel-shaped opening28allows propellant to be funneled into the ammunition cartridge (not shown) placed into the ammunition cartridge fixture26. The ammunition cartridge fixture26includes a lower groove34that is adapted to slide into the tongue38of the adaptor platform24to secure the ammunition cartridge fixture26in position. In one embodiment, the ammunition cartridge fixture26is slidably secured in the adaptor platform24to align the compaction rod aperture21, the first funnel aperture25and the funnel aperture32to allow movement of the compaction rod22into the interior chamber30. In another embodiment, the ammunition cartridge fixture26is comprised of 2, 3, 4, or more sections that are moved together to form the ammunition cartridge fixture26. The compaction rod22includes reliefs22aand22blocated in the wall of the compaction rod22. The reliefs22aand22bare positioned to correspond to the position of the first funnel aperture25to act as a metering device. Initially, the reliefs22aand22bare positioned in the first funnel-shaped device23above the first funnel aperture25. Powder added to the first funnel-shaped device23fills the reliefs22aand22b. As the compaction rod22is moved by the drive motor20, the reliefs22aand22bmove through the first funnel aperture25to locate the reliefs22aand22bbelow the first funnel aperture25. As the reliefs22aand22bpass through the first funnel aperture25, the powder is released. The released powder is transferred to the funnel-shaped opening28. The size, shape, number, location, depth, etc. of the reliefs22aand22bmay be varied to fine-tune the amount of powder released. The powder is then transferred into the interior chamber30. The compaction rod22is moved by the drive motor20through the funnel aperture32and into the interior chamber30for compaction. The compaction rod22may have a compaction rod tip at the compaction end that is flat, convex, concave, curved, angled or any other shape. In addition, the compaction rod22may be hollow to allow passage through the compaction rod22. The compaction rod22may be removable and replaceable either entirely or partially. The compaction rod22may be adapted to receive a replaceable compaction rod tip depending on the particular application. The drive motor20may be manually controlled or automatically controlled. The drive motor20includes one or more sensors to measure, record, transmit, store, or report one or more physical measurements. For example, the one or more sensors may be force and/or distance sensors that measure the force applied to the compaction rod, the force exerted by the motor, the compression force applied at the tip of the compaction rod, the distance the compaction rod moves, etc. The data from the sensors may be stored, reported and/or used to control the operation of the drive motor. For example, the sensor may record the force applied to the powder and, when a specific compression force (e.g., 5-5000 psi) is reached, the motor will reverse direction to move the compaction rod in the opposite direction. The specific parameters (distance or force curve) may vary and depend on the specific powders, caliber, compaction rod diameter or tip profile being used. In operation, an ammunition cartridge36to be loaded with powder is positioned in the ammunition cartridge fixture26such that the ammunition cartridge36mates to the interior chamber30.
The ammunition cartridge fixture26is positioned in the adaptor platform24by sliding the lower groove34of the ammunition cartridge fixture26into the tongue38of the adaptor platform24. The ammunition cartridge fixture26is secured in the adaptor platform24, allowing the ammunition cartridge interior40to be accessible through the funnel-shaped opening28. Powder is placed in the first funnel-shaped device23and the compaction rod22extends into the funnel-shaped opening28and through the first funnel aperture25. The reliefs22aand22bof the compaction rod22are positioned in the first funnel-shaped device23and filled with the powder. The drive motor20moves the compaction rod22to transition the reliefs22aand22band powder contained therein through the first funnel aperture25. As the reliefs22aand22bexit the first funnel aperture25, the powder contained in the reliefs22aand22bis released. The controlled volume and release of the powder serves to meter the amount of powder delivered for compaction. The powder is then transported into the funnel-shaped opening28, from which it is funneled through the funnel aperture32and into the ammunition cartridge36. The compaction rod22is moved through the funnel aperture32and into the ammunition cartridge interior40to contact the deposited powder for compaction. The drive motor20is activated to move the compaction rod22into contact with the powder and to compress the powder to a specific preset distance of movement or pressure. Once the powder is compressed, the compaction rod22may be removed (either manually or automatically), the ammunition cartridge fixture26removed from the adaptor platform24, and the ammunition cartridge36removed from the interior chamber30. During operation the powder may be added in stages and then compressed at each stage to form a layered powder configuration. Alternatively, the powder may be added in a single stage or layer and then compressed. Each stage or layer may use the same powder or a different powder. Similarly, each stage or layer may be compressed to a different degree of compaction. As a result, the individual cartridge powder compaction may be fine-tuned through the adjustment of the type of powder, the number of powders, the distribution (or layers) of the powders, the amount of compression, the compaction of the layers of the powders, etc. FIG.3is a top-down view of one embodiment of the ammunition cartridge fixture of the present invention. The ammunition cartridge fixture26may be constructed of polymer, plastic, metal or any other desirable rigid material. The ammunition cartridge fixture26includes a funnel-shaped opening28with a funnel aperture32that passes into an interior chamber (not shown). The ammunition cartridge fixture26is seen as a multipart fixture having body portions26a,26band26cthat mate to complete the funnel-shaped opening28with a funnel aperture32that passes into an interior chamber (not shown). FIG.4is a cut-through image of one embodiment of the ammunition cartridge fixture of the present invention. The ammunition cartridge fixture26may be constructed of polymer, plastic, metal or any other desirable rigid material. The ammunition cartridge fixture26includes an interior chamber30which has the profile of the ammunition cartridge being loaded. The interior chamber30mimics the shape of an ammunition cartridge chamber and supports the ammunition cartridge on all sides as in the chamber of the corresponding rifle. The ammunition cartridge being loaded may be any ammunition cartridge caliber.
For example, loading a 7.62 mm ammunition cartridge requires an interior chamber30with a profile that mates to the 7.62 mm ammunition cartridge. The ammunition cartridge fixture26includes a funnel-shaped opening28positioned adjacently above and in communication with the interior chamber30through the funnel aperture32. The funnel-shaped opening28allows powder to be funneled into the ammunition cartridge (not shown) secured in the interior chamber30of the ammunition cartridge fixture26. The ammunition cartridge fixture26includes a lower groove34that is adapted to slide into the adaptor platform (not shown) to secure the ammunition cartridge fixture26in position. FIG.5is a cut-through image of one embodiment of a segment of the ammunition cartridge fixture of the present invention. The ammunition cartridge fixture segment26ais a portion of the ammunition cartridge fixture (not shown) that, when combined with other segments, makes up the completed ammunition cartridge fixture (not shown). The ammunition cartridge fixture segment26aincludes a funnel-shaped opening28athat funnels to a funnel aperture segment32athat is in communication with the interior chamber segment30awhich has the profile of a portion of the ammunition cartridge being loaded. The interior chamber segment30amimics the shape of an ammunition cartridge chamber. Each ammunition cartridge fixture segment26asupports a portion of the ammunition cartridge (not shown) on the side wall (not shown), the neck (not shown) and the nose (not shown) as the ammunition cartridge is supported in the chamber of the corresponding rifle. In the depicted embodiment the completed ammunition cartridge fixture (not shown) is made up of 3 ammunition cartridge fixture segments. However, the ammunition cartridge fixture (not shown) may be made of 2, 3, 4, or more ammunition cartridge fixture segments that are moved together to form the ammunition cartridge fixture26. Similarly, the funnel-shaped opening may be a single member that is in communication with a multipiece ammunition cartridge fixture having 2, 3, 4, or more ammunition cartridge fixture segments that are moved together to form the interior chamber (not shown). The ammunition cartridge fixture segments, when mated, support the ammunition cartridge on all sides as in a chamber of the corresponding rifle. The ammunition cartridge being loaded may be any ammunition cartridge caliber. For example, loading a 7.62 mm ammunition cartridge requires an interior chamber30with a profile that mates to the 7.62 mm ammunition cartridge. The powder may be any powder or propellant known to the skilled artisan for use in ammunition loading. For example, vihta vuori n310, alliant blue dot, hodgdon varget, accurate arms nitro 100, accurate arms no.
7, imr 4320, alliant e3, alliant pro reach, winchester 748, hodgdon titewad, hodgdon longshot, hodgdon bl-c(2), ramshot competition, alliant 410, hodgdon cfe 223, alliant red dot, alliant 2400, hodgdon leverevolution, alliant promo, ramshot enforcer, hodgdon h380, hodgdon clays, accurate arms no.9, ramshot big game, imr red, accurate arms 4100, vihtavuori n540, alliant clay dot, alliant steel, winchester 760, hodgdon hi-skor 700-x, norma 8123, hodgdon h414, alliant bullseye, vihtavuori n110, vihtavuori n150, imr target, hodgdon lil' gun, accurate arms 2700, hodgdon titegroup, hodgdon 110, imr 4350, alliant american select, winchester 296, imr 4451, accurate arms solo 1000, imr 4227, hodgdon h4350, alliant green dot, accurate arms 5744, alliant reloder 17, imr green, accurate arms 1680, accurate arms 4350, winchester wst, hodgdon cfe blk, norma 204, hodgdon trail boss, norma 200, hodgdon hybrid 100v, winchester super handicap, alliant reloder 7, vihtavuori n550, hodgdon international, imr 4198, alliantreloder 19, accurate arms solo 1250, hodgdon h4198, imr 4831, vihtavuori n320, vihta vuori n120, ramshot hunter, accurate arms no. 2, hodgdon h322, accurate arms 3100, ramshot zip, accurate arms 2015br, vihtavuori n160, hodgdon hp-38, alliant reloder 10×, hodgdon h4831 & h4831sc, winchester 231, vihta vouri n130, hodgdon superformance, alliant 20/28, imr 3031, imr 4955, winchester 244, vihtavouri n133, winchester supreme 780, alliant unique, hodgdon benchmark, norma mrp, hodgdon universal, hodgdon h335, alliant reloder 22, imr unequal, ramshot x-terminator, vihtavuori n560, alliant power pistol, accurate arms 2230, vihtavuori n165, vihta vuori n330, accurate arms 2460s, imr 7828 & imr 7828 ssc, alliant herco, imr 8208 xbr, alliant reloder 25, winchester wsf, ramshot tac, vihtavuori n170, vihtavuori n340, hodgdon h4895, accurate arms magpro, hodgdon hi-skor 800-x, vihtavuori n530 140 imr 7977, ramshot true blue, imr 4895, hodgdon h1000, accurate arms no. 5, vihtavuori n135, ramshot magnum, hodgdon hs-6, alliant reloder 12, hodgdon retumbo, winchester autocomp, accurate arms 24951r, imr 8133, hodgdon cfe pistol, imr 4166, vihtavuori n570, ramshot silhouette, imr 4064, accurate arms 8700, vihtavuori 3n37, norma 202, vihta vuori 24n41, vihtavuori n350, accurate arms 4064, hodgdon 50 bmg, vihtavuori 3n318, accurate arms 2520, hodgdon us869, imr blue, alliant reloder 15, vihtavuori 20n29, or other similar powders or propellants. The present invention is not limited to the described caliber and is believed to be applicable to other calibers as well. This includes various small, medium and large caliber munitions, including 5.56 mm, 7.62 mm, 308, 338, 3030, 3006, and .50 caliber ammunition cartridges, as well as medium/small caliber ammunition such as 380 caliber, 38 caliber, 9 mm, 10 mm, 20 mm, 25 mm, 30 mm, 40 mm, 45 caliber and the like. The projectile and the corresponding cartridge may be of any desired size, e.g., 0.223, 0.243, 0.245, 0.25-06, 0.270, 0.277, 6.8 mm, 0.300, 0.308, 0.338, 0.30-30, 0.30-06, 0.45-70 or 0.50-90, 50 caliber, 45 caliber, 380 caliber or 38 caliber, 5.56 mm, 6 mm, 6.5 mm, 7 mm, 7.62 mm, 8 mm, 9 mm, 10 mm, 12.7 mm, 14.5 mm, 14.7 mm, 20 mm, 25 mm, 30 mm, 40 mm, 57 mm, 60 mm, 75 mm, 76 mm, 81 mm, 90 mm, 100 mm, 105 mm, 106 mm, 115 mm, 120 mm, 122 mm, 125 mm, 130 mm, 152 mm, 155 mm, 165 mm, 175 mm, 203 mm or 460 mm, 4.2 inch or 8 inch. The cartridges, therefore, are of a caliber between about .05 and about 5 inches. 
Thus, the present invention is also applicable to the sporting goods industry for use by hunters and target shooters. The present invention includes a motor controller in communication with at least the drive motor and/or one or more sensors. The motor controller may also include one or more microprocessors, a servo amplifier for driving the motor and a proportional integral derivative (PID) filter for controlling the motor based upon feedback from the motor and/or the one or more sensors. The motor controller may also be connected to a computer or memory module that contain information regarding parameters of the motion of the drive motor to control the force, actual position, velocity, errors and/or motor status. The position, force, velocity or acceleration of the compaction rod or the drive motor can be programmed into the controller with extreme precision in any of those parameters, yielding extremely fine resolution and control over the drive motor. The controller has a communications port that may be accessed by an RS232 plug from a personal computer. Two or more controllers can be linked together via their communication ports to provide multi-axis motion with the controllers and their connected motors synchronized. A peripheral device port located adjacent to the communications port on a back end of the controller affords connections for devices such as a flat panel display, which may be mounted on the controller and display information regarding the motor or controller, or joystick for controlling the motor directly. In addition, the present invention may include a powder reservoir in communication with the funnel-shaped opening directly or through a pouring conduit below the reservoir and extending to the funnel-shaped opening either with or without a gate or slide to control flow. It will be understood that particular embodiments described herein are shown by way of illustration and not as limitations of the invention. The principal features of this invention can be employed in various embodiments without departing from the scope of the invention. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, numerous equivalents to the specific procedures described herein. Such equivalents are considered to be within the scope of this invention and are covered by the claims. All publications and patent applications mentioned in the specification are indicative of the level of skill of those skilled in the art to which this invention pertains. All publications and patent applications are herein incorporated by reference to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference. The use of the word “a” or “an” when used in conjunction with the term “comprising” in the claims and/or the specification may mean “one,” but it is also consistent with the meaning of “one or more,” “at least one,” and “one or more than one.” The use of the term “or” in the claims is used to mean “and/or” unless explicitly indicated to refer to alternatives only or the alternatives are mutually exclusive, although the disclosure supports a definition that refers to only alternatives and “and/or.” Throughout this application, the term “about” is used to indicate that a value includes the inherent variation of error for the device, the method being employed to determine the value, or the variation that exists among the study subjects. 
As used in this specification and claim(s), the words “comprising” (and any form of comprising, such as “comprise” and “comprises”), “having” (and any form of having, such as “have” and “has”), “including” (and any form of including, such as “includes” and “include”) or “containing” (and any form of containing, such as “contains” and “contain”) are inclusive or open-ended and do not exclude additional, unrecited elements or method steps. The term “or combinations thereof” as used herein refers to all permutations and combinations of the listed items preceding the term. For example, “A, B, C, or combinations thereof” is intended to include at least one of: A, B, C, AB, AC, BC, or ABC, and if order is important in a particular context, also BA, CA, CB, CBA, BCA, ACB, BAC, or CAB. Continuing with this example, expressly included are combinations that contain repeats of one or more item or term, such as BB, AAA, MB, BBC, AAABCCCC, CBBAAA, CABABB, and so forth. The skilled artisan will understand that typically there is no limit on the number of items or terms in any combination, unless otherwise apparent from the context. All of the compositions and/or methods disclosed and claimed herein can be made and executed without undue experimentation in light of the present disclosure. While the compositions and methods of this invention have been described in terms of preferred embodiments, it will be apparent to those of skill in the art that variations may be applied to the compositions and/or methods and in the steps or in the sequence of steps of the method described herein without departing from the concept, spirit and scope of the invention. All such similar substitutes and modifications apparent to those skilled in the art are deemed to be within the spirit, scope and concept of the invention as defined by the appended claims. | 32,491 |
11859960 | DETAILED DESCRIPTION OF THE INVENTION The following detailed description is of the best currently contemplated modes of carrying out exemplary embodiments of the invention. The description is not to be taken in a limiting sense but is made merely for the purpose of illustrating the general principles of the invention, since the scope of the invention is best defined by the appended claims. Broadly, an embodiment of the present invention provides an arrow system embodying a screw-over configuration of broadhead to arrow shaft connection. The screw-over configuration provides a broadhead with female internal threading that operatively associates with male threading of an insert that interconnects the broadhead to the arrow shaft. The insert has a broadhead portion with two precision bosses bookending a threaded portion, wherein the forward precision boss provides a clocking slot for facilitating user-selected clocking configurations. Referring toFIGS.2A through3C, the present invention provides arrow systems adapted to optimize concentricity between broadhead10and arrow shaft30through improvements in the broadhead-arrow shaft interface. The broadhead-arrow shaft interface may be a glue-in configuration and, in another embodiment, may be a screw-over configuration. Referring toFIGS.2A and2B, the glue-in configuration includes broadhead10with an externally threadless shank12. The externally threadless shank12may provide a uniform cross-section throughout its operable length (i.e., the length of the externally threadless shank12that interfaces with the arrow shaft30). The externally threadless shank12may extend between one-half and one and one-half inches. The arrow shaft30is tubular, wherein the inner circumference or periphery of the lumen32of the arrow shaft30has a uniform cross section approximately coextensive with the external cross section of the threadless shank12, facilitating a snug reception. Adhesive may be applied along the outer surface of the threadless shank12, thereby further ensuring a tight, secure fitment between the threadless shank12and the inner circumference of the arrow shaft30. The externally threadless shank12may have internal threading for selectively receiving the threaded weights14, as illustrated inFIG.2B. The glue-in configuration may include an optional intermediate collar20into which the externally threadless shank12is slid, as opposed to the lumen32of the arrow shaft30. The opposing end of the collar20may slide over the arrow shaft30. Again, the collar20is optional as the glue-in configuration can directly connect the broadhead shank into the lumen of the arrow shaft30. Referring toFIGS.3A through3D, one embodiment of the screw-over configuration includes a broadhead40, an insert60and an arrow shaft80. The broadhead40has an open proximal end that communicates with a hollow having a distal portion and a proximal portion, as illustrated inFIG.3C. The proximal portion has internal female threading along an inner circumference thereof. The distal portion has a uniform inner circumference. The distal portion has a diameter less than the diameter of the proximal portion. The insert60has a distal portion and a proximal portion58. The distal portion54includes a distal end and a threaded portion just proximal thereof. The threaded portion has external male threading complementary to the internal female threading of the proximal portion of the hollow central axial chamber of the broadhead40.
The distal end is dimensioned to be snugly received in the cylindrical inner circumference of the distal portion so as to act as a guide when the insert is operatively associated with the broadhead40. The diameter of the distal end is less than the diameter of the threaded portion. A uniform precision boss is disposed between the threaded portion and a flange that separates the broadhead40from the shaft80in an assembled configuration. The diameter of this rearward precision boss is equal to the outer diameter of the threaded portion. The flange64has a diameter larger than the adjacent rearward precision boss so that the proximal end of the broadhead40and the distal end of the shaft80interface opposing sides of the flange64. The proximal portion of the insert60has a hollow portion with internal threading for selectively receiving threaded weights. The proximal portion is dimensioned and adapted to be slidably received in the inner circumference of the lumen of the arrow shaft80, thereby affording the advantages of eliminating a collar as well as enabling a snug fitment that promotes concentricity. The screw-over configuration, with the female threading of the broadhead, facilitates clocking of the broadhead blades49relative to the vanes82of the arrow shaft80in a repeatable manner irrespective of the size and shape of the remaining portion of the broadhead40, thereby enabling inherent modularity of different types and styles of broadhead blades49with the same insert60. The ability of the end user to readily and repeatedly transition among a plurality of target point configurations for different situations is an advantage of the present invention. Referring toFIGS.4A through5D, another embodiment of the screw-over configuration includes a broadhead40, an insert60and an arrow shaft80. The broadhead40longitudinally extends from a tip end42to a proximal end44of a ferrule46. The ferrule46extends from the proximal end44to a distal end43of the ferrule, from which the tip end42protrudes. The longitudinal length of the ferrule46may be between approximately 1.0 and 2.5 inches. The broadhead blades49radially extend out of blade slots47along the ferrule46. The broadhead blades49may be wing blades that open upon contact but remain compact during flight, thereby improving the accuracy of the arrow's flight. The length of the ferrule46affords the needed space for the compact wing blades. The proximal end44of the ferrule46has an opening48that communicates with a central axial chamber50, which is described in more detail below. Referring toFIG.5B, the insert60has a broadhead portion62and a shaft portion64. The broadhead portion62includes, in series, a flange66, a rearward precision boss68, a threaded portion70, and a forward precision boss72, respectively. A beveled edge78may transition from the circumferential walls of the forward precision boss72to a distal face74of the broadhead portion62. The distal face74may provide a clocking slot76. The beveled edge78may have a longitudinal length ‘F’ of approximately 0.050 inches. The clocking slot76may extend another approximately 0.062 inches in longitudinal length ‘E’ from the distal face74relative to the longitudinal length (‘F’) of the beveled edge. The rearward precision boss68extends from the flange66. The longitudinal length ‘A’ of the flange66may be between approximately 0.050 and 0.100 inches. The rearward precision boss68has a uniform diameter throughout its longitudinal length ‘B’ (that extends from the flange66to the threaded portion70).
In other words, the rearward precision boss68has a non-barbed circumferential sidewall. The longitudinal length ‘B’ may be between approximately 0.050 and 0.250 inches. The diameter of the rearward precision boss68may be between approximately 0.150 and 0.250 inches. The rearward precision boss68provides strength and ensures proper alignment of the insert60relative to the broadhead40when they are operatively associated, as described in more detail below. The threaded portion70provides external, male threading having an outside diameter concentric and coextensive with the diameter of the rearward precision boss68. The internal diameter of the male threading may be between approximately 0.100 and 0.250 inches. The longitudinal length ‘C’ of the threaded portion70may be between approximately 0.100 and 0.350 inches. Downstream (toward the broadhead40) of the threaded portion70is the forward precision boss72. The forward precision boss72has a uniform diameter throughout its longitudinal length ‘D’ (but for, in certain embodiments, the beveled edge78). In other words, the forward precision boss72has a non-barbed circumferential sidewall. The forward precision boss72is concentric with the external threading of the threaded portion70. The diameter of the forward precision boss72is coextensive with or is less than the inner diameter of said external threading. This is typically the result of threading the threaded portion70with tap and die tools. The longitudinal length ‘D’ may be between approximately 0.050 and 0.300 inches. The forward precision boss72provides strength and ensures proper alignment of the insert60relative to the broadhead40when they are operatively associated, as described in more detail below. The broadhead40of the present invention has a greater length relative to the prior art, approximately five times that of the target point of the prior art. The advantage of the increased length is an increase in accuracy. The challenge presented by the additional length is that any lateral forces applied to the tip end42of the broadhead40will create bending moment forces up to five times greater than the prior art would experience at or near the flange66. Such forces would cause the prior art hardware to catastrophically fail if its proximal-most portion (i.e., if the threaded portion extended to the flange66), which is subject to the highest bending stress, were threaded. The present invention overcomes this challenge with the presence of the solid rearward precision boss68, which absorbs the maximum bending moment stress, rather than the threaded portion70, which is downstream of the rearward precision boss68. The larger diameter of the rearward precision boss68relative to the inner diameter of the threading affords the rearward precision boss68a greater resistive moment of inertia. The additional length of broadhead portion62also advantageously acts as a bulwark against off-axis misalignment, assuming eccentricities are not introduced over this length. Here, again, the present invention is up to the challenge. The forward precision boss72and the rearward precision boss68facilitate a concentric fit at the distal end of the broadhead portion62of the insert60due to their non-barbed, uniform outer surface. Prior art connections that rely on threading for alignment and concentricity between the insert and the arrow tip tend to introduce eccentricities, due to the corrugated, barbed outer surface of the threading.
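The mechanical advantage described above can be made concrete with the standard bending-stress relation for a solid circular section, σ = Mc/I = 32M/πd³, where M is the bending moment and d the section diameter. Using purely illustrative values drawn from the ranges given above (these specific diameters are assumptions, not required dimensions), a solid boss of 0.250 inch diameter compared with a 0.150 inch thread root diameter gives a stress ratio of (0.250/0.150)³ ≈ 4.6, so a threaded section at the flange would see roughly four to five times the peak bending stress borne by the solid rearward precision boss68under the same moment.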
Note, overall eccentricity is a function of the distance between the arrow tip and the most-proximal portion that is offset, causing the eccentricity. Therefore, prior art having threading adjacent to its flange invites greater eccentricity. For all the above reasons, the inventive sequence of forward precision boss72-threaded portion70-rearward precision boss68ensures that the uniform surfaces of the bosses72and68control and govern the concentricity of the connection between the insert60and the broadhead40. Furthermore, the threaded portion70, being located between the rearward and forward precision bosses68and72, is not relied upon for locating and alignment functionality; rather, the threaded portion70only provides clamping loads as desired for a properly designed fastener joint. A hemispherical clocking slot76may be formed in the distal face74of the forward precision boss72. The clocking slot76is dimensioned and adapted to slidably receive a portion of a body of the locking pin55, wherein the locking pin55radially extends through a distal portion59of the central axial chamber50. The clocking slot76thereby accurately clocks the locking pin55relative to the blades49, which extend radially from ferrule slots47. Specifically, the blades49may be spaced 180 degrees apart along the broadhead40, while the vanes82have their own spacing along the shaft80, wherein the relative relationship between the blades49and the vanes82defines the clocking configuration or orientation. As mentioned above, each archer may prefer a specific clocking configuration. Importantly, during assembly, the insert60is engaged with the shaft80prior to the insert60being engaged to the broadhead40. As a result of knowing the orientation of the blades49relative to the broadhead40, a user may selectively define the desired clocking orientation by merely orienting the vanes82relative to the clocking slot76at the distal end of the insert60during this initial insert-shaft engagement. Then, after the insert60and broadhead40are subsequently engaged, the locking pin55is set in the clocking slot76, locking in the selected clocking configuration. Referring toFIG.5D, in the ferrule46, the central axial chamber50extends between a distal portion59and a proximal portion51, wherein the proximal portion51communicates with the ferrule opening48. Starting at the proximal portion51, the central axial chamber50provides, in series, a rearward boss chamber52, a threaded chamber54, and a forward boss chamber56. The rearward boss chamber52is dimensioned and adapted to snugly receive the rearward precision boss68. The forward boss chamber56is dimensioned and adapted to snugly receive the forward precision boss72. The threaded chamber54provides internal, female threading dimensioned and adapted to operatively associate with the external male threading of the threaded portion70. The elongated fastener or locking pin55may pass through, in a direction orthogonal to the longitudinal axis of the central axial chamber50, a distal end of the distal portion59so that approximately half of the generally cylindrical body of the locking pin55occupies the clocking slot, thereby facilitating clocking of the broadhead blades49relative to the vanes82in a repeatable manner irrespective of the size and shape of the remaining portion of the broadhead40.
Also, the locking pin55may extend between two diametrically opposing locations along the ferrule46, maintaining the structural integrity of the ferrule46and thereby preventing the insert60from unscrewing during use. The proximal portion43has internal female threading along an inner circumference thereof. The distal portion41has a uniform inner circumference. The distal portion has a diameter less than the diameter of the proximal portion43. The insert50has a distal portion54and a proximal portion58. The distal portion54includes a distal end51and a threaded portion52just proximal thereof. The threaded portion52has external male threading complementary to the internal female threading of the proximal portion43of the central axial chamber of the broadhead40. The distal end51is dimensioned to be snugly received in the cylindrical inner circumference of the distal portion41so as to act as a guide when the insert50is operatively associated with the broadhead40. The diameter of the distal end51is less than the diameter of the threaded portion52. A flange55is just proximal of the threaded portion52; the flange55has a diameter larger than the threaded portion52. The proximal portion58of the insert50has internal threading for selectively receiving threaded weights14. The proximal portion58is dimensioned and adapted to be slidably received in the inner circumference of the arrow shaft80, thereby affording the advantages of eliminating a collar as well as enabling a snug fitment that promotes concentricity. The screw-over configuration, with the female threading of the open proximal end48, facilitates clocking of the broadhead blades49relative to the vanes82of the arrow shaft80in a repeatable manner irrespective of the size and shape of the remaining portion of the broadhead40, thereby enabling inherent modularity of different types and styles of broadhead blades49with the same insert50and thus the ability of the end user to readily and repeatedly transition among a plurality of target point configurations for different situations. As used in this application, the term “about” or “approximately” refers to a range of values within plus or minus 10% of the specified number. The term “substantially” refers to up to 90% or more of an entirety. Recitation of ranges of values herein is not intended to be limiting, referring instead individually to any and all values falling within the range, unless otherwise indicated, and each separate value within such a range is incorporated into the specification as if it were individually recited herein. The words “about,” “approximately,” or the like, when accompanying a numerical value, are to be construed as indicating a deviation as would be appreciated by one of ordinary skill in the art to operate satisfactorily for an intended purpose. Ranges of values and/or numeric values are provided herein as examples only, and do not constitute a limitation on the scope of the described embodiments. The use of any and all examples, or exemplary language (“e.g.,” “such as,” or the like) provided herein, is intended merely to better illuminate the embodiments and does not pose a limitation on the scope of the embodiments or the claims. No language in the specification should be construed as indicating any unclaimed element as essential to the practice of the disclosed embodiments.
In the following description, it is understood that terms such as “first,” “second,” “top,” “bottom,” “up,” “down,” and the like, are words of convenience and are not to be construed as limiting terms unless specifically stated to the contrary. It should be understood, of course, that the foregoing relates to exemplary embodiments of the invention and that modifications may be made without departing from the spirit and scope of the invention as set forth in the following claims. | 17,770 |
11859961 | The following table catalogs the numbered elements and lists the figures in which each numbered element appears. Similarly numbered elements represent elements of the same type, but they need not be identical elements.

Numbered Elements

Element                                       Description                                               FIGS.
102, 104, 105, 106, 107, 109-128, 150-153    object reflection                                         1, 2, 3, 4, 5, 16, 17, 19-25
103                                           optical axis                                              1
140-148                                       illuminated portion                                       26, 27
160-163                                       light image                                               17
201-203, 206                                  PD array                                                  1, 2, 3, 5, 6, 7, 10
204, 205                                      PD                                                        3, 4
301-304                                       lens                                                      1-10
311                                           entry cavity                                              11, 12, 14, 15
312                                           entry surface                                             11, 14, 15
313                                           exit surface                                              13, 14, 15
314                                           convex mirror                                             13, 14, 15
315                                           concave mirror                                            14, 15, 16
401-403                                       object                                                    1, 2, 3, 19-23, 25
501-506                                       polar coordinate sensor                                   19-27
510                                           focusing optical part featuring a reflective objective    11-16
511                                           camera sensor                                             13-17
601                                           display                                                   19-23
602                                           shape of touch sensitive surface                          24
603                                           detection zone                                            25, 27
604                                           detection zone perimeter                                  25, 27
606                                           calculating unit                                          2, 19-23
607, 608                                      arrows (indicate movement)                                20
609                                           PCB                                                       5, 7
610, 611                                      processor                                                 1-3

DETAILED DESCRIPTION The present invention relates to reflection-based sensors having a 2D detection zone shaped as a wedge or circle, or a 3D detection zone shaped as a cone or sphere. The sensor is situated at a vertex of the wedge or cone and at the center of the circle or sphere. Sensors having a 2D detection zone detect the polar angle of an object within the detection zone and are referred to as polar coordinate sensors, and sensors having a 3D detection zone detect the polar angle and azimuth angle of the object in the detection zone and are referred to as spherical coordinate sensors. In some embodiments of the invention, two or more sensors are arranged to have overlapping detection zones and the location of detected objects is obtained by triangulating the polar angles and azimuth angles returned by different sensors. In other embodiments of the invention, each sensor includes apparatus for determining time of flight for photons reflected by the object. Therefore, in addition to determining the polar and azimuth angles, the polar and spherical coordinate sensors also calculate a radial distance between the object and the sensor based on the time of flight. The polar angle together with the radial distance calculated by one polar coordinate sensor is sufficient to determine the object location within a 2D detection zone, and polar and azimuth angles together with the radial distance calculated by one spherical coordinate sensor are sufficient to determine the object location within a 3D detection zone. In some embodiments of the invention, a polar or spherical coordinate sensor includes an array of light detectors, which is a term that includes, inter alia, CMOS and CCD camera sensors and arrays of photodiodes. In some embodiments of the invention, the sensor further includes a lens that directs object reflections onto the array of light detectors. In some embodiments of the invention, the sensor also includes light emitters that illuminate the detection zone in order to generate object reflections. Reference is made toFIG.1, which is a simplified illustration of an object detected by a polar coordinate sensor, in accordance with an embodiment of the present invention. Object401is in an illuminated environment and produces reflections102that are directed by lens301and detected by array201. Array201includes multiple light detecting elements such as PDs, and processor610, connected to array201, identifies the elements within the array that detect maximal reflection and determines the polar angle θ therefrom.
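As a rough sketch of the angle-recovery step attributed to processor610, the peak detection and angle mapping may be expressed as follows. The pixel pitch, focal length and the simple x = f·tan(θ) lens model are assumptions adopted for illustration, not parameters of the disclosure.

    import math

    # Illustrative sketch of the peak-detection step described for processor610.
    # PIXEL_PITCH_MM, FOCAL_LENGTH_MM and the x = f*tan(theta) lens model are
    # assumed values, not figures taken from the disclosure.
    PIXEL_PITCH_MM = 0.0375    # assumed center-to-center PD spacing
    FOCAL_LENGTH_MM = 0.4      # assumed lens focal length

    def polar_angle(samples):
        """samples: PD readings across the array; returns the polar angle in degrees."""
        peak = max(range(len(samples)), key=lambda i: samples[i])
        center = (len(samples) - 1) / 2.0
        x = (peak - center) * PIXEL_PITCH_MM   # offset of the peak from the optical axis
        return math.degrees(math.atan2(x, FOCAL_LENGTH_MM))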
Line103inFIG.1is the optical axis of lens301. In some embodiments of the invention, the polar coordinate sensor detects spherical coordinates, namely, the PDs in array201are arranged as a two-dimensional grid or any other configuration that provides sensitivity to both the polar and azimuth angles of incoming reflections. In this case, the sensor is referred to as a spherical coordinate sensor. In some embodiments of the invention, array201is a CCD or CMOS camera sensor. Reference is made toFIG.2, which is a simplified illustration of object reflections detected by two polar coordinate sensors, in accordance with an embodiment of the present invention. Specifically, a first polar coordinate sensor includes PD array202, lens302, and processor610; and a second polar coordinate sensor includes PD array203, lens303and processor611. The first polar coordinate sensor detects object401by reflection104, and the second polar coordinate sensor detects object401by reflection105. Accordingly, the first polar coordinate sensor identifies object401at polar angle θ and the second polar coordinate sensor identifies object401at polar angle β. Calculating unit606triangulates these polar angles to determine a 2D location of object401. When both polar coordinate sensors are configured to determine polar and azimuth angles of incoming reflections, calculating unit606triangulates these angles to determine a 3D location of object401. Reference is made toFIG.3, which is a simplified illustration of two objects detected by a polar coordinate sensor, in accordance with an embodiment of the present invention.FIG.3shows two objects402and403, at different locations, being detected by a polar coordinate sensor, and specifically by non-adjacent PDs204and205within PD array201that detect maximum reflections from these objects. Thus, multi-touch functionality is supported by the polar coordinate sensor of the present invention. In some embodiments of the invention, the polar coordinate sensor also measures time of flight for reflections106and107, thereby enabling a single polar coordinate sensor to function as a touch and gesture detector by identifying the polar angle and radial distance of objects402and403. Reference is made toFIGS.4and5, which are top view and side view illustrations of a polar coordinate sensor receiving light from two objects, in accordance with an embodiment of the present invention.FIG.4shows lens structure301directing two light objects124and125onto PDs205and204, respectively, situated underneath reflective facet309of lens structure301. Each light object124and125has a width, illustrated inFIG.4by three parallel beams of which the central beam represents the object's chief ray and the two outer beams represent the width of the light object. FIG.5shows a side view of lens structure301ofFIG.4.FIG.5shows incoming light objects124and125, PD array206and PCB609on which lens structure301and PD array206are mounted.FIG.5shows that the images of124and125are folded downward onto PD array206by internally reflective facet309. Reference is made toFIG.6, which is a perspective view of a lens structure used in the sensor ofFIGS.4and5, in accordance with an embodiment of the present invention.FIG.6shows a perspective view of lens structure301, showing reflective facet309along the periphery and indicating PD array206situated underneath facet309. 
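Returning briefly to the two-sensor arrangement ofFIG.2, the triangulation performed by calculating unit606can be sketched as the intersection of two rays. The baseline length and the angle convention (angles measured from the baseline joining the two sensors) are assumptions made for illustration.

    import math

    # Hedged sketch of the triangulation attributed to calculating unit606:
    # sensor A at (0, 0) and sensor B at (baseline_mm, 0) each report the
    # angle to the same object, measured from the baseline (assumed convention).
    def triangulate(theta_deg, beta_deg, baseline_mm):
        """Returns the (x, y) position of the object in the sensors' plane."""
        t1 = math.tan(math.radians(theta_deg))   # ray slope from sensor A
        t2 = math.tan(math.radians(beta_deg))    # ray slope from sensor B
        x = baseline_mm * t2 / (t1 + t2)         # intersection of the two rays
        return x, x * t1

For example, equal 45-degree angles over a 100 mm baseline place the object at (50, 50) mm. When time-of-flight measurement is available, a single sensor instead yields the radial distance directly as r = c·t/2, where t is the photon round-trip time and c the speed of light.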
Reference is made toFIG.7, which is an exploded view of the sensor ofFIGS.4and5, in accordance with an embodiment of the present invention.FIG.7shows an exploded view of lens structure301and its underlying PCB609with PD array206mounted thereon. In certain embodiments lens structure301is designed around two radii: a first radius of the lens input surface, and a second radius, larger than the first, along which PD array206is arranged, as explained hereinbelow. Reference is made toFIGS.8-10, which are illustrations of the geometry of the lens structure inFIGS.4-7, in accordance with embodiments of the present invention.FIG.8is a view from above of lens structure301, showing radius R1 of the lens input surface. The focal point of the lens input surface is indicated as C1. FIG.9is a side view of lens structure301, showing radius R1 and focal point C1. The second radius, namely, that radius defining the arc along which PD array206is arranged, is the sum of a+b, which is the distance traveled by the light beams from the focal point C1 to array206. Because height b inFIG.9is vertical, the rear portion of lens structure301is formed with a radius R2 whose center is shifted a distance b in front of C1. This is illustrated inFIG.10. FIG.10is a view from above of lens structure301, showing radius R1 and focal point C1 of the lens input surface, and radius R2 along which PD array206is arranged. The center for radius R2 is shown as C2, which is shifted a distance d away from C1. d is equal to height b inFIG.9. This enables PD array206to use the same focal point C1 as the input surface. Reference is made toFIGS.11-15, which are different views of a focusing optical part, designed to be mounted above a camera sensor on a circuit board by an automated mounting machine, and featuring a reflective objective, in accordance with an embodiment of the present invention.FIGS.11-13illustrate an alternative to the lens structure inFIGS.4-10.FIGS.11-13are different perspective views of a focusing optical part510coupled with a camera sensor511, in accordance with an embodiment of the present invention. The optical part includes a reflective objective, for example, a modified, two-mirror Schwarzschild objective. In the prior art, reflective objectives are known to have an advantage over refracting lenses in terms of chromatic aberrations. Namely, whereas a refractive lens causes chromatic aberrations due to refraction, a reflective objective uses only mirrors. This enables creating an optical system without any refraction, and thus, without any chromatic aberrations, as long as the light reflected by the mirrors travels only through air. It would be counter-intuitive to design a reflective objective that passes light through multiple air-to-plastic interfaces, as these interfaces would refract the light causing chromatic aberrations which the reflective objective is typically designed to eliminate. However, it is difficult to build a reflective objective in a manner that the two mirrors will be suspended in air, yet characterized in that the part is suitable for being delivered on a tape and reel and mounted on a PCB by an automated mounting machine. Therefore, the present invention teaches a reflective objective formed as a single, solid optical part that can be delivered on a tape and reel and mounted on a PCB using automated mounting machinery. FIGS.11and12are perspective views of focusing optical part510from the top, andFIG.13is a perspective view from the bottom. 
Light enters focusing optical part510from the top and exits from the bottom. The top of optical part510is dome-shaped, with a cavity or well311carved out of the center of the dome. The bottom of cavity311is a light-transmissive input surface312through which light enters focusing optical part510. The light is reflected and focused inside focusing optical part510, as explained hereinbelow, and the focused light exits through concave exit surface313onto camera sensor511. At the top of exit surface313is a dome-shaped convex mirror314whose mirror surface faces entry surface312. FIG.14is a wireframe perspective view of focusing optical part510. Focusing optical part510is designed for a 0.3 mm×0.3 mm, 8×8 pixel camera sensor, shown in the figure as element511. Focusing optical part510has a focal length of 0.4 mm and an f-number less than 0.8.FIG.14shows that the part has an upper dome, the interior of which is an upside-down bowl-shaped, concave mirror315, and a lower, upside-down bowl-shaped, concave exit surface313. At the top of the upper dome there is a hollow cavity or well311, the bottom of which is entry surface312. Opposite and underneath entry surface312is convex mirror314. FIG.15shows a cross-section of focusing optical part510and sensor511ofFIG.14. Reference is made toFIGS.16and17, which illustrate light from four different objects traveling through the focusing optical part ofFIGS.11-15to the underlying camera sensor, in accordance with an embodiment of the present invention.FIG.16shows light from four objects,150-153, entering focusing optical part510through entry surface312, reflected inside the optical part by mirrors314and315, and exiting the part through exit surface313onto camera sensor511. Mirrors314and315are a reflective objective. FIG.17is an enlarged portion ofFIG.16showing focused light exiting optical part510and arriving at camera sensor511.FIG.17shows that the light from the four objects,150-153, is directed onto respective focused locations160-163in camera sensor511. Focusing optical part510is designed to be used with a 0.3 mm×0.3 mm, 8×8 pixel, camera sensor511. Thus, the sensor has 4×4 pixels in each quadrant.FIGS.16and17show that light from four different objects entering optical part510at slightly different angles is received as four distinct images on the sensor, each image being focused on a fine point160-163on the sensor. The light150from a first object directly opposite optical part510is focused on the center of sensor511, which is at the center of the four central pixels in sensor511. The light151-153from three other objects entering optical part510at different angles is focused on points161-163, respectively, each point being at the center of a corresponding pixel in sensor511. The angle between incoming light150and incoming light153is 20 degrees. This is the field of view of optical part510coupled with a 0.3 mm×0.3 mm sensor511, as light entering the part at an angle greater than 20 degrees is directed to a location outside the 8×8 pixel sensor. When a larger sensor is used, the field of view is larger. The light entering optical part510inFIG.16is illustrated as a hollow tube. This is because a portion of the incoming light is reflected, by mirror314, back out of the optical part through entry surface312. Thus, this portion of the light does not reach sensor511. 
In order to minimize this light leakage, entry surface312is designed to refract the incoming light so that the light is spread across mirror314to minimize the amount of light reflected back out of the optical part and maximize throughput through the reflective objective. In some embodiments of the invention, entry surface312has a radius of at least 0.25 mm for easy manufacturing. Each incoming light object inFIG.16has a 0.3 mm radius, of which the central 0.1 mm radius is reflected back out of entry surface312, and the remaining light reaches sensor511. In order to calculate the f-number of optical part510, the diameter of the entrance pupil is calculated according to the amount of light reaching sensor511: F-number=focal_length/diameter_of_entrance_pupil entrance pupil area=π*(0.3)^2−π*(0.1)^2=π*0.08 entrance pupil radius=(0.08)^(1/2) entrance pupil diameter=2*(0.08)^(1/2)=0.5657 mm The focal length of optical part510is 0.4 mm, and thus, the f-number is 0.707. Exit surface313is designed to cause almost zero refraction to the focused light exiting optical part510. Some of the salient features of focusing optical part510are its low f-number (less than 1; even less than 0.8), which is much lower than any comparable refractive lens, and its wide field of view (±20°), which requires a very short focal length, particularly when the image height is short (0.15 mm, half the width of sensor511). Reference is made toFIG.18, which is an illustration of the dimensions of the focusing optical part ofFIGS.11-15, in accordance with an embodiment of the present invention.FIG.18shows the dimensions of focusing optical part510, namely, 12 mm in diameter and 6 mm high. As explained hereinabove, camera sensor511mounted beneath focusing optical part510is used to identify the polar angle and azimuth angle in 3D space at which light from the object enters optical part510. In order to identify the location in 3D space at which the object is located, two units, each including a camera sensor511and a focusing optical part510, are used and the polar and azimuth angles reported by the two units are triangulated. Additional units can be added, as discussed below, to add precision and to cover additional areas. In some embodiments of the invention, camera sensor511is a time-of-flight camera and a light emitter is added to the system, whereby the camera reports the time of flight from activation of the emitter until the light is detected at sensor511. This information indicates the radial distance of the object from the sensor. Thus, a single unit is operable to identify the location of the object in 3D space using spherical coordinates, namely, the object's polar angle, azimuth angle and radial distance. In such embodiments too, additional units can be added, as discussed below, to add precision and to cover additional areas. Reference is made toFIGS.19-24, which are illustrations of multiple sensors arranged along the perimeter of a detection zone, in accordance with embodiments of the present invention.FIG.19shows touch screen display601equipped with a triangulating sensor that includes five polar coordinate sensors501-505. In some embodiments of the invention, sensors501-505include focusing optical part510coupled with a camera sensor511, with or without the time-of-flight feature, discussed hereinabove with respect toFIGS.4-10. In other embodiments of the invention sensors501-505include lens structure301and array206of PDs, discussed hereinabove.
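The annular-pupil arithmetic above reproduces in a few lines; a minimal sketch (function name and unit handling are illustrative):

```python
import math

def f_number(focal_length_mm, outer_radius_mm, inner_radius_mm):
    """F-number of an annular entrance pupil: the central portion of the
    incoming beam is reflected back out through the entry surface, so the
    pupil is the ring between the two radii, converted to an equivalent
    circular diameter."""
    area = math.pi * (outer_radius_mm**2 - inner_radius_mm**2)
    equivalent_diameter = 2.0 * math.sqrt(area / math.pi)
    return focal_length_mm / equivalent_diameter

# Values from the text: 0.4 mm focal length, 0.3 mm beam radius,
# central 0.1 mm radius lost back out of the entry surface:
print(round(f_number(0.4, 0.3, 0.1), 3))  # 0.707
```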
Object401is shown detected by polar coordinate sensors501and505, which is sufficient to triangulate the object's location using detected angles α and β when these angles are large and sensitive to small movements of the object. FIG.20shows that the area along the bottom edge of display601is difficult to accurately triangulate because different locations along this edge result in only minimal changes in the reflection angles detected by polar coordinate sensors501and505. I.e., moving object401in the directions indicated by arrows607and608will cause only minimal changes in angles α and β. Moreover, in this region it is difficult to track more than one object because when two objects are in this region each polar coordinate sensor only detects reflections of the closer object; reflections from the distant object are blocked by the closer object. One approach to resolving the problem illustrated by object401inFIG.20is to provide additional polar coordinate sensors around display601such that any movement on the display will cause a significant change in detection angle for at least some of the polar coordinate sensors, and even when multiple objects are present, each object will always be detected by at least two of the sensors.FIG.21shows reflections111,112,126-128of object401detected by five polar coordinate sensors501-505that surround display601enabling the object's position to be triangulated accurately by calculating unit606. FIG.22shows the problem when two objects402,403are situated along, or near, a line connecting two polar coordinate sensors501,505, namely, that each sensor only detects reflections of the closer object; reflections113,114from the distant object are blocked by the closer object. FIG.23shows how the solution of providing additional polar coordinate sensors around display601resolves the issue illustrated byFIG.22, as different reflections113-120of each object are detected by the five polar coordinate sensors501-505that surround display601and thereby enable multi-touch detection for objects402and403ofFIG.23. Another approach to resolving the location of object401inFIG.20is to use the cumulative intensity of the reflections detected at each polar coordinate sensor in addition to the angle, in order to determine an object's position. Specifically, the intensity of the detected reflection changes according to the distance between object401and each polar coordinate sensor. Thus, movement along the bottom edge of display601is tracked by calculating unit606comparing the intensities of the detection by sensor501to the intensities of the detection by sensor505, and also by analyzing each sensor's detections over time as the object moves, to determine if the object is moving toward that sensor or away from it. When the detections at the two sensors do not change at similar rates in opposite directions, it can be inferred that each sensor is detecting reflections from a different object and the movement of each object can be estimated based on the changing intensities of the detections at one of the sensor components over time. The intensity of the detection used is the cumulative output of the entire PD array or camera sensor in each polar coordinate sensor. Yet another approach to resolving the ambiguities discussed in relation toFIGS.20and22is to determine time of flight for detected reflections in each of the polar coordinate sensors. Using time of flight, each sensor identifies not only the object's polar angle, but also its radial distance from the sensor.
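One possible reading of the intensity-comparison heuristic just described, as a sketch only; the linear-trend model, the tolerance and all names are assumptions added for illustration:

```python
import numpy as np

def classify_edge_motion(intensities_501, intensities_505, tol=0.25):
    """If the cumulative intensity rises at one sensor while falling at a
    comparable rate at the other, a single object is likely sliding along
    the edge; otherwise each sensor is likely tracking a different object."""
    t = np.arange(len(intensities_501))
    slope_a = np.polyfit(t, intensities_501, 1)[0]
    slope_b = np.polyfit(t, intensities_505, 1)[0]
    opposite = slope_a * slope_b < 0
    similar = abs(abs(slope_a) - abs(slope_b)) <= tol * max(abs(slope_a), abs(slope_b))
    return "single object" if (opposite and similar) else "multiple objects"

# An object moving away from one sensor and toward the other:
print(classify_edge_motion([90, 80, 70, 60], [40, 52, 61, 71]))  # single object
```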
This enables clearly identifying movement even along the bottom of display601in the directions indicated by arrows607,608inFIG.20, and also enables differentiating the locations of the different objects402,403illustrated inFIG.22, as each sensor detects the radial distance of the object nearest to it. The examples ofFIGS.21and23also serve to illustrate how the present invention provides a scalable solution. Thus, adding additional polar or spherical coordinate sensors around a detection zone increases the resolution of detection. Conversely, cost can be reduced by providing fewer sensors and thereby reducing resolution. Also, the same polar or spherical coordinate sensor hardware is used for different size screens. FIG.24shows further how versatile the polar and spherical coordinate sensors of the present invention are.FIG.24shows a gesture interaction space602that is a complex shape. Nonetheless, the polar or spherical coordinate sensors501-505placed along the perimeter of interaction space602provide touch and gesture detection for that space. Reference is made toFIG.25, which is an illustration of a sensor at the center of a detection zone, in accordance with an embodiment of the present invention.FIG.25shows embodiments of the polar and spherical sensors whose detection zone surrounds the sensor. In the embodiments ofFIG.25sensor506is configured to detect polar angles of reflections arriving from anywhere around the sensor, within detection zone603that ends at border604.FIG.25illustrates three objects401-403detected by reflections121-123, respectively. This sensor has several embodiments and applications. In one embodiment, sensor506detects only the polar angle of a detected reflection. Nonetheless, it is used alone to detect radial movements in detection zone603, e.g., to report clockwise and counterclockwise gestures. For such applications, it is not necessary that the sensor identify the radial distance of a detected object, only its clockwise or counterclockwise movement. One example for such an application is the iPod® click wheel used to navigate several iPod models. IPOD is a trademark of Apple Inc. registered in the United States and other countries. In a second embodiment, sensor506provides time of flight detection and is therefore operable to determine both polar angle and radial distance of a reflective object. In a third embodiment, multiple sensors are placed at different locations such that their detection zones603partially overlap, whereby objects detected by more than one sensor are triangulated. As discussed hereinabove, an illuminator, inter alia one or more LEDs, VCSELs or lasers, is provided for each polar coordinate sensor and spherical coordinate sensor to create detected reflections. Reference is made toFIGS.26and27, which are illustrations of sensor illumination schemes, in accordance with embodiments of the present invention.FIG.26shows an illuminator configuration in which different illuminators illuminate different areas140-145of a display or detection zone for a directional sensor, andFIG.27shows an illuminator configuration for a sensor at the center of a detection zone. The illumination schemes ofFIGS.26and27enable illuminating only those parts of the detection zone in which the object is likely to be located. For example, when a moving object's direction and velocity have been detected, it is possible to assume where within the detection zone it is highly unlikely for the object to be located in the near future.
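Converting a time-of-flight reading and a polar angle into a location is straightforward; a minimal sketch, with the half factor accounting for the out-and-back light path (names and units are illustrative):

```python
import math

C_MM_PER_NS = 299.792458  # speed of light in mm/ns

def tof_to_position(polar_deg, round_trip_ns):
    """Radial distance from the photon round-trip time, plus the Cartesian
    position implied by the detected polar angle, relative to the sensor."""
    r = 0.5 * C_MM_PER_NS * round_trip_ns
    a = math.radians(polar_deg)
    return r, (r * math.cos(a), r * math.sin(a))

# A reflection at 30 degrees with a 1 ns round trip:
r, (x, y) = tof_to_position(30.0, 1.0)
print(round(r, 1), round(x, 1), round(y, 1))  # 149.9 129.8 74.9 (mm)
```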
Furthermore, when a reflection arrives at the sensor component from part of the detection zone in which the object is not likely to be located, that reflection can be ignored as noise or treated as a different object. The sensor components according to the present invention are suitable for numerous applications beyond touch screens and touch control panels, inter alia, for various environmental mapping applications. One application in semi-autonomous vehicles is identifying whether the person in the driver's seat has his feet placed near the gas and brake pedals so as to quickly resume driving the vehicle if required. Additional sensor components are also placed around the driver to identify head orientation and hand and arm positions to determine whether the driver is alert, facing the road and prepared to take control of the vehicle. In some embodiments, the spherical coordinate sensor featuring focusing optical part510and a camera sensor is used to map the body of a vehicle occupant and identify the occupant's behavior, e.g., to determine if a driver is prepared to take over control of a semi-autonomous vehicle. Yet another use for this sensor is to mount it in the rear of a vehicle cabin to detect a baby left in the back seat of a parked car and alert the person leaving the car. Yet another use for this sensor is to mount it in the cargo section of a vehicle, such as a car trunk or an enclosed cargo space in a truck, to determine if a person is inside that section and avoid locking that section with the person inside. In some embodiments of the invention, image processing of a camera image of the occupant is combined with the proximity sensor information to precisely locate a vehicle occupant's limbs and track their movements. In some cases, the image is taken from the same camera used to obtain the polar coordinates based on reflections. Another application is car door collision detection, whereby polar or spherical coordinate sensors are mounted along the bottom edge of a car door facing outward to detect if the door will scrape the curb, hit a tree or stone, or scratch a neighboring parked car as the door opens. In some embodiments, a sensor is mounted such that its detection zone extends between the car and the car door when the door is open, enabling the sensor to detect if a finger or clothing will be caught when the door closes. In yet another application, polar or spherical coordinate sensors are mounted facing outward of a moving vehicle, inter alia, cars, trucks and drones, and generate a proximity map surrounding the vehicle as it moves, like a scanner passing across a document. In yet another application, a polar or spherical coordinate sensor is mounted on the top or bottom of a drone propeller to detect approaching obstacles and prevent drone collision. In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made to the specific exemplary embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. | 27,559 |
11859962 | DETAILED DESCRIPTION OF EMBODIMENTS FIG.1shows a schematic and exemplary representation of a system for performing impact tests on a coating of a probe surface according to some embodiments of the present invention. The system100is preferably a single, automated system incorporating all devices necessary for automatically performing chip impact tests on a coating of a probe surface and afterwards collecting test results and performing suitable analysis thereof. Alternatively, the system100may not combine all necessary devices in one place, in which case the method would not be completely automated but would necessitate human interaction, such as, for instance, a manual transfer of probes from one device to another and/or a manual control of the devices. For performing the impact tests, the system100comprises a ballistic device110. The ballistic device is adapted to receive the ballistic objects which are to be used for the impact tests. Typically, the ballistic objects are steel or stone chips of a predetermined size and with a predetermined weight, wherein the size and the weight distribution of the chips can be subject to corporate or international standards. The ballistic device110receives the chips via a receptacle possessing an opening towards the interior of the device, such that a plurality of chips can be fed to the receptacle without having all of them transferred through the opening at once. Rather, the opening might be passed by the chips in a controlled manner and with a controlled speed, such that a predeterminable amount of chips reaches the interior of the ballistic device110at predeterminable time steps. Chips having reached the interior of the ballistic device are preferably fed further to shooting means, which the ballistic device uses for shooting the chips at a predetermined test location. The shooting means may, for instance, be operable pneumatically, i.e. by air pressure, and are adapted to shoot the chips one by one with a configurable frequency and over a configurable time, wherein the chips reach the test location since the ballistic device is adapted to shoot them with an appropriate speed and in an appropriate direction. The system100is adapted to receive a probe at the test location in an orientation such that its surface faces the ballistic means of the ballistic device110. Hence, once a probe is coated at its surface and located at the test location with its surface oriented towards the shooting means, the ballistic device110may be operated to shoot the chips onto and possibly into the coating of the probe surface. In this way the coating will generally be damaged. The system100further comprises a cleaning device120that is adapted to remove the chips from the coating that are stuck therein after impact. For removing stone chips from the coating, the cleaning device120comprises, for instance, means for rolling a tape and for bringing the tape in contact with the coated probe located at the test location, and hence also for bringing the tape into contact with any chips stuck in the coating. The tape preferably comprises an adhesive such that when the tape is repeatedly brought into contact with the chips stuck in the coating and stripped off therefrom, the stone chips will adhere to the tape and can thereby be removed from the coating.
Preferentially, removing the chips from the coating comprises several repetitions of bringing the tape into contact with the coating and subsequently removing it therefrom, wherein each repetition is followed by a translative movement of the tape, such that every repetition is performed with a fresh portion of tape. System100further comprises a sensing device130that is adapted to sense the surface geometry of the coated probe. Preferably, the sensing device is an optical device. In this way, the geometry can be sensed contactlessly. If the coating surface is not transmissive in the frequency range of the light used for sensing, the sensed geometry will be confined to the surface geometry. If the coating surface is transmissive in the used frequency range, and if the rest of the coating is so as well, then sensing data can be obtained for positions inside the coating as well. The sensing device130will typically have a limited resolution, meaning that it will, in case it is an analog device, determine a distance below which no differences in sensing data can be detected, and in case it is a digital device, it will only collect a limited amount of data points. Calibration of the sensing device will determine the distribution of points at the test location, or on the coated probe surface, for which sensing data can be collected by the sensing device130. System100further comprises an examining device140, such as a computer, that serves for analyzing the sensing data provided by the sensing device130, and which is therefore adapted for examining the coating of the probe surface. The examining device140comprises a providing unit141, such as an interface electronically coupleable to the sensing device130, which is adapted for providing the sensing data. The providing unit may also be understood as a reading unit adapted to read sensing data stored on any storage medium and provide it for further analysis. The examining device140also comprises a determining unit142, possibly realized by a processor of a computer140, which is adapted for determining a depth representation of the coating based on the sensing data, and a deriving unit143, which can be similar or identical to the determining unit142, which is adapted for deriving a property of the coating based on the determined depth representation. In applications unrelated to chip impact testing, such as quality control during an ongoing production process of depositing a structured coating on a film or foil, the sensing device130and the examining device140may alternatively be put to use independently from the rest of system100, such as in the form of a holographic camera. In that case it is particularly preferred that the sensing device is adapted to collect optical sensing data from the interior of the coating, provided the coating is transparent for the light used for sensing. The sensing data are then indicative of interaction of the light with inner regions of the coating. This can be achieved, for instance, by adjusting the traveling distance for the reference beam using an additional slidable mirror. In this way the relative phase shift collected by a beam of light along its way through the coating can enter the sensing data and hence allow conclusions about the refractive index of layers at different partial depths. A change in refractive index can be interpreted by the examining device140as indicating the position of an interface between layers. 
Preferably, phase jumps of 180° arising during reflection of light at an interface towards an optically thicker layer are recognized. Based on such recognition of phase jumps in the sensing data provided by the providing unit141, i.e., for instance, in the holographic interferogram data, the determining unit142may determine a layer profile of the coating. The determining unit142can be adapted to determine the layer profile using a standard technique for digital holographic reconstruction as known by the skilled person, wherein it may be preferred that the reconstruction is performed for a synthetic wavelength if the interferogram data comprise data for more than one actual wavelength. From the layer profile, as a coating property, a layer thickness may be derived by the deriving unit143. For instance, a layer thickness, derivable from the layer profile based on the number of pixels between two interfaces, may be corrected by multiplication with a corresponding refractive index known from previous, independent measurements. Moreover, direct determination of layer thicknesses and/or interface positions, i.e. interface partial depths, from layer profiles generated by holographic means can also be applied in evaluating results of stone impact tests, namely if the coatings are transparent or partially transparent. In that case, characteristic sizes of defects can be determined based on the location of layer interfaces directly determined from a layer profile. FIG.2schematically illustrates how a system100as described in relation toFIG.1may be used for performing impact tests on a coating of a probe surface. The general procedure according to the invention is as follows: In a first step1, ballistic objects are being shot into the coating using the ballistic device110of the system100. Then, in a second step2, the coating is being cleaned by removing the ballistic objects from it that have been shot into it and are stuck therein, using the cleaning device120. Afterwards, in a third step3, sensing data are being collected using the sensing device130. In a fourth step4, the cleaned coating is being examined using the examining device140. Step4will subsequently be described in more detail with reference toFIG.3. FIG.3shows a flow chart illustrating schematically and exemplarily the step of examining the coating as a method in its own right, which could be executed using the examining device140previously described. Accordingly, a first step41consists in providing sensing data indicative of a depth of a coating at each of a subset of probe surface points. In a particular embodiment, the sensing data comprise a first and a second set of data, being representative of a holographic interferogram generated with a laser defined by a first and a second wavelength, respectively, and recorded using a photo-sensitive medium. The interferences encoded in the interferogram stem from phase modulations due to an interaction of a part of the laser beam with the coating, wherein this part, i.e. the probe beam, has been split by interferometric means from the original source laser beam, then brought to interact with the coating, and thereafter superposed again with the remaining part of the source beam serving as the reference beam.
Since the phase modulations are different for different wavelengths, the corresponding interferograms, and therefore the two sets of holographic interferogram data will differ from each other, meaning that more information about the coating is provided than would be provided when using only interferogram data collected for a single wavelength. In this embodiment, step42comprises a step421of calculating reconstructed holographic interferogram data by applying a digital holographic reconstruction to the holographic interferogram data that involves determining reconstruction data corresponding to a synthetic wavelength. The synthetic wavelength preferably corresponds to, i.e. is proportional to the inverse of, a beat frequency of a fictive superposition of a laser beam with the first wavelength and a laser beam with the second wavelength. It is particularly preferred that the synthetic wavelength is equal to the sum of the first and the second wavelength. In this way, large depth variations can be accurately and unambiguously determined without sacrificing short range resolution. Then, in a step422, the depth representation of the coating is being determined based on the reconstructed holographic interferogram data. For instance, the depth representation is a topographic map, particularly a two-dimensional projection of a three-dimensional image arising from the reconstruction visualizing the geometry of the coating surface, wherein a Cartesian coordinate system could be imagined with a reference x-y plane coinciding with the probe surface underlying the coating, which is in this example assumed to be planar. In this configuration, the z-values of points on the coating surface are interpreted as full depths, i.e. thicknesses of the coating. In the preferred case of a digital holographic reconstruction resulting in reconstruction data corresponding to a synthetic wavelength exceeding the first and the second wavelength, both the microstructure of the coating surface as well as substantially larger geometric variations in coating thickness are resolved by the topographic map. Preferably, damage by impact tests is controlled by means of the ballistic device110to be confined to a finite region of the coated probe surface, such as, for instance, a square area of 75 mm×75 mm. In many cases, only this square region will be shown by the topographic map. However, depth values can also be collected outside of the square region. A possibly slanted position of the coated probe surface can then be detected, namely by averaging depth values outside of the square at more than one, preferably at least three, positions, which can be positions at the corners of the probe, for instance. Then, all depth values, including those from inside the impact region and hence included in the topographic map, can be corrected for the slanted position. In step43, a coating property is being derived based on the depth representation. In the presently described embodiments, the depth representation comprises a topographic map, and step43in itself comprises step431of generating a depth histogram, step432of determining a peak depth interval indicative of a position of a local maximum in the histogram, and step433of deriving the coating property based on the determined peak depth interval. 
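The slant correction mentioned above can be realized, for instance, as a plane fit to the averaged reference depths; the least-squares formulation and pixel-coordinate convention below are assumptions, since the text only states that depth values are corrected for the slanted position:

```python
import numpy as np

def correct_slant(depth_map, reference_points):
    """Fit a plane z = a*x + b*y + c to (x, y, mean_depth) reference
    points averaged outside the impact region (e.g. at three probe
    corners) and subtract it, leaving depths relative to the fitted
    intact-surface plane."""
    pts = np.asarray(reference_points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    a, b, c = np.linalg.lstsq(A, pts[:, 2], rcond=None)[0]
    ny, nx = depth_map.shape
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny))
    return depth_map - (a * xs + b * ys + c)

# Three corner averages revealing a tilt along x:
corners = [(0, 0, 100.0), (200, 0, 102.0), (0, 200, 100.0)]
corrected = correct_slant(np.full((201, 201), 101.0), corners)
```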
Step431, in turn, comprises a step4311of counting, for each of the predetermined depth intervals, a number of probe surface points in the subset for which the coating depth indicated by the topographic map lies in the respective depth interval, and step4312of associating the counting results with the respective depth interval. In this particular embodiment, the predetermined depth intervals are chosen to have equal sizes, wherein the size is determined to be 1 μm, and wherein the number of depth intervals is taken to be equal to an estimated maximum coating depth, which corresponds to a depth of the coating in a region where it has not been damaged. Typically, in this way the number of depth intervals is at least 100, since damages to the coating arising from impact tests are expected to lead to a reduced depth of the coating in the impact regions by up to 100 μm. Counting is then performed by going through all points in the topographic map and deciding for each of them whether the depth associated to it lies within a given depth interval. Deciding whether a depth value lies within a given depth interval comprises comparing the depth value to the boundaries of the depth interval. For instance, it is decided that the depth value lies within the depth interval if it is greater or equal to the lower boundary of the depth interval but less than the upper boundary of the depth interval. In this way, the decision can be made correspondingly for all depth intervals without risking that there are any depth values that are not represented in the depth histogram. In the context of chip impact tests, the depth values may be penetration depth values dp, i.e. indicative of a deviation between a coating thickness at a given probe surface point and an intact coating depth. The intact coating depth may correspond to a coating thickness outside the impact region. Penetration depths can also be determined without knowledge about the actual coating thickness, but simply by determining the difference between the value z0of the z-coordinate corresponding to the intact coating surface, possibly corrected for a slanted probe position as previously described, and the value of the z-coordinate of the coating surface at the respective point, i.e. dp=z0−z. Preferably, only penetration depth values with an absolute value beyond a predeterminable roughness level are taken into account by setting a corresponding margin around z0in both directions. The size of this margin can be set manually or automatically. When set automatically, it is preferably learned from a reference pattern, such as a reference probe, and thereafter applied to all further probes. In impact testing, the margin can also serve for ignoring impacts of chips that have been bounced off the coating surface without substantially damaging it. The margin will generally depend on the type of coating and can range from values as small as 0.5 μm for small ignorance levels to values as high as 5 μm for transparent coatings, like clearcoats.
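Steps4311and4312amount to a standard fixed-bin histogram; a minimal numpy sketch, where the 2 μm default margin is just an illustrative value within the 0.5-5 μm range given above:

```python
import numpy as np

def depth_histogram(penetration_depths_um, max_depth_um=100, margin_um=2.0):
    """Count probe surface points per 1 um depth interval, lower boundary
    inclusive and upper boundary exclusive (numpy's convention for all but
    the last bin), ignoring values inside the roughness margin."""
    d = np.asarray(penetration_depths_um, dtype=float)
    d = d[np.abs(d) > margin_um]            # drop near-surface roughness
    edges = np.arange(0, max_depth_um + 1)  # 1 um intervals
    counts, _ = np.histogram(d, bins=edges)
    return counts, edges
```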
Step432of determining a peak depth interval indicative of a position of a local maximum in the histogram comprises, in this particular embodiment described with reference toFIG.3, step4321of providing a tolerance threshold value indicative of a base depth variety, step4322of computing count differences by subtracting counting results associated with selected depth intervals, step4323of determining signs of selected count differences and/or comparing counting results associated with selected depth intervals, and step4324of determining the peak depth interval based on the provided tolerance threshold value, the computed count differences, and the determined signs and/or compared counting results. In this embodiment, the tolerance threshold serves for eliminating a base level, i.e. a number of counts detected for substantially all depth intervals, and is determined automatically. Its presence is due to the generally non-uniform shape of the defects. In other embodiments, the tolerance threshold is set manually. Step4322preferably uses standard techniques for determining the discrete derivative of a function, wherein the function is in this case given by the counting results in dependence on the coating depth. Step4323is in this embodiment realized by detecting between which two adjacent depth intervals the sign of the discrete derivative of the counting results, i.e. the count differences, changes, wherein detecting is only executed for depth intervals associated with counting results exceeding the provided tolerance threshold. Once a peak depth interval is determined, a coating property is derived based on it in step433. Assessing44the coating can then be performed based on the derived coating property. The coating property preferably indicates a change in structure and/or material of the coating. This is because ballistic objects shot at and into the coating will leave defects therein that preferably display a depth corresponding to a change in structure and/or material in the coating, such as possibly given by an interface between two coating layers. A first type of defects leads to sharp peaks due to a distinguished off-chipping of coating material at layer interfaces. A second type of defects leads to broader peaks in the histogram. This second type of defects arises if the ballistic objects enter the coating material in a more digging manner, displaying greater parts of the layers along the defect walls. Preferably, the coating comprises a thin surface layer, a first major defect layer underlying the surface layer, and a second major defect layer underlying the first major defect layer. In that case, the histogram comprises three peak depth intervals, wherein a first peak depth interval is indicative of the surface layer of the coating, the second peak depth interval is indicative of the first major defect layer and the third peak depth interval is indicative of the second major defect layer of the coating. A coating property can then be derived as an interface partial depth value indicative of a position at which a transition occurs between, for example, the first major defect layer and the second major defect layer. In particular embodiments, a characteristic size can be estimated for each defect by measuring its extent, in the topographic map, in a direction parallel to the probe surface and at a depth corresponding to the determined interface partial depth.
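Steps4321-4324can be sketched as a sign-change scan over the discrete derivative; tolerance handling and names are illustrative:

```python
import numpy as np

def find_peak_intervals(counts, tolerance):
    """Report the depth intervals where the count differences (step4322)
    change sign from plus to minus (step4323), considering only intervals
    whose counts exceed the tolerance threshold, i.e. the base level
    (steps4321 and 4324)."""
    counts = np.asarray(counts, dtype=float)
    signs = np.sign(np.diff(counts))
    return [i for i in range(1, len(signs))
            if signs[i - 1] > 0 and signs[i] < 0 and counts[i] > tolerance]

# Toy histogram with a base level near 5 and peaks at intervals 3 and 8:
print(find_peak_intervals([5, 6, 9, 20, 8, 5, 6, 12, 30, 10, 5], 7))  # [3, 8]
```

Adjacent candidate sign changes can then be merged by averaging, or refined by a peak-function fit, as described below.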
The characteristic sizes of the plurality of defects can be statistically analyzed, wherein the statistical analysis can be used for assessing a mechanical resistance of the coating according to standard reference values. For instance, such a standard reference value might depend on an average characteristic size of the plurality of defects. It may further, additionally or alternatively, depend on a total fraction of damaged area of the coating surface. An estimate for the damaged area might be given by the total number of points attributed to a defect. This number can be estimated based on the depth histogram as well, namely by summing counting results associated to peaks. Counting results may be associated to peaks if they are associated to a depth interval lying within a predetermined range around the corresponding determined peak depth interval indicative of the maximum of the peak. For instance, counting results can be included if they lie within a 6σ range of a peak, σ denoting the standard deviation. Other ranges can however also be applied, possibly learned from model systems. Also, it is possible that only those counting results are included that are associated to a depth interval indicative of a position beyond a determined interface partial depth. FIG.4shows an exemplary depth histogram H in accordance with the embodiment described with respect to the previous Figures. The horizontal axis indicates a penetration depth dpinto the coating in units of 1 μm, and the vertical axis indicates the counting results n, i.e. the number of points in the predetermined subset of probe surface points for which sensing data have initially been provided indicative of the respective penetration depth. For instance, approximately 10,000 points exist on the examined portion of the coating for which a penetration depth of 7±0.5 μm has been determined. The histogram H displays three maxima, of which two are strongly pronounced, while the first maximum has a peak value that is substantially lower. The respective peak depth intervals lie between 2 μm and 6 μm for the first peak, between 45 μm and 57 μm for the second peak, and between 87 μm and 98 μm for the third peak. Between the peaks, a threshold value of approximately 17,000 counts is not exceeded. At the same time, counting results also nowhere approach 0 in the displayed range of penetration depths. This base level of counts between 0 and 17,000 can be understood as irrelevant for deriving the coating property, and could therefore be eliminated by setting the tolerance threshold to be equal to 17,000. FIG.5illustrates the discrete derivative H′ of the histogram H ofFIG.4. Again, the horizontal axis indicates the penetration depth dpin μm, while the vertical axis now measures how the counting results n change from one penetration depth interval to the next. This change, denoted by Δn/Δdp, is measured in units of inverse micrometres, μm⁻¹. The second and third maxima are rather easily detected based on the clearly distinguishable sign changes of the discrete derivative at about 51 μm and about 92 μm, respectively, wherein in both cases the sign changes from plus to minus. In order to determine the first peak depth position, two penetration depth values serve as candidates, namely a value of about 5 μm and a value of about 7 μm, since at both of these values the sign of the discrete derivative changes from plus to minus. The true peak depth interval position is in this case calculated by averaging, leading to 6 μm.
In other embodiments, the true peak depth interval position is determined by fitting an analytical peak function, possibly Gaussian shaped, to the counting results in a neighbourhood of the peaks, and the peak position is estimated as the position of the maximum of the analytical peak function. FIG.6illustrates a cross-section through an exemplary portion of a probe covered with a multi-layer coating having a defect due to impact of a chip during a chip impact testing procedure.FIG.6could also be viewed as a schematic visualization of a layer profile as determined based on sensing data also from the interior of the coating and, in this case, also the probe. The probe comprises a bulk part PB and a probe surface PS. The coating overlies the probe surface PS and comprises three layers. The top layer is a comparably thin surface layer LS. Underneath, a first main layer L1and a second main layer L2follow, wherein the surface layer LS and the first main layer are divided by an interface IS1. In this example, the first main layer L1is sandwiched between the surface layer LS and the second main layer L2, the second layer having direct contact with the probe surface PS. The defect D affects all three layers. In other examples, the coating may comprise further layers, the lower ones lying deep enough not to be affected by chip impact. Since in the example shown inFIG.6the first main layer L1and the second main layer L2are both substantially affected, they could also be called first and second major defect layer, respectively. A characteristic size of the defect D is assumed to be given by its width W, as measured in the cross-sectional plane and along the interface I12between the first main defect layer L1and the second main defect layer L2. The location, i.e. partial depth, of the interface is known either from the respective peak depth interval determined from a depth histogram, or, in the case of transparent or partially transparent coatings, from the layer profile, as explained above. The cross-sectional plane in which W is measured might be chosen, according to corporate conventions, for instance, such that W is maximal. Optionally, W might further be corrected to W′=(W+Wperp)/2, wherein Wperpis the maximal extent of the defect as measured along the interface I12in the cross-sectional plane perpendicular to the original one. A defect is then sorted, again by convention, into a severeness class defined by given threshold values. For example, a defect with W(′)≤1 mm will be classified into the lowest severeness class, while a defect with W(′) exceeding 1 mm will already be classified as more severe. The distribution of severeness classes over all defects, possibly approximately parametrized by a mean severeness class and/or a standard deviation, accurately measures the resistance of the tested coating against chip impacts. Its characteristics can serve for assessing44a mechanical resistance of the coating. Although the embodiments described above with reference to the Figures concerned an application of the invention for examining coatings with respect to impacts arising from impact testing procedures and using digital holographic techniques, it is understood that the same or similar principles can also be applied for other purposes and with different means. In particular, the disclosed method is not limited to be used in conjunction with holographic imaging techniques, but can likewise make use of other means for providing sensing data suitable for determining a depth representation.
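The severeness sorting can be illustrated as follows; only the 1 mm lowest-class boundary comes from the text, the higher thresholds being placeholders for whatever corporate convention applies:

```python
import statistics

def severeness_class(w_mm, w_perp_mm, thresholds_mm=(1.0, 2.0, 3.0)):
    """Sort a defect into a severeness class from its corrected width
    W' = (W + Wperp)/2; class 0 is the least severe."""
    w_corr = 0.5 * (w_mm + w_perp_mm)
    return sum(w_corr > t for t in thresholds_mm)

widths = [(0.8, 0.6), (1.4, 1.8), (2.6, 3.0)]
classes = [severeness_class(w, wp) for w, wp in widths]
print(classes, statistics.mean(classes), statistics.pstdev(classes))
# [0, 1, 2] 1 0.816... (mean class and spread of the distribution)
```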
Such means specifically include profilometric and deflectometric measurement and/or imaging systems. Also, it will be appreciated that the method also finds applications for monitoring on-going production processes in which structured coatings are applied to films or foils, or for assessing results thereof, such as whether the particular coating structure desirable for obtaining a certain functionality has been achieved. In particular, although determining a depth histogram has been proposed only in relation to stone impact tests, it is understood that depth histograms as referred to herein will also be of advantage in other types of examinations of coatings by deriving objective characteristics therefrom. On the other hand, it is to be noted that in impact tests performed for transparent or partially transparent coatings, such as clearcoats, additionally or alternatively to determining a depth histogram from a topographic map of the coating, coating properties like the position of layer interfaces may also be derived from a layer profile. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practising the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite articles “a” or “an” do not exclude a plurality. A single unit or device may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Procedures like the providing of sensing data, the determining of a depth representation of the coating from the sensing data, as well as the deriving of a coating property based on the depth representation, but also procedures like the shooting of ballistic objects into a coating, cleaning the coating, and the collecting of sensing data, described as performed by one or several units or devices, can be performed by any other number of units or devices. These procedures and/or the operations of the system can be implemented as instructions of a computer program and/or as dedicated hardware. A computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid state storage medium, supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless communication systems. Any reference signs in the claims should not be construed as limiting the scope. | 29,659 |
11859963 | DETAILED DESCRIPTION The principles, uses, and implementations of the teachings herein may be better understood with reference to the accompanying description and figures. Upon perusal of the description and figures present herein, one skilled in the art will be able to implement the teachings herein without undue effort or experimentation. In the figures, same reference numerals refer to same parts throughout. In the description and claims of the application, the words “include” and “have”, and forms thereof, are not limited to members in a list with which the words may be associated. As used herein, the term “substantially” may be used to specify that a first property, quantity, or parameter is close or equal to a second or a target property, quantity, or parameter. For example, a first object and a second object may be said to be of “substantially the same length”, when a length of the first object measures at least 80% (or some other pre-defined threshold percentage) and no more than 120% (or some other pre-defined threshold percentage) of a length of the second object. In particular, the case wherein the first object is of the same length as the second object is also encompassed by the statement that the first object and the second object are of “substantially the same length”. According to some embodiments, the target quantity may refer to an optimal parameter, which may in principle be obtainable using mathematical optimization software. Accordingly, for example, a value assumed by a parameter may be said to be “substantially equal” to the maximum possible value assumable by the parameter, when the value of the parameter is equal to at least 80% (or some other pre-defined threshold percentage) of the maximum possible value. In particular, the case wherein the value of the parameter is equal to the maximum possible value is also encompassed by the statement that the value assumed by the parameter is “substantially equal” to the maximum possible value assumable by the parameter. As used herein, the term “about” may be used to specify a value of a quantity or parameter (e.g. the length of an element) to within a continuous range of values in the neighborhood of (and including) a given (stated) value. According to some embodiments, “about” may specify the value of a parameter to be between 80% and 120% of the given value. For example, the statement “the length of the element is equal to about 1 m” is equivalent to the statement “the length of the element is between 0.8 m and 1.2 m”. According to some embodiments, “about” may specify the value of a parameter to be between 90% and 110% of the given value. According to some embodiments, “about” may specify the value of a parameter to be between 95% and 105% of the given value. As used herein, according to some embodiments, the terms “substantially” and “about” may be interchangeable. For ease of description, in some of the figures a three-dimensional cartesian coordinate system (with orthogonal axes x, y, and z) is introduced. It is noted that the orientation of the coordinate system relative to a depicted object may vary from one figure to another. Further, the symbol ⊙ may be used to represent an axis pointing “out of the page”, while the symbol ⊗ may be used to represent an axis pointing “into the page”. Referring to the figures, in block diagrams and flowcharts, optional elements and operations, respectively, may appear within boxes delineated by a dashed line. 
The present disclosure advantageously expands the picosecond ultrasonics technique to allow for three-dimensional probing of samples, such as semiconductor devices, characterized by one or more (physical) properties, which may vary along one or more lateral directions. The manner in which the properties vary may be depth dependent. Thus, the present disclosure generalizes the picosecond ultrasonics technique to allow obtaining not only one-dimensional structural information regarding a probed structure but also two and three-dimensional structural information. In particular, the methods and systems of the present disclosure advantageously allow for three-dimensional probing of nanostructures within a sample (e.g. a wafer), which are positioned too deeply within the sample, or extend too deeply into the sample, to allow probing thereof using a scanning electron microscope. More specifically, the present disclosure expands the picosecond ultrasonics technique to allow for depth-profiling of one or more “lateral structural features”. That is, structural features whose geometries, densities, and/or material compositions, for example, vary along at least one lateral dimension, and whose lateral extent may be as small as 5 nm (at least along one lateral direction). In particular, the methods and systems of the present disclosure enable estimating a depth-dependence of one or more parameters parameterizing a lateral structural feature (e.g. one or more parameters characterizing the variation in geometry and/or density along one or more lateral directions). As a non-limiting example, intended to render the exposition more concrete,FIGS.1A-1Cschematically depict different stages, respectively, in depth-profiling of a sample100(e.g. a semiconductor device) characterized by lateral structural variation in the sense of including a lateral structural feature, according to some embodiments of the present disclosure. More precisely, a target region110of sample100is depicted. A dashed line B1indicates a first (lower, as depicted in the figure) boundary of target region110and a dashed line B1′ indicates a second (upper, as depicted in the figure) boundary of target region110. Target region110includes three adjacent subregions: a first side-subregion110a, a second side-subregion110b, and a mid-subregion110c. Mid-subregion110cis positioned between first side-subregion110aand second side-subregion110b. Side-subregions110aand110bmay, for example, correspond to a first solid medium, characterized by a first refractive index, while mid-subregion110cmay, for example, correspond to a second solid medium, characterized by a second refractive index, which differs from the first refractive index. Additionally, or alternatively, according to some embodiments, the first solid medium and the second solid medium may be characterized by a different speed of sound. In this regard, it is noted that for implementing the depth-profiling methods disclosed herein, it is sufficient that at least one of the refractive index and the speed of sound varies across the target region. According to some alternative embodiments, not depicted inFIGS.1A-1C, mid-subregion110cmay constitute a void. WhileFIGS.1A-1Cpresent two-dimensional views of target region110, taken along the zx-plane, to facilitate the description, target region110is assumed to be uniform along the y-axis up to fabrication imperfections. Hence,FIGS.1A-1Ceffectively fully depict target region110. 
InFIGS.1A-1C, the width of mid-subregion110cdecreases as the z-coordinate is increased. Thus, the structure of target region110exhibits not only lateral (i.e. transverse) variation (or dependence) but also longitudinal variation. In other words, the geometry of target region110varies not only along the x-axis but also along the z-axis (for some ranges of values of x). More specifically, as target region110is traversed in parallel to the x-axis from the first boundary thereof, indicated by dashed line B1, to the second boundary thereof, indicated by dashed line B1′, the material composition thereof changes twice. In this sense the structure of target region110may be said to include a lateral structural feature. More specifically, the lateral structural feature is constituted by the change from the first medium (characterized by a first refractive index) to the second medium (characterized by a second refractive index) and back again to the first medium, as target region110is traversed in parallel to the x-axis. The shape of mid-subregion110cmay be estimated utilizing the methods and systems disclosed herein. More precisely, a depth-dependence (i.e. a dependence on the z-coordinate) of a width (indicated by w(z)) of mid-subregion110cmay be estimated utilizing the disclosed methods and systems. In this sense, the disclosed methods and systems are said to allow for depth-profiling of a sample (e.g. a semiconductor device). In particular, the present disclosure teaches how to estimate the depth-dependence of a parameter characterizing a lateral structural feature. InFIGS.1A-1C, a natural choice for the parameter is the width (indicated by w(z)) of mid-subregion110c. Thus, utilizing the methods and systems of the present disclosure the function w(z) may be evaluated. Target region110further includes a lateral absorption layer120. According to some embodiments, and as depicted inFIGS.1A-1C, absorption layer120is positioned adjacently to an external surface124of sample100. Referring toFIG.1A, a pump pulse123(e.g. a laser pulse) is projected on external surface124on an area thereof adjacent to absorption layer120. Pump pulse123is configured to be absorbed in absorption layer120, and thereby heat absorption layer120. It is noted that a (lateral) area of absorption layer120is dependent on a beam width DIA1(i.e. the beam diameter; indicated by a double-headed arrow) of pump pulse123. The increase of temperature (of absorption layer120) induces mechanical strain(s) in absorption layer120, resulting in the production of an acoustic pulse125(shown inFIGS.1B and1C). Acoustic pulse125propagates away from external surface124and into the depth of target region110. As acoustic pulse125propagates within target region110, acoustic pulse125temporarily locally modifies the density distribution across the segment whereat the acoustic pulse is momentarily localized. This in turn leads to a temporary modification in the refractive index due to the elasto-optic effect. These changes in the refractive index may be sensed through Brillouin scattering. More specifically and as elaborated on below, a series of probe pulses may be scattered off a series of acoustic pulses (such as acoustic pulse125), each at a respective depth, and respective (backward) scattered components of the probe pulses may be detected to obtain a plurality of measured signals.
The plurality of measured signals may be analyzed to reveal structural information about a probed structure, such as, for example, the width w(z) of mid-subregion110c. The beam diameter of pump pulse123may be greater than, or at least equal to, a maximum width of mid-subregion110c, or in mathematical terms DIA1≥wmax, wherein wmax=maxz[w(z)] (i.e. the maximum value assumable by w(z)). This in turn implies that a lateral extent of acoustic pulse125is at least about equal to the maximum width of mid-subregion110c, which in this case may constitute the lateral extent (i.e. maximum lateral extent) of the lateral structural feature. From optical diffraction limit considerations, to produce a well-defined beam, a wavelength of pump pulse123cannot be smaller than about 2·DIA1. Therefore, the wavelength of pump pulse123may be greater than, or about equal to, 2·wmax. While inFIGS.1A-1C, the width of target region110is depicted as greater than the maximum width of mid-subregion110c(i.e. wmax), it is to be understood that the width of target region110may be selected to be narrower. In particular, the width of target region110may be selected to be equal to wmax(by selecting the beam diameter of pump pulse123to be equal to wmax). Referring toFIG.1C, a probe pulse127(e.g. a laser pulse) is projected on external surface124on an area thereof adjacent to absorption layer120. Probe pulse127is configured to penetrate into target region110, such as to undergo Brillouin scattering off acoustic pulse125(within target region110). Further indicated is a scattered component131of probe pulse127, which is (Brillouin) scattered backwards off acoustic pulse125. Scattered component131may be detected by a (light) detector132to produce a corresponding measured signal. The depth (or, what amounts to the same thing, the z-coordinate) at which the target region is probed is determined by the depth at which probe pulse127“engages” acoustic pulse125, i.e. undergoes (Brillouin) scattering therefrom. By controlling a time-delay Δt between the launch of pump pulse123and the launch of probe pulse127, the z-coordinate at which the probing occurs may be selectively controlled. By implementing the above-described operations at different time-delays Δt, target region110may be probed at a plurality of corresponding depths, and, in particular, all along the depth dimension of target region110. In this way, a plurality of measured signals may be obtained, corresponding to a plurality of (probed) depths. As described in detail below in the Methods subsection, the plurality of measured signals may be analyzed to evaluate (estimate) w(z), or, in other words, the dependence of the width of mid-subregion110con the depth. It is noted that the measurement resolution along the z-axis is determined by a width u of acoustic pulse125, which in turn may be determined by a thickness b (indicated inFIGS.1B and1C) of absorption layer120, as discussed in more detail below in the Methods subsection. According to some embodiments, u may advantageously be as small as about 10 nm. Finally, it is noted that while inFIGS.1A-1Cabsorption layer120is shown as being adjacent to external surface124of sample100, other options are in general possible, as depicted, for example, inFIGS.4and5. In particular, according to some embodiments, the absorption layer may be fully embedded within the sample region and/or positioned outside the target region.
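The delay-controlled probing sequence described above may be sketched in software as follows. The sketch is illustrative only and assumes an absorption layer at the external surface; the constants (speed of sound, formation time, target-region extent) are invented, and fire_pump_and_probe is a hypothetical stand-in for the hardware cycle of projecting a pump pulse, projecting a delayed probe pulse, and detecting the scattered component.

    import numpy as np

    # Assumed setup constants (illustrative only, not taken from the disclosure):
    V_SOUND = 8433.0    # speed of sound in the target region, m/s
    T_FORM = 5e-12      # formation time of the acoustic pulse, s
    DEPTH_MAX = 200e-9  # longitudinal extent of the target region, m

    def delay_for_depth(s):
        # Time delay placing the scattering event at depth s for an absorption
        # layer at the external surface: s = V_SOUND * (delay - T_FORM).
        return T_FORM + s / V_SOUND

    def fire_pump_and_probe(delay):
        # Hypothetical stand-in for one pump-probe cycle; returns a fabricated
        # reading so that the sketch runs end to end.
        return np.sin(2 * np.pi * 40e9 * (delay - T_FORM))

    # Sweep the time delay to probe the target region all along its depth.
    depths = np.linspace(0.0, DEPTH_MAX, 201)
    measured_signals = [fire_pump_and_probe(delay_for_depth(s)) for s in depths]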
As used herein, according to some embodiments, the term “lateral structural feature” refers to a structure which laterally varies along at least one (lateral) direction in the sense that along that direction a value of at least one parameter characterizing the structure is not constant. A lateral structural feature may manifest, for example, as a change in one or more of a geometry, material composition, media, mass density, density of embedded elements and/or voids, spatial arrangement of embedded elements and/or voids, doping concentration(s) (i.e. density of doping impurities), which in turn manifests as a change in the refractive index, and/or the speed of sound, along at least one lateral direction. Variations in optical properties such as, for example, variations in birefringence and/or optical anisotropy (i.e. the dependence of the refractive index on the polarization and/or propagation direction of light therein) along at least one lateral direction may also constitute a lateral structural feature in the sense described above. It is noted that a lateral structural feature may vary along two lateral directions or only along a single lateral direction. The disclosed methods and systems allow for depth-profiling in both cases.FIG.4andFIGS.6A-6Gdepict examples of lateral structural features which laterally vary along two (lateral) directions.FIGS.1A-1C,FIG.5, andFIGS.7A-7Edepict examples of lateral structural features, which (up to fabrication imperfections) laterally vary only along a single direction (namely, in parallel to the x-axis). Generally, a target region is selected to fully include a lateral structural feature. If the lateral structural feature forms part of a greater feature, which extends beyond the target region, variation of the greater feature outside the target region is not probed (and does not affect the classification of the lateral structural feature as varying along one or two lateral directions). Otherwise, the size of the target region may be increased (by increasing the beam width of the pump and probe pulses) to fully include the greater feature. As used herein, according to some embodiments, the term “lateral extent” with reference to a lateral structural feature, which laterally varies along two lateral directions, may refer to a maximal lateral extent of the feature (wherein the maximum is taken over the depth dimension). For example, the lateral extent of a (circular) cylinder, whose symmetry axis is parallel to the longitudinal direction, is the diameter of the cylinder, while the lateral extent of a conical frustum, whose symmetry axis is parallel to the longitudinal direction, is the greater of its two diameters. The lateral extent of an elliptical cylinder (i.e. a cylinder whose lateral cross-section defines an ellipse), whose symmetry axis is parallel to the longitudinal direction, is the greater of the two diameters of the ellipse. According to some embodiments, the term “lateral extent” with reference to a lateral structural feature, which laterally varies only along a single lateral direction, may refer to a maximal extent of the feature along that lateral direction (wherein the maximum is taken over the depth dimension). According to some embodiments, the term “lateral extent” with reference to a lateral structural feature, whose rate of change along a first lateral direction is significantly greater than along a second lateral direction, may refer to a maximal extent of the feature along the first lateral direction.
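A minimal sketch of the maximal-extent convention defined above follows, in Python; the function name and the frustum example values are assumptions introduced for the illustration only.

    import numpy as np

    def lateral_extent(width_at_depth, depths):
        # Maximal lateral extent: the maximum, over the depth dimension, of
        # the feature's width along the chosen lateral direction.
        return max(width_at_depth(s) for s in depths)

    # Conical frustum whose diameter shrinks linearly from 40 nm to 20 nm over
    # a 100 nm depth: the lateral extent is the greater diameter, 40 nm.
    depths = np.linspace(0.0, 100e-9, 101)
    print(lateral_extent(lambda s: 40e-9 - 0.2 * s, depths))  # -> 4e-08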
According to some embodiments, the term “lateral extent” with reference to a lateral structural feature, which laterally varies along two lateral directions, may refer to a maximal extent of the feature along a specific lateral direction, irrespective of whether the maximal extent of the feature along another lateral direction may be greater. This may be the case, for example, in embodiments wherein the lateral structural feature is sought to be depth-profiled only along one lateral direction.

Systems

FIG.2schematically depicts a computerized system200for depth-profiling of samples, such as semiconductor devices and structures, according to some embodiments. System200includes a light source202, a detector204(a light sensor), a measurement data analysis module208, a stage212, a controller214, optical equipment216, and a lock-in amplifier218. Light source202may be a coherent light source, such as a laser source (i.e. a laser generator). Optical equipment216may include a pump modulator222, a variable delay line226, a filter230(e.g. an optical filter), and, optionally, one or more of a probe modulator232, a pump polarization module236, and/or a probe polarization module238. Optical equipment216may further include an objective lens244and a plurality of beam splitters246: a first beam splitter246a, a second beam splitter246b, and a third beam splitter246c. Light source202, detector204, stage212, and optical equipment216constitute, or form part of, an optical setup (not numbered) of system200. Stage212is configured to have placed thereon a sample, such as a sample250(e.g. a semiconductor device or structure). Also indicated is a target region252in sample250, which is to be probed (i.e. depth-profiled). As elaborated on below, target region252may include a lateral structural feature (not shown inFIG.2). That is, a structure, which varies along at least one lateral direction (i.e. in parallel to the xy-plane), in the sense that along that direction, a value of at least one parameter characterizing the structure is not constant. According to some embodiments, the structure of target region252may further vary along the longitudinal direction (i.e. along the z-axis, which quantifies the depth). According to some embodiments, target region252may include a plurality of the lateral structural feature (i.e. a plurality of same lateral structural features), which constitutes a composite lateral structural feature. According to some such embodiments, the plurality of the lateral structural feature constitutes a repeating lateral structural pattern (i.e. a composite lateral structural feature, which is periodic). According to some embodiments, target region252may include a plurality of different lateral structural features. According to some embodiments, a plurality of lateral structural features, particularly when not arranged periodically, may be collectively considered as a single lateral structural feature. Controller214may be functionally associated with each of light source202, detector204, stage212, optical equipment216, and lock-in amplifier218. More specifically, controller214is configured to control and synchronize operations and functions of the above-listed modules and components—particularly those of optical equipment216—during depth-profiling of a sample.
For example, controller214may set a sequence of time-delays, which are imposed by variable delay line226, such that the minimum time-delay allows probing target region252at maximum depth and the maximum time-delay allows probing target region252at minimum depth. According to some embodiments, stage212may be movable, at least along one or more lateral directions, thereby allowing depth-profiling of different target regions in sample250. According to some embodiments, stage212may be configured to allow monitoring and controlling the temperature of a sample placed thereon. For example, according to some embodiments, a sample-placement surface of stage212(i.e. the top surface of stage212) may be controllably cooled (and optionally also heated). In operation, light source202may produce a laser beam215(e.g. a laser pulse) directed at first beam splitter246a. According to some embodiments, laser beam215may be a laser pulse or may include a series of pulses. A first sub-beam215a, also referred to as a probe pulse227, indicates the portion of laser beam215that is passed through first beam splitter246a. A second sub-beam215bindicates the portion of laser beam215that is reflected off first beam splitter246atowards pump modulator222. Second sub-beam215bis modulated by pump modulator222, thereby preparing a pump pulse223(indicated by dashed-dotted arrows), as elaborated on below. Pump pulse223travels from pump modulator222towards objective lens244(via second beam splitter246b) and is focused thereby on sample250. Pump pulse223is configured to be absorbed by an absorption layer (not shown inFIG.2) of sample250and thereby generate an acoustic pulse (not shown inFIG.2), essentially as described in the description ofFIGS.1A-1Cand as further elaborated on below. Some possible configurations of target regions and absorption layers in samples are depicted inFIGS.1A-1C,4, and5, as well as inFIGS.6A-6G and7A-7E. According to some embodiments, pump modulator222may be configured to modulate a waveform of second sub-beam215b, such that pump pulse223is characterized by a pump carrier (i.e. carrier wave) and a pump envelope: The pump carrier may be configured (e.g. is characterized by a wavelength) to facilitate penetration of pump pulse223into the sample, and—when the absorption layer is fully embedded within sample (as depicted, for example, inFIGS.4and5)—propagation therein onto the absorption layer, as well as absorbance of pump pulse223within the absorption layer, as described below in the Methods subsection. The pump envelope may be configured to facilitate the separation of a scattered component of the probe pulse from background signals and noise, thus improving detection. According to some such embodiments, pump modulator222may include a frequency-doubler (not shown). According to some embodiments, a portion of pump pulse223may be returned from sample250(due to one or more scattering and/or reflection mechanisms). Such a returned component251of pump pulse223is indicated inFIG.2by dashed-double-dotted arrows. Returned component251may be substantially filtered out by filter230, as elaborated on below. According to some embodiments, wherein optical equipment216further includes pump polarization module236(e.g. a polarization filter whose polarization angle is controllable), pump polarization module236may be used to modify the polarization of pump pulse223(e.g. 
from circular polarization to linear polarization), such as to maximize, or substantially maximize, the absorption of pump pulse223within the absorption layer (as described, for example, in the description ofFIGS.7A and7B), and thereby increase the magnitude of the Brillouin oscillations, potentially facilitating extraction thereof by measurement data analysis module208. From first beam splitter246aprobe pulse227(i.e. first sub-beam215a) travels to variable delay line226. Variable delay line226is configured to delay probe pulse227for a time interval, which may be controllably selected, as elaborated on below. According to some embodiments, from variable delay line226, the delayed probe pulse227(indicated by dashed arrows) continues to third beam splitter246c. According to some embodiments, and as depicted inFIG.2, on the way to third beam splitter246cprobe pulse227may pass through probe modulator232. In such embodiments, probe pulse227may be modulated by probe modulator232, according to a modulation signal received from controller214. According to some embodiments, and as depicted inFIG.2, probe pulse227may further pass through probe polarization module238(e.g. a polarization filter whose polarization angle is controllably selectable). Probe pulse227is reflected from third beam splitter246ctowards second beam splitter246b. From second beam splitter246bprobe pulse227is reflected towards objective lens244, which focuses probe pulse227on sample250. As elaborated on below, probe pulse227is configured to penetrate sample250and enter into target region252, such as to be scattered off the acoustic pulse at a depth within target region252, which is determined by the delay-time imposed by variable delay line226. More specifically, variable delay line226may be configured to controllably increase the optical path length of probe pulse227(e.g. using mirrors), thereby increasing the travel time thereof, with the result that probe pulse227arrives at sample250at a controllable time delay relative to pump pulse223. Also indicated is a (backward) scattered component231of probe pulse227(resulting from the scattering thereof off the acoustic pulse). According to some embodiments, wherein optical equipment216further includes probe polarization module238, probe polarization module238may be utilized to modify the polarization of probe pulse227, such as to maximize, or substantially maximize, an intensity of scattered component231of probe pulse227. Filter230may be configured to transmit therethrough scattered component231while simultaneously filtering out noise and/or blocking returned component251. According to some embodiments, filter230is, or includes, an optical filter, and a wavelength of pump pulse223and a wavelength of probe pulse227may be selected such that a wavelength characterizing returned component251and a wavelength characterizing scattered component231differ, thereby allowing transmission of scattered component231through filter230and blocking of returned component251thereby. More generally, according to some embodiments, waveforms of pump pulse223and probe pulse227are selected such as to allow employing an optical filter to discriminate therebetween. According to some embodiments, rays of scattered component231and rays of returned component251, arriving at filter230, may be oriented along two different directions (or two distinct ranges of angles). In such embodiments, filter230may be, or also include, an angular filter (i.e.
allowing light, arriving only in specific incidence angles, to be transmitted therethrough). According to some embodiments, which include probe polarization module238, and, optionally, pump polarization module236, filter230may be, or include, a polarization filter. In such embodiments, a polarization of probe pulse227, and, optionally, a polarization of pump pulse223, may be selected such as to allow transmission of scattered component231through filter230and blocking, or substantial blocking, of returned component251. Detector204is configured to detect the output of filter230(i.e. scattered component231after transmission thereof through filter230) to obtain a measured signal. The measured signal may be relayed to lock-in amplifier218. Lock-in amplifier218is configured to receive from controller214the modulation signal, employed by pump modulator222in the preparation of pump pulse223. Lock-in amplifier218uses the modulation signal to obtain an extracted (i.e. FM demodulated) signal in which the contribution to the measured signal due to scattered component231is amplified (and background signals and noise are suppressed). The extracted signal essentially represents deviations from a baseline signal (which would be obtained in the absence of the pump pulse). As such, the extracted signal corresponds to the Brillouin oscillations due to the scattering of the probe pulse off the acoustic pulse (which, in turn, is produced by the pump pulse). In order to estimate the depth-dependence of the lateral structural feature (e.g. in order to estimate w(z) in specific embodiments of sample250, wherein sample250has the structure of sample100), the above-described sequence of probing operations may be implemented at a plurality of different time-delays, such as to probe the target region at a corresponding plurality of different depths. For each time-delay a respective measured signal Mr(z) is obtained. Mr(z) denotes a measured signal obtained when probe pulse227is scattered off the acoustic pulse at a coordinate z (within target region252). The plurality of measured signals {Mr(z)}r(e.g. after demodulation) may be analyzed by measurement data analysis module208to obtain an estimate of a depth-dependence of the lateral structural feature, as elaborated on below. (It is noted that the same depth may be probed a plurality of times.) The number of measured signals and the corresponding time delays may depend on the longitudinal extent of the target region. As a non-limiting example, according to some embodiments, the time delays may be smaller than about 2 nsec (nanosecond), with a temporal sampling resolution of about 1 psec (picosecond). Measurement data analysis module208includes computer hardware (one or more processors, and volatile as well as non-volatile memory components; not shown). Measurement data analysis module208may be configured to receive from lock-in amplifier218(e.g. one at a time) the extracted signals corresponding to {Mr(z)}rand to combine the extracted signals into a (single) combined signal ES(z). Measurement data analysis module208is further configured to analyze the combined signal, such as to obtain a depth-dependence (or, what amounts to the same thing, dependence on the z-coordinate) of at least one parameter characterizing a lateral structural feature within target region252.
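The role of the lock-in stage may be illustrated with a short software analogue. The sketch below mixes a fabricated measured signal with in-phase and quadrature references at the pump modulation frequency and low-pass filters the products; all names and numerical values are assumptions, and a hardware lock-in amplifier such as lock-in amplifier218would realize the equivalent operation in electronics.

    import numpy as np

    def extract_modulated_component(measured, t, f_mod, window=1e-5):
        # Software analogue of the lock-in step: mix the measured signal with
        # in-phase and quadrature references at the pump modulation frequency
        # and low-pass filter, suppressing background away from f_mod.
        ref_i = np.cos(2 * np.pi * f_mod * t)
        ref_q = np.sin(2 * np.pi * f_mod * t)
        n = max(1, int(window / (t[1] - t[0])))  # boxcar low-pass length
        kernel = np.ones(n) / n
        i = np.convolve(measured * ref_i, kernel, mode="same")
        q = np.convolve(measured * ref_q, kernel, mode="same")
        return np.hypot(i, q)  # magnitude of the extracted component

    # Fabricated example: a weak component at f_mod buried in noise.
    rng = np.random.default_rng(0)
    t = np.arange(0.0, 1e-3, 1e-8)
    f_mod = 100e3
    measured = 0.01 * np.cos(2 * np.pi * f_mod * t) + rng.normal(0.0, 0.05, t.size)
    extracted = extract_modulated_component(measured, t, f_mod)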
Further, according to some embodiments, measurement data analysis module208may be configured to isolate the elasto-optic contribution to the measured signal from contributions thereto due to other physical effects triggered by the pump pulse. In particular, as elaborated on below in the Methods subsection, measurement data analysis module208may be configured to distinguish the elasto-optic contribution to the measured signal from a thermo-optic contribution thereto, based on the different physical characteristics thereof: Using signal processing techniques, the thermo-optic contribution to the (extracted) measured signal may be identified and subtracted therefrom. According to some embodiments, target region252includes a composite lateral structural feature (not shown) composed of a plurality of lateral structural features, which up to fabrication imperfections are the same. Further, a size of a beam diameter of probe pulse227is selected such as to allow target region252to be fully probed. In such embodiments, each of the plurality of lateral structural features may contribute—and substantially equally—to the combined signal. The output of measurement data analysis module208may then be interpreted as an average depth-dependence of the at least one parameter characterizing the lateral structural features (i.e. wherein the average is taken over all the lateral structural features included in the plurality). According to some embodiments, measurement data analysis module208may be configured to obtain the depth-dependence of the at least one parameter, characterizing the lateral structural feature, based on two or more pluralities of measured signals. Each of the two or more pluralities of measured signals may be respectively obtained for a unique pump pulse-probe pulse combination. According to some embodiments, different pump pulse-probe pulse combinations may vary from one another in one or more optical characteristics selected from: a wavelength of the pump pulse, a waveform of the pump pulse, a polarization of the pump pulse, a wavelength of the probe pulse, a waveform of the probe pulse, and a polarization of the probe pulse. According to some embodiments, wherein the sample is a wafer, system200may be configured to perform the above-described sequence of probing operations at different locations on the wafer. Measurement data analysis module208may be configured to analyze the obtained pluralities of measured signals to obtain information about process-variations across the wafer. According to some embodiments, same location(s) on different dies may be probed and the obtained pluralities of measured signals may then be compared to obtain a large-scale map of process variations across the wafer (e.g. as part of a die-to-die variation protocol). According to some embodiments, different locations on the same die—known to be characterized by the same lateral structural features (such as different regions in a memory array, which share the same architecture) up to fabrication imperfections—may be probed, and the obtained pluralities of measured signals may then be compared to obtain a small-scale map of process variations across a die or one or more regions thereof (e.g. as part of an in-die variation protocol). In this regard, it is noted that the type(s) of deviations from design specification of a lateral structural feature may depend on the density of the lateral structural features (e.g. when the lateral structural features constitute a repeating lateral structural pattern) in a region on a die.
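The die-to-die comparison described above may be illustrated with a minimal sketch, in which per-die estimates of a profile (standing in for, e.g., w(z)) are reduced to one deviation scalar per die; the array shapes and values below are fabricated for the illustration.

    import numpy as np

    # Hypothetical per-die estimates: w_est[d] holds the estimated profile w(z)
    # obtained at the same location on die d (here fabricated at random).
    rng = np.random.default_rng(1)
    w_est = rng.normal(30e-9, 1e-9, size=(25, 50))  # 25 dies x 50 depth samples

    # Wafer-wide mean profile and one deviation scalar per die: a coarse
    # large-scale map of process variation across the wafer.
    mean_profile = w_est.mean(axis=0)
    die_deviation = np.sqrt(((w_est - mean_profile) ** 2).mean(axis=1))
    print(die_deviation.round(12))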
Various ways whereby measurement data analysis module208may process measured signals to obtain a depth-dependence of one or more parameters characterizing a lateral structural feature are further described below in the Methods subsection. According to some embodiments, optical equipment216may be configured such that each of pump pulse223and probe pulse227is incident on sample250at a vanishing, or substantially vanishing, angle of incidence (i.e. the angle of incidence is equal to, or substantially equal to, zero). According to some embodiments, optical equipment216may include optical elements configured to allow controllably modifying the angle of incidence of pump pulse223and/or probe pulse227. According to some embodiments, not depicted inFIG.2, system200may include two light sources: a first light source configured to generate pump pulse223and a second light source configured to generate probe pulse227.

Methods

FIG.3is a flowchart of a method300for depth-profiling of samples, which include one or more lateral structural features, according to some embodiments. Method300may be implemented by computerized system200or a computerized system similar thereto. Method300may include operations of:
An operation310, wherein a sample is provided. The sample includes a target region, which includes a lateral structural feature.
An operation320, wherein a plurality of measured signals is obtained by implementing m times:
A sub-operation320a, wherein an optical pump pulse is projected on the sample, such as to produce an acoustic pulse, which propagates within the target region. A wavelength of the pump pulse is selected to be at least about two times greater than a lateral extent of the lateral structural feature.
A sub-operation320b, wherein an optical probe pulse is projected on the sample, such that the probe pulse undergoes Brillouin scattering off the acoustic pulse within the target region.
A sub-operation320c, wherein a scattered component of the probe pulse is detected to obtain a measured signal.
In each of the m implementations, the respective probe pulse is scattered off the acoustic pulse at a respective depth within the target region, such that the target region is probed at a plurality of depths. (For example, according to some embodiments, in the i-th and j-th implementations (i<j≤m), the i-th probe pulse may be scattered at an i-th depth si and the j-th probe pulse may be scattered at a j-th depth sj≠si.)
An operation330, wherein a depth-dependence of at least one parameter characterizing the lateral structural feature is obtained by analyzing at least the plurality of measured signals (obtained in the m implementations of sub-operations320a-320c).
According to some embodiments, different implementations (from the m implementations) may differ from one another in the time-delay of the respective probe pulse relative to the respective pump pulse but, otherwise, setup parameters (e.g. wavelengths of the pump pulse and the probe pulse, polarizations thereof, and so on) may be the same in each implementation. According to some embodiments, the time-delay may measure the time-interval between the incidence of a pump pulse on the sample and the incidence of a probe pulse on the sample. More specifically, according to some embodiments, the m time-delays may be selected such as to ensure that the probe pulses are scattered off the acoustic pulses at a plurality of depths within the target region: e.g. a first probe pulse (or a first group of probe pulses, e.g.
when each depth is probed more than once) is scattered off a first acoustic pulse at a first depth s1, a second probe pulse (or a second group of probe pulses) is scattered off a second acoustic pulse at a second depth s2, and so on, until an m-th probe pulse (or an m-th group of probe pulses) is scattered off an m-th acoustic pulse at an m-th depth sm. According to some embodiments, the m time-delays may be selected such that s1>s2> . . . >sm. In sub-operation320a, the wavelength of the pump pulse may be selected such as to maximize, or substantially maximize, the absorption of the pump pulse in an absorption layer, such as absorption layer120or the absorption layers depicted inFIGS.4,5,6D-6G, and7C-7E. More precisely, the absorption layer may be understood as a layer in the sample wherein most of the pump pulse is absorbed. The thickness of the absorption layer may depend on the absorption length of the pump pulse in the medium or media from which the region of the sample—wherein the absorption layer is located—is composed. Thus, by increasing the absorptivity of the pump pulse (e.g. by suitably changing the wavelength thereof) in the region (wherein the absorption layer is located), the absorption length of the pump pulse within the region—and therefore the thickness of the absorption layer—are decreased. This in turn may lead to an increase in the resolution of the measured signal since the width of the acoustic pulse (which determines the resolution) may be dependent on the thickness of the absorption layer. The last statement may hold true so long as the duration of the pump pulse is shorter than the formation time of the acoustic pulse (i.e. the thermal expansion time of the absorption layer). Thus, according to some embodiments, the duration of the pump pulse may be selected not to exceed, or substantially not to exceed, the thermal expansion time of the absorption layer. According to some such embodiments, wherein the absorption layer is silicon-based, the duration of the pump pulse may be smaller than about 5 psec (picosecond), about 3 psec, or even about 1 psec. Each possibility corresponds to separate embodiments. It is further noted that the location of the absorption layer may itself be controllable. For example, a sample may include a first portion of a first material and a second portion of a second material, with the second portion being positioned within the first portion. The absorption layer may then be controllably situated in the second portion by selecting a pump pulse characterized by a wavelength such that an absorption length of the pump pulse in the first material is much greater than an absorption length thereof in the second material. Depending on the composite structure of the sample, the absorption layer may or may not be included in the target region. Different possible locations of absorption layers within a sample are shown inFIGS.1,4,5,6D-6G, and7C-7E. According to some embodiments, the absorption layer may constitute, or be included in, a distinct element embedded in or on the target region. The embedded element may be characterized by different absorptive behavior (e.g. different absorption length and/or a different dependence of the absorption length on polarization) than the rest of the target region, thereby allowing the embedded element to be selectively heated. According to some embodiments, wherein the absorption layer is silicon-based, the wavelength of the pump pulse may be in the ultraviolet range (i.e. below 360 nm).
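For the silicon-based case just described, the connection between wavelength, absorption length, and depth resolution may be illustrated numerically, assuming the textbook relation between the optical penetration length and the extinction coefficient; the numerical extinction coefficient used below is an assumption, not a value taken from the disclosure.

    import numpy as np

    def absorption_length(wavelength, kappa):
        # Textbook penetration depth L = wavelength / (4*pi*kappa), with kappa
        # the extinction coefficient of the absorbing medium; L sets the
        # absorption-layer thickness b, and hence the acoustic-pulse width u
        # (i.e. the depth resolution) discussed above.
        return wavelength / (4 * np.pi * kappa)

    # Illustrative (assumed) numbers: a strongly absorbed UV pump in silicon,
    # kappa ~ 2, yields an absorption layer on the order of 10 nm.
    print(absorption_length(266e-9, 2.0))  # ~1.06e-08 m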
According to some embodiments, wherein the absorption layer is metallic, the wavelength of the pump pulse may also be selected from the visible range. The absorption of the pump pulse within the absorption layer heats the absorption layer, which in turn expands the absorption layer, resulting in the production of one or two acoustic pulses in the sample. More specifically, when the absorption layer is an outermost layer of a sample (on which the pump pulse is directly projected), a single acoustic pulse in the sample may be produced, which propagates away from the absorption layer and into the sample along a perpendicular direction to the absorption layer. When the absorption layer is an inner layer of a sample, two acoustic pulses may be produced, which propagate away from the absorption layer, perpendicularly thereto, and in opposite senses. The lateral extent of the acoustic pulse may be selected to be greater than the lateral extent of the lateral structural feature. More precisely, to fully probe the target region, the lateral dimensions of the acoustic pulse may be selected to be greater than, or about equal to, the lateral dimensions of the target region. In turn, the lateral dimensions of the acoustic pulse are determined by the beam diameter of the pump pulse. As explained above, this sets a lower bound on the wavelength of the pump pulse, since the optical diffraction limit constrains the diameter of a laser beam to be greater than about λ/2, wherein λ is the wavelength of the laser beam. Consequently, in sub-operation320a, the wavelength of the pump pulse is selected to be greater than, or at least about equal to, twice the lateral extent of the lateral structural feature. According to some embodiments, a depth dimension (also referred to as “longitudinal dimension”) of the target region is determined by the propagation direction of the acoustic pulse within the target region. In sub-operation320b, the wavelength of the probe pulses may be selected to allow the probe pulses to traverse the target region, thereby allowing the target region to be probed in full. More specifically, the wavelength of the probe pulses may be selected such that the absorption length of the probe pulses within the target region is greater than (or at least about equal to) an extent of the target region along the propagation direction of the probe pulse within the target region. According to some embodiments, the wavelength of the probe pulses is selected to be at least about two times greater than the lateral extent of the lateral structural feature. According to some embodiments, different probe pulses, which are configured to be scattered off the acoustic pulse at different depths, respectively, may be characterized by different wavelengths, and, more generally, waveforms and/or polarizations. Such a dependence of the probe pulse wavelength (or waveform and/or polarization) on the scattering depth may be implemented using a probe modulator (e.g. probe modulator232) when the target region includes different types of layers (e.g. lateral layers), which are respectively characterized by different refractive indices and/or speeds of sound (e.g. due to different material composition or internal geometry). In particular, this may allow selectively probing each type of layer. As used herein, according to some embodiments, the absorption length of a specimen (e.g.
a bulk of material or a composite structure including a plurality of parts made of different materials and optionally characterized by different geometries) is defined as the distance over which an intensity of a light beam entering the specimen drops to about 1/e (≈37%) of the intensity thereof upon entry into the specimen. As used herein, according to some embodiments, the terms “sample” and “specimen” may be used interchangeably. It is noted that the absorption length of a composite specimen may depend not only on the absorption lengths of each of the parts making up the specimen, but also on geometries of the parts and the spatial arrangement thereof with respect to one another. Thus, for example, a specimen including alternating layers of two different materials, such that each of the two materials is transparent or substantially transparent to radiation at a continuous range of wavelengths, may nevertheless be reflective to radiation at a specific wavelength within the range. According to some embodiments, the probe pulses may be linearly polarized. In particular, according to some embodiments, and as described below in the description ofFIG.6G, the probe pulses may be linearly polarized such as to increase the measurement sensitivity along a selected lateral direction. According to some embodiments, the polarization of the pump pulses may be selected such as to increase the absorbance thereof in the absorption layer. In this respect, the geometry of the absorbing layer may play a significant role. For example, when the absorption layer includes a plurality of parallel strips (as depicted, for example, inFIGS.7C-7E), the polarization of the pump pulses may be selected to be parallel to the strips. According to some embodiments, particularly embodiments wherein the sample includes a composite lateral structural feature (whether periodic or not) composed of a plurality of a same lateral structural feature, the lateral structural features, making up the plurality, may be simultaneously depth-profiled. In such embodiments, the target region is selected to include the plurality of the lateral structural feature, by accordingly selecting the beam width of the pump pulses (which defines the lateral area of the target region). The beam width of the probe pulses may be set to the beam width of the pump pulses, thereby ensuring that the lateral extent of the target region is fully probed (so that all of the lateral structural features in the plurality are probed). From optical diffraction limit considerations, the wavelength of the pump pulses is selected to be at least about two times greater than the lateral extent of the composite lateral structural feature. Similarly, the wavelength of the probe pulses is selected to be at least about two times greater than the lateral extent of the composite lateral structural feature. The obtained plurality of measured signals then manifests the average depth-dependence of the parameters characterizing the lateral structural features in the plurality. According to some such embodiments, the composite lateral structural feature is periodic. According to some embodiments, in operation320, the temperature of the sample may be regulated, such as to ensure that at the start of each of the m implementations, the temperature of the sample is the same, and, optionally, equal to a pre-determined temperature.
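The wavelength selection rule stated above reduces to a simple check, sketched below; the example wavelengths and feature extent are assumptions chosen for illustration.

    def wavelengths_satisfy_rule(lam_pump, lam_probe, extent):
        # Checks the selection rule stated above: both wavelengths should be
        # at least about twice the lateral extent of the (possibly composite)
        # lateral structural feature covered by the target region.
        return lam_pump >= 2 * extent and lam_probe >= 2 * extent

    # A feature of 30 nm lateral extent admits, e.g., a 266 nm pump and a
    # 532 nm probe (wavelengths assumed for illustration).
    assert wavelengths_satisfy_rule(266e-9, 532e-9, 30e-9)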
In operation330, a plurality of measured signals {Mr′(s)}r, obtained in the m implementations of operation320, may be demodulated and combined to obtain a single (combined) signal ES′(s). Here s denotes depth within the target region. Mr′(s) denotes a measured signal obtained for a probe pulse scattered off an acoustic pulse at depth s. It is noted that for a given depth s, {Mr′(s)}rmay generally include a plurality of measured signals obtained at the depth s. As a non-limiting—and purposefully simplified—example intended to render the discussion clearer, a plurality of measured signals {Mr″(s)}r=1, 2, including two measured signals M1″(sA) and M2″(sB), is considered. (Typically, a plurality of measured signals may include anywhere between about 10 and about 1000 measured signals.) The measured signals M1″(sA) and M2″(sB) are assumed to have been obtained by scattering a probe pulse off an acoustic pulse at depths sAand sB>sA, respectively. From the two measured signals M1″(sA) and M2″(sB) two extracted signals EsAand EsBmay be obtained (e.g. using a lock-in amplifier) in which background signals and noise are suppressed, as described above in the description of system200. The two extracted signals may be combined into a single combined signal ES″(s), wherein for sA−½·Δs≤s≤sA+½·Δs ES″(s)=EsAand for sB−½·Δs≤s≤sB+½·Δs ES″(s)=EsB. Here Δs may correspond to the thickness of the layers probed by the probe pulses at each of the two depths. Further, it is implicitly assumed that sA+½·Δs≤sB−½·Δs. The depth s at which the scattering occurred can be related to the scattering time based on the time delay Δt (of the probe pulse relative to the pump pulse), the formation time of the acoustic pulse tF, and the propagation velocity of the acoustic pulse in the target region. The formation time tFis the time it takes for the acoustic pulse to form once the absorbing layer has been irradiated by the pump pulse. The propagation velocity of the acoustic pulse equals the speed of sound vsound. (As elaborated on below, in non-uniform media, the speed of sound may be dependent on the depth, in which case the functional dependence thereof on the depth may be taken into account.) According to some embodiments, s itself may linearly, or substantially linearly, depend on the time-delay Δt. According to some such embodiments, for example, wherein the absorption layer is positioned adjacently to the target region but deeper within the sample than the target region (e.g. as shown inFIGS.6D-6G), s=D−vsound·(Δt−tF). D is the depth dimension or longitudinal extent of the target region. Thus, for the minimum delay-time (i.e. Δt=tF) s=D and the target region is probed at maximum depth. For the maximum delay-time (i.e. Δt=tF+D/vsound) s=0, and the target region is probed at zero depth. According to some alternative embodiments, for example, wherein the absorption layer forms the least deep layer of the target region (e.g. as shown inFIGS.1A-1C), s=vsound·(Δt−tF). Thus, for the minimum delay-time (i.e. Δt=tF) s=0 and the target region is probed at minimum depth. For the maximum delay-time (i.e. Δt=tF+D/vsound) s=D, and the target region is probed at maximum depth. According to some embodiments, as part of the analysis in operation330, the combined signal may be compared to another signal, which is measured in the absence of an acoustic pulse (i.e. when no pump pulse is projected on the sample).
The comparison allows isolating the contribution of the acoustic pulses to the combined signal, and thereby facilitates extracting the Brillouin oscillations resulting from the interactions between the probe pulses and the acoustic pulses. According to some embodiments, the production of the acoustic pulse(s) in a target region (due to the expansion of an absorbing layer) may be accompanied by a change in the refractive index of the target region (or a part thereof) that is due to the thermo-optic effect—i.e. the change in the refractive index of a medium due to a change(s) in temperature in the medium. The changes in reflectivity induced by acoustic pulses and due to the thermo-optic effect, and, in particular, the relative strength thereof, depend on the physical properties of the medium. According to some embodiments, operation330may include a sub-operation wherein the thermo-optic contribution to the combined signal may be removed or substantially removed. In particular, according to some embodiments, the thermo-optic contribution to the combined signal manifests itself as an added slowly-varying contribution to the Brillouin oscillations (due to the elasto-optic effect). That is, the Brillouin frequency is much higher than the frequency associated with the contribution of the thermo-optic effect to the combined signal. Thus, the thermo-optic contribution to the combined signal may be identified and removed, for example, by smoothing out the combined signal (that is, by averaging over short segments of the signal, such that each segment includes a small number of (Brillouin) oscillations). This may be especially pertinent when the target region is a silicon-based semiconductor, since in silicon-based semiconductors the thermo-optic effect may be much stronger than the elasto-optic effect. According to some embodiments, computer simulations may be utilized to model the Brillouin oscillations, or even a single Brillouin oscillation, that would be observed if method300were implemented with respect to an ideal (i.e. perfectly manufactured) sample. (It is noted that when the longitudinal extent of a target region is comparable to the Brillouin wavelength, as may be the case, for example, in the depth-profiling of fin field-effect transistors, only a single Brillouin oscillation may be observed.) The freedom in selecting the physical parameters characterizing the setup allows canceling “by hand” the contribution of the thermo-optic effect, so that the signal processing operations to distinguish the Brillouin oscillations from the thermo-optic contribution are obviated. Further, according to some embodiments, computer simulations may also be used to model various types of imperfections in the sample and the system, and the Brillouin oscillations associated therewith. In addition, samples that have undergone depth-profiling may be scanned using a scanning electron microscope (SEM) to obtain the actual (or true) structures of the target regions. More specifically, a target region of a sample that has undergone depth-profiling (using the methods of the present disclosure) may be cut into sufficiently thin layers and each layer may be scanned by a SEM to obtain the actual structure thereof. The obtained Brillouin oscillations (and more generally the plurality of measured signals) of different samples may thus be related to the actual structures of the respective target regions thereof.
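The smoothing-based removal of the thermo-optic contribution described above may be sketched as follows on a fabricated trace; the Brillouin frequency, background time constant, speed of sound, and formation time are all invented for the illustration, and the delay grid follows the roughly 2 ns span and 1 ps resolution mentioned earlier.

    import numpy as np

    # Fabricated demodulated trace: a Brillouin oscillation riding on a slowly
    # varying thermo-optic background (all numbers invented for illustration).
    delays = np.arange(0.0, 2e-9, 1e-12)  # <2 ns span, ~1 ps sampling
    f_b = 40e9                            # assumed Brillouin frequency, Hz
    signal = 0.1 * np.sin(2 * np.pi * f_b * delays) + np.exp(-delays / 5e-10)

    # Smooth over a window spanning a few oscillation periods and subtract the
    # slow component, leaving the Brillouin oscillations.
    samples_per_period = int(round(1.0 / f_b / 1e-12))
    window = 3 * samples_per_period
    kernel = np.ones(window) / window
    thermo_optic = np.convolve(signal, kernel, mode="same")
    brillouin = signal - thermo_optic

    # Map each delay to a probed depth (absorption layer least deep; speed of
    # sound and formation time assumed).
    V_SOUND, T_FORM = 8433.0, 5e-12
    depth = V_SOUND * np.clip(delays - T_FORM, 0.0, None)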
Using machine learning tools, a measurement data analysis module, such as measurement data analysis module208, may be taught to extract from observed Brillouin oscillations the depth-dependence of one or more parameters characterizing a probed lateral structural feature. The teaching may be supervised, employing pairs of observed Brillouin oscillations and/or simulated Brillouin oscillations with the corresponding structure of the lateral structural feature as measured (e.g. using a SEM) or simulated, respectively. Such Brillouin oscillations-lateral structural feature pairs, or similar types of pairs pertaining to similar setups, may potentially also be obtainable from existing libraries (e.g. online databases) of measured and/or simulated signals in similar setups (i.e. similar samples and systems). In a uniform medium the Brillouin frequency fBis given by fB=(2·vsound·n)/λprobe. Here n is the refractive index and λprobeis the wavelength of the probe pulse. In a non-uniform medium, according to some embodiments, the Brillouin frequency may be determined by both the material composition and the geometry of the structure. Namely, according to some embodiments, the speed of sound vsoundand the refractive index n are replaced by an “effective speed of sound”, veff(s), and an “effective refractive index”, neff(s), which in general may both be dependent on the depth s. The Brillouin frequency, fB(s)=(2·veff(s)·neff(s))/λprobe, is therefore also generally dependent on the depth s. The extracted signal may therefore take on the form OB(s)=A(s)·sin(2πfB(s)·s+ϕ1). Hence, by obtaining fB(s) from the extracted signal OB(s), veff(s) and neff(s) may be estimated. veff(s) and neff(s) may, in turn, be correlated to one or more parameters characterizing the lateral structural feature that is sought to be depth-profiled. According to some embodiments, fB(s) may be directly related to a lateral structural feature (for example, to the average diameter of holes in an array of vertical holes, as described below in the description ofFIGS.9A-9E). According to some embodiments, regression analysis may be employed to extract from the plurality of measured signals the depth-dependence of one or more lateral structural features. According to some embodiments, operation320may be performed a plurality of times with respect to different preparations of the pump pulse and/or the probe pulse. Different preparations may differ from one another in one or more of: a wavelength, power, waveform, and/or a polarization of the pump pulse, and/or a wavelength, power, waveform, and/or a polarization of the probe pulse. Per each preparation, a plurality of measured signals is obtained, which, after pre-processing (e.g. demodulation using a lock-in amplifier, smoothing out thermo-optic contributions to the measured signals), may be jointly analyzed to determine the depth-dependence of the at least one parameter characterizing the lateral structural feature. According to some embodiments, wherein the sample is a wafer, operation320may be repeated with respect to different locations on the wafer. The pluralities of measured signals may be analyzed to obtain information about process-variations across the wafer. According to some such embodiments, the same location(s) on different dies (die-to-die variation) may be probed and the obtained pluralities of measured signals may then be analyzed to obtain a large-scale map of process variations across the wafer.
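The uniform-medium Brillouin relation given above may be inverted directly, as the following sketch shows; the numerical speed of sound, refractive index, and probe wavelength are assumptions used only to exercise the formula, and separating veff(s) from neff(s) requires the additional modeling or calibration discussed above.

    def veff_times_neff(f_b, lam_probe):
        # Inverts the relation given above, f_B(s) = 2*v_eff(s)*n_eff(s)/lam_probe,
        # for the product v_eff(s)*n_eff(s); separating the two factors requires
        # further modeling or calibration.
        return f_b * lam_probe / 2.0

    # Uniform-medium sanity check with assumed values: v_sound = 8433 m/s,
    # n = 2.5, probed at 532 nm, gives f_B of roughly 79 GHz.
    f_b = 2 * 8433.0 * 2.5 / 532e-9
    print(f_b)                            # ~7.93e10 Hz
    print(veff_times_neff(f_b, 532e-9))   # 21082.5 = 8433 * 2.5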
According to some embodiments, different locations on the same die—known to be characterized by the same lateral structural features up to fabrication imperfections—may be probed and the obtained pluralities of measured signals may then be analyzed to obtain a map of process variations across a die (in-die variation) or one or more regions thereof, as described above in the description ofFIG.2. Each ofFIGS.4and5schematically depicts additional possible configurations (spatial arrangements) of a target region and an absorption layer within a sample, according to some embodiments. Referring toFIG.4, a sample400(e.g. a semiconductor device) undergoing depth-profiling, according to some embodiments, is schematically depicted. Unlike absorption layer120(of sample100ofFIGS.1A-1C), which is included in target region110, an absorption layer420of sample400is not included in a target region410of sample400. WhileFIG.4presents a two-dimensional view of target region410taken along the zx-plane, to facilitate the description, up to fabrication imperfections, target region410is assumed to exhibit rotational symmetry along an axis parallel to the z-axis (so that target region410is cylindrical). Hence,FIG.4effectively fully depicts target region410. More specifically, sample400includes target region410, an outer region430, and an inner region440. Target region410is positioned between outer region430and inner region440. Absorption layer420is positioned within inner region440, adjacently to target region410. Dashed lines B4indicate a circumferential boundary of target region410. To render the discussion more concrete and thereby facilitate the description, target region410is depicted as including two subregions: a target outer subregion410aand a target inner subregion410b, which is surrounded by target outer subregion410a. Target inner subregion410bmay correspond to a first medium, characterized by a first refractive index, while target outer subregion410amay correspond to a second medium, characterized by a second refractive index, which differs from the first refractive index. InFIG.4, a width w4(z) of target inner subregion410bis seen to increase in jumps (i.e. discontinuously) in the direction of the negative z-axis. The rotational symmetry of target region410implies that target inner subregion410bis shaped as a circular step pyramid. Hence, w4(z) corresponds to the (depth-dependent) diameter of target inner subregion410b. The lateral structural feature is constituted by the change from the first medium to the second medium exhibited along any radial direction perpendicular to a rotational symmetry axis A4of target region410(axis A4is parallel to the z-axis) when starting from axis A4. A lateral extent, as defined above, of the lateral structural feature may be given by maxz[w4(z)]. Also depicted is a pump pulse423, which is projected on an external surface424of outer region430(and sample400). Pump pulse423is configured to penetrate into sample400and to reach absorption layer420by propagating through outer region430and target region410. In particular, outer region430and target region410may be transparent, or substantially transparent, to pump pulse423. Pump pulse423is further configured to be absorbed in absorption layer420and thereby heat and expand absorption layer420. The expansion of absorption layer420produces an acoustic pulse425, which propagates in the direction of the negative z-axis into target region410.
A second acoustic pulse (not shown) may propagate in the direction of the positive z-axis inside inner region440. Also depicted is a probe pulse427, which is projected on sample400at a controllable time-delay with respect to pump pulse423. (Thus,FIG.4should be understood as a schematic and not as representing a single instant of time.) Probe pulse427is configured to penetrate into sample400and propagate therein, such as to be scattered off acoustic pulse425at a controllable depth within target region410. Further indicated is a scattered component431of probe pulse427, which is (Brillouin) scattered backwards off acoustic pulse425. Scattered component431may be detected by a detector432to produce a corresponding measured signal. Referring toFIG.5, a sample500undergoing depth-profiling, according to some embodiments, is schematically depicted. Unlike target region110(of sample100ofFIGS.1A-1C), which is positioned adjacently to external surface124of sample100, a target region510of sample500is fully embedded within sample500(and, hence, is not positioned adjacently to an external surface524of sample500). WhileFIG.5presents a two-dimensional view of target region510taken along the zx-plane, to facilitate the description, target region510is assumed to be uniform along the y-axis up to fabrication imperfections. Hence,FIG.5effectively fully depicts target region510. More specifically, sample500includes target region510, an outer region530, and an inner region540. Target region510is positioned between outer region530and inner region540. An absorption layer520is positioned within target region510, adjacently to outer region530. A dashed line B5indicates a first boundary of target region510and a dashed line B5′ indicates a second boundary of target region510. To render the discussion more concrete and thereby facilitate the description, target region510is depicted as including three adjacent subregions: a first side-subregion510a, a second side-subregion510b, and a mid-subregion510c, which is positioned between first side-subregion510aand second side-subregion510b. A width w5(z) of mid-subregion510cincreases in the direction of the positive z-axis. Also depicted is a pump pulse523, which is projected on an external surface524of outer region530(and sample500). Pump pulse523is configured to penetrate into sample500and to reach absorption layer520by propagating through outer region530. In particular, outer region530may be transparent, or substantially transparent, to pump pulse523. Pump pulse523is further configured to be absorbed in absorption layer520and thereby heat and expand absorption layer520. The expansion of absorption layer520produces an acoustic pulse525, which propagates in the direction of the positive z-axis into target region510. A second acoustic pulse (not shown) may propagate in the direction of the negative z-axis inside outer region530. Also depicted is a probe pulse527, which is projected on sample500at a controllable time-delay with respect to pump pulse523. (Thus,FIG.5should be understood as a schematic and not as representing a single instant of time.) Probe pulse527is configured to penetrate into sample500and propagate therein, such as to be scattered off acoustic pulse525at a controllable depth within target region510. Probe pulse527may further be configured to undergo comparatively little scattering off the second acoustic pulse propagating within outer region530(i.e.
the total cross-section for (backward) scattering off the second acoustic pulse may be significantly smaller than the total cross-section for (backward) scattering off acoustic pulse525). For example, the waveform of probe pulse527may be selected such that probe pulse527is focused within target region510but defocused within outer region530. Further indicated is a scattered component531of probe pulse527, which is (Brillouin) scattered backwards off acoustic pulse525. Scattered component531may be detected by a detector532to produce a corresponding measured signal. FIGS.6A-6Fschematically depict a sample600undergoing depth-profiling, according to some embodiments. Referring toFIG.6A, sample600is depicted with a front part thereof removed to better reveal the internal structure thereof. Sample600includes a structure602positioned on a bulk604. Structure602includes (air) holes608projecting thereinto. According to some embodiments, holes608may project into structure602from a top external surface610thereof (as depicted in the figure). Structure602may be characterized by a first (effective) refractive index and bulk604may be characterized by a second refractive index, which differs from the first refractive index. Due to the presence of holes608, structure602includes a plurality of lateral structural features, which constitute a composite lateral structural feature. According to some embodiments, and as depicted inFIG.6A, the composite lateral structural feature forms a repeating pattern. That is, holes608are arranged in a periodic two-dimensional array. According to some such embodiments, the two-dimensional array is rectangular with holes608being arranged in rows and columns parallel to the x-axis and the y-axis, respectively. More specifically, with each of holes608a lateral structural feature is associated, which is constituted by the change from air to solid exhibited along any radial direction perpendicular to a longitudinal axis of the hole when starting from the longitudinal axis. (The longitudinal axis extends in parallel to the z-axis.) Two longitudinal axes Aaand Abof holes608aand608b, respectively, are indicated inFIG.6B. To facilitate the description, in the following, each of holes608is assumed to project longitudinally into structure602and to be characterized by an elliptical lateral cross-section whose area decreases with the depth. That is, each of holes608may be characterized by (conjugate) diameters dx(s) and dy(s) quantifying the width of holes608along the x-axis and the y-axis, respectively. Two such diameters, dx′(s) and dx″(s) of holes608aand608bare indicated inFIG.6Aat s=0, i.e. on top external surface610. Here s is the depth within sample600. (Generally, s=z+k, wherein k is a constant. If the coordinate system is selected such that the xy-plane coincides with top external surface610, then k=0 and s=z.) Also indicated inFIG.6Bare diameters dx(a)(s) and dx(b)(s) of holes608aand608bat depths s=s′ and s=s″, respectively. To "lowest order" the depth-dependence of the lateral structural features may be parametrized by the depth-dependence of the lateral cross-sectional areas (of the holes). If more accuracy is required, the depth-dependence of two parameters may be estimated, that is, the two conjugate diameters of an ellipse, as elaborated on below. If still more accuracy is required, in principle, the depth-dependence of additional parameters parameterizing various deformations—which are potentially depth-dependent—may also be sought to be obtained.
Such parameters may characterize, for example, tilting of the symmetry axes of the holes (which may depend on the depth), deviation in the spacing between adjacent holes (from design-specified spacings), and so on. According to some embodiments, in order to obtain the depth-dependencies of a plurality of parameters characterizing a lateral structural feature, operation320may be implemented with respect to different preparations of the pump pulse and/or the probe pulse. For example, operation320may be performed one or more times with the probe pulse polarized in parallel to the x-axis, and one or more times with the probe pulse polarized in parallel to the y-axis. In this way, the average depth-dependencies of each of the two conjugate diameters characterizing the elliptical cross-sections of holes608may be obtained. FIG.6Bpresents a (partial) cross-sectional view of sample600, according to some embodiments. The cross-section cuts sample600along a plane, which is parallel to the zx-plane. According to some embodiments, and as depicted inFIGS.6B-6F, structure602may be a layered structure including a plurality of layers612stacked (i.e. positioned) on top of one another. (Layers612are not shown inFIG.6A.) According to some embodiments, layers612may include two types of layers: layers612aand layers612b, alternately positioned one on top of the other. Layers612aand layers612bmay be made of different materials. According to some embodiments, sample600may be a V-NAND (i.e. vertical-NAND) stack, wherein structure602is mounted on a silicon substrate constituted by bulk604. As a non-limiting example, according to some such embodiments, layers612a(including the outermost layer) may be made of silicon oxide (SiO2) and layers612bmay be made of a silicon nitride (e.g. Si3N4). FIGS.6C-6Fschematically depict four successive stages, respectively, in depth-profiling of a target region624(shown inFIG.6Aand delineated by a dashed-double-dotted line), according to method300. Referring toFIG.6C, a pump pulse623is shown projected on top external surface610, according to some embodiments. Pump pulse623is configured to penetrate into structure602and propagate therein to reach bulk604. Pump pulse623is further configured to be absorbed by bulk604. Pump pulse623is also indicated inFIG.6A. Referring toFIG.6D, pump pulse623is absorbed in an absorption layer618(which forms part of bulk604) positioned adjacently to structure602. A thickness of absorption layer618is determined by the absorption length of pump pulse623in bulk604. The heating of absorption layer618leads to an expansion thereof, as indicated by double-headed arrows e6inFIG.6B. Referring toFIG.6E, the expansion of absorption layer618leads to the formation of a (first) acoustic pulse625apropagating within structure602in the direction of the negative z-axis. A second acoustic pulse625bmay propagate within bulk604in the direction of the positive z-axis. Referring toFIG.6F, a probe pulse627is shown projected on top external surface610, according to some embodiments. Probe pulse627is configured to penetrate into structure602and propagate therein in the direction of the positive z-axis. That is, probe pulse627is configured such that structure602—at least when undisturbed—is transparent, or at least semi-transparent, with respect to probe pulse627. The localized presence of acoustic pulse625ain a subregion within structure602renders that subregion non-transparent to probe pulse627.
More precisely, probe pulse627is further configured to undergo Brillouin scattering off acoustic pulse625a. A (back) scattered component631of probe pulse627propagates away from acoustic pulse625ain the direction of the negative z-axis. Referring again toFIG.6A, target region624is included in structure602. Target region624is shaped as a cylinder, whose diameter is defined by a beam diameter of pump pulse623. According to some embodiments, in order to fully probe target region624, a beam diameter of probe pulse627may be selected to equal, or substantially equal, the diameter of pump pulse623. Target region624thus constitutes the part of structure602which undergoes the depth-profiling. It is noted that target region624includes a plurality of holes from holes608, and, in particular, a plurality of the lateral structural feature. Since the holes in target region624are depth-profiled together, the plurality of measured signals obtained when subjecting target region624to a depth-profiling according to method300collectively characterize the depth-dependence of the plurality of the lateral structural feature (included in target region624). That is, the obtained plurality of measured signals characterizes an average depth-dependence of the parameters characterizing the lateral structural features. In particular, from the plurality of measured signals an average depth-dependence of the lateral cross-sectional area of the holes, or the two conjugate diameters characterizing the lateral cross-section of the hole, may be obtained. Put another way, the beam diameter of probe pulse627(which is assumed to equal that of pump pulse623) defines a target region (i.e. target region624) including a plurality of holes and therefore a plurality of lateral structural features, which together define a composite lateral structural feature. The lateral extent of the composite lateral structural feature is equal to the beam diameter of probe pulse627. According to some embodiments, wherein holes608are arranged in a two-dimensional array, probe pulse627may be linearly polarized along a direction perpendicular to the z-axis. More specifically, according to some embodiments, wherein holes608are arranged in a rectangular array as described above, in order to increase measurement sensitivity along the y-axis, probe pulse627may be polarized in parallel to the x-axis. Similarly, in order to increase measurement sensitivity along the x-axis, probe pulse627may be polarized in parallel to the y-axis. As shown below in the Results of Simulations subsection, polarizing the probe pulse in parallel to the x-axis leads to a non-uniform intensity distribution of the probe pulse within the target region, wherein the intensity is maximum along columns of holes. In contrast, polarizing the probe pulse in parallel to the y-axis leads to a non-uniform intensity distribution of the probe pulse within the target region, wherein the intensity is maximum along rows of holes. FIG.6Gdepicts depth-profiling of target region624, according to some embodiments. The depicted depth-profiling constitutes a specific embodiment of the depth profiling depicted inFIGS.6A-6F. A probe pulse627′ is depicted, which is a specific embodiment of probe pulse627. Probe pulse627′ is polarized along the y-axis, as indicated by a polarization arrow Py. This allows increasing the measurement sensitivity along the x-axis, so that variations in the average depth-dependence of the x-widths of the holes (i.e.
the function dx(s) averaged over the holes) may be obtained to greater precision. (Thus, for example, if only holes608aand608bare probed, then the obtained depth-dependence constitutes an average over the x-widths of holes608aand608b). Also indicated inFIG.6Gare a (first) acoustic pulse625a′, a second acoustic pulse625b′, and a scattered component631′, which are specific embodiments of acoustic pulse625a, second acoustic pulse625b, and scattered component631. Scattered component631′ is also polarized along the y-axis. While inFIGS.6A-6Gthe diameter(s) of holes608is depicted as decreasing linearly with the depth, the skilled person will understand that the methods and systems of the present disclosure may be applied to probe other hole geometries, such as, for example, circular hole geometries wherein the change (with the increase in depth) of the hole diameter is non-monotonic (e.g. increasing first and then decreasing). FIGS.7A-7Eschematically depict a sample700undergoing depth-profiling, according to some embodiments. Sample700includes a base portion704and a plurality of fins708positioned on base portion704. Each of fins708forms an elongated ridge-like structure, which projects from base portion704. According to some embodiments, and as depicted inFIGS.7A-7E, fins708are of the same shape and are arranged in parallel to one another. Sample700includes a plurality of lateral structural features, which form a composite lateral structural feature. According to some embodiments, and as depicted inFIGS.7A-7E, the composite lateral structural feature forms a repeating pattern (i.e. periodically repeating in the direction of the x-axis). For each of fins708, the associated lateral structural feature is constituted by the change from air to fin and back to air as the fin is traversed in a lateral direction perpendicular to the elongate dimension (i.e. in parallel to the x-axis). To facilitate the description and render the discussion more concrete, inFIGS.7A-7Ea width w7(z) of fins708is depicted as decreasing with the distance from base portion704. (The skilled person will understand, however, that other geometries are possible.) w7(z) constitutes a parameter characterizing a depth-dependence of the lateral structural feature in the sense defined herein above. If more accuracy is required, in principle, the depth-dependence of additional parameters may be sought to be obtained, for example, parameters characterizing deviations from design specifications (due to fabrication imperfections) of the slopes of a right wall and a left wall of the fin. A lateral extent C7of the fins may correspond to the maximum width of the fins, which inFIGS.7A-7Ecorresponds to the width of a fin at the base thereof adjacently to base portion704. That is, C7=maxz[w7(z)]. According to some embodiments, and as depicted inFIGS.7A-7E, the plurality of fins708constitutes a target region710, which is depth-profiled. According to some such embodiments, sample700may be a fin field-effect-transistor (FinFET). In such embodiments, sample700may be made of silicon, silicon-germanium, or other suitable semiconductor materials. FIGS.7B-7Epresent cross-sectional views of sample700depicting four successive stages, respectively, in a depth-profiling of sample700, according to method300. The cross-section cuts sample700along a plane, which is parallel to the zx-plane. Referring toFIG.7B, a pump pulse723is shown projected on sample700(on the side of sample700on which fins708are positioned), according to some embodiments.
Referring also toFIG.7C, pump pulse723is configured to be absorbed in fins708. More specifically, pump pulse723is configured to be absorbed in (lateral) absorption layers712. Each of absorption layers712constitutes a top strip of a respective fin from fins708. For example, an absorption layer712afrom absorption layers712constitutes a top strip of a fin708afrom fins708. A thickness of absorption layers712may be determined by (or primarily by) the absorption length of pump pulse723within the fins. The absorption length in turn is dependent at least on the wavelength (and polarization angle) of pump pulse723. The heating of absorption layers712leads to expansion thereof, as indicated by double-headed arrows e7inFIG.7C. Referring toFIG.7D, the expansion of absorption layers712leads to the formation of respective acoustic pulses725. Each of acoustic pulses725propagates within a respective fin (from fins708) away from the absorption layer towards base portion704. For example, an acoustic pulse725a(from acoustic pulses725) propagates within fin708ain the direction of the negative z-axis. Referring toFIG.7E, a probe pulse727is shown projected on sample700(on the side of sample700from which fins708project), according to some embodiments. Probe pulse727is configured to penetrate into fins708and propagate therein in the direction of the positive z-axis. That is, probe pulse727is configured such that fins708—at least when undisturbed—are transparent, or at least semi-transparent, with respect to probe pulse727. The localized presence of acoustic pulses725, in respective subregions within fins708, renders these subregions non-transparent to probe pulse727. More precisely, probe pulse727is further configured to undergo Brillouin scattering off acoustic pulses725. A (back) scattered component731of probe pulse727propagates away from acoustic pulses725in the direction of the negative z-axis. It is noted that for the above-described configurations of pump pulse723and probe pulse727, fins708are probed simultaneously. Thus, when subjecting target region710to a depth-profiling according to method300, with the above-described configurations of pump pulse723and probe pulse727, the obtained plurality of measured signals collectively characterizes the depth-dependence of the lateral structural features included in target region710(in the sense described above in the description ofFIGS.6A-6G). With reference to sample700, from such a plurality of measured signals at least an average depth-dependence of the widths of fins708may be extracted. According to some embodiments, pump pulse723and/or probe pulse727may be (linearly) polarized along the elongate dimension of fins708(i.e. along the y-axis), thereby increasing the measurement efficacy. The above choice of polarization for pump pulse723increases the absorption thereof in absorption layers712and minimizes absorption thereof in sidewalls716(also numbered inFIG.7A) of fins708. Further, as shown below in the Results of Simulations subsection, the above choice of polarization for probe pulse727maximizes the penetration of probe pulse727into fins708. In contrast, if probe pulse727is polarized perpendicularly to the elongate dimensions of fins708(i.e. along the x-axis), fins708are substantially transparent thereto. That is, in the latter case, substantially all the radiation is concentrated outside fins708in the spaces therebetween. The polarization direction of pump pulse723is indicated inFIGS.7A and7Bby polarization arrows Qy.
The polarization direction of probe pulse727is indicated inFIG.7Eby polarization arrows Qy′, which is also the polarization direction of scattered component731. Results of Simulations FIG.8depicts an extracted signal obtained through a computer simulation of an implementation of method300with respect to a V-NAND stack, according to some embodiments. The V-NAND stack is characterized by a hole-profile similar to that of holes608of sample600(depicted inFIGS.6A-6H). More specifically, plotted is a (normalized) extracted signal E obtained in the scattering of a series of probe pulses off acoustic pulses at different depths, respectively, within the sample. E=ΔI/I, wherein I represents the intensity of the measured signal and ΔI the deviation (resulting from the Brillouin scattering) from a baseline of the measured signal. The acoustic pulses are generated by identically prepared pump pulses, as described above in the description ofFIGS.6A-6H. The horizontal axis measures the time t from the generation of an acoustic pulse, so that the greater t, the smaller the scattering depth. The maximum scattering depth corresponds to t=0 (at which time the acoustic pulse penetrates e.g. from absorption layer618into structure602). The end of the time scale represents the time at which the acoustic pulse reaches the external surface of the sample on which the probe pulse is incident (e.g. top external surface610). A plurality of Brillouin oscillations is observed, with the amplitude of the oscillations decreasing with t, that is, increasing with depth. Using frequency estimation techniques, such as short-time Fourier transform or sine-fitting over short time-intervals, the time dependence of the Brillouin frequency may be obtained from the extracted signal. This, in turn, allows obtaining the local Brillouin frequency, i.e. the Brillouin frequency as a function of the depth s. It is noted that when the speed of sound is constant throughout the target region, the relation between the time t and the depth s is linear.
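The relations underlying this time-to-depth mapping can be written out explicitly. The following are standard textbook time-domain Brillouin-scattering formulas, assumed here for illustration (constant speed of sound, normal incidence) rather than quoted from the present disclosure:

```latex
% Assumed textbook relations (not quoted from this disclosure): for a
% constant speed of sound v_s and normal incidence, the acoustic pulse has
% traveled a distance v_s t at time t, so the scattering depth is linear in
% t (s = v_s t when the pulse propagates away from the illuminated surface,
% s_max - v_s t when it propagates toward it), and the local Brillouin
% frequency at depth s is
\[
  f_B(s) \;=\; \frac{2\, n_{\mathrm{eff}}(s)\, v_s}{\lambda_{\mathrm{probe}}},
\]
% so a depth-dependent effective refractive index translates directly into
% a depth-dependent Brillouin frequency.
```

A minimal sketch of the frequency-estimation step itself, here via a short-time Fourier transform, might look as follows. The function name, the sampling-rate argument, and the window length are assumptions; a practical implementation would likely add peak interpolation and tailored windowing:

```python
import numpy as np
from scipy.signal import stft

def local_brillouin_frequency(E, fs, nperseg=256):
    """Track the dominant oscillation frequency of the extracted signal
    E(t) = Delta I / I over time with a short-time Fourier transform.

    E  : 1-D array holding the sampled extracted signal
    fs : sampling rate of E (samples per unit time)
    Returns the window-center times and the peak frequency per window;
    the time axis may then be mapped to depth as described above.
    """
    f, t, Z = stft(E, fs=fs, nperseg=nperseg)
    f_B = f[np.argmax(np.abs(Z), axis=0)]  # peak frequency in each window
    return t, f_B
```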
FIGS.9A-9Epresent results of computer simulations of depth-profiling of five samples, respectively, using method300, according to some embodiments. The five samples are depicted inFIGS.10A-10E, respectively. Each of the five samples includes an array of identical vertically extending holes. The samples differ from one another in the profile (i.e. shape) of the holes. Each of the five samples modeled resembles sample600(up to a different hole-profile), and, as such, may correspond to a possible design of a V-NAND stack, or to specific possible distortions—due to manufacturing imperfections—of such a design. For example, the hole profile ofFIG.10Amay represent a possible specific design of a V-NAND stack whileFIG.10Emay represent a possible specific distortion thereof. Or, for example, the hole profile ofFIG.10Bmay represent a possible specific design of a V-NAND stack whileFIG.10Dmay represent a possible specific distortion thereof. The vertical axis parameterizes the depth s as measured from the top of a hole. The horizontal axis x parameterizes the width of the hole. Only one half of a hole is shown in each ofFIGS.9A-9Ewith the implicit understanding that the hole exhibits rotational symmetry about the vertical axis (as depicted inFIGS.10A-10E). Referring toFIG.10A, a cross-sectional view of a sample1000ais shown. A diameter of holes1008awithin sample1000ais constant (i.e. does not change with the depth). Referring toFIG.10B, a cross-sectional view of a sample1000bis shown. A diameter of holes1008bwithin sample1000bdecreases with the depth at a constant rate. Referring toFIG.10C, a cross-sectional view of a sample1000cis shown. A diameter of holes1008cwithin sample1000cdecreases with the depth at a first rate for depths smaller than a threshold depth (not indicated) and at a second rate for depths greater than the threshold depth. The first rate is greater than the second rate. Referring toFIG.10D, a cross-sectional view of a sample1000dis shown. A diameter of holes1008dwithin sample1000ddecreases with the depth at a first rate for depths smaller than a threshold depth (not indicated) and at a second rate for depths greater than the threshold depth. The first rate is smaller than the second rate. Referring toFIG.10E, a cross-sectional view of a sample1000eis shown. A diameter of holes1008ewithin sample1000eincreases at a first rate with the depth for depths smaller than a first threshold depth (not indicated), decreases at a second rate for depths greater than the first threshold depth and smaller than a second threshold depth, and decreases at a third rate for depths greater than the second threshold depth. The second rate is smaller than the third rate. In each ofFIGS.9A-9E, the (true) hole-profile is depicted by a double-lined curve, while the estimated hole-profile, as derived from the (simulated) measured signals, is depicted by a dotted-curve. The estimated hole-profile corresponds to an estimate of the average hole-profile (taken over all the holes in the array). However, since in the simulation the holes were all taken to be identical, this distinction is irrelevant, except that by simultaneously probing all the holes in the array, rather than a single hole, boundary (i.e. edge) effects are reduced and a better estimate is obtained. The simulations indicate that the speed of sound within each of the samples is practically independent of the depth s, and, moreover, is practically independent of the hole-profile. That is, the simulations indicate that the speed of sound is essentially dependent only on the material composition of the samples (which is the same for all the samples). The local Brillouin frequency fB(s) is practically fully determined by neff(s), the (depth-dependent) effective refractive index. The local Brillouin frequency can be shown to be correlated to the hole-diameter in a roughly linear manner. That is, fB(i)(s)˜ai+bi·di(s), wherein the index i=1, 2, . . . , 5 labels the sample, di(s) is the hole-diameter (of sample i) at depth s, and aiand bi(the latter being positive) are constants. More specifically, a linear fitting algorithm was employed to fit the fB(i)(s) about the respective true hole-profiles. In a real-life (i.e. non-simulation) implementation of method300, the obtained local Brillouin frequency may be linearly fitted about the expected hole-profile. FIGS.11A-11Cpresent results of computer simulations of depth-profiling of three samples, respectively, using method300, according to some embodiments. The samples ofFIGS.11A,11B, and11Ccorrespond to those ofFIGS.9A,9B, and9D, respectively. The clearly noticeable differences in the estimated profiles therebetween are due to the use of different data analysis and fitting schemes, as detailed below. More specifically, to obtain the estimated profiles ofFIGS.11A-11C, each of the extracted signals was smoothed to identify the thermo-optic contribution thereto. The thermo-optic contribution was next subtracted from the respective (unsmoothed) extracted signal, and the respective dependence of the Brillouin frequency on the depth s was obtained.
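The smoothing-and-subtraction step just described may be sketched as follows. The disclosure specifies only that the extracted signal is smoothed to identify the thermo-optic contribution; the particular smoother (Savitzky-Golay) and its parameters below are assumptions:

```python
import numpy as np
from scipy.signal import savgol_filter

def subtract_thermo_optic_background(E, window_length=301, polyorder=3):
    """Estimate the slowly varying thermo-optic contribution to the
    extracted signal by smoothing it, then subtract that estimate so
    that mainly the Brillouin oscillations remain.

    The Savitzky-Golay smoother and its parameters are assumptions;
    window_length must be odd and shorter than the signal.
    """
    E = np.asarray(E, dtype=float)
    background = savgol_filter(E, window_length, polyorder)
    return E - background
```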
The resulting local Brillouin frequencies were next fitted about the respective true hole-profiles using a third-order polynomial fitting algorithm. In a real-life implementation of method300, the obtained local Brillouin frequency may be fitted about the expected hole-profile. The different data analysis and fitting schemes employed to obtain the hole-profiles ofFIGS.9A-9EandFIGS.11A-11C, respectively, yield results which are roughly of similar quality, but which nevertheless noticeably differ. While the estimated hole-profiles ofFIGS.9A-9Eare "noisy", the estimated hole-profiles ofFIGS.11A-11Care smooth but, in contrast, exhibit "systemic" errors in the sense of overestimation or underestimation of the diameter over extended depth-ranges. This suggests that better estimates may be obtainable. In particular, use of machine learning tools or deep learning tools is expected to yield better estimates. Referring toFIGS.12A and12B,FIG.12Ashows the penetration of a simulated x-polarized probe pulse into a simulated V-NAND stack1200. More specifically, a lateral cross-sectional view of V-NAND stack1200, including holes1208, is shown. V-NAND stack1200is a specific embodiment of sample600. Also shown is an intensity scale I (in arbitrary units) ranging from dark to light. The bottom end of the scale corresponds to minimum intensity and the top end of the scale corresponds to maximum intensity, which is registered inside the holes. Within the bulk of V-NAND stack1200the intensity distribution of the probe pulse is seen to be maximum along columns (which extend in parallel to the y-axis) of holes1208. That is, the x-polarized probe pulse penetrates into the bulk of the V-NAND stack between adjacent pairs of holes along the columns, and substantially does not penetrate between adjacent pairs of holes along rows (which extend in parallel to the x-axis) of holes1208.FIG.12Bshows the penetration of a simulated y-polarized probe pulse into V-NAND stack1200. Within the bulk of V-NAND stack1200the intensity distribution of the probe pulse is seen to be maximum along the rows. That is, the y-polarized probe pulse penetrates into the bulk of the V-NAND stack between adjacent pairs of holes along the rows, and substantially does not penetrate between adjacent pairs of holes along the columns. FIG.13depicts an extracted signal obtained through a computer simulation of an implementation of method300with respect to a FinFET, according to some embodiments. The FinFET is characterized by a fin-profile similar to that of fins708of sample700(depicted inFIGS.7A-7E). More specifically, plotted is a (normalized) extracted signal E obtained in the scattering of a series of probe pulses off acoustic pulses at different depths, respectively, within the fins. (E=ΔI/I, as explained above in the description ofFIG.8.) The acoustic pulses are generated by identically prepared pump pulses, as described above in the description ofFIGS.7A-7E. The horizontal axis measures the time t from the generation of an acoustic pulse, or, what amounts to the same thing, the scattering time. In particular, the greater t, the greater the scattering depth. At time t=0 the acoustic pulses start propagating into the fins from the absorption layers—that is, the top layers of the fins (e.g. absorption layers712). The maximum scattering depth corresponds to t=tbase(at which time the acoustic pulses reach the base portion of the FinFET, e.g. base portion704). A single Brillouin oscillation is observed.
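Returning to the fitting schemes described above forFIGS.9A-9E(linear) andFIGS.11A-11C(third-order polynomial), a minimal sketch of the fit-and-invert step follows. The helper name and the use of numpy.polyfit are illustrative assumptions, as the disclosure does not name a specific fitting routine:

```python
import numpy as np

def fit_and_invert_profile(f_B, d_reference, deg=1):
    """Fit the local Brillouin frequency f_B(s) against a reference
    hole-profile d_reference(s) (the true profile in a simulation, or
    the expected/design profile in a real-life measurement) and, for the
    linear case f_B ~ a + b * d, invert the fit to estimate the profile.
    """
    coeffs = np.polyfit(d_reference, f_B, deg)
    if deg == 1:
        b, a = coeffs
        return (np.asarray(f_B) - a) / b  # estimated diameter profile
    # Higher-degree fits (e.g. deg=3, as for FIGS. 11A-11C) would require
    # a numerical inversion, omitted in this sketch.
    raise NotImplementedError("only the linear case is sketched here")
```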
FIGS.14A-14Epresent results of computer simulations of depth-profiling of five samples, respectively, using method300, according to some embodiments. Each of the five samples includes a plurality of parallel and identical fins (e.g. fins708) disposed on a base portion (e.g. base portion704). The samples differ from one another in the lateral cross-sectional profiles of the fins. Each of the five samples modeled resembles sample700, and, as such, may correspond to a possible design of a fin field-effect transistor (FinFET), or to specific possible distortions—due to manufacturing imperfections—of such a design. The vertical axis parameterizes the depth s as measured from the top of a fin. (The top layers of the fins constitute the absorption layers, as described above in the description ofFIGS.7A-7E.) The horizontal axis x parameterizes the width of the fin. Only one half of a fin is shown in each ofFIGS.14A-14E, with the implicit understanding that the fin exhibits mirror symmetry about the ys-plane (the y-axis points out of the page). More specifically, the ys-plane bisects each fin into two identical longitudinal parts. In each of the figures, the (true) fin-profile is depicted by a double-lined curve, while the estimated fin-profile, as derived from the (simulated) measured signals, is depicted by a dotted-curve. The estimated fin-profile corresponds to an estimate of the average fin-profile. However, since in the simulation the fins were all taken to be identical, this distinction is irrelevant, except that by simultaneously probing all the fins in the array, rather than a single fin, boundary effects are reduced and a better estimate is obtained. A linear regression algorithm was employed to estimate, based on the extracted signals, the average fin-profile. More specifically, a temporal linear regression algorithm was used to relate the scattering time to the width of the fin (at the depth s at which the scattering occurs). The scattering time t is straightforwardly related to the scattering depth s via s=vsound·t, thereby allowing the (estimated) dependence of the (average) width of the fins on the scattering depth s to be obtained. In each of the simulations both the pump pulse and the probe pulse were linearly polarized along the longitudinal dimension of the fins, i.e. in parallel to the y-axis. As explained in the description ofFIG.7E, this choice of pump pulse polarization minimizes the penetration of the pump pulses into the sidewalls of the fins, and thereby helps create a uniform acoustic pulse within each of the fins. Further, this choice of probe pulse polarization maximizes interaction of the probe pulses with the acoustic pulses, as can be seen inFIGS.15A and15B. FIG.15Ashows the penetration of a y-polarized probe pulse into a plurality of parallel fins1508. Fins1508project from a base portion1504. Fins1508are specific embodiments of fins708. Base portion1504is a specific embodiment of base portion704. The vertical axis s parameterizes depth as measured from the top of a fin. The horizontal axis x extends perpendicularly to the elongate dimension of fins1508. More precisely,FIG.15Ashows a cross-sectional view of fins1508and base portion1504with an intensity distribution of the probe pulse in greyscale superimposed thereon. Also shown is an intensity scale I (in arbitrary units) ranging from dark to light, which quantifies the intensity. The bottom end of the scale corresponds to minimum intensity and the top end of the scale corresponds to maximum intensity. The probe pulse is clearly seen to penetrate into fins1508. In contrast,FIG.15Bshows the comparative lack of penetration of an x-polarized probe pulse into the fins: the probe pulse essentially does not penetrate into the fins.
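Returning to the temporal linear regression described above forFIGS.14A-14E, a minimal sketch follows. The specific per-time signal feature used as the regressor is not named by the disclosure, so the regressor, helper name, and calibration inputs below are assumptions:

```python
import numpy as np

def estimate_fin_profile(t, feature, v_sound, w_reference):
    """Relate scattering time to (average) fin width via a temporal
    linear regression, and map time to depth via s = v_sound * t.

    t           : scattering times of the measured signals
    feature     : per-time feature extracted from the signals (assumed)
    v_sound     : speed of sound in the fins
    w_reference : reference (true or design) fin widths at those depths
    """
    s = v_sound * np.asarray(t)                 # scattering depths
    b, a = np.polyfit(feature, w_reference, 1)  # linear regression
    w_estimated = a + b * np.asarray(feature)   # estimated average width
    return s, w_estimated
```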
As used herein, the terms "lateral extension" and "maximal lateral extension", in reference to a lateral structural feature or a composite lateral structural feature, may be used interchangeably. As used herein, according to some embodiments, the terms "depth profiling" and "3D probing" may be used interchangeably. It is appreciated that certain features of the disclosure, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the disclosure, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination or as suitable in any other described embodiment of the disclosure. No feature described in the context of an embodiment is to be considered an essential feature of that embodiment, unless explicitly specified as such. Although operations in disclosed methods, according to some embodiments, may be described in a specific sequence, methods of the disclosure may include some or all of the described operations carried out in a different order. A method of the disclosure may include a few of the operations described or all of the operations described. No particular operation in a disclosed method is to be considered an essential operation of that method, unless explicitly specified as such. Although the disclosure is described in conjunction with specific embodiments thereof, it is evident that numerous alternatives, modifications and variations that are apparent to those skilled in the art may exist. Accordingly, the disclosure embraces all such alternatives, modifications and variations that fall within the scope of the appended claims. It is to be understood that the disclosure is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth herein. Other embodiments may be practiced, and an embodiment may be carried out in various ways. The phraseology and terminology employed herein are for descriptive purpose and should not be regarded as limiting. Citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the disclosure. Section headings are used herein to ease understanding of the specification and should not be construed as necessarily limiting. | 99,904 |
11859964 | DETAILED DESCRIPTION Embodiments described herein relate to optical systems and methods for determining the shape and/or size of objects that include projecting a pattern of light onto the object. The pattern of light can be configured such that first-order reflections can be distinguished from second- and/or higher-order reflections, which can be rejected. Thus, even in instances in which the pattern of light is reflected onto the object multiple times, the original, or first-order, reflection can be detected, distinguished, and/or used for laser triangulation. As described in further detail below, in some embodiments, the pattern of light projected onto the object does not have reflection and/or rotational symmetry, such that second-order and/or higher-order reflections can be distinguished from the first-order reflection. Laser triangulation techniques can be applied to images captured of the object that include first-order reflections (e.g., images from which second- and/or higher-order reflections have been removed) to create a model of at least a portion of the object. The term laser triangulation is used herein to refer to techniques used to determine the shape of an object onto which a pattern is projected. It should be understood that the light projected onto the object need not originate from a laser source. Additionally, the term laser triangulation should not be understood to mean that the pattern requires a thin straight linear portion. Laser triangulation techniques can be used to ascertain the shape of an object using any predetermined pattern originating from any suitable light source. Similarly stated, laser triangulation can include analyzing an image of a part illuminated by any predetermined pattern to identify deviations in the pattern caused by the shape of the part onto which the pattern is projected. Some embodiments described herein relate to apparatus and/or systems that include a light source, a detector, and a compute device. The light source can be configured to project a pattern onto a part such that a first-order reflection of the pattern is distinct from a second-order (and/or other greater-than-first-order) reflection of the pattern. A detector can be configured to capture an image of the part, for example while the part is illuminated by the light source. The compute device can be communicatively coupled to the detector and configured to process the image of the part to identify a first-order reflection of the pattern and/or to filter a second-order reflection (and/or higher-order reflections) from the image. Some embodiments described herein relate to a method that includes projecting a predetermined non-symmetric pattern onto a part. An image of the part can be captured, for example, by a camera or other detector device. The image of the part can be processed to remove a second-order reflection of the non-symmetric pattern. A shape of a portion of the part can be determined based on deviations from the predetermined non-symmetric pattern in the first-order reflection of the non-symmetric pattern in the image of the part. Some embodiments described herein relate to a computer-implemented method that includes receiving multiple images of a part that is illuminated by a pattern that has a predetermined geometry, at least one of the images including a second-order reflection of the pattern. The second-order reflection can be filtered or rejected by comparing a geometry of the second-order reflection to the predetermined geometry.
A three-dimensional model of at least a portion of the part can be generated by triangulating a portion (e.g., a linear portion) of the pattern in first-order reflections captured in the plurality of images. FIG.2is a schematic illustration of a system configured to detect a geometry of part210, according to an embodiment. A light source220is configured to project a predetermined pattern onto the part210. Suitable patterns are described in further detail below, but in general, the pattern is two-dimensional and includes a linear portion. The light source220can be any suitable light source configured to project a pattern. For example, the light source220can be a Berlin Lasers, 532 nm Glass Lens Green Laser-line Generator. As another example, the light source220can be an LED structured light pattern projector, such as those produced by Smart Vision Lights. As yet another example, the light source can be a Digital Light Processing (DLP), Liquid Crystal on Silicon (LCoS), Liquid-Crystal Display (LCD) or any other suitable projector, such as the type typically used to project visual media in a home or professional theatre. The light source220can be configured to project a predetermined pattern in any suitable wavelength and/or combination of wavelengths. In some embodiments, the light source220can be configured to project at least a portion of a pattern in a wavelength that has good contrast against the part210. For example, in instances in which the part210is blue, at least a portion of the pattern could be yellow. In instances in which the part210is red, at least a portion of the pattern could be red. A detector230is configured to image the part210while the part210is being illuminated by the light source220. The detector230can be any suitable camera or photodetector. The detector230is sensitive to the wavelengths of light emitted by the light source220. The detector230can have a resolution sufficient to capture salient features of the pattern projected by the light source220. For example, the detector230can be a Basler ace acA2040-180kc with a desired lens attached. An example of a suitable lens is the Edmund Optics 8 mm/F1.8 C-mount. The light source220and the detector230can be preconfigured, communicatively coupled, or otherwise coordinated such that the light source220is configured to project the pattern in a suitable wavelength with a suitable intensity such that the part210reflects the pattern with sufficient intensity in a color-band to which the detector230is sensitive. In embodiments in which different regions of the part have different colors or reflective characteristics, the light source220can be operable to project different patterns, patterns having different colors, and/or patterns with different intensities based on the reflective characteristics of the portion of the part210being illuminated and/or the sensitivity of the detector230. As described above with reference toFIG.1, laser triangulation systems are generally suitable for determining the geometry of simple, non-reflective parts. Objects, such as part210, that have reflective and/or angled surfaces may produce second-order or higher reflections that known systems may not be able to resolve. As shown inFIG.2, the light source220projects the pattern onto the part210, creating a first-order reflection222, which the detector230can capture. The pattern can reflect off the part210and cast a second-order reflection224on another portion of the part210, which the detector230can also capture.
A compute device240(having a processor and a memory) can distinguish the first-order reflection222from the second-order reflection224(and, in some instances, higher-order reflections), and refute or filter the second- and/or higher-order reflections such that only the first-order reflection222is used to determine the shape of the part210. In some embodiments, a shape and/or size of the part210can be determined by analyzing the images of the part210taken by the detector while the part is illuminated by the light source220. Specifically, the light source220can project a predetermined pattern onto the part210, and one or more images of the part210illuminated by the pattern can be captured by the detector and processed by the compute device240. The compute device240can analyze deviations between the first-order reflection of the pattern and the expected predetermined pattern. Deviations can be attributed to the shape and/or size of the part210onto which the pattern is projected. In some instances, the compute device240can store or determine (e.g., through a calibration routine) information about the detector230, which may include, but is not limited to, its position and orientation in space and/or information about the light source220, which may include, but is not limited to, its position and orientation in space. Information about the detector230and/or the light source220can be used to produce the three-dimensional position (x, y, z states within some reference frame) of the points or regions where the pattern and part210intersect. Similarly stated, three-dimensional coordinates for portions of the part210illuminated by the pattern can be determined. Any desired technique and method to produce the data may be used with the disclosed system. FIG.3andFIG.4are illustrations of example patterns that can be projected by a light source and that can be used to distinguish first-order reflections from second-order reflections, according to two embodiments. In particular, the patterns shown inFIGS.3and4do not have reflection or rotational symmetry. Thus, a second-order reflection of the patterns shown inFIGS.3and4can be distinguished from a first-order reflection. Similarly stated, the patterns shown inFIGS.3and4will never be identical or equivalent to the original (predetermined) pattern after a reflection followed by any rotation or combination of rotations. Rotation of a pattern in a mathematical sense, as applied to a two-dimensional pattern on some plane in Euclidean space, is a mapping from the Euclidean space to itself, where the shape or points are rotated about a single point on the plane and where the shape is preserved. Reflection of a pattern in a mathematical sense, as applied to a two-dimensional pattern on some plane in Euclidean space, is a mapping from the Euclidean space to itself, where the points or shapes that are reflected are mirrored about an axis on the plane. The pattern ofFIG.3includes two distinct portions, a linear portion321and a pattern portion322. The linear portion321has a predefined orientation and position relative to the pattern portion322. In some embodiments, the linear portion321and the pattern portion322can be different colors. For example, the linear portion321can be a green laser line and the pattern portion322can be red.
It should be understood that the linear portion321and the pattern portion322can be any wavelength (e.g., any "color," including wavelengths not visible to the human eye) or combination of wavelengths. Furthermore, it should be understood that the linear portion321and the pattern portion322can be generated separately or together (e.g., from a single light source or using multiple light sources). In some embodiments, the linear portion321can be used for laser triangulation to determine the shape of a part, while the pattern portion322can be used to determine whether a reflection is a first-order reflection (used to determine the shape of a part) or a second- or higher-order reflection, which can be filtered or refuted. In embodiments in which the linear portion321and the pattern portion322are different colors, the pattern portion322can be filtered from an image after any reflections are refuted, which can provide a simpler image for laser triangulation processing. Unlike the pattern ofFIG.3, the pattern ofFIG.4does not have a distinct linear portion and pattern portion. Portion421of pattern420is linear, can be detected and/or isolated from images of the pattern420by a compute device, and used to conduct laser triangulation analysis of a part. In yet other embodiments, the pattern may not include a single or continuous line. As described herein, any suitable pattern can be used, provided the detector and/or the compute device can distinguish between first-order reflections and second-order reflections (and, in some instances, higher-order reflections). In some embodiments, the patterns do not have reflection or rotational symmetry. Similarly stated, patterns of any size, shape, and configuration can be used and may be selected based on the application, classification technique, type of light source available, and/or any other factor. In some embodiments, multiple patterns can be projected onto a part. In some embodiments, a pattern can include multiple linear portions such that multiple "laser lines" or laser line analogs can illuminate a part simultaneously. In embodiments in which multiple patterns and/or patterns having multiple linear portions illuminate a part, each pattern can be similar or each pattern can be different. Using different patterns and/or colors can be helpful to distinguish patterns, and hence the originating light sources, from each other in multiple pattern embodiments. Furthermore, in some embodiments, multiple patterns can be used in coordination, concurrently and/or sequentially, which can improve visibility on some surfaces or objects. Similarly stated, some patterns, colors, etc. may be more visible on certain surfaces than others. FIGS.3and4depict patterns having a portion that includes non-isosceles right triangles, but it should be understood that any suitable pattern can be used. Similarly stated, laser triangulation can be used to ascertain a shape of a part using any pre-determined pattern. Such patterns may not include a continuous linear portion and/or may contain curves and/or arcs used for laser triangulation. Such patterns will typically not have reflection or rotational symmetry. Such patterns may overlap a linear (or curved) portion that is used for laser triangulation. Multiple patterns can be used, with no limitation on the number and type(s) of patterns implemented. The pattern may or may not fully encompass the entirety of the linear (or curved) portion used for laser triangulation.
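Because the patterns described above lack reflection symmetry, a detected pattern that matches the predetermined pattern only after a mirror flip can be flagged as a greater-than-first-order reflection. One standard way to make this test concrete, assumed here for illustration and requiring point correspondences between the detected and reference patterns, is the SVD (Kabsch) alignment, whose optimal orthogonal transform has determinant +1 for a pure rotation and -1 for a reflection:

```python
import numpy as np

def requires_mirror(detected, reference):
    """Return True if `detected` best matches `reference` only after a
    mirror flip (suggesting an even-order reflection to reject).

    detected, reference : (N, 2) arrays of corresponding feature points
    (establishing the correspondences is assumed to happen upstream).
    """
    A = detected - detected.mean(axis=0)
    B = reference - reference.mean(axis=0)
    U, _, Vt = np.linalg.svd(A.T @ B)   # Kabsch alignment of point sets
    return np.linalg.det(U @ Vt) < 0    # det -1: a reflection is needed
```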
FIG.5is a flow chart of a method for determining a geometry of at least a portion of a part, according to an embodiment. At510, a part can be illuminated with a pattern. As described above, the pattern typically does not have reflection or rotational symmetry. Additionally, the pattern typically includes a linear portion, either as part of the pattern itself (e.g., as shown inFIG.4) or associated with the pattern (e.g., as shown inFIG.3). In some embodiments, the pattern can be moved or "scanned" over the part, such that each portion of a surface of the part is illuminated by the pattern. In some embodiments, the part can be moved or rotated such that additional surfaces of the part can be illuminated by the pattern. In some embodiments, multiple light sources disposed in different locations can illuminate the part from multiple angles such that multiple surfaces of the part can be illuminated. At520, one or more images of the part can be captured by one or more detectors. In embodiments in which a pattern(s) is scanned across a surface(s) of the part, detector(s) can capture a series of images or video of the part, where the pattern is projected onto a different portion of the part in each image. At530, a compute device operatively coupled to the detector(s) (having a processor and a memory) can process the image(s) to identify a first-order reflection of the pattern and filter or refute second- and/or higher-order reflections. For example, as described above, the pattern can be configured such that a second-order reflection (and/or higher-order reflections) are distinguishable from first-order reflections and the compute device can be operable to reject such higher-order reflections. Processing images to distinguish between first-order reflections and higher-order reflections includes pattern recognition and/or classification. In some embodiments, pattern recognition can include classifying patterns detected in an image as first-order reflections (class 0) or as not-first-order reflections (class 1). Any suitable combination of software and hardware can be a classifier operable to identify a pattern as a class 0 reflection or a class 1 reflection. For example, an image of the part can be analyzed (e.g., by a processor) to identify pixels that match the color of at least a portion of the pattern projected by a light source (optionally accounting for color shifts caused by non-white surfaces and/or non-ideal mirrors). The pattern of pixels that match the color of at least a portion of the pattern can be compared to an expected pattern of a first-order reflection of the projected pattern. If the pattern of pixels matches the expected pattern of a first-order reflection, laser triangulation techniques can be applied to a portion of those pixels and/or pixels associated with those pixels. If the pattern of pixels does not match the expected pattern (e.g., the pattern of pixels is a "mirror image" of the expected pattern), those pixels and/or other pixels associated with those pixels can be identified as being associated with a second- or higher-order reflection and may be filtered or discarded before laser triangulation techniques are applied to the image. In some embodiments, a pattern projected onto a part may be reflected more than two times and yet may appear to be a first-order reflection. For example, a third-order or other higher-order reflection may, under certain circumstances, appear similar to a first-order reflection. In some embodiments, odd-numbered non-first-order reflections can be identified as class 1 reflections by analyzing intensity, noise, and/or clarity of the pattern. For real-world surfaces (i.e., not ideal mirrors), each reflection induces a degree of scattering due to surface imperfections, absorbance, and the like. Accordingly, patterns having noise above a predetermined or dynamic threshold, intensity below a predetermined or dynamic threshold, and/or clarity below a predetermined or dynamic threshold can be identified as class 1 reflections and rejected.
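A minimal sketch of this threshold-based rejection follows. The quality features below (mean brightness and a finite-difference roughness proxy) are illustrative assumptions; the disclosure does not prescribe specific feature definitions or threshold values:

```python
import numpy as np

def reject_low_quality(detections, min_intensity, max_noise):
    """Flag detected pattern regions whose intensity is too low or whose
    noise is too high as class 1 (rejected), per the thresholding idea
    described above. Thresholds may be fixed or computed dynamically.
    """
    keep = []
    for region in detections:          # each region: 2-D pixel array
        region = np.asarray(region, dtype=float)
        intensity = region.mean()                       # brightness proxy
        noise = np.abs(np.diff(region, axis=0)).mean()  # roughness proxy
        keep.append(intensity >= min_intensity and noise <= max_noise)
    return keep
```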
In some embodiments, a convolutional neural network (CNN), which may also be called a deep convolutional neural network or simply a convolutional network, can be used as the classifier to recognize a pattern or any part thereof. Any suitable CNN, having any suitable number of layers, activation function of the neurons, connection between layers, network structure, and/or the like may be used to classify patterns detected in images. In some embodiments, the network structure of a CNN may be tuned and/or altered through training using any suitable means. A CNN may be initialized using any suitable means. In addition or alternatively, techniques like Scale Invariant Feature Transform (SIFT), or other types of neural networks could be used as a classifier to identify a pattern. In some embodiments, the classifier can localize and/or be implemented in a way to operate on regions, sub-elements, or sub-images within one or more images captured by the detector. In some embodiments, the classifier can be applied such that individual pixels (or regions indexed in some other fashion), acquired by the detector(s), can be classified. As an example, a classifier can operate by, for each pixel in the image, creating a sub-image. Each sub-image can have an odd number of (pixel) rows and an odd number of (pixel) columns centered on a particular pixel. The size of each sub-image (e.g., the number of rows and columns) can be selected based on the pattern and configured such that a salient feature of the pattern (e.g., at least one entire triangle of the patterns shown inFIGS.3and/or4) will be contained in the sub-image when the linear portion of the pattern is centered in the sub-image. The classifier can determine whether the center pixel is constituent to a linear portion of a pattern and whether the pattern is a first-order reflection or a greater-than-first-order reflection. The classifier can denote the sub-image as class 0 if the center pixel is constituent to a linear portion of a pattern and the pattern is a first-order reflection. The classifier can denote the sub-image as class 1 if the center pixel is not constituent to a linear portion of a pattern and/or the pattern is not a first-order reflection. The classifier can iterate over the various sub-images and save the classification result in memory. It should be understood, however, that other classifiers may be operable to classify more than one pixel at a time in an image or sub-image, that the location of the pixel or pixels used to classify an image and/or sub-image need not be the central pixel, and/or the image and/or sub-image may be filtered, normalized, or otherwise processed by the classifier or before the image is received by the classifier. At540, a model of at least a portion of the part can be generated by the compute device, using only first-order reflections (class 0) of the pattern.
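Before elaborating on the model generation of540, the per-pixel sub-image classification described above for530may be sketched as follows. The classifier itself (e.g., a trained CNN) is left as a pluggable callable, and the window half-size is an assumption tied to the salient-feature size of the pattern:

```python
import numpy as np

def classify_pixels(image, classify_subimage, half=15):
    """Slide an odd-sized window over the image and label each center
    pixel: 0 if it is constituent to the linear portion of a first-order
    reflection, 1 otherwise.

    classify_subimage : callable mapping a (2*half+1, 2*half+1, ...)
                        sub-image to 0 or 1 (e.g., a trained CNN).
    Border pixels that cannot center a full window default to class 1.
    """
    h, w = image.shape[:2]
    labels = np.ones((h, w), dtype=np.uint8)
    for r in range(half, h - half):
        for c in range(half, w - half):
            sub = image[r - half:r + half + 1, c - half:c + half + 1]
            labels[r, c] = classify_subimage(sub)
    return labels
```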
Similarly stated, laser triangulation can be performed on first-order reflections of the pattern using, for example, known positions of the detector(s) and/or light source(s). In other words, deviations in a first-order reflection of a linear or other portion of the pattern having a predetermined shape can be used to identify the shape of the portion of the part illuminated by that portion of the pattern. That is, portions of the pattern that are known to be linear (or having another pre-determined shape) may appear non-linear (or have a shape different from the pre-determined shape) in the first-order reflection due to being projected onto and reflected from a non-planar surface. By analyzing first-order reflections of the pattern, such deviations can be used to map the surface of the part onto which the pattern is projected. In embodiments in which the pattern(s) is scanned across a surface of the part, a three-dimensional model of the surface can be generated using the multiple images that were processed at530to remove second- and/or higher-order reflections. Similarly stated, each image can capture a different portion of the part illuminated by a first-order reflection of a linear or other portion of the pattern having a predetermined shape and deviations in that portion of the pattern can be used to generate a three-dimensional model of the surface of the part. Embodiments described herein can be particularly suitable for completely or semi-autonomous manufacturing processes, such as robotic welding. For example, in some embodiments, a light source and/or detector can be coupled to a suitable robot (e.g., a six-axis welding robot). Models of parts described herein can be used to identify joints for welding or other features for automated machining processes. Many welding processes involve the joining of highly reflective materials, such as aluminum and stainless steel. Such materials are also frequently curved and cast multiple reflections, which may render traditional laser-triangulation methods unsuitable. While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. For example, some embodiments described herein reference a single light source, a single detector, and/or a single pattern. It should be understood, however, that multiple light sources can illuminate a part. Each light source can illuminate the part with a similar pattern, or different patterns. Patterns can be detected by multiple detectors. In embodiments with multiple light sources, it may be advantageous for each light source to illuminate the part with a different pattern such that detector(s) and/or compute devices processing images captured by detectors can identify which light source projected which pattern. Furthermore, although various embodiments have been described as having particular features and/or combinations of components, other embodiments are possible having a combination of any features and/or components from any of the embodiments where appropriate as well as additional features and/or components.
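Returning to the triangulation of first-order reflections described above, one classical formulation, assumed here for illustration and not necessarily the exact computation used by compute device240, intersects back-projected camera rays with the calibrated plane of the projected linear portion. The names and the intrinsic-matrix convention are assumptions:

```python
import numpy as np

def triangulate_line_points(pixels, K, plane_n, plane_d):
    """Recover 3-D points where the projected linear portion intersects
    the part, by intersecting back-projected camera rays with the known
    plane of the light sheet (classical laser triangulation).

    pixels  : (N, 2) pixel coordinates on the first-order reflection
    K       : 3x3 camera intrinsic matrix (from calibration)
    plane_n : unit normal of the light plane, in camera coordinates
    plane_d : offset such that points X on the plane satisfy n . X = d
    """
    uv1 = np.column_stack([pixels, np.ones(len(pixels))])
    rays = (np.linalg.inv(K) @ uv1.T).T   # ray direction per pixel
    t = plane_d / (rays @ plane_n)        # ray/plane intersection scale
    return rays * t[:, None]              # (N, 3) points, camera frame
```

Profiles recovered this way from successive scan images could then be stacked into a point cloud of the scanned surface.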
For example, although not described in detail above, in some embodiments, methods of determining a shape of a portion of a part may include a calibration phase during which distortion of the detector(s), the lens(es) on said detector(s), the distortion in the combination of detector(s) and lens(es), and/or the relative position of the camera(s) to a test surface or fixture onto which a pattern(s) is projected are determined. Some embodiments described herein relate to methods and/or processing events. It should be understood that such methods and/or processing events can be computer-implemented. That is, where method or other events are described herein, it should be understood that they may be performed by a compute device having a processor and a memory, such as the compute device240. Methods described herein can be performed locally, for example, at a compute device physically co-located with a detector, light emitter, and/or part, and/or remotely, e.g., on a server and/or in the “cloud.” Memory of a compute device is also referred to as a non-transitory computer-readable medium, which can include instructions or computer code for performing various computer-implemented operations. The computer-readable medium (or processor-readable medium) is non-transitory in the sense that it does not include transitory propagating signals per se (e.g., a propagating electromagnetic wave carrying information on a transmission medium such as space or a cable). The media and computer code (also can be referred to as code) may be those designed and constructed for the specific purpose or purposes. Examples of non-transitory computer-readable media include, but are not limited to: magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (CD/DVDs), Compact Disc-Read Only Memories (CD-ROMs), and holographic devices; magneto-optical storage media such as optical disks; carrier wave signal processing modules, Read-Only Memory (ROM), Random-Access Memory (RAM) and/or the like. One or more processors can be communicatively coupled to the memory and operable to execute the code stored on the non-transitory processor-readable medium. Examples of processors include general purpose processors (e.g., CPUs), Graphical Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Programmable Logic Devices (PLDs), and the like. Examples of computer code include, but are not limited to, micro-code or micro-instructions, machine instructions, such as produced by a compiler, code used to produce a web service, and files containing higher-level instructions that are executed by a computer using an interpreter. For example, embodiments may be implemented using imperative programming languages (e.g., C, Fortran, etc.), functional programming languages (e.g., Haskell, Erlang, etc.), logical programming languages (e.g., Prolog), object-oriented programming languages (e.g., Java, C++, etc.), or other suitable programming languages and/or development tools. Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code. Where methods described above indicate certain events occurring in a certain order, the ordering of certain events may be modified. Additionally, certain of the events may be performed concurrently in a parallel process when possible, as well as performed sequentially as described above.
Although various embodiments have been described as having particular features and/or combinations of components, other embodiments are possible having a combination of any features and/or components from any of the embodiments where appropriate. | 27,763 |
11859965 | DESCRIPTION OF THE EMBODIMENTS FIG.1is a block view of an analysis system according to an embodiment of the disclosure. With reference toFIG.1, the analysis system includes a measuring instrument110and an analysis apparatus120. Data may be transmitted between the measuring instrument110and the analysis apparatus120through a cable or through wireless communications, for instance. The measuring instrument110includes a diffractometer, such as an X-ray diffractometer (XRD), or an optical instrument, e.g., FRT or Tropel, which may respectively serve to measure the wafer and obtain a full width at half maximum (FWHM) at different positions on each wafer and a bow of the wafer. The measuring instrument110may be implemented in form of any device for measuring the FWHM and the bow of the wafer, which should however not be construed as a limitation in the disclosure. The XRD bombards a metal target with accelerated electrons to generate an X-ray and then irradiates the wafer with the X-ray to obtain a crystal structure. When the X-ray is emitted to a lattice plane at an incident angle θ, a diffraction peak is generated when conditions satisfy Bragg's Law (nλ=2d sin θ), where n is an integer, λ is the wavelength of the incident X-ray, d is an interplanar spacing in the atomic lattice, and θ is an included angle between the incident X-ray and the scattered plane. The FWHM is obtained by measuring the width of the highest diffraction peak at half of its maximum intensity. The FWHM may represent the crystal quality. Therefore, the FWHM is measured as referential data; a short worked example of the Bragg condition follows this passage. The analysis apparatus120is an electronic apparatus with an arithmetic function and may be implemented in form of a personal computer, a notebook computer, a tablet, a smart phone, or any apparatus with the arithmetic function, which should however not be construed as a limitation in the disclosure. The analysis apparatus120receives a plurality of measurement data of known wafers from the measuring instrument110, so as to perform training to obtain a forecast model (a regression equation) for subsequently predicting the bow of a wafer processed from an ingot according to the measurement data of a to-be-measured wafer, after the ingot is obtained and processed to form the wafer. FIG.2is a flowchart of a material analysis method according to an embodiment of the disclosure. With reference toFIG.2, in step S205, a plurality of wafers processed from a plurality of ingots are measured by the measuring instrument110to obtain an average of bows of the processed wafers and a plurality of FWHM values of the wafers. Here, each ingot undergoes one or a plurality of processing steps, such as slicing, lapping, polishing, and so on, so as to form wafers, and the measuring instrument110measures the resulting wafers processed from each ingot one by one to obtain the bow of each wafer. The analysis apparatus120then obtains the average of the bows of the wafers. In addition, the measuring instrument110respectively measures the FWHM at a plurality of designated positions on the wafers. In an embodiment, the FWHM of the designated positions on two wafers, i.e., a first wafer and a second wafer taken from a head end and a tail end of the ingot, may be measured.FIG.3is a schematic view of an ingot according to an embodiment of the disclosure. With reference toFIG.3, the wafers at the head and tail ends of an ingot2are taken as a first wafer21and a second wafer22.
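For a concrete sense of the Bragg condition quoted above, the following short calculation uses the common Cu Kα source wavelength and the Si(111) interplanar spacing purely as example values (the disclosure does not limit the target or the wafer material):

```python
import math

# Bragg's law: n * lam = 2 * d * sin(theta)
lam = 1.5406   # Cu K-alpha wavelength, in angstroms (a common XRD source)
d = 3.1356     # Si(111) interplanar spacing, in angstroms (example material)
n = 1          # first-order diffraction

theta = math.degrees(math.asin(n * lam / (2 * d)))
print(f"theta = {theta:.2f} deg, 2-theta = {2 * theta:.2f} deg")
# -> theta = 14.22 deg, 2-theta = 28.44 deg, the familiar Si(111) peak
```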
The measuring instrument110measures the FWHM at a plurality of designated positions on the first wafer21and the second wafer22, respectively. Here, five positions are designated for the sampling (measurement) step. The five designated positions are a center position and four representation positions respectively located in four quadrants. FIG.4is a schematic view of designated positions according to an embodiment of the disclosure. As shown inFIG.4, a wafer300includes designated positions P0to P4. In this embodiment, the wafer300is, for instance, one of the first wafer21and the second wafer22taken from the head and tail ends of the ingot2. The center position (the designated position P0) on the wafer300is set as the origin (0, 0), the wafer is divided into four quadrants, and the representation positions P1to P4in the four quadrants are respectively selected. Here, for instance, a coordinate of the representation position P1is (45, 45), a coordinate of the representation position P2is (45, −45), a coordinate of the representation position P3is (−45, 45), and a coordinate of the representation position P4is (−45, −45). For instance, Table 1 shows the FWHM of the designated positions on the wafers (the first wafer21and the second wafer22at the head and tail ends) processed from the same ingot. In step S210, each key factor corresponding to one of the ingots is calculated according to the FWHM of each wafer. Specifically, based on the respective FWHM of the first wafer and the second wafer processed from the same ingot, a first coefficient of variation of the first wafer and a second coefficient of variation of the second wafer are calculated. Based on the first coefficient of variation and the second coefficient of variation, the key factor is calculated. An embodiment is provided below to explain the detailed steps of calculating the key factor. Table 1 exemplifies the wafers at the head and tail ends processed from a known ingot (e.g., the ingot2inFIG.3); namely, the FWHM of the designated positions P0to P4on the first wafer21and the second wafer22are shown.

TABLE 1

Designated position | Coordinate | FWHM of the first wafer 21 | FWHM of the second wafer 22
P0 | (0, 0) | 97.8 | 124.1
P1 | (45, 45) | 89.4 | 105.1
P2 | (45, −45) | 92.4 | 107.8
P3 | (−45, 45) | 90.6 | 105.1
P4 | (−45, −45) | 101.9 | 114.4

First, an average value of the FWHM and a standard deviation of the first wafer21are calculated, and an average value of the FWHM and a standard deviation of the second wafer22are calculated. A first coefficient of variation of the first wafer21is calculated according to the average value of the FWHM and the standard deviation of the first wafer21, and a second coefficient of variation of the second wafer22is calculated according to the average value of the FWHM and the standard deviation of the second wafer22. The standard deviation is calculated as follows:

STD = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2}

Here, N is the number of FWHM values, x_i is the i-th FWHM, and \bar{x} is the average value of the FWHM. The coefficient of variation is calculated as follows:

CV = \frac{STD}{\bar{x}}

After obtaining the first coefficient of variation of the first wafer21and the second coefficient of variation of the second wafer22, a difference between the first coefficient of variation and the second coefficient of variation is calculated, and the absolute value of the difference is obtained as the key factor corresponding to the ingot2, i.e., the key factor of the wafer processed from the ingot2.
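The key-factor computation of step S210 can be reproduced directly from the Table 1 data. The sketch below uses the sample standard deviation (N - 1 denominator), matching the STD formula above, and yields the values listed in Table 2 below:

```python
import numpy as np

# FWHM values at the five designated positions P0-P4, from Table 1.
first_wafer  = np.array([97.8, 89.4, 92.4, 90.6, 101.9])
second_wafer = np.array([124.1, 105.1, 107.8, 105.1, 114.4])

def coeff_of_variation(x):
    # Sample standard deviation (N - 1 in the denominator) divided by the mean.
    return x.std(ddof=1) / x.mean()

cv1 = coeff_of_variation(first_wafer)    # ~0.055854
cv2 = coeff_of_variation(second_wafer)   # ~0.072796
key_factor = abs(cv1 - cv2)              # ~0.016942
print(f"key factor for ingot 001: {key_factor:.6f}")
```

The key factor computed this way then feeds the regression of step S215, described next.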
Table 2 shows the key factor corresponding to the ingot number 001 (the ingot2shown inFIG.3) obtained according to the data in Table 1.

TABLE 2

Ingot number 001 | First wafer | Second wafer
Average value | 94.42 | 111.3
Standard deviation | 5.273708 | 8.10216
Coefficient of variation | 0.055854 | 0.072796
Key factor | 0.016942

Based on the method provided above, the key factors corresponding to a plurality of ingots and the average of the bows of the wafers processed from each ingot are calculated, as shown in Table 3.

TABLE 3

Ingot number | Key factor | Average of bows
001 | 0.016942 | 15.96
002 | 0.025422 | 60.48
003 | 0.037921 | 70.13
004 | 0.029729 | 98.84
. . . | . . . | . . .

After that, in step S215, a regression equation is obtained according to a plurality of key factors and the averages of the bows.FIG.5is a curve diagram of a regression equation according to an embodiment of the disclosure. With reference toFIG.5, in this embodiment, the regression equation is, for instance, y=α+βx. The key factors and the averages of the bows obtained from Table 3 are respectively taken as the x values and the y values, whereby α and β are found. After calculation, the following is obtained: α=−2.3671, β=2322.6, and a correlation coefficient R is obtained as well, where R^2=0.869, and the regression equation is y=−2.3671+2322.6x. Note that the regression equation provided in one or more embodiments of the disclosure merely serves as an example and should not be construed as a limitation in the disclosure. After the regression equation is obtained, when a to-be-measured ingot is obtained, the corresponding key factor can be calculated by measuring the FWHM of a to-be-measured wafer corresponding to the to-be-measured ingot, and the key factor is input to the regression equation to obtain a predicted bow of a wafer processed from the to-be-measured ingot. To sum up, according to one or more embodiments of the disclosure, the measurement data of known wafers may be applied to perform training, whereby the regression equation may be obtained and may serve as a forecast model. Moreover, the wafers at the head and tail ends of the to-be-measured ingot can be used to obtain the predicted bow of the wafers processed from the to-be-measured ingot. Accordingly, before the ingot is processed, the predicted bow of the corresponding wafer is obtained by applying the regression equation, so as to predict the geometric quality of the to-be-processed ingot, thereby reducing unnecessary waste. It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the disclosure covers modifications and variations provided they fall within the scope of the following claims and their equivalents. | 9,787 |
11859966 | DETAILED DESCRIPTION The present subject matter will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the subject matter disclosed herein are shown. Indeed, the subject matter disclosed herein may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like numbers refer to like elements throughout. FIGS.1A and1Billustrate an apparatus adapted to determine at least one surface characteristic of a construction and/or paving-related material sample according to one embodiment of the subject matter disclosed herein, the apparatus being indicated generally by the numeral100. Apparatus100includes at least one sample-interacting device200and a sample holder300configured to be capable of supporting a sample400of a paving-related material or other construction material. Note that the term “paving-related material” as used herein refers to, for example, uncompacted bituminous paving mixtures, soil bases and sub-bases, loose soils and aggregates, as well as field cores and laboratory prepared specimens of compacted bituminous paving material, while the term “construction material” as used herein is more general and includes, for example, paving-related materials, Portland cement, concrete cylinders, and the like. In situ field measurements refer to obtaining the characteristic of a pavement or soil material in the field using destructive or non-destructive methods. Sample-interacting device200may use, for example, a point source, a line source, or a wave source to provide, for instance, light, sound, ultrasound, radiation, physical contact, and/or other medium for allowing at least one surface characteristic of sample400to be determined. One skilled in the art will appreciate that such a device200may be appropriately configured to use the light, sound, ultrasound, radiation (including, for example, microwave radiation or infrared radiation), physical contact and/or other medium to perform, for example, a measurement of at least one surface characteristic, such as a dimension, of sample400using, for instance, a reflectance methodology, a transmission methodology, a duration methodology, a contact methodology, or any other suitable methodology, wherein device200may include, for example, at least one corresponding and appropriate emitter/detector pair, or appropriate sensors, for measuring the at least one surface characteristic. For instance, device200may be configured to use structured light, laser range finders, or x-rays for non-contact-type measurements; linear variable differential transformers (LVDT) or other physical mechanisms for contact-type measurements; or any other suitable measuring technology such as range cameras, range imaging, confocal scanning, conoscopic holography or imaging, focal plane imaging, raster scans with lines or points. For example, an optical methodology or a photographic methodology such as, for instance, stereo-vision techniques, may be used for performing 3D profiling. Various imaging devices such as scanners or cameras may also be suitable in this regard where the appropriate determination of a surface characteristic(s), such as a dimension, may be accomplished by associated software or image processing procedure executed on a computer device600associated with sample-interacting device(s)200. 
In some instances, device200may comprise, for example, a single or multi-dimensional profiler device such as that made by, for instance, Shape Grabber, Inc. of Ottawa, Ontario, Canada or National Optics Institute of Sainte-Foy, Quebec, Canada, or INO of Canada. However, one skilled in the art will appreciate that many other sample-interacting devices may be implemented within the spirit and scope of the subject matter disclosed herein. Sample holder300is configured to hold sample400with respect to sample-interacting device200so as to allow sample-interacting device200to determine the appropriate surface characteristic(s) of sample400. Such a surface characteristic may include, for example, a dimension, a texture, a roughness, or other identifiable surface aspect of sample400, including identification and/or quantification of voids, irregularities, or other features of the sample surface. In certain situations, sample-interacting device200may be configured such that the necessary or desired surface characteristic(s) of sample400can be determined with sample400held in one position by sample holder300. However, in instances where sample400has a complex three-dimensional configuration, an appropriate determination or measurement may not be possible with sample400in a single position with respect to sample-interacting device200. Accordingly, in instances where a second determination or measurement is necessary or desirable to produce an accurate representation of, for example, the dimensional measurement(s) of sample400, sample400may be moved from a first position to a second position with respect to sample holder300for the second measurement. However, significant inaccuracies may be introduced if sample400is moved unless a common reference point with respect to sample400, by which the two measurements can be coordinated, is maintained. Further, in other instances, sample400may be irregularly shaped or, in the case of aggregates, soils, sands, or the like, configured such that it may be inconvenient or otherwise not practically possible to hold sample400with respect to sample-interacting device200, or move sample400to another position, to allow the appropriate dimension(s) of sample400to be measured. Accordingly, one advantageous aspect of the subject matter disclosed herein in this regard is the implementation of a computer analysis device600capable of executing a software package for analyzing the surface characteristic(s) of sample400determined by at least one sample-interacting device200in order to extract desired information, while overcoming some of the inaccuracies encountered in obtaining a three-dimensional representation of a sample. For example, engineering/modeling/reverse engineering software such as, for instance, ProEngineer, Matlab, Geomagic Studio, or another appropriate package being executed by computer device600, can be configured to receive the at least one surface characteristic determined by sample-interacting device200. For instance, sample-interacting device200using a point source of light may be configured to detect the behavior of the light interacting with sample400, wherein the detected light may be indicative of coordinates or distances of each of the measured points on sample400with respect to sample-interacting device200.
Accordingly, an increased number of measurements of sample400with such a point source, and the proximity of subsequent measurements to previous measurements, may directly affect the resolution of the representation of sample400obtained from that process. That is, a dense “point cloud” may provide a significantly higher resolution of the surface characteristic(s) of sample400as compared to very few point measurements distributed across the surface of sample400. However, the resolution necessary to obtain appropriate and valid results of the at least one surface characteristic of sample400is not limited hereby in any manner and one skilled in the art will appreciate that such resolution is a matter of choice associated with the desired result to be achieved. Sample-interacting device200may be configured to interact with one surface, multiple surfaces, or all surfaces of a sample. FIGS.1A and1Bfurther illustrate sample400being moved with respect to sample-interacting device200about a vertical axis defined by sample holder300, wherein such movement may be accomplished manually (by the operator physically rotating the sample400on the sample holder300) or in an automated manner such as by a motorized or mechanized system associated with and for rotating sample holder300so as to rotate sample400. The rotation, for example, could be accomplished by resting a cylindrical sample on a rolling mechanism, while spinning and rotating the sample with respect to the surface measuring device. In other instances, sample400may be stationary and sample-interacting device200moved around sample400. In still other instances, as shown inFIG.2A, a plurality of sample-interacting devices200may be implemented such that moving either sample400or sample-interacting device(s)200may not be necessary in order to determine or capture the desired surface characteristic(s) of sample400. One skilled in the art will also appreciate that, in some instances, a sample holder300may not be a necessary aspect of apparatus100. That is, in some instances, sample400may be, for example, supported by at least one sample-interacting device200, whereby at least one sample-interacting device200is configured to determine the desired surface characteristic(s) of sample400while providing support therefor. In other instances, sample-interacting device(s)200may be configured to act upon a sample400in situ and, as such, does not require a sample holder300for supporting sample400. More particularly, for example, ASTM E 965 is a standard for determining the surface texture of a roadway and involves placing a calibrated sand on the roadway and then spreading that sand out across the roadway until a dispersed condition is met. The diameter of the sand patch is then measured, whereby the area of the sand patch and the known density of the calibrated sand may be used to determine the surface roughness of the roadway. This is typically the same type of sand used in ASTM D 1556. Embodiments of the subject matter disclosed herein may be used to determine surface roughness by implementing a sample-interacting device200configured to be moved relative to the roadway so as to interact with sample400in situ, thereby obviating the need for a sample holder300per se.
The surface characteristic(s) determined by sample-interacting device200would then be transferred to computer device600to determine the nature of the surface characteristic(s) and, if desirable, at least one dimension of sample400(in this instance, the distance between sample-interacting device200and sample400can be indicative of the texture of the surface of sample400and thus an average separation distance can be determined, wherein the average separation distance may be related over an area to, for example, the volume of a void or an area characteristic of the roadway in that vicinity); a simplified sketch of such a computation follows this passage. As illustrated inFIG.2B, multiple images may be stitched together to form one complete image of sample400. In one or more alternate embodiments of the subject matter disclosed herein, as shown inFIGS.3A,3B,15, and16, sample holder300may be configured with a first portion320and a second portion340, wherein first and second portions320,340are configured to cooperate to hold or merely support sample400such that the appropriate dimension(s) or other measurement(s) can be determined by a dimension-measuring device (as one form of a sample-interacting device200). That is, in one embodiment, first portion320may be disposed at a selected position with respect to sample-interacting device200. Second portion340may then optionally engage sample400before second portion340is interfaced with first portion320in an appropriate manner. For example, first portion320may define a keyway (not shown) configured to receive a key (not shown) protruding from second portion340such that, when interfaced, the first and second portions320,340will hold sample400in a known position with respect to sample-interacting device200. In any instance, first and second portions320,340are configured so as to define a coordinate system with respect to sample-interacting device200. That is, when second portion340is interfaced with first portion320, sample400is located within a coordinate system recognized by sample-interacting device200. In other instances, first and second portions320,340may be used by an appropriate software analysis package being executed by a computer device600, as previously described, to define a coordinate system for analyzing sample400. First and second portions may rotate on several axes with respect to the interacting device200. In one example, if sample400comprises a generally cylindrical compacted field core, the second portion340of sample holder300may be configured as any appropriately shaped or designed element about the circumference of sample400. Accordingly, first portion320of sample holder300may be configured to receive second portion340such that the axis of sample400is generally horizontal. In such a configuration, second portion340may be rotated with respect to first portion320between measurements by sample-interacting device200such that the sample400is caused to rotate about its axis. In other instances, for example, where sample400comprises an aggregate, sample holder300may be configured as, for instance, one or more screens or trays380for supporting the aggregate (for example, two opposing screens380having the aggregate retained therebetween, or one surface can support the aggregate for imaging) with respect to sample-interacting device200so as to allow the appropriate dimensions or other surface characteristics of the components of the aggregate to be measured as shown, for example, inFIG.4.
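Returning to the in situ roadway example above, a grid of separation distances collected by sample-interacting device200can be reduced to texture and void-volume figures. The following is a simplified sketch assuming a planar reference surface taken as the median measured height; the choice of reference and statistics is illustrative only:

```python
import numpy as np

def texture_and_void_volume(heights_mm, cell_area_mm2):
    """Summarize a grid of device-to-surface distances measured over an area.

    `heights_mm` holds the separation distance at each grid point (larger
    values lie deeper below the device); `cell_area_mm2` is the surface
    area each grid point represents. The mean separation characterizes
    the texture, and integrating depth below a reference plane
    approximates the volume of a void."""
    reference = np.median(heights_mm)                 # nominal surface plane
    depth = np.clip(heights_mm - reference, 0, None)  # depth below the plane
    return {
        "mean_separation_mm": float(heights_mm.mean()),
        "texture_rms_mm": float((heights_mm - reference).std()),
        "void_volume_mm3": float(depth.sum() * cell_area_mm2),
    }
```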
As such, one skilled in the art will appreciate that embodiments of the subject matter disclosed herein may be useful to determine the dimensions or other surface characteristics of many different configurations of samples400and thus may be used for such purposes as, for example, determining the volume of a cylindrical compacted field core, modeling the roughness or texture of a surface, obtaining the volume of an excavated void, or gradating components of an asphalt paving mix or aggregate such as size, shape, color, or other configurations. Once a first measurement of sample400in a first position is performed by sample-interacting device200, sample400can then be moved to a second position to allow a second measurement of sample400to be performed, where such measurements may be associated with, for example, a dimension of sample400. In such a manner, a more accurate determination of the appropriate surface characteristic(s) of sample400can be made so as to enable, for example, the volume of sample400to be more closely and accurately determined. Accordingly, in one embodiment as shown inFIGS.3A and3B, first and second portions320,340of sample holder300define a vertical axis360and first and second portions320,340are configured so as to be able to rotate about axis360between measurements by sample-interacting device200.FIGS.3A and3Bfurther show sample400rotating around axis360. For example, first and second portions320,340may be configured to rotate in 90-degree increments or 180-degree increments (or any suitable degree increment or even in a continuous sweep) between measurements by sample-interacting device200, while maintaining sample400within the established coordinate system. That is, first and second portions320,340may be configured such that, for instance, a reference point is maintained on first portion320, second portion340, and/or sample400as sample400is rotated about axis360. Thus, subsequent analysis of the resulting data can use the common reference point in order to reconcile the measured surface characteristic(s) from the particular view of each measurement. Further, multiple measurements of sample400from multiple views will also provide redundant data useful for verifying accuracy of the determined surface characteristic(s) of sample400, thereby providing another significant advantage of embodiments of the subject matter disclosed herein. In some instances, sample-interacting device(s)200may be used to perform repeated measurements of sample400such that an average of those measurements is used in subsequent analyses of the data. The use of such averages may, in some instances, provide a more accurate representation of the surface characteristic of sample400as compared to a single measurement. In light of the relationship of sample-interacting device200to sample400, as shown inFIGS.3A,3B,3C,4A, and4B, other embodiments of the subject matter disclosed herein may be configured such that first and second portions320,340hold sample400stationary, while sample-interacting device200is configured to move about sample400so as to perform the appropriate measurements. In still other instances, both sample-interacting device200and sample holder300may be movable with respect to each other, or mirrors may be used to enable sample-interacting device200to interact with sample400. 
Further, other embodiments of the subject matter disclosed herein may have sample holder300configured such that second portion340is movable with respect to first portion320where, for example, first portion320may be stationarily disposed with respect to sample-interacting device200. For a sample holder300configured in such a manner, second portion340holding sample400may be movable in many different manners with respect to first portion320, as will be appreciated by one skilled in the art. In any instance, such embodiments of apparatus100are configured such that sample400is maintained in registration with the coordinate system through any movement of sample-interacting device200and/or first and/or second portions320,340of sample holder300. Alternatively, apparatus100may be provided without second portion340as illustrated inFIG.3C. In any case, multiple views and/or measurements or other determinations of the surface characteristic(s) of sample400may result in a plurality of representations of sample400from different perspectives, wherein the views and/or measurements must then be combined in order to provide coherent and useful results. Where sample400and/or sample-interacting device200must be moved, or multiple perspectives of sample400are obtained, in order to provide three-dimensional surface characteristics of sample400, the software executed by computer device600, in cooperation with sample-interacting device200, may be configured to determine a coordinate system or other frame of reference for the various measurements or determinations of the surface characteristic(s) of sample400performed by sample-interacting device200. For example, the frame of reference may be designated, for example, at least partially according to sample holder300or according to a surface aspect or feature of sample400, such as a void or other irregularity. In other instances, the frame of reference may be artificial, such as a mark or other removable (or inconsequential) surface feature added to sample400prior to exposure to sample-interacting device200. As such, once a sufficient number of source-associated measurements have been executed, the various perspectives650of sample400obtained by sample-interacting device(s)200, as shown inFIG.2B(whereFIG.2Billustrates the plurality of perspectives of the sample400captured by the corresponding plurality of sample-interacting devices200shown inFIG.2A), can be combined or “stitched together” according to the coordinate system or other frame of reference into a single three-dimensional representation or model700of sample400; a sketch of such stitching follows this passage. FIG.4Bis a schematic of an alternate sample holder300for an apparatus for determining at least one dimension of a construction material sample, similar to the sample holder depicted inFIG.4A, in which the sample holder300is translating relative to an imaging device according to one embodiment of the subject matter disclosed herein. A directional arrow is provided to signify movement of the sample holder300, such as, for example, movement of a conveyor line on which the construction material is resting. Accordingly, imaging device200can be proximal to sample holder300, in this illustrative example, a conveyor line, and interact with the sample to determine characteristics thereof, including height, aggregate size, density, color, shape, texture, or other desired properties and characteristics. One skilled in the art will thus appreciate that apparatus100may be configured in many different manners in addition to that described herein.
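Where sample400is rotated in known increments about axis360between captures, the stitching of the perspectives650into a single model700can be sketched as below. This assumes each capture is expressed in a shared frame whose z-axis coincides with the rotation axis; in practice, registration against the common reference point or feature described above would refine the alignment:

```python
import numpy as np

def stitch_views(view_points, increment_deg):
    """Merge point clouds captured at successive rotations of the sample.

    `view_points` is a list of (N, 3) arrays, each expressed in the
    detector's frame; the sample was rotated by `increment_deg` about
    the shared z-axis between captures. Rotating each cloud back by its
    accumulated angle places all measurements in a single frame."""
    merged = []
    for i, pts in enumerate(view_points):
        a = np.radians(-i * increment_deg)  # undo the sample's rotation
        rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
        merged.append(pts @ rz.T)
    return np.vstack(merged)  # the combined, "stitched" point cloud
```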
For example, apparatus100may include multiple sample-interacting or dimension-measuring devices200, each disposed to provide different perspectives of the sample400, or one or more sample-interacting devices200may each include multiple sources and/or detectors. In addition, various other mechanisms, such as mirrors, could be implemented to facilitate the determination of the desired surface characteristic(s) of sample400. Thus, the embodiments disclosed herein are provided for example only and are not intended to be limiting, restrictive, or inclusive with respect to the range of contemplated configurations of the subject matter disclosed herein. According to a further advantageous aspect of the subject matter disclosed herein, apparatus100may also be configured such that sample-interacting device200and/or computer device600is capable of determining the volume of sample400. One value often associated with the determination of the volume of sample400is the density thereof. As previously described, the general procedures heretofore implemented by recognized standards in the construction industry are often, for instance, cumbersome, inaccurate, or destructive to sample400. As such, in some instances, embodiments of the subject matter disclosed herein may also include a mass-determining device500operably engaged with sample holder300such that, as the volume of the sample400is being determined by the sample-interacting device200, the mass of the sample400can also be determined concurrently. The density of sample400can thereby be expeditiously determined with minimal handling of the sample400, as illustrated in the sketch following this passage. Such a mass-determining device500may comprise, for example, a load cell or other suitable device as will be appreciated by one skilled in the art. In still other instances, it may also be advantageous for the determination of the volume and/or the density of sample400by the apparatus100to be at least partially automated so as to reduce the subjectivity of handling by an operator. Accordingly, in such instances, apparatus100may also include a computer device600operably engaged with the sample-interacting device200, mass-determining device500, and/or sample holder300. Such a computer device600may be configured to, for instance, verify that sample400is properly placed with respect to sample holder300and/or the sample-interacting device200, coordinate the movement of sample400with the measurements performed by sample-interacting device200, determine the mass of sample400from mass-determining device500, and compute the density of sample400all in one automated procedure. Computer device600may also be configured to perform other procedures on the collected sample data that may be of further interest. For example, computer device600may be configured to compute the volume of sample400from a complex integration of a three-dimensional surface image of the sample400and/or may be configured to determine an actual volume of the sample400by determining the effect of surface voids or roughness in sample400along with boundary locations and dimensions. Computer device600may also vary in complexity depending on the computational requirements of apparatus100. For example, an image-intensive apparatus100using a plurality of sample-interacting devices200may require a significant capacity and an image-capable computer device600, while a less complex dimension-determining apparatus may require less computational capacity and, in light of such requirements, an appropriate computer device600is provided.
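With the volume obtained from the surface model and the mass obtained concurrently from mass-determining device500, the density computation itself is direct. A minimal sketch with purely illustrative numbers:

```python
def density_g_per_cm3(mass_g, volume_cm3):
    """Bulk density from the load-cell mass and the volume obtained by
    integrating the three-dimensional surface model of the sample."""
    return mass_g / volume_cm3

# e.g., a 4,800 g compacted core with a modeled volume of 2,000 cm^3
print(density_g_per_cm3(4800.0, 2000.0))  # -> 2.4 g/cm^3 (illustrative values)
```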
Thus, one skilled in the art will appreciate that embodiments of the apparatus100may be used for many other forms of sample analysis in addition to those discussed herein. Many modifications and other embodiments of the subject matter disclosed herein will come to mind to one skilled in the art to which the subject matter disclosed herein pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. For example, one skilled in the art will appreciate that the apparatus and method as disclosed and described herein, in addition to providing an alternative to the density determination methodology outlined in the applicable density standards, may also be implemented within the methodology of other higher-level standards that call, for instance, for the determination of sample density using those density standards, or for the determination of sample dimensions such as, for example, a histogram of aggregate sizes. For example, several AASHTO/ASTM standards are directed to aggregate gradation and may specify the determination of an aggregate size histogram, wherein the apparatus and method as disclosed and described herein may be implemented to make that determination. Such standards include, for instance:
AASHTO T 27 Sieve Analysis of Fine and Coarse Aggregates;
AASHTO T 30 Mechanical Analysis of Extracted Aggregate;
AASHTO MP 2 Standard Specification for SUPERPAVE Volumetric Mix Design;
AASHTO T 312 Method for Preparing and Determining the Density of HMA Specimens by Means of the SHRP Gyratory Compactor;
ASTM C 136 Sieve Analysis of Fine and Coarse Aggregates;
ASTM D 5444 Test Method for Mechanical Size Analysis of Extracted Aggregate;
ASTM D 3398 Test Method for Index of Aggregate Particle Shape and Texture;
ASTM D 2940 Specification for Graded Aggregate Material For Bases or Subbases for Highways or Airports;
ASTM D 448 Classification for Sizes of Aggregate for Road and Bridge Construction; and
ASTM D 1139 Standard Specification for Aggregate for Single and Multiple Bituminous Surface Treatments.
Note that such a list is merely exemplary of some standards for aggregates in which aggregate gradation may be specified, and is not intended to be limiting, restrictive, or inclusive with respect to such higher-level standards which may specify a dimension, volume, density, and/or other sample property determination that may be accomplished using the apparatus and method as disclosed and described herein. Accordingly, additional embodiments of the subject matter disclosed herein may be directed to such higher level methods implementing the apparatus and method as disclosed herein. Further, other additional embodiments of the subject matter disclosed herein may, for example, be used to determine the texture of a sample. Some examples of ASTM standards requiring an examination of the sample texture, wherein the apparatus and method as disclosed and described herein may also be implemented to make that determination, include:
ASTM E 965 Standard Test Method for Measuring Pavement Macro Texture Depth Using a Volumetric Technique;
ASTM E 1274 Standard Test Method for Measuring Pavement Roughness Using a Profilograph; and
ASTM E 2157 Standard Test Method for Measuring Pavement Macro Texture Properties Using the Circular Track Method.
Additionally, the following ASTM standards may be employed with the use of the disclosed subject matter contained herein:
ASTM D6432-99(2005) Standard Guide for Using the Surface Ground Penetrating Radar Method for Subsurface Investigation;
ASTM D6431-99(2010) Standard Guide for Using the Direct Current Resistivity Method for Subsurface Investigation;
ASTM D6565-00(2005) Standard Test Method for Determination of Water (Moisture) Content of Soil by the Time-Domain Reflectometry (TDR) Method;
ASTM D6639-01(2008) Standard Guide for Using the Frequency Domain Electromagnetic Method for Subsurface Investigations;
ASTM D6780-05 Standard Test Method for Water Content and Density of Soil in Place by Time Domain Reflectometry (TDR);
ASTM D6820-02(2007) Standard Guide for Use of the Time Domain Electromagnetic Method for Subsurface Investigation;
Historical Standard: ASTM D2216-98 Standard Test Method for Laboratory Determination of Water (Moisture) Content of Soil and Rock by Mass;
ASTM D4643-08 Standard Test Method for Determination of Water (Moisture) Content of Soil by Microwave Oven Heating;
ASTM D4944-04 Standard Test Method for Field Determination of Water (Moisture) Content of Soil by the Calcium Carbide Gas Pressure Tester;
ASTM D4959-07 Standard Test Method for Determination of Water (Moisture) Content of Soil By Direct Heating;
ASTM D5030-04 Standard Test Method for Density of Soil and Rock in Place by the Water Replacement Method in a Test Pit;
ASTM D5080-08 Standard Test Method for Rapid Determination of Percent Compaction;
ASTM D2167-08 Standard Test Method for Density and Unit Weight of Soil in Place by the Rubber Balloon Method;
ASTM D2974-07a Standard Test Methods for Moisture, Ash, and Organic Matter of Peat and Other Organic Soils;
ASTM D4254-00(2006)e1 Standard Test Methods for Minimum Index Density and Unit Weight of Soils and Calculation of Relative Density;
ASTM D6938-10 Standard Test Method for In-Place Density and Water Content of Soil and Soil-Aggregate by Nuclear Methods (Shallow Depth);
ASTM D425-88(2008) Standard Test Method for Centrifuge Moisture Equivalent of Soils;
ASTM D6642-01(2006) Standard Guide for Comparison of Techniques to Quantify the Soil-Water (Moisture) Flux;
ASTM D558-11 Standard Test Methods for Moisture-Density (Unit Weight) Relations of Soil-Cement Mixtures;
ASTM D 1556 Test Method for Density of Soil in Place by the Sand-Cone Method;
ASTM C127-04 Standard Test Method for Density, Relative Density (Specific Gravity), and Absorption of Coarse Aggregate;
ASTM D4791-10 Standard Test Method for Flat Particles, Elongated Particles, or Flat and Elongated Particles in Coarse Aggregate;
ASTM C29/C29M-09 Standard Test Method for Bulk Density (Unit Weight) and Voids in Aggregate;
ASTM D2940/D2940M-09 Standard Specification for Graded Aggregate Material For Bases or Subbases for Highways or Airports;
ASTM D3398-00(2006) Standard Test Method for Index of Aggregate Particle Shape and Texture;
ASTM D448-08 Standard Classification for Sizes of Aggregate for Road and Bridge Construction;
ASTM C70-06 Standard Test Method for Surface Moisture in Fine Aggregate;
ASTM D1241-07 Standard Specification for Materials for Soil Aggregate Subbase, Base, and Surface Courses;
ASTM D692/D692M-09 Standard Specification for Coarse Aggregate for Bituminous Paving Mixtures;
ASTM D3282-09 Standard Practice for Classification of Soils and Soil Aggregate Mixtures for Highway Construction Purposes;
ASTM C925-09 Standard Guide for Precision Electroformed Wet Sieve Analysis of Nonplastic Ceramic Powders; and
ASTM D6913-04(2009) Standard Test Methods for Particle Size Distribution (Gradation) of Soils Using Sieve Analysis.
An alternate embodiment of an apparatus for measuring a characteristic of a construction material is depicted inFIGS.5through8in which an apparatus810is provided. The apparatus810generally defines a material-interacting device812, which may have many of the same characteristics and capabilities of material-interacting devices described throughout this disclosure. The material-interacting device812may be carried by a frame816that may extend from a template814. The template814and frame816cooperate to carry the material-interacting device812and may be configured for translating the material-interacting device812in any desired direction through the use of a geared linkage, motor, step motor, optical, or any other desired translation method. This translation may be provided, for example, for positioning the material-interacting device812in a certain proximity or position relative to a material to be interacted with. In other embodiments, this translation may be provided, for example, for positioning the material-interacting device812among a plurality of positions in order to interact with the material among multiple positions. Alternatively, a system may be provided in which a plurality of translatable mirror assemblies are provided for capturing multiple images and interactions with the material-interacting devices disclosed herein. The template814may be provided for being positioned against a surface824of a construction material820. In this manner, the template814may provide leveling characteristics and positioning characteristics such that the material-interacting device812is in a desired position or orientation. A void822may be formed in the construction material820by, for example, excavating the construction material820to form the void822. The void822may include surface826. The void822may be an excavated hole in which a construction material sample has been excavated to determine the density or other desired characteristics thereof. The material-interacting device812is further configured to determine a characteristic of the void822, while, in one or more embodiments, the material-interacting device812may be in communication with an external device such as a computer device that is configured to determine a characteristic of the void. This characteristic may include any characteristic as described herein, and, in one or more embodiments, may include the volume of the excavated void822, the depth, width, color, surface area, texture, and/or moisture content, and combinations thereof. The material-interacting device812may be configured for being received within the void822such as depicted inFIG.7, and may also be configured for being rotationally received within the void822as depicted inFIG.8. However, the material-interacting device812is not required to be placed within the void822to calculate a characteristic thereof, and may be placed outside of the void822. In one or more embodiments, the material-interacting device812may be configured for horizontal, vertical, or rotational movement within the void822. Other methods may incorporate a plurality of laser sources, reflective surfaces, and optical scanners to scan the void while minimizing the number of sources, detectors, and carriage movement.
In one or more embodiments, the material-interacting device812may be further configured to interact with other optical devices such as mirrors, detectors, couplers, splitters, polarizers, modulators, photo-emitters, photo-detectors, fibers, waveguides, and lights in order to interact with the material. Additionally, the material-interacting device812may include multiple sensors or multiple optical devices operating at multiple wavelengths. The material-interacting device812may also employ one or more stereo vision techniques, including capturing multiple images from respective different angles relative to the construction material820. The material-interacting device812may be configured for determining a volume of the void822formed in the construction material820. The void822is formed by excavating material from the construction material820, which may be, in one embodiment, soil removed from a road bed or other ground surface. In one method, a template is anchored or fastened to the ground, which offers a guide for excavating the construction material, and allows quick attachment and release of the optical profiler for measuring the hole. The excavated material is depicted inFIG.9and represented as830. The excavated material830may be provided on the mass-determining device500, which may then determine the mass of the excavated material830. Once the mass is obtained by the mass-determining device500, which may be in communication with the material-interacting device, and the volume is obtained by the material-interacting device812, a density can be obtained. This density represents the density of, in this illustrative example, the soil forming void822before being excavated from the ground. Further testing and calculations can be performed on the excavated material830, such as determining the “wet” density, and then determining the “dry” density after the excavated material830has been dried. Alternate methods of moisture measurement may be implemented such as infrared (IR) measurements, capacitance, electromagnetic, or any other ASTM method provided herein. The advantages associated with apparatus810are readily apparent. For example, apparatus810may be portable and can therefore perform in-situ site analysis. This is important for speed and practicality. Conventional approaches utilizing the sand cone and rubber balloon methods required many measuring devices, were time consuming, and had limited measurement accuracy. Apparatus810is configured such that, operating alone, it can determine the volume of an excavated void. An apparatus for determining a characteristic of a construction material is depicted inFIG.10and is generally designated910. The apparatus910includes a material-interacting device812that is carried by a frame structure816that includes at least one translation device840. Drive beams840may be provided with a threaded, notched, or similar configuration that receives mechanical input from a device such as a motor “M” for varying the position of the material-interacting device812. Template814carries each of the drive beams840. A boom860may extend from one of the drive beams for carrying a vertically oriented drive beam. Template814is configured for being placed on the construction material820. The material-interacting device812is configured for interacting with the construction material and further configured for interacting with the void822defined in the construction material.
The material-interacting device812is further configured for movement in up to, for example, three dimensions within the void822. Optical systems and components such as couplers, splitters, and dynamic or static mirrors may be substituted for direct mechanical positioning of the relationship between the interacting device and sample. A method1100is depicted in the flow chart ofFIG.11. The method1100may generally include interacting with a construction material to determine a characteristic thereof, excavating material from the construction material to form a void, interacting with the void to determine a characteristic thereof, and determining a respective measurement of the void based upon the determined characteristics. The interaction may include, for example, forming a first image before excavation, forming a second image after excavation, the second image being that of the void, and determining a measurement of the void based upon the determined images. This measurement may be, for example, the volume of the void. Alternatively, a single image of the void can be obtained for calculating the void volume, though an image is not required. A method1200is depicted in the flow chart ofFIG.12. The method1200may generally include interacting with a construction material to determine a characteristic thereof, excavating material from the construction material to form a void, interacting with the void to determine a characteristic thereof, and determining the volume of the void based upon the desired characteristics. Further, the method1200may include determining the weight (mass) of the excavated material, and then determining the density of the construction material in-situ, as it was before excavation. This density may be found, for example, by dividing the mass by the volume of the void, as illustrated in the sketch following this passage. Determining the density of the construction material may be accomplished in any number of ways, including those depicted in the method1300ofFIG.13. Determining the density or moisture content may include determining the wet density. Determining the density may also include determining a dry density using non-nuclear moisture determination methods. Determining the density may also include determining the density volumetrically. Determining the density may also include determining the density and moisture by gravimetric methods. Determining the density may also include determining the dry density using methods by, for example, heating the soil to remove moisture. An apparatus910is illustrated inFIG.14. The apparatus910includes an imaging device912or material-interacting device912carried by a frame916. The frame916is depicted as having an arcuate shape, but may take on any appropriately configured shape. The frame916is configured for being positioned about a construction material surface, such as, for example, a road surface. A void922or other deviation may be formed in the surface. Imaging device912may be translatable from a first position (in which the imaging device912is shown in solid lines) to a second position (in which the imaging device912is shown in broken lines). The imaging device912may also have more than two positions. Alternatively, the frame916may carry multiple imaging devices912such that translation of the imaging device912is not required to obtain multiple images for use with, for example, stereographic imaging or other imaging methods described herein.
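The density determinations outlined in methods1200and1300can be sketched as follows, assuming a gravimetric water content w (mass of water per mass of dry solids) obtained by one of the moisture methods noted above; all numeric values are illustrative only:

```python
def in_situ_densities(excavated_mass_g, void_volume_cm3, moisture_frac):
    """In-place wet and dry density, per the flow of methods 1200/1300.

    `moisture_frac` is the gravimetric water content w; the dry density
    follows from wet density / (1 + w)."""
    wet = excavated_mass_g / void_volume_cm3
    dry = wet / (1.0 + moisture_frac)
    return wet, dry

wet, dry = in_situ_densities(3150.0, 1500.0, 0.05)  # illustrative numbers
print(f"wet = {wet:.2f} g/cm^3, dry = {dry:.2f} g/cm^3")  # -> 2.10 / 2.00
```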
The imaging device912is configured to determine one or more measurements to thereby determine one or more characteristics of void922or other suitable deviations using the one or more processes described herein.FIG.14shows an angle of separation between the respective imaging device912in the first and second positions of about 120 degrees; however, any appropriate angle may be incorporated for the image analysis. An apparatus that may be used in accordance with embodiments described herein is shown inFIGS.15and16and is generally designated1010. The apparatus1010may include at least one translation mechanism1040, which may be a roller as illustrated or may be any other desired mechanism capable of translating the construction material sample300. The construction material sample300defines a longitudinal axis “LA” about which the construction material sample300is rotated by the translation mechanisms1040. Each of the arrows is provided in the illustrations to depict the translation movement of the translation mechanism1040and the imparted movement of the construction material sample300in response thereto. One or more additional translation mechanisms1020may also be provided for translating the construction material sample300in a yaw, pitch, roll, or similar orientation. A housing1012may be provided for receiving the construction material sample300and housing the translation mechanisms1040as illustrated inFIG.16. As illustrated inFIG.16, a light source1050may be provided. The light source1050may be a light point, a light line, a laser source, coherent light, a wave front, or any other suitably configured device for interacting with the construction material sample300. A material-interacting device1060may be further provided. The material-interacting device1060may be provided in a fixed relationship relative to the construction material sample300. Alternatively, the material-interacting device1060may be translatable from a first position (in which the material-interacting device1060is shown in solid lines) to a second position (in which the material-interacting device1060is shown in broken lines). Alternatively, multiple material-interacting devices1060in variously selected positions may be employed. When the translation mechanism1040is actuated so that the construction material sample300is rotated, the material-interacting device1060captures multiple readings of the construction material sample300. In this manner, one or more characteristics such as density, volume, and the like as described with reference to the apparatuses, devices, and methods described herein can be determined by the material-interacting device1060. An apparatus for measuring and determining characteristics of a construction material sample according to one or more embodiments is illustrated inFIG.17and generally designated1710. The apparatus1710includes a panel1712that is translatable about a translation mechanism1714. A light source1716may be provided, and multiple light sources1716are illustrated inFIG.17. The light source1716may be a light point, a light line, a laser source, coherent light, or a wave front. A material-interacting device1720may be provided. The material-interacting device1720may be provided in a fixed relationship relative to a construction material1722provided on the panel1712.
Alternatively, the material-interacting device 1720 may be translatable from a first position (in which the material-interacting device 1720 is shown in solid lines) to a second position (in which the material-interacting device 1720 is shown in broken lines). Alternatively, multiple material-interacting devices 1720 in variously selected positions may be employed. When the translation mechanism 1714 is actuated so that the panel 1712 is rotated, the material-interacting device 1720 captures multiple readings of the construction material samples 1722. In this manner, one or more characteristics such as density, volume, shape, texture, angularity, size, and the like as described with reference to the apparatuses, devices, and methods described herein can be determined by the material-interacting device 1720. A histogram based on these values can be obtained such that an "optical sieve" is developed. A system for measuring and determining characteristics of a construction material sample according to one or more embodiments is illustrated in FIG. 18 and generally designated 1810. The system 1810 includes a conveyor-type assembly 1812. The conveyor assembly 1812 may be unidirectional, bi-directional, or configured for alternating between directional movements. The conveyor assembly 1812 may be translated by a roller wheel assembly 1814 or any other desired apparatus. A material-interacting device 1816 similar to other material-interacting devices disclosed herein may be provided in any position relative to the conveyor assembly 1812. Additionally, more than one material-interacting device 1816 may be employed. A hopper system 1820 or similar device for dispensing construction material samples 1822 onto the conveyor assembly 1812 may be provided. The construction material samples 1822 may translate with the conveyor assembly into a mixer, a cart, or a storage bin 1824 as illustrated. The material-interacting device 1816 may be provided in communication with a computing device 1826 for further manipulation of data captured by the material-interacting device 1816. The material-interacting device 1816 may determine one or more characteristics such as density, volume, height, thickness, angularity, size, shape, texture, and the like. The material-interacting device 1816 may be an optical scanning device or, alternatively, an ultrasonic device or any other device disclosed herein. It may operate in a reflection mode or a transmission mode, sometimes referred to as a pitch-and-catch mode. The material-interacting device 1816 and computing device 1826 may be operably configured for creating a histogram or other statistical compilation of the one or more determined characteristics. For example, the histogram illustrated in FIG. 19 may illustrate frequency as a function of aggregate size as determined by the material-interacting device 1816 and computing device 1826. Other characteristics may also be represented with a histogram similar to that which is illustrated in FIG. 19.
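To make the "optical sieve" idea concrete, the following sketch bins optically determined particle sizes into a sieve-style histogram; the sieve openings and the synthetic size data are illustrative assumptions, not values from the disclosure.

```python
# Illustrative "optical sieve": histogram of measured particle sizes.
import numpy as np

sieve_mm = np.array([0.075, 0.3, 0.6, 2.36, 4.75, 9.5, 19.0, 25.0])  # bin edges
sizes_mm = np.random.default_rng(0).lognormal(mean=1.0, sigma=0.8, size=500)

counts, edges = np.histogram(sizes_mm, bins=sieve_mm)
percent_per_bin = 100.0 * counts / counts.sum()
for lo, hi, pct in zip(edges[:-1], edges[1:], percent_per_bin):
    print(f"{lo:6.3f}-{hi:6.3f} mm: {pct:5.1f}%")   # percent between openings
```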
Other ASTM and AASHTO methods and standards may also be employed. Additional methods may be found in the Asphalt Institute Soils Manual MS-10 and a publication entitled "CONVENTIONAL DENSITY TESTING" printed by the North Carolina Department of Transportation, both publications of which are hereby incorporated by reference. Other methods may be found in the North Carolina Department of Transportation manual entitled "AGGREGATE BASE COURSE NUCLEAR DENSITY TESTING MANUAL" by Jim Sawyer, printed by the North Carolina Department of Transportation and published Jun. 4, 2003, the contents of which are hereby incorporated by reference. Other methods may be found in the North Carolina Department of Transportation manual entitled "CONVENTIONAL DENSITY OPERATOR'S MANUAL" by Levi Regalado, edited by Jim Sawyer, printed by the North Carolina Department of Transportation, published on Aug. 16, 2002, and revised on Oct. 11, 2004, the contents of which are hereby incorporated by reference. Additionally, methods of determining the moisture content of a sample of material excavated from a void may be employed. For example, methods of determining a moisture content are disclosed in U.S. Pat. Nos. 7,239,150, 7,569,810, and 7,820,960, the entire contents of which are hereby incorporated by reference. U.S. Pat. Nos. 7,239,150, 7,569,810, 7,581,446, and 7,820,960 disclose many methods of determining a moisture content, as well as methods for preparing soil or other material for testing, all of which are hereby incorporated by reference in their entirety. In addition to the methods of those patents, manners of determining a moisture content may include direct heating, time-domain reflectometry (TDR), capacitive measurements including swept-frequency capacitance, microwave heating, microwave impedance, calcium carbide meters known as "Speedy" meters, electromagnetic methods, magnetic resonance, and ground-penetrating radar (GPR) techniques. The following examples are illustrative of processes that may be employed with one or more apparatuses or devices disclosed herein. As used herein, the "squeeze" method is a method for obtaining an idea of how close the soil is to optimum moisture content. The squeeze method may be used for determining the optimum moisture of a soil mass and can be performed by an experienced technician with acceptable accuracy. The squeeze method may work well with cohesive soil. Any lumps and clods in the excavated soil material should be pulverized. The mass of soil should be mixed and fairly homogeneous. In the method, a handful of loose soil is taken in one hand of the operator and firmly squeezed into an elongated mass. The moisture is close to optimum if:
1. The mass exhibits cohesion. The soil should not break apart after being released from the hand after squeezing. If the soil does break apart, the user should add a small amount of water if the test calls for obtaining optimum moisture.
2. The mass remains cohesive under stress. The user throws the mass of soil 4 to 6 inches up in the air and catches the mass on descent. If the mass remains intact, the mass is close to optimum cohesiveness. If not, the user should add water as necessary to obtain optimum moisture.
3. There is coolness of the palm. The user should feel a coolness in the palm when handling the soil, but there should be no visible moisture left on the user's hand upon releasing the soil.
4. The penny print. During compaction using a mold compactor, at the end of the compaction the ram rod should be cleaned and then struck in the middle of the mold. If the imprint left by the ram rod is about 1-2 mm deep, about the thickness of a penny, then the soil is close to optimum moisture. If a full print of the ram rod cannot be seen, then the soil is too dry.
These criteria may also be met if the mass is above optimum moisture. If it is above optimum, a noticeable film of moisture will appear on the hand, also leaving some of the dirt behind as well. In this case, the soil should be slowly dried in air if optimum moisture is required.
In the following examples, the density of a soil base will be measured using methods and one or more apparatuses described herein to optically determine the volume of an excavated void in the soil, sub-base, or ground and to calculate the wet or dry density by weighing the excavated mass from the void.
Example 1
In this example, embankments and subgrades including primarily soil and not much rock or aggregate are excavated and the volume determined. In this example, the moisture content is not determined for each test site. Some regulatory agencies refer to this as the "short test," as it is a time saver that assumes the soil compacted in a mold has been brought to optimum moisture by the operator. The results are then related to the ratio of the volume of soil compacted in the mold, Vm, to the volume removed in-situ, Vs, or percent compaction = Vm/Vs. Since the water content of the soil in the mold is adjusted by the operator to be at optimum, the soil is then assumed to be at maximum density after packing in the mold. Hence a ratio of 1 means that the embankment or subgrade is at optimum density.
1) Prepare the test site by smoothing the surface;
2) Level and secure the optical template or frame on the test site;
3) Obtain a first or "flat" reading using the one or more material-interacting devices disclosed herein;
4) Dig a test hole, starting off with a spoon and continuing with an auger. Soil should be collected in a soil pan;
5) When the hole is finished, remove the loose soil particles from the hole and contain them in the pan;
6) Obtain a second reading using the material-interacting device;
7) The volume of the hole can be determined from the difference between the second and first readings with the material-interacting device. If the volume is less than 910 cm³, the hole is too small, and the user should remove additional material and repeat step 6;
8) If the hole is greater than 990 cm³, the hole is too large, and the user should move to a different location and start over;
9) Clean off excess soil from the auger and spoon and include it in the soil pan;
10) Mix the soil until it has a uniform water content;
11) Check for optimum moisture using any experienced method such as the squeeze method;
12) Dry or add water as needed;
13) Move the soil to one side of the pan and divide it into three equal layers;
14) Place the first layer into a mold and apply a compactive effort of 25 blows, checking to make sure the soil is compacting as expected assuming optimum moisture conditions;
15) Place the second layer in the mold, including any rocks that were removed from the hole, and then apply the compactive effort;
16) Place the third layer in the mold and apply the compactive effort. After the 16th blow, scrape any soil sticking to the ram rod and to the inside wall of the mold above the soil layer and apply the remaining blows;
17) Using the mold template for the material-interacting device, place the material-interacting device on the mold and obtain a reading of the volume of space above the soil in the mold;
18) The difference between the volume of the empty mold with the mold template and the soil-filled material-interacting device mold-template volume is the volume of the soil occupying the mold; and
19) Determine the percent compaction by dividing the volume of soil compacted in the mold (step 18) by the volume of the hole (step 7) times 100.
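A minimal sketch of the arithmetic behind Example 1 above and Example 2 below, assuming the optical readings and the weights are already in hand; the function names, sample values, and acceptance-window handling are illustrative only.

```python
def hole_volume(first_reading_cm3, second_reading_cm3):
    # The void volume is the difference between the post- and pre-excavation scans.
    return second_reading_cm3 - first_reading_cm3

def percent_compaction(mold_soil_volume_cm3, hole_volume_cm3):
    # Example 1 ("short test"): percent compaction = Vm/Vs x 100.
    return 100.0 * mold_soil_volume_cm3 / hole_volume_cm3

def densities(wet_weight_g, dry_weight_g, volume_cm3):
    # Example 2 ("long test") formulas as given in the text below.
    wet_density = wet_weight_g / volume_cm3
    moisture_pct = 100.0 * (wet_weight_g - dry_weight_g) / dry_weight_g
    dry_density = wet_density / (100.0 + moisture_pct) * 100.0
    return wet_density, moisture_pct, dry_density

v = hole_volume(1200.0, 2150.0)      # 950 cm^3, inside the 910-990 cm^3 window
assert 910.0 <= v <= 990.0, "hole outside the acceptable size; adjust and rescan"
print(percent_compaction(940.0, v))  # ~98.9 percent
print(densities(1800.0, 1550.0, v))  # wet density, %M, dry density
```

Example 2
Sometimes the following test is referred to as the "long test," as it requires precise moisture measurements for each hole.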
In preparation, all loose soil in a 15-inch by 15-inch square is removed from the surface of the road, and the area is brought to a smooth, flat, approximately level condition by scraping with a steel straightedge or other suitable tool. A template for the material-interacting device is secured over the area, the material-interacting device is placed on the template, and an initial pre-hole measurement of volume is obtained. The material-interacting device is removed and a hole is dug in the center of the template approximately 4 to 6 inches deep. The removed soil is placed in a container for weighing and determining the moisture content by any gravimetric, thermal, suction, instrumented, electromagnetic, microwave, or chemical method. It is important that all of the soil removed is placed in the container, as this is the mass related to the volume measurement. Once the hole is dug, the material-interacting device is placed again on the template and a new measurement of the void is obtained. The difference between the second and the first material-interacting device measurements is the volume of the hole. The volume of the hole should be no less than 780 cm³. The soil that is removed from the void is weighed and the moisture content is determined by any appropriate method. Non-nuclear methods are preferred; however, any approved method is acceptable. Once the dry weight of the soil is determined and the volume of the void is known, the dry density in-situ can be calculated.
Wet Density (mass/volume) = Wet weight / Volume
% M = (Wet wt. − Dry wt.) / Dry wt. × 100
Dry Density = Wet Density / (100 + Moisture content %) × 100
1) Level the electronic scale;
2) Verify a 2 kg weight is within 1 gram tolerance on the scale;
3) Weigh the empty mold and record;
4) Prepare the test site by smoothing the surface;
5) Level and secure the template on the test site;
6) Obtain a first or "flat" reading using the optical hole reader (material-interacting device);
7) Dig a test hole, starting off with a spoon and continuing with an auger. Soil should be collected in a soil pan;
8) When the hole is finished, remove the loose soil particles from the hole and include them in the pan;
9) Obtain a second reading using the material-interacting device;
10) The difference between the second and first readings with the material-interacting device is the volume of the hole. If the volume is less than 780 cm³, the hole is too small; remove additional material and repeat step 9;
11) Clean off excess soil from the auger and spoon and include it in the soil pan;
12) Place the soil in a drying pan and record the weight of the wet soil;
13) Mix the soil until it has a uniform water content;
14) Dry the soil. When using a burner, be sure not to overheat the soil. When using a microwave oven, follow ASTM D 4643;
15) Weigh the dry soil and record the weight;
16) Record the dry density in-situ from steps 15 and 10;
17) Remove additional soil from the hole and place it in the soil pan;
18) Break up and pulverize the soil;
19) Check for optimum moisture using the squeeze method;
20) Dry or add water to the soil as necessary, and mix for a uniform water content.
Repeat steps 18-19 until the optimum moisture content is obtained;
21) Move the soil to one side of the pan and divide it into three equal layers;
22) Place the first layer into a Proctor mold and apply a compactive effort of 25 blows; check to make sure the soil is compacting as expected assuming optimum moisture;
23) Place the second layer in the mold, including any rocks that were removed from the hole, and apply the compactive effort;
24) Place the third layer in the mold and apply the compactive effort. After the 16th blow, scrape any soil sticking to the rammer and to the inside wall of the mold above the soil layer and apply the remaining blows;
25) Scribe around the top (third) layer and then remove the mold collar;
26) The top of the third layer should be ¼ to ½ inch above the top of the mold;
27) Scrape off excess soil with the straightedge until the surface is flush with the top of the mold;
28) Weigh the mold with the soil and record the weight. Subtract out the weight of the mold;
29) Extract the soil pill from the mold;
30) Using the straightedge, split the soil pill down the middle lengthwise;
31) Obtain 300 g of soil by shaving the middle of the split pill from top to bottom;
32) Dry the 300 g of soil using a thermal method and find the water content; and
33) Obtain the dry density from steps 32 and 28 and the known volume of the mold.
Percent compaction = Dry Density of soil in-situ (step 16) divided by Dry Density of the soil compacted in the mold (step 33) × 100
Example 3
This test is used to calculate the degree of compaction of embankments and subgrades or soil bases that contain 33% aggregate or have been stabilized by an admixture of aggregate material. This method uses a steel ring 18 inches in outer diameter (OD) and 4.5 to 9 inches deep. The steel ring is placed over the area to be tested, and the material within the ring is carefully loosened with a pick and removed with a scoop. The material removed is placed in a bucket for weighing. As the material is removed, the ring is lowered to the full depth of the layer by lightly tapping the top of the ring with a wooden mallet or similar object. After all the material has been removed, the ring is removed and the volume of the void is measured using optical methods.
1) Level the electronic scale;
2) Verify a 2 kg weight is within 1 gram tolerance on the scale;
3) Tare a bucket;
4) Prepare the test site by smoothing the surface;
5) Level and secure the template on the test site;
6) Obtain a first or "flat" reading using the optical hole reader (material-interacting device);
7) Place the sampling ring on the surface to be tested within the area of the template;
8) Using a pick, loosen the material on the surface within the ring;
9) Remove the material and place it in the bucket, tapping the ring into the void as you go;
10) When the hole is finished, remove the loose soil particles from the hole and include them in the bucket;
11) Weigh the material and record;
12) Remove the ring and obtain a second reading using the material-interacting device. (Alternatively, the measurement could be done with the ring in place.) The volume can be calculated from the depth of the ring with it in place, or from the volume of the cylindrical ring with it removed;
13) The difference between the second and first readings with the material-interacting device is the volume of the void;
14) Find the density using steps 13 and 11;
15) Dump the material on the ground;
16) Quarter down the material and remix; do this twice. The purpose is to obtain a representative sample;
17) Place 1000 g of soil in a drying pan and record the weight of the wet soil;
18) Dry the soil. When using a burner, be sure not to overheat the soil. When using a microwave oven, follow ASTM D 4643;
19) Weigh the dry soil and record the weight;
20) Record the dry density in-situ from steps 19 and 13;
21) Obtain material from the quartered section and place it in a soil pan until the pan is about ⅔ full;
22) Check for optimum moisture using the "squeeze" method;
23) Dry or add water to the soil as necessary, and mix for a uniform water content. Repeat steps 22-23 until the optimum moisture content is obtained;
24) Move the soil to one side of the pan and divide it into three equal layers;
25) Place the first layer into the large mold and apply a compactive effort of 56 blows; check to make sure the soil is compacting as expected assuming optimum moisture. (Note: a 3/40 ft³, or 2123 cc, mold should be used);
26) Place the second layer in the mold, including any rocks that were removed from the hole, and apply the compactive effort;
27) Place the third layer in the mold and apply the compactive effort. After the 35th blow, scrape any soil sticking to the rammer and to the inside wall of the mold above the soil layer and apply the remaining blows;
28) Scribe around the top (third) layer and then remove the mold collar;
29) The top of the third layer should be ¼ to ½ inch above the top of the mold;
30) Scrape off excess soil with the straightedge until the surface is flush with the top of the mold;
31) Weigh the mold with the soil and record the weight. Obtain the soil weight not including the mold;
32) Extract the soil pill from the mold;
33) Using the straightedge, split the soil pill down the middle lengthwise;
34) Obtain 1000 g of soil by shaving the middle of the split pill from top to bottom;
35) Dry the soil using a thermal method and find the water content;
36) Weigh the dry soil and record; and
37) Obtain the dry density from steps 31 and 35 and the known volume of the mold.
Percent compaction = Dry Density of soil in-situ (step 20) divided by Dry Density of the soil compacted in the mold (step 37) × 100
Example 4
The following test is used to calculate the degree of compaction of embankments, subgrades, or bases having a high degree of compaction, otherwise known as coarse aggregate base course. This method uses a steel ring having an outer diameter of 18 inches and a depth of 4.5 to 9 inches. The steel ring is placed over the area to be tested, and the base course material within the ring is carefully loosened with a pick and removed with a scoop. The material removed is placed in a bucket for weighing. As the material is removed, the ring is lowered to the full depth of the layer by lightly tapping the top of the ring with a wooden mallet or similar object. After all the material has been removed, the ring is removed and the volume of the void is measured using optical methods.
1) Level the electronic scale;
2) Verify a 2 kg weight is within 1 gram tolerance on the scale;
3) Tare a bucket;
4) Prepare the test site by smoothing the surface;
5) Level and secure the template on the test site;
6) Obtain a first or "flat" reading using the optical hole reader (material-interacting device).
(Note: other, equivalent methods may not require a first reading);
7) Place the sampling ring on the surface to be tested within the area of the template;
8) Using a pick, loosen the material on the surface within the ring;
9) Remove the material and place it in the bucket, tapping the ring into the void as you go;
10) When the hole is finished, remove the loose soil particles from the hole and include them in the bucket;
11) Weigh the material minus the bucket and record;
12) Remove the ring and obtain a second reading using the material-interacting device. (Alternatively, the measurement could be done with the ring in place.) The volume can be calculated from the depth of the ring with it in place, or from the volume of the cylindrical ring with it removed;
13) The difference between the second and first readings with the material-interacting device is the volume of the void;
14) Find the wet density using steps 13 and 11;
15) Dump the material on the ground;
16) Quarter down the material and remix; do this twice. The purpose is to obtain a representative sample;
17) Place 1000 g of soil in a drying pan and record the weight of the wet soil;
18) Dry the soil. When using a burner, be sure not to overheat the soil. When using a microwave oven, follow ASTM D 4643;
19) Weigh the dry soil and record the weight; and
20) Record the dry density in-situ from steps 19 and 14.
Example 5: General Use
All of the above examples used some sort of Proctor mold for percent compaction comparisons. Note that, in general, the density of a sub-base could be determined simply by removing the soil with a tool, scanning and determining the volume of the hole, and weighing the soil and determining the density. Further determining the moisture content allows the dry density of the soil to be found.
1) Level the electronic scale;
2) Verify a 2 kg weight is within 1 gram tolerance on the scale;
3) Weigh the empty mold and record;
4) Prepare the test site by smoothing the surface;
5) Level and secure the template on the test site;
6) Obtain a first or "flat" reading using the optical hole reader (material-interacting device);
7) Dig a test hole, starting off with a spoon and continuing with an auger. Soil should be collected in a soil pan;
8) When the hole is finished, remove the loose soil particles from the hole and include them in the pan;
9) Obtain a second reading using the material-interacting device;
10) The difference between the second and first readings with the material-interacting device is the volume of the hole;
11) Clean off excess soil from the auger and spoon and include it in the soil pan;
12) Place the soil in a drying pan and record the weight of the wet soil;
13) Mix the soil until it has a uniform water content;
14) Dry the soil. When using a burner, be sure not to overheat the soil. When using a microwave oven, follow ASTM D 4643;
15) Weigh the dry soil and record the weight; and
16) Record the dry density in-situ from steps 15 and 10.
In one or more embodiments, the material-interacting device 812 may also use confocal scanning. In a confocal laser scanning microscope, a laser beam passes through a light source aperture and is then focused by an objective lens into a small (ideally diffraction-limited) focal volume within or on the surface of a specimen. In biological applications especially, the specimen may be fluorescent. Scattered and reflected laser light, as well as any fluorescent light from the illuminated spot, is then re-collected by the objective lens.
A beam splitter separates off some portion of the light into the detection apparatus, which in fluorescence confocal microscopy will also have a filter that selectively passes the fluorescent wavelengths while blocking the original excitation wavelength. After passing through a pinhole, the light intensity is detected by a photodetection device (usually a photomultiplier tube (PMT) or avalanche photodiode), transforming the light signal into an electrical one that is recorded by a computer. The detector aperture obstructs the light that is not coming from the focal point. The out-of-focus light is suppressed: most of the returning light is blocked by the pinhole, which results in sharper images than those from conventional fluorescence microscopy techniques and permits one to obtain images of planes at various depths within the sample (sets of such images are also known as z-stacks). The detected light originating from an illuminated volume element within the specimen represents one pixel in the resulting image. As the laser scans over the plane of interest, a whole image is obtained pixel-by-pixel and line-by-line, where the brightness of a resulting image pixel corresponds to the relative intensity of the detected light. The beam is scanned across the sample in the horizontal plane by using one or more (servo-controlled) oscillating mirrors. This scanning method usually has a low reaction latency, and the scan speed can be varied. Slower scans provide a better signal-to-noise ratio, resulting in better contrast and higher resolution. Information can be collected from different focal planes by raising or lowering the microscope stage or objective lens. The computer can generate a three-dimensional picture of a specimen by assembling a stack of these two-dimensional images from successive focal planes. Additionally, the material-interacting device 812 may be a range image device. The sensor device which is used for producing the range image is sometimes referred to as a range camera. Range cameras can operate according to a number of different techniques, some of which are presented here.
Stereo Triangulation
A stereo camera system can be used for determining the depth to points in the scene, for example, from the center point of the line between their focal points. In order to solve the depth measurement problem using a stereo camera system, it is necessary to first find corresponding points in the different images. Solving the correspondence problem is one of the main problems when using this type of technique. For instance, it is difficult to solve the correspondence problem for image points which lie inside regions of homogeneous intensity or color. As a consequence, range imaging based on stereo triangulation can usually produce reliable depth estimates only for a subset of all points visible in the multiple cameras. The correspondence problem is minimized in a plenoptic camera design, though depth resolution is limited by the size of the aperture, making it better suited for close-range applications. The advantage of this technique is that the measurement is more or less passive; it does not require special conditions in terms of scene illumination. The other techniques mentioned here do not have to solve the correspondence problem but are instead dependent on particular scene illumination conditions.
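For a rectified stereo pair, once a correspondence is found, depth follows directly from disparity via Z = f x B / d. The following sketch shows that relationship; the focal length, baseline, and disparity values are hypothetical, not parameters from the disclosure.

```python
# Schematic depth-from-disparity for a rectified stereo pair; illustrative only.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Z = f * B / d. A zero disparity means no usable correspondence."""
    if disparity_px <= 0:
        raise ValueError("no correspondence found (the 'correspondence problem')")
    return focal_px * baseline_m / disparity_px

# A point matched with a 40-pixel disparity, 0.1 m baseline, 800 px focal length:
print(depth_from_disparity(800.0, 0.1, 40.0))   # 2.0 m to the point
```

Sheet of Light Triangulation
If the scene is illuminated with a sheet of light, this creates a reflected line as seen from the light source.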
From any point out of the plane of the sheet, the line will typically appear as a curve, the exact shape of which depends both on the distance between the observer and the light source and on the distance between the light source and the reflected points. By observing the reflected sheet of light using a camera (often a high-resolution camera) and knowing the positions and orientations of both the camera and the light source, it is possible to determine the distances between the reflected points and the light source or camera. By moving either the light source (and normally also the camera) or the scene in front of the camera, a sequence of depth profiles of the scene can be generated. These can be represented as a 2D range image.
Structured Light-3D Scanner
By illuminating the scene with a specially designed light pattern, structured light, depth can be determined using only a single image of the reflected light. The structured light can be in the form of horizontal and vertical lines, points, or checkerboard patterns.
Time-of-Flight
The depth can also be measured using the standard time-of-flight technique, more or less similar to radar or LIDAR, where a light pulse is used instead of an RF pulse. For example, a scanning laser, such as a rotating laser head, can be used to obtain a depth profile for points which lie in the scanning plane. This approach also produces a type of range image, similar to a radar image. Time-of-flight cameras are relatively new devices that capture a whole scene in three dimensions with a dedicated image sensor and therefore have no need for moving parts.
Interferometry
By illuminating points with coherent light and measuring the phase shift of the reflected light relative to the light source, it is possible to determine depth, at least modulo the wavelength of the light. Under the assumption that the true range image is a more or less continuous function of the image coordinates, the correct depth can be obtained using a technique called phase unwrapping. In general, wavelength-scale measurements are not useful for measurements on the order of the dimensions of an excavation. Wavelength dimensional methods are concerned with objects in the near field, and centimeter-scale dimensions do not need that kind of accuracy or that many significant digits. However, if some kind of mineralogical composition or petrologic study were of interest, this might be implemented by focusing down a few centimeters and then incorporating the interferometric techniques with both far-field and near-field objectives. For example, a characteristic might be 2.546 mm + 0.5 lambda away from the reference.
Coded Aperture
Depth information may be partially or wholly inferred alongside intensity through reverse convolution of an image captured with a specially designed coded aperture pattern having a specific complex arrangement of holes through which the incoming light is either allowed through or blocked. The complex shape of the aperture creates a non-uniform blurring of the image for those parts of the scene not at the focal plane of the lens.
Since the aperture design pattern is known, correct mathematical deconvolution taking account of this can identify where and by what degree the scene has become convoluted by out-of-focus light selectively falling on the capture surface, and can reverse the process. Thus the blur-free scene may be retrieved, and the extent of blurring across the scene is related to the displacement from the focal plane, which may be used to infer the depth. Since the depth for a point is inferred from its extent of blurring, caused by the light spreading from the corresponding point in the scene arriving across the entire surface of the aperture and distorting according to this spread, this is a complex form of stereo triangulation. Each point in the image is effectively spatially sampled across the width of the aperture. In accordance with one or more embodiments, a locating and tracking device may be employed within a system utilizing an apparatus, method, or system disclosed herein. Such a system is disclosed in US Patent Publication No. 20110066398, the entire contents of which are hereby incorporated by reference. Such a system may record information such as project number, county, GPS location, date, test site name, first and second optical measurements, mold and mold collar volumes and serial numbers, weights, moisture contents, wet density, dry density, percent compaction, engineer, and inspector. A fully automated system could record the results in a spreadsheet. The mass-determining device could be in communication with a computer, and the computer in communication with the optical system. Step-by-step procedures for the operator could be displayed on a display panel in one or more embodiments. Various techniques described herein may be implemented with hardware or software or, where appropriate, with a combination of both. Thus, the methods and apparatus of the disclosed embodiments, or certain aspects or portions thereof, may take the form of program code (i.e., executable instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter. In the case of program code execution on programmable computers, the computer will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. One or more programs are preferably implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language, and combined with hardware implementations. The described methods and apparatus may also be embodied in the form of program code that is transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via any other form of transmission, wherein, when the program code is received and loaded into and executed by a machine, such as an EPROM, a gate array, a programmable logic device (PLD), a client computer, a video recorder, or the like, the machine becomes an apparatus for practicing the presently disclosed subject matter.
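As one hedged illustration of the automated record-keeping described above, the following sketch appends one test-site record to a spreadsheet-readable CSV file; the field names paraphrase the list in the text, and the file name and values are placeholders.

```python
# Illustrative only: append one density-test record to a CSV "spreadsheet".
import csv

record = {
    "project_number": "P-1234", "county": "Wake", "gps_location": "35.78,-78.64",
    "date": "2023-05-01", "test_site": "Site 7",
    "first_optical_cm3": 1200.0, "second_optical_cm3": 2150.0,
    "wet_weight_g": 1800.0, "dry_weight_g": 1550.0,
    "wet_density_g_cm3": 1800.0 / 950.0,
    "moisture_pct": 100.0 * (1800.0 - 1550.0) / 1550.0,
    "engineer": "A. Engineer", "inspector": "B. Inspector",
}
with open("density_tests.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=record.keys())
    if f.tell() == 0:            # write the header only for a new, empty file
        writer.writeheader()
    writer.writerow(record)
```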
When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates to perform the processing of the presently disclosed subject matter. Therefore, it is to be understood that the subject matter disclosed herein is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
11859967
Embodiments of the present disclosure and their advantages are best understood by referring to the detailed description that follows. It should be appreciated that like reference numerals are used to identify like elements illustrated in one or more of the figures, wherein the showings therein are for purposes of illustrating embodiments of the present disclosure and not for purposes of limiting the same.
DETAILED DESCRIPTION
To safely and effectively operate a medical instrument system, medical tools may need to be properly installed, positioned, identified, authenticated, and/or otherwise received and recognized when mounted to a medical system such as a robot-assisted medical system. Systems and methods for detecting tool presence and identity have been described in International Publication No. WO 2020/014201, filed Jul. 9, 2019, which is incorporated by reference herein in its entirety. The technology described herein may provide error detection and correction for a baseline sensor value used to determine a tool installation. FIG. 1A illustrates a tool recognition system 100 that includes a tool recognition assembly 102 (shown in a cross-sectional view) and a control system 104. In various embodiments, the control system 104 may be a component of the tool recognition assembly 102 and/or a component of a robot-assisted manipulator assembly. The tool recognition assembly 102 may include a mounting member 108 through which a passage 110 extends. A receiving member 106 such as a catheter or other cannulated device may extend through the passage 110 and into a passageway in a patient anatomy. A proximal target reader 112 is mounted near a proximal portion 114 of the mounting member 108, and a distal target reader 116 is mounted near a distal portion 118 of the mounting member 108. The mounting member 108 may be formed of a plastic, a ceramic, or another type of material that minimizes interference with the target readers 112, 116. The ends of the target readers 112, 116 are separated by a distance D1. The target readers 112, 116 may comprise an inductive sensor (e.g., an inductor or inductive coil that detects a change in inductance caused by ferromagnetic and conductive properties of a material), a capacitive sensor, a Hall effect sensor, a photogate sensor, an optical sensor, a magnetic switch, a barcode scanner, a radio frequency identification (RFID) scanner, a relative position sensor, or combinations thereof that are capable of reading one or more corresponding targets on a tool to be inserted into the receiving member 106. Any combination of different types of target readers may be implemented in the tool recognition assembly 102. In some examples, a tool recognition assembly 102 may include a single target reader or three or more target readers. In some examples, target readers with inductive sensors may detect instantaneous inductance values. The target readers 112, 116 may be in communication with the control system 104 to process data from the target readers (e.g., changes in inductance readings, changes in a magnetic field, changes in intensity of light, changes in colors of light, etc.). The control system may include at least one processor 126 and at least one memory 128. The control system may receive the data from the target readers 112, 116 periodically, at regular or irregular intervals, or continuously.
For example, the target readers 112, 116 may communicate the data to the control system responsive to a change in the data sensed by the target readers (e.g., changes in inductance, changes in resistance, changes in capacitance, changes in a magnetic field, changes in intensity of light, changes in colors of light, etc.). In another example, the data from the target readers are regularly communicated to the control system, either periodically or continuously, and the control system is tasked with determining when the data have changed. FIG. 1B illustrates the tool recognition system 100 with an elongate tool 120 extending into the receiving member 106. The tool 120 includes a target 122 having a length D2 (FIG. 1C). In this example, the length D2 may be shorter than the length D1 but may be sufficiently long that the target 122 may concurrently extend within both target readers 112, 116. The tool 120 is sized for insertion through the passage 110 and the receiving member 106. As described in greater detail below, the presence and absence of the target 122 may be sensed, detected, or otherwise recognized by the target readers 112, 116. For example, the targets may comprise a ferromagnetic material (e.g., a metal cylinder, a metallic coating), one or more apertures, a surface or material with varied optical absorption characteristics, a barcode, an RFID chip, or combinations thereof that may be sensed, detected, or otherwise recognized by a target reader. In one example, a target reader may detect the presence of a target by detecting an inductance and/or a change in inductance when the target is placed in proximity to the target reader. In some examples, the tool may include a single target 122, but in other examples the tool may include two or more targets. There may be a different number of targets than there are target readers. The tool 120 may be any of a variety of tools, including an imaging tool (e.g., a vision probe), an investigation tool (e.g., a biopsy probe), or a treatment tool (e.g., an ablation probe). The tool recognition assembly 102 may be configured to detect if the tool 120 is fully inserted into the receiving member 106. The tool 120 may be considered fully inserted, for example, when the tool 120 is inserted to such a degree as to permit the tool 120 being used within the body of a patient; inserted to such a degree that a distal end of the tool 120 is at, or within a predetermined distance of, a distal end of the receiving member 106; inserted to such a degree that the tool 120 extends to a proximal end of the receiving member 106; inserted to such a degree that the tool 120 extends through the mounting member 108; inserted to such a degree that a distal portion of the tool 120 extends a predetermined distance past a distal end of the receiving member; or combinations thereof. FIG. 1A may illustrate a configuration of the tool recognition assembly 102 and the receiving member 106 after a set-up process in which the tool recognition assembly 102 and the receiving member 106 are mounted to a platform such as a robot-assisted manipulator assembly. In this configuration, the tool recognition assembly may initiate a latching process in which a baseline value for each of the target readers 112, 116 is registered.
If, for example, the target readers include inductance sensors, the baseline value for each of the target readers may be a baseline inductance value corresponding to the inductance value when the passage 110 through the target readers is occupied by the receiving member 106 but unoccupied by the target 122, the tool 120, or other foreign objects. Thus, in this example, the baseline value for each of the target readers may correspond to the inductance value when the receiving member 106 extends through the target readers 112, 116, but the receiving member 106 is empty, with no tools, instruments, or other objects yet inserted through the receiving member. FIG. 2A illustrates a graph 200 of the measured sensor values of the target readers 112, 116 over time. The graph 200 illustrates the sensor data 202 from the proximal target reader 112 and the sensor data 204 from the distal target reader 116. A baseline value 206 is latched or registered for the proximal target reader 112, and a baseline value 208 is latched or registered for the distal target reader 116. In some examples, the sensor data may be an inductance measurement (L) measured in units of henry (H). Each of the target readers 112, 116 may include a wire coil connected to an LC (inductor/capacitor) resonator. A target reader may include an inductance-to-digital converter that measures the change in resonant frequency that is caused by a change in inductance, which in turn may be caused by a change in the permeability of the material inside the wire coil. An inductance value L(t) may depend on one or more physical parameters of the coil, including the number of turns, length, and cross section. A ratiometric approach may reduce the sensitivity to these parameters. An inductance ratio may be computed as β(t) = L(t)/LB, where L(t) is a measured inductance at time t and LB is a baseline inductance (e.g., an inductance corresponding to a known coil state). A coil state S(t) may have a binary value of "0" if the coil is empty and "1" if the coil is occupied. S(t) is determined to be "1" if β(t) is greater than an upper inductance ratio threshold (THRHI) and is determined to be "0" if β(t) is less than a lower inductance ratio threshold (THRLO). If THRLO < β(t) < THRHI, S(t) is determined to be S(t−1), which represents the coil state from the previous sample time. In some examples, THRLO may be 1.014 and THRHI may be 1.025.
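The thresholded ratio logic above amounts to a small hysteresis state machine. The following sketch replays it over a hypothetical sequence of inductance readings and adds a per-tool threshold lookup anticipating the imaging/ablation example given later; all numeric readings and names other than the two stated thresholds are illustrative assumptions.

```python
THR_LO, THR_HI = 1.014, 1.025

def next_state(L_t, L_baseline, prev_state):
    """Binary coil state S(t): 1 = occupied, 0 = empty, with hysteresis."""
    ratio = L_t / L_baseline
    if ratio > THR_HI:
        return 1
    if ratio < THR_LO:
        return 0
    return prev_state                 # inside the band: keep S(t-1)

# Upper thresholds per tool type, per the imaging/ablation example below.
TOOL_THRESHOLDS = {"imaging": 1.025, "ablation": 1.080}

def classify_tool(ratio):
    """Return the tool whose upper threshold the ratio exceeds (largest wins)."""
    matches = [t for t, thr in TOOL_THRESHOLDS.items() if ratio > thr]
    return max(matches, key=TOOL_THRESHOLDS.get, default=None)

baseline = 14.0e-6                    # henries; hypothetical latched value
readings = [14.00e-6, 14.25e-6, 14.40e-6, 14.25e-6, 14.00e-6]
state = 0
for L in readings:
    state = next_state(L, baseline, state)
    print(f"ratio {L / baseline:.4f} -> S = {state}")
print(classify_tool(1.030), classify_tool(1.100))    # imaging, ablation
```

FIG. 1B illustrates the tool 120 extending within the passage 110 with the target 122 positioned proximal of the proximal target reader 112. Thus, in this configuration, the target 122 may be undetected by the proximal target reader 112 and the distal target reader 116. With reference to FIG. 2A, the period between t0 and t1 may correspond to the configuration of FIGS. 1A and 1B, when the target 122 is outside of both the proximal target reader 112 and the distal target reader 116. The unoccupied or undetected state may correspond to a measured sensor value 202 approximately equal to the baseline sensor value 206 for the proximal target reader 112 and a measured sensor value 204 approximately equal to the baseline sensor value 208 for the distal target reader 116. FIG. 1C illustrates the tool 120 extending within the passage 110 with the target 122 positioned within the proximal target reader 112. Thus, in this configuration, the target 122 may be detected by the proximal target reader 112 but not the distal target reader 116.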
With reference to FIG. 2A, the period between t1 and t2 may correspond to the configuration of FIG. 1C, when the target 122 is within the proximal target reader 112 but not within the distal target reader 116. The occupied or detected state for the proximal target reader 112 may correspond to a measured sensor value 202 greater than the baseline sensor value 206 for the proximal target reader 112 and a measured sensor value 204 approximately equal to the baseline sensor value 208 for the distal target reader 116. FIG. 1D illustrates the tool 120 extending within the passage 110 with the target 122 positioned within both the proximal target reader 112 and the distal target reader 116. Thus, in this configuration, the target 122 may be detected by both the proximal target reader 112 and the distal target reader 116. With reference to FIG. 2A, the period between t2 and t3 may correspond to the configuration of FIG. 1D, when the target 122 is within the proximal target reader 112 and within the distal target reader 116. The occupied or detected state for the proximal target reader 112 may correspond to a measured sensor value 202 greater than the baseline sensor value 206 for the proximal target reader 112 and a measured sensor value 204 greater than the baseline sensor value 208 for the distal target reader 116. FIG. 1E illustrates the tool 120 extending within the passage 110 with the target 122 positioned within the distal target reader 116 but not within the proximal target reader 112. Thus, in this configuration, the target 122 may be detected by the distal target reader 116 but not the proximal target reader 112. With reference to FIG. 2A, the period between t3 and t4 may correspond to the configuration of FIG. 1E, when the target 122 is within the distal target reader 116 but not within the proximal target reader 112. The occupied or detected state for the distal target reader 116 may correspond to a measured sensor value 204 greater than the baseline sensor value 208 for the distal target reader 116 and a measured sensor value 202 approximately equal to the baseline sensor value 206 for the proximal target reader 112. This configuration may also correspond to a fully installed tool configuration. FIG. 2A illustrates the insertion sequence for the period t0 to tR. FIG. 2A also illustrates a retraction sequence for the period tR to t7. In some examples, the retraction sequence may have a shorter duration than the insertion sequence. The period between tR and t4 may correspond to the configuration of FIG. 1E, where the tool 120 extends within the passage 110 with the target 122 positioned within the distal target reader 116 but not within the proximal target reader 112. The period between t4 and t5 may correspond to the configuration of FIG. 1D, where the tool 120 extends within the passage 110 with the target 122 positioned within both the distal target reader 116 and the proximal target reader 112. The period between t5 and t6 may correspond to the configuration of FIG. 1C, where the tool 120 extends within the passage 110 with the target 122 positioned within the proximal target reader 112 but not the distal target reader 116. The period between t6 and t7 may correspond to the configuration of FIG. 1B, where the tool 120 extends within the passage 110 with the target 122 positioned outside of both the proximal target reader 112 and the distal target reader 116. FIG. 2B illustrates a graph 220 of the ratio β(t) of the measured sensor values to the baseline sensor values over time. The graph 220 illustrates the calculated ratio 222 for the proximal target reader 112 and the calculated ratio 224 for the distal target reader 116.
In this example, if the ratio β(t) is greater than 1 (i.e., the measured inductance is greater than the baseline inductance), the ratio may indicate the presence of the target 122 within the target reader. Accordingly, FIG. 2B illustrates that the proximal target reader 112 senses the presence of the target 122 during the period of approximately t1 to t3 and during the period of approximately t4 to t6, and FIG. 2B illustrates that the distal target reader 116 senses the presence of the target 122 during the period of approximately t2 to t5. An advantage of determining target presence or absence based on the ratio method may be that it does not require detailed knowledge of sensor characteristics, such as the number of coil turns or the length and cross-sectional area of the mounting member 108, that may be difficult to control in manufacturing. FIG. 2C illustrates a graph 230, which may be derived from the ratio data of graph 220, including a target reader state 232 for the proximal target reader 112 and a target reader state 234 for the distal target reader 116. The graph illustrates binary states for the target readers, with a "0" indicating "no target detected" and a "1" indicating "target detected." Accordingly, FIG. 2C illustrates that the proximal target reader 112 senses the target 122 during the period of approximately t1 to t3 and during the period of approximately t4 to t6, and FIG. 2C illustrates that the distal target reader 116 senses the presence of the target 122 during the period of approximately t2 to t5. FIG. 3 is a flowchart illustrating an example method 300 for determining if a tool is installed in a receiving member. For example, the method 300 may be used to determine if the tool 120 is fully installed as shown in FIG. 1E. In other examples, the tool 120 may be considered fully inserted when the tool 120 is inserted to such a degree as to permit the tool 120 being used within the body of a patient, inserted to such a degree that a distal end of the tool 120 is within a predetermined distance of a distal end of the receiving member 106, inserted to such a degree that the tool 120 extends through the mounting member 108, inserted such that the distal end of the tool 120 is flush with a distal end of the receiving member, inserted to such a degree that a distal portion of the tool 120 extends a predetermined distance past a distal end of the receiving member, or combinations thereof. At a process 302, a first baseline sensor value is determined. For example, and as shown in FIG. 1A, after a set-up process in which the tool recognition assembly 102 and the receiving member 106 are mounted to a platform such as a robot-assisted manipulator assembly, the tool recognition assembly 102 may initiate a latching process in which a baseline value for each of the target readers 112, 116 is determined. Typically, the baseline sensor value is determined when the target readers are unoccupied by the target 122, the tool 120, or other foreign objects. Sometimes an error occurs, and the latching process is performed while the tool 120 or another foreign object is in one or both of the target readers. For example, the tool 120 may be present in the receiving member 106 when the receiving member 106 is inserted into the tool recognition assembly 102, and thus the baseline sensor value after latching will erroneously reflect the presence of the tool 120. A process for correcting a baseline error that results from an erroneous set-up process is described below at FIGS. 4A and 4B. At a process 304, a tool may be received into a tool recognition assembly.
For example, and as shown in FIGS. 1B-1E, the tool 120 may be inserted into the tool recognition assembly 102. At a process 306, sensor data from one or more target readers may be compared to the baseline sensor value. For example, the sensor data 202 from the target reader 112 may be compared to the baseline value 206, and the sensor data 204 from the target reader 116 may be compared to the baseline value 208. One type of comparison is the ratio β(t) data 222 and 224 or the state data 232, 234. At a process 308, a determination may be made as to whether a target on the tool is recognized based on the comparison. For example, the ratio data 222 with values greater than approximately "1" indicates that the target 122 is detected by the target reader 112 between times t1 and t3 and between times t4 and t6, and the ratio data 224 with values greater than approximately "1" indicates that the target 122 is detected by the target reader 116 between times t2 and t5. The ratio data 222 with values of approximately "1" indicates that the target 122 is not detected by the target reader 112 between times t0 and t1, times t3 and t4, and times t6 and t7. The ratio data 224 with values of approximately "1" indicates that the target 122 is not detected by the target reader 116 between times t0 and t2 and between times t5 and t7. Determining whether a sensor value received from a target reader corresponds to a detected target by using a comparison to a baseline value may assume that the target reader is unoccupied by the target 122, the tool 120, or other foreign objects during the latching process. As described below with reference to FIGS. 4A and 4B, sometimes an error occurs, and the latching process is performed while the tool 120 or another foreign object is present in one or both target readers. Correction of such an error is further described with reference to FIG. 5. At a process 310, a determination may be made as to whether the tool is fully installed. For example, the ratio data 222 and 224 may indicate that the tool is fully installed, as shown in FIG. 1E, during the period t3 to t4. In some examples, detecting whether or not the tool 120 is fully inserted (or otherwise acceptably positioned for operation) may comprise comparing sensor data from the target readers 112, 116 to a pre-established model insertion signature. As used herein, "pre-established model insertion signatures" or "model insertion signatures" refer to insertion signatures that have been generated by a modeling software application, inputs from a user interface, measurements logged during an installation of another tool, etc., and that have been established to represent positions of a tool while being inserted into the tool recognition assembly. The tool 120 may be determined to be acceptably positioned for operation, and thus fully inserted, when a sequence of sensor data from the target readers 112, 116 matches the model insertion signature indicating a fully inserted tool, and may be determined not to be acceptably positioned for operation, and thus not fully inserted, when readings from the target readers 112, 116 do not match the model insertion signature indicating a fully inserted tool.
The sensor data from the target readers 112, 116 that correspond to the model insertion signature indicating a fully inserted tool can include various characteristics, such as a sequence of sensor data from the target readers 112, 116 (e.g., sensor data 202, 204), a sequence of ratio β(t) data for the target readers 112, 116 (e.g., the ratio data 222, 224), a sequence of status data for the target readers 112, 116 (e.g., state data 232, 234), a threshold duration of target readings by the target readers 112, 116, or combinations of data values from the target readers 112, 116. The method 300 is not limited to determining installation for a single tool or a single classification of tool. The systems and methods described herein may be used to detect installation of more than one tool or type of tool, including imaging tools, investigation tools such as biopsy tools, and/or treatment tools such as ablation tools. For example, inductance-based sensing may be adapted to support recognition of multiple tools simultaneously. As an example, an upper inductance ratio threshold THRHI of 1.025 may be associated with an imaging tool, and an upper inductance ratio threshold THRHI of 1.080 may be associated with an ablation tool. For a sensed inductance ratio of 1.030, the system may detect the presence of an imaging tool because the ratio of 1.030 exceeds the imaging tool threshold ratio of 1.025 but is substantially below (or a predetermined amount below) the ablation tool threshold ratio of 1.080. If the sensed inductance ratio is 1.100, the system may detect the presence of an ablation tool because the ratio of 1.100 exceeds the ablation tool threshold ratio of 1.080. Determining whether a sensor value received from a target reader corresponds to a detected target by comparison to a baseline value at process 308 may assume that the target reader is unoccupied by the target 122, the tool 120, or other foreign objects during the latching process. Sometimes an error occurs, and the latching process is performed while the tool 120 or another foreign object is present in one or both target readers. FIG. 4A illustrates a graph 400 of the measured sensor values of the target readers 112, 116 over time. In some examples, the sensor measurement may be an inductance measurement (L) measured in units of henry (H). The graph 400 illustrates the sensor data 402 from the proximal target reader 112 and the sensor data 404 from the distal target reader 116 as the tool 120 is inserted into and retracted from the tool recognition assembly 102. A baseline value 406 is latched or registered for the proximal target reader 112, and a baseline value 408 is latched or registered for the distal target reader 116. Without further data analysis, it may be unclear from the data 402, 404 that the latched baseline values do not correspond to unoccupied or absent readings from the target readers 112, 116. Rather, the latched baseline values correspond to occupied or present readings from the target readers 112, 116. This error may occur, for example, if the tool 120, the target 122, or foreign objects are present in the passage 110 during the set-up process in which the tool recognition assembly 102 and the receiving member 106 are mounted to a platform such as a robot-assisted manipulator assembly. The sensor data 402, 404 may have the same inductance values as the sensor data 202, 204, respectively, and thus an analysis of the sensor data 402, 404 alone may not indicate that the latched baseline values 406, 408 do not correspond to unoccupied or absent readings from the target readers 112, 116.
An analysis of the ratio β(t) values, however, may identify the latching error described above. FIG.4Billustrates a graph420of the ratio β(t) of the measured sensor value to the baseline sensor value over time as a tool120is inserted and retracted through a tool recognition assembly102. The graph420illustrates the calculated ratio422for the proximal target reader112and the calculated ratio424for the distal target reader116. In this example, the ratio β(t) is less than 1 for the period t0to t1, the period t3to t4, and the period after t6for target reader112. For target reader116, the ratio β(t) is less than 1 for the period t0to t2and the period after t5. During these periods, the measured inductance is less than the baseline inductance, which may indicate that the baseline sensor values are in error. As the tool120is inserted through the tool recognition assembly102during the period from t0to tR, the ratio does not rise above 1, which may also indicate that the baseline sensor values are in error. The recognition of the error condition may result in the reversal of the tool at time tR. Prior methods for correcting the error condition have involved removing the tool120, the tool recognition assembly102, and/or the receiving member106from the platform (e.g., a robot-assisted manipulator assembly) and repeating a set-up process. In the correction method described below atFIG.5, the baseline sensor value may be corrected by retraction and reinsertion of the tool without significant disruption to the workflow caused by removing and reattaching the tool recognition assembly. Various properties of the sensor data detected by the target readers112,116may affect the determination of whether a particular reading from the target readers112,116may contribute to a detected insertion signature. For example, the strength (i.e., whether the measured sensor value exceeds a detection threshold), duration, multiple thresholds, or a combination of strength, duration, and multiple thresholds of the readings can be used to determine when a target is detected by the target readers112,116. Additionally or alternatively, derivative properties of the signals read by the target readers112,116, such as the rate of change of the signal (e.g., slope), may be used in the determination of a detected insertion signature. When an inductive element is used as the target122, the target readers112,116can produce an inductance measurement signal that varies as the target122approaches the target reader, as the target122is proximate the target readers112,116, and as the target122moves away from the target readers112,116. An amplitude (or strength) of the inductance measurement can indicate a presence of a target122in a detection zone of the target readers112,116. The strength (i.e., amplitude threshold) of the inductance measurement, its duration, multiple thresholds, and combinations of these read by the target readers112,116can be used to determine whether the target122has been detected in the detection zone of the target readers112,116. Additionally, a slope, inductance ratios, and/or other derivatives of the inductance measurement signal can be used to indicate a presence or an absence of the target in the detection zone of a target reader112,116.
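A hedged sketch of the error signature just described: if the baseline was latched while a reader was occupied, β(t) dips below 1 when the reader empties and never rises meaningfully above 1 during insertion. The margin values here are illustrative.

def baseline_looks_erroneous(ratios, rise=1.005, dip=0.995):
    # The healthy signature rises above 1 when the target arrives; the erroneous
    # one dips below 1 (reader emptied) and never rises during the insertion.
    never_rises = all(r < rise for r in ratios)
    dips_below = any(r < dip for r in ratios)
    return never_rises and dips_below

# Baseline latched while occupied: readings fall once the tool leaves the reader.
print(baseline_looks_erroneous([1.00, 0.96, 0.96, 1.00]))  # True -> reverse at tR
# Correctly latched baseline: readings rise when the target arrives.
print(baseline_looks_erroneous([1.00, 1.04, 1.04, 1.00]))  # False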
One way to represent the target detection and non-detection, respectively, is to use a binary (e.g., ‘1’ or ‘0’) signal to indicate the presence or absence of a target in the detection zone of the respective target reader112,116as determined by the strength, duration, slope, ratios, and combinations thereof of the inductance measurement signal as well as other derivatives of the inductance measurement signal from the target readers112,116. FIG.5is a flowchart illustrating an example method500for adjusting a baseline sensor value. Method500provides a process for correcting a baseline error that may result from an error in the set-up process. Correcting the error using method500may be more efficient than decoupling the tool120, the tool recognition assembly102, and/or the receiving member106from the robot-assisted manipulator assembly and repeating the set-up process. At a process502, a first baseline value may be received. For example, after a set-up process in which the tool recognition assembly102and the receiving member106are mounted to a platform such as a robot-assisted manipulator assembly, the tool recognition assembly102may initiate a latching process in which a baseline value for the target reader116is determined. In this example, the baseline value latches when the target reader116is occupied by the target122, the tool120, or another foreign object, thus generating the erroneous baseline value.FIG.6Aillustrates a graph600of the measured sensor values of the target reader116during the period t4to t7when a downward baseline adjustment occurs. The graph600illustrates the sensor data404from the distal target reader116. The initial inductive baseline value408is latched or registered for the distal target reader116. At a process504, a downward baseline adjustment threshold is determined based on the first baseline sensor value. For example and with reference toFIG.6A, a downward baseline adjustment threshold610is an inductance threshold that may be determined by dividing the initial baseline value608by the upper inductance ratio threshold (e.g., THRHI=1.025) plus a robustness margin (e.g., a value in the range 0.005 to 0.025). Thus, in this example, the downward adjustment threshold610is computed as (the initial baseline value608)/(1.025+0.020). At a process506, monitored sensor data may be received from the tool recognition assembly. For example, the inductive data404, which may be instantaneous inductance values from the target reader116, may be received and monitored as the tool120is inserted into the tool recognition assembly102. As described above, if the tool120is inserted through the tool recognition assembly102and the ratio β(t) does not rise above 1, an error with the baseline sensor values may be indicated. The recognition of the error condition may result in the retraction of the tool beginning at time tR. In some examples, the insertion time t0to tR may be longer than the retraction time tR to t7. At a process508, the monitored sensor data may be compared to the baseline adjustment threshold for a predetermined duration. For example, the instantaneous inductive data404may begin to drop at approximately t=t5aand may cross the baseline adjustment threshold610at approximately t=t5. A monitoring window TAVGmay begin when the inductive data404drops below the baseline adjustment threshold610.
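A minimal sketch of the threshold arithmetic of process504, using the constants quoted above (THRHI=1.025 plus a 0.020 robustness margin); the baseline value is hypothetical.

THR_HI = 1.025             # upper inductance ratio threshold from the text
ROBUSTNESS_MARGIN = 0.020  # within the quoted 0.005 to 0.025 range

def downward_adjustment_threshold(initial_baseline):
    # Process504: divide the latched baseline by (THR_HI + margin).
    return initial_baseline / (THR_HI + ROBUSTNESS_MARGIN)

# Hypothetical latched baseline of 100.0 (arbitrary inductance units):
print(round(downward_adjustment_threshold(100.0), 2))  # 95.69 -- readings below this
                                                       # may trigger a downward adjustment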
In some embodiments, the monitoring window TAVGmay have a duration that is, for example, sufficiently long to establish confidence that the measured inductance is remaining below the baseline adjustment threshold610. In some examples, a monitoring window TAVGmay have a duration of between 0.5 seconds and 1.0 seconds. At a process510, if a comparison criterion is satisfied, an adjusted or second baseline sensor value may be established using the monitored sensor data. In some examples, the comparison criterion may be whether the monitored sensor data remains below the baseline adjustment threshold for the predetermined duration. If so, the adjusted baseline sensor value may be lower than the initial baseline sensor value. In some examples, the comparison criterion may be whether the average of the monitored sensor data is below the baseline adjustment threshold for the predetermined duration. If so, the adjusted baseline sensor value may be lower than the initial baseline sensor value. In some examples, the comparison criterion may be whether a ratio of the monitored sensor data to the baseline adjustment threshold is lower than 1 for the predetermined duration. If so, the adjusted baseline sensor value may be lower than the initial baseline sensor value. In some examples, the comparison criterion may be whether the monitored sensor data is greater than the baseline adjustment threshold for the predetermined duration. If so, the adjusted baseline sensor value may be greater than the initial baseline sensor value. In some examples, the adjusted baseline sensor value may be an average of the measured sensor data during the duration of the monitoring window. For example, if the measured inductance at the target reader116remains below the baseline adjustment threshold610for the duration of the monitoring window TAVG, the adjusted baseline value612may be established at time t7. The adjusted baseline value612may be established as the average inductance value over the monitoring window TAVG. The value of the adjusted baseline may be expressed as

L_{B,new} \text{ (e.g., the adjusted baseline value612)} = \frac{T_{ker}}{T_{AVG}} \sum_{t=t_{start}}^{t_{end}} L(t) = \frac{T_{ker}}{T_{AVG}} \sum_{n=n_{start}}^{n_{end}} L(n)

where T_{ker} is the kernel servo period, t_{start} and t_{end} are the time boundaries of the adjustment window, and n_{start}, n_{end} are the corresponding kernel cycles. Because the monitoring window TAVGincludes measured inductance during the period between t5and t5bwhen the inductance values are ramping down, the adjusted baseline value612may be higher than the relatively steady inductance values during the period after t5b. Accordingly, the window used for averaging the inductance values may be delayed such that the window begins after the ramp down has been completed. In this example, the monitoring window TALTfor calculating an adjusted baseline value614may begin at t5b. The adjusted baseline value614may be the average inductance value during the window TALT. In some examples, use of the monitoring window TALTmay allow the baseline adjustment threshold610to be raised slightly to allow for more prompt detection of the need to lower the baseline value. FIG.6Billustrates a graph620of the ratio β(t) of the measured sensor value to the baseline sensor value over a period during which a downward adjustment of the baseline value occurs. The graph620illustrates the calculated ratio622for the distal target reader116.
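The window-averaging criterion of processes508-510 can be sketched as follows, mirroring the discrete averaging formula above; the sample values are hypothetical.

def adjust_baseline_downward(window_samples, threshold):
    # Processes508-510: if every sample in the monitoring window stays below the
    # adjustment threshold, adopt the window average as the new baseline; this is
    # the discrete form of L_B,new = (T_ker / T_AVG) * sum of L(n) over the window.
    if window_samples and all(s < threshold for s in window_samples):
        return sum(window_samples) / len(window_samples)
    return None  # criterion not satisfied; keep the current baseline

window = [95.2, 95.0, 94.9, 95.1]                        # hypothetical readings in T_AVG
print(round(adjust_baseline_downward(window, 95.7), 2))  # 95.05 -> adjusted baseline
print(adjust_baseline_downward(window, 94.0))            # None  -> no adjustment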
In the example ofFIG.6B, the ratio β(t) is less than 1 for the period t5ato t7, indicating an error in the establishment of the baseline value and indicating that the ratio β(t) is not an accurate guide for determining whether a target is present or absent in the target reader116. At the end of the TAVGor TALTwindow, the baseline value is adjusted and, beginning at time t7, the ratio β(t) has a value of 1 or greater than 1. As adjusted, the ratio β(t) may be used to determine whether the target is present or absent in the target reader116. The method500may allow a user to adjust a baseline value by inserting and retracting the tool120and may avoid the need to repeat the set-up process, including removing and re-mounting the tool recognition assembly102and the receiving member106to the robot-assisted manipulator assembly. In some examples, after the process510establishes a new baseline value, the processes504-510may be repeated, thus determining a new baseline adjustment threshold and allowing further adjustment of the baseline value if the monitored sensor values drop below the new baseline adjustment threshold. In some examples, a downward adjustment of the baseline value may be triggered by the proximal target reader112as the tool120is inserted, when the proximal target reader112transitions from occupied to empty. A subsequent baseline adjustment may occur when the tool120is retracted and the proximal target reader112transitions from empty to occupied. In some examples, a downward adjustment of the baseline value may be triggered by the distal target reader116as the tool120is retracted and transitions from occupied to empty, but no further baseline adjustment would be triggered until the tool120is re-inserted. In some examples, the initial baseline latching may occur when one of the target readers is occupied and the other is not. For example, the initial baseline latching may occur while the distal target reader116is empty and the proximal target reader112is occupied. In this example, the distal target reader116may transition to an occupied state as expected. The proximal target reader112may appear to stay in an empty state until the baseline value is adjusted and will subsequently transition to an occupied state as the tool120is retracted. In some examples, an upward baseline adjustment may be performed in a manner similar and symmetrical to the downward adjustment described for method500. An erroneously low initial baseline value may result, for example, from transient electromagnetic noise at the time of inductance latching or from fluid contamination of the inductive coils of the target readers. While upward baseline value adjustment may improve detection of expected state transitions (e.g., if the baseline value is too low, the target readers may indicate an “occupied” state regardless of whether the target is actually present) and may improve robustness to a wider variety of instruments, it may also reduce the chance of correctly detecting that a target is present in a target reader.FIG.7illustrates a graph700of the measured sensor values702of a target reader during a period when an upward baseline adjustment occurs. As shown, an initial baseline value704may be adjusted upward to an upward adjusted baseline706if the inductance value persists above an upward adjustment threshold708for a period TAVG.
In some examples, the upward baseline adjustment threshold708is an inductance threshold that may be determined by multiplying the initial baseline value704by the upper inductance ratio threshold (e.g., THRHI=1.025) plus a margin of error (e.g., a value in the range 0.005 to 0.025). In this example, the upward adjustment threshold708is computed as (the initial baseline value704)×(1.025+0.010). In some examples, the baseline value may not be adjusted upward unless both target readers are simultaneously in an empty state. In some examples, the upward adjustment threshold may be determined as a fraction of the latched baseline value. In the case of an upward baseline adjustment, an algorithm for determining the upward baseline adjustment may continue to run even after the upward baseline adjustment has occurred. The new baseline value may be computed in the most recent time window for which the inductance has been above the baseline adjustment threshold. The baseline adjustment threshold for upward adjustment may be fixed, and the baseline adjustment threshold for downward adjustment may be changed each time the baseline value is updated. Because any of a variety of types of tools may be erroneously placed within the receiving member when the initial baseline value is latched, the method500is not limited to adjusting a baseline sensor value for a single tool or a single classification of tool. The systems and methods described herein may be used to identify tool detection errors and make corrections regardless of the tool or other unexpected objects that generate the error. For example, the inductance ratio threshold associated with each of a variety of tools or classification of tools may be used to determine an associated baseline adjustment threshold, which may, in turn, be used to detect a baseline value error and adjust the baseline value, as described.
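For symmetry, a sketch of the upward case described above; the 0.010 margin follows the example in the text, while the empty-reader gate reflects the "both readers empty" condition some examples impose.

THR_HI = 1.025
UPWARD_MARGIN = 0.010  # the margin used in the example above

def upward_adjustment_threshold(initial_baseline):
    # Upward threshold multiplies the latched baseline by (THR_HI + margin).
    return initial_baseline * (THR_HI + UPWARD_MARGIN)

def adjust_baseline_upward(window_samples, threshold, both_readers_empty=True):
    # Raise the baseline to the window average only if readings persist above the
    # threshold for the whole window and (in some examples) both readers are empty.
    if both_readers_empty and window_samples and all(s > threshold for s in window_samples):
        return sum(window_samples) / len(window_samples)
    return None

print(round(upward_adjustment_threshold(100.0), 2))              # 103.5
print(round(adjust_baseline_upward([103.8, 103.9], 103.5), 2))   # 103.85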
The target readers806,807may comprise an inductive sensor (e.g., an inductor or inductive coil that detects a change in inductance caused by ferromagnetic and conductive properties of a material), a capacitive sensor, a Hall effect sensor, a photogate sensor, an optical sensor, a magnetic switch, a barcode scanner, a radio frequency identification (RFID) scanner, a relative position sensor, or combinations thereof that are capable of reading corresponding one or more targets on a tool to be inserted into the receiving member802of the tool recognition assembly800. Any combination of different types of target readers may be implemented in the tool recognition assembly800. In this example, an exemplary tool814(e.g., tool120) may include a target816that can be read by the one or more target readers806,807on the tool recognition assembly800. The target may have a length D5. The length D5may have a predetermined relationship to the distances D3, D4. For example, the length D5may be shorter than the length D3but longer than the length D4so that the target816may concurrently extend within both target readers806,807. In some examples, the length D5may be the same as or longer than length D3. The tool814is sized for insertion into the mounting member804and receiving member802along an insertion trajectory path818. The tool814may extend through the mounting member804and receiving member802. In the embodiment ofFIG.8, the mounting member804includes two channels810and may therefore accommodate two target readers806,807, one in each channel810. In alternative embodiments, a mounting member may comprise any number of channels and may accommodate any number of target readers. In some embodiments, the mounting member may lack channels but may nevertheless accommodate any number of target readers via other coupling mechanisms. In some embodiments, there may be fewer target readers than channels, where some channels can be empty. In some embodiments, the mounting member may have a non-cylindrical shape and may be any type of bracket or mounting mechanism for mounting one or more target readers in a location proximate to the receiving member. In some embodiments, the receiving member may have an open channel or any shape for receiving and allowing longitudinal movement of a tool. In some embodiments, the mounting member (or regions of the reader mount) may be considered to be an element or elements of one or more of the target readers in that the mounting member may play a role in the detection of one or more targets on the tool. For example, the channels may be of a different composition than the rest of the mounting member and may facilitate detection of one or more targets on the tool. In the embodiment ofFIG.8, the tool814may include any number of targets positioned along the length of the tool. There may be a different number of targets than there are target readers. FIG.9illustrates the tool recognition assembly800coupled to an instrument carriage850of a robot-assisted manipulator assembly. In alternative embodiments, the tool recognition assembly800may be coupled to manual manipulators or other structures used for receiving a tool. InFIG.9, the tool recognition assembly800is coupled to the instrument carriage850proximal of an expandable support structure852that may be used to support an extended length of the receiving member802outside of the patient anatomy. For example, the tool recognition assembly800may be press fit onto a proximal mount (not shown) on the expandable support structure852.
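A small sketch of the dimensional constraint relating D3, D4, and D5 described above; the millimetre values are hypothetical.

def target_spans_both_readers(d3_outer, d4_inner, d5_target):
    # The target can occupy both detection zones at once only if it is longer
    # than the inner span D4; D5 may be shorter than, equal to, or longer than D3.
    return d5_target > d4_inner

print(target_spans_both_readers(d3_outer=30.0, d4_inner=18.0, d5_target=24.0))  # True
print(target_spans_both_readers(d3_outer=30.0, d4_inner=18.0, d5_target=12.0))  # False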
In some embodiments, a tool recognition assembly or other tool detection sensors may be located in other locations. For example, the target readers may be located on a quick connect coupling between a vision probe and a catheter or on a motor pack of the teleoperational manipulator assembly. In some embodiments, a tool recognition assembly may recognize that a tool is absent from a tool holder, thus indicating that the tool may be in another location such as the catheter. In some embodiments, the systems and methods disclosed herein may be used in a medical procedure performed with a robot-assisted medical system as described in further detail below. As shown inFIG.10, a robot-assisted medical system1000may include a manipulator assembly1002for operating a medical instrument1004in performing various procedures on a patient P positioned on a table T in a surgical environment. The medical instrument1004may correspond to the tool120, or any tool or instrument described herein. The manipulator assembly1002may be robot-assisted, non-robot assisted, or a hybrid assembly with select degrees of freedom of motion that may be motorized and/or robot-assisted and select degrees of freedom of motion that may be non-motorized and/or non-robot assisted. A master assembly1006, which may be inside or outside of the surgical environment, generally includes one or more control devices for controlling manipulator assembly1002. Manipulator assembly1002may include an instrument carriage or other support member that supports medical instrument1004and may optionally include a plurality of actuators or motors that drive inputs on medical instrument1004in response to commands from a control system1012. The actuators may optionally include drive systems that, when coupled to medical instrument1004, may advance medical instrument1004into a naturally or surgically created anatomic orifice. Other drive systems may move the distal end of the medical instrument in multiple degrees of freedom, which may include three degrees of linear motion (e.g., linear motion along the X, Y, Z Cartesian axes) and three degrees of rotational motion (e.g., rotation about the X, Y, Z Cartesian axes). Additionally, the actuators can be used to actuate an articulable end effector of medical instrument1004for grasping tissue in the jaws of a biopsy device and/or the like. Robot-assisted medical system1000also includes a display system1010for displaying an image or representation of the surgical site and medical instrument1004generated by a sensor system1008which may include an endoscopic imaging system. Display system1010and master assembly1006may be oriented so an operator O can control medical instrument1004and master assembly1006with the perception of telepresence. In some embodiments, medical instrument1004may include components for use in surgery, biopsy, ablation, illumination, irrigation, or suction. Optionally, medical instrument1004, together with sensor system1008, may be used to gather (e.g., measure or survey) a set of data points corresponding to locations within anatomic passageways of a patient, such as patient P. In some embodiments, medical instrument1004may include components of the imaging system which may include an imaging scope assembly or imaging instrument that records a concurrent or real-time image of a surgical site and provides the image to the operator O through the display system1010. In some embodiments, imaging system components may be integrally or removably coupled to medical instrument1004.
However, in some embodiments, a separate endoscope, attached to a separate manipulator assembly, may be used with medical instrument1004to image the surgical site. The imaging system may be implemented as hardware, firmware, software, or a combination thereof which interact with or are otherwise executed by one or more computer processors, which may include the processors of the control system1012. The sensor system1008may include a position/location sensor system (e.g., an electromagnetic (EM) sensor system) and/or a shape sensor system for determining the position, orientation, speed, velocity, pose, and/or shape of the medical instrument1004. Robot-assisted medical system1000may also include control system1012, which may include the control system104. Control system1012includes at least one memory and at least one computer processor for effecting control between medical instrument1004, master assembly1006, sensor system1008, and display system1010. Control system1012also includes programmed instructions (e.g., a non-transitory machine-readable medium storing the instructions) to implement a plurality of operating modes of the robot-assisted medical system including a navigation planning mode, a navigation mode, and/or a procedure mode. Control system1012also includes programmed instructions (e.g., a non-transitory machine-readable medium storing the instructions) to implement some or all of the methods described in accordance with aspects disclosed herein, including, for example, moving a mounting bracket coupled to the manipulator assembly to the connection member, processing sensor information about the mounting bracket and/or connection member, and providing adjustment signals or instructions for adjusting the mounting bracket. Control system1012may optionally further include a virtual visualization system to provide navigation assistance to operator O when controlling medical instrument1004during an image-guided surgical procedure. Virtual navigation using the virtual visualization system may be based upon reference to an acquired pre-operative or intra-operative dataset of anatomic passageways. The virtual visualization system processes images of the surgical site imaged using imaging technology such as computerized tomography (CT), magnetic resonance imaging (MRI), fluoroscopy, thermography, ultrasound, optical coherence tomography (OCT), thermal imaging, impedance imaging, laser imaging, nanotube X-ray imaging, and/or the like. In the description, specific details have been set forth describing some embodiments. Numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one skilled in the art that some embodiments may be practiced without some or all of these specific details. The specific embodiments disclosed herein are meant to be illustrative but not limiting. One skilled in the art may realize other elements that, although not specifically described here, are within the scope and the spirit of this disclosure. Elements described in detail with reference to one embodiment, implementation, or application optionally may be included, whenever practical, in other embodiments, implementations, or applications in which they are not specifically shown or described. For example, if an element is described in detail with reference to one embodiment and is not described with reference to a second embodiment, the element may nevertheless be claimed as included in the second embodiment.
Thus, to avoid unnecessary repetition in the following description, one or more elements shown and described in association with one embodiment, implementation, or application may be incorporated into other embodiments, implementations, or aspects unless specifically described otherwise, unless the one or more elements would make an embodiment or implementation non-functional, or unless two or more of the elements provide conflicting functions. The methods described herein may be illustrated as a set of operations or processes. The processes may be performed in a different order than the order shown, and one or more of the illustrated processes might not be performed in some embodiments. Additionally, one or more processes that are not expressly illustrated may be included before, after, in between, or as part of the illustrated processes. In some embodiments, one or more of the processes may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors (e.g., the processors of a control system) may cause the one or more processors to perform one or more of the processes. Any alterations and further modifications to the described devices, instruments, methods, and any further application of the principles of the present disclosure are fully contemplated as would normally occur to one skilled in the art to which the disclosure relates. In addition, dimensions provided herein are for specific examples and it is contemplated that different sizes, dimensions, and/or ratios may be utilized to implement the concepts of the present disclosure. To avoid needless descriptive repetition, one or more components or actions described in accordance with one illustrative embodiment can be used or omitted as applicable from other illustrative embodiments. For the sake of brevity, the numerous iterations of these combinations will not be described separately. For simplicity, in some instances the same reference numbers are used throughout the drawings to refer to the same or like parts. The systems and methods described herein may be suited for navigation and treatment of anatomic tissues, via natural or surgically created connected passageways, in any of a variety of anatomic systems, including the lung, colon, the intestines, the kidneys and kidney calices, the brain, the heart, the circulatory system including vasculature, and/or the like. While some embodiments are provided herein with respect to medical procedures, any reference to medical or surgical instruments and medical or surgical methods is non-limiting. For example, the instruments, systems, and methods described herein may be used for non-medical purposes including industrial uses, general robotic uses, and sensing or manipulating non-tissue work pieces. Other example applications involve cosmetic improvements, imaging of human or animal anatomy, gathering data from human or animal anatomy, and training medical or non-medical personnel. Additional example applications include use for procedures on tissue removed from human or animal anatomies (without return to a human or animal anatomy) and performing procedures on human or animal cadavers. Further, these techniques can also be used for surgical and nonsurgical medical treatment or diagnosis procedures. One or more elements in embodiments of this disclosure may be implemented in software to execute on a processor of a computer system such as control processing system. 
When implemented in software, the elements of the embodiments of this disclosure may be code segments to perform various tasks. The program or code segments can be stored in a processor readable storage medium or device that may have been downloaded by way of a computer data signal embodied in a carrier wave over a transmission medium or a communication link. The processor readable storage device may include any medium that can store information including an optical medium, semiconductor medium, and/or magnetic medium. Processor readable storage device examples include an electronic circuit; a semiconductor device, a semiconductor memory device, a read only memory (ROM), a flash memory, an erasable programmable read only memory (EPROM); a floppy diskette, a CD-ROM, an optical disk, a hard disk, or other storage device. The code segments may be downloaded via computer networks such as the Internet, Intranet, etc. Any of a wide variety of centralized or distributed data processing architectures may be employed. Programmed instructions may be implemented as a number of separate programs or subroutines, or they may be integrated into a number of other aspects of the systems described herein. In some examples, the control system may support wireless communication protocols such as Bluetooth, Infrared Data Association (IrDA), HomeRF, IEEE 802.11, Digital Enhanced Cordless Telecommunications (DECT), ultra-wideband (UWB), ZigBee, and Wireless Telemetry. Note that the processes and displays presented may not inherently be related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the operations described. The required structure for a variety of these systems will appear as elements in the claims. In addition, the embodiments of the invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein. This disclosure describes various instruments, portions of instruments, and anatomic structures in terms of their state in three-dimensional space. As used herein, the term position refers to the location of an object or a portion of an object in a three-dimensional space (e.g., three degrees of translational freedom along Cartesian x-, y-, and z-coordinates). As used herein, the term orientation refers to the rotational placement of an object or a portion of an object (e.g., in one or more degrees of rotational freedom such as roll, pitch, and/or yaw). As used herein, the term pose refers to the position of an object or a portion of an object in at least one degree of translational freedom and to the orientation of that object or portion of the object in at least one degree of rotational freedom (e.g., up to six total degrees of freedom). As used herein, the term shape refers to a set of poses, positions, or orientations measured along an object. While certain illustrative embodiments of the invention have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that the embodiments of the invention not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art. 
| 58,021 |
11859968 | DETAILED DESCRIPTION OF EMBODIMENTS This disclosure provides detailed descriptions of inventive concepts and improvements which are applicable to, but not limited to, a retractable measuring device such as a tape measure. Traditionally, a tape measure has a leader which is positioned opposite the body of the tape measure. As the leader and tape are pulled from the body, the most pertinent information regarding the measurement remains at the body/tape interface and is unavailable to the user when they are holding the leader, especially when measuring large distances away from the body. To address this issue, the inventive concepts presented herein include control circuitry residing within the body of the tape measure that has knowledge of the current measurement, is configured to support a local and remote user interface, and has the ability to interact with a clutch or locking mechanism to restrict the ability of the tape to further deploy or retract from the tape body in response to a preset condition. A tape measure is used through the majority of the figures for the purpose of illustrating the inventive concepts. The inventive concepts presented, however, may also be employed in other hardware which includes a retractable extension, such as a line, cord, or tape. Furthermore, the hardware and inventive concepts herein may be utilized as a part of a larger system, such as providing information on the length of an extension ladder. Moving to the figures,FIG.1shows the retractable measuring device as a tape measure100. The tape measure includes a body10, a section of tape12which is deployed from the body, and a traditional leader14positioned at the end of the tape opposite the body. It should be understood that the section of tape12shown inFIG.1represents a portion of a longer continuous tape that resides within the body of the tape measure and is capable of being deployed and retracted from the body. A user interface16exists on the body10of the tape measure. The user interface may include any number of inputs, such as buttons, touch screens, proximity detection or touch sensors. The user interface may also include audio, visual, or tactile outputs, such as speakers, displays or illuminated devices (e.g., LEDs), or haptic motors within the body for producing vibration, respectively. Through the described user interface, it is envisioned that several user benefits can be achieved. In the most basic embodiment, the user interface may provide a digital readout of the length of tape that has been deployed. In another embodiment, the user may program (i.e., preset) a desired length of tape to be deployed via the buttons, and then cause the tape measure to provide audio and/or visual indication when the desired length has been deployed. As an extension of the previous embodiment, the tape measure may provide audio and/or visual indication in advance of reaching the desired length, thus raising awareness that the exact measurement is approaching. In yet another embodiment, in response to a preset desired length of tape being deployed, the tape measure may latch or restrict further movement of the tape from the measuring tape body. FIGS.2and3show a constructed view and an exploded view of the tape measure, respectively. As shown inFIG.2, the tape measure body10comprises a front housing2and back housing4, and an electronic module6. The electronic module6includes a processing unit or logic device, and a user interface16.
In addition to the user interface16on the side of the unit, some embodiments may include a forward visual indicator8, such as an LED, on the front-top of the unit in close proximity to where the tape is dispensed to provide feedback on the length of tape deployed. FIG.3provides the exploded view of the tape measure, thereby allowing a view of the internal tape and reel system26. The tape and reel system includes a spool18upon which the trailing section of the tape12is wound. In one embodiment, the spool further includes rotation orientational features (such as holes, magnets, or indicia)56disposed around the perimeter of the spool. The rotation orientational features56are detected by a rotational sensor to provide information on the amount the reel has spun. The number of times the reel has spun is proportional to the amount of tape that has been deployed (i.e., length of tape or leading section). The rotational sensor is connected to the electronic module6such that the information on the amount the reel has spun is available to the processing unit. Additional mechanical features shown inFIG.3include a hub20which is shown on the inner wall of the back housing4. The spool18rotates around the hub. In some embodiments, a spring22, which increases in tension as the tape is deployed, may be employed to aid in retracting the tape12. As a method of use, the user sets a desired target length via the user interface16. As tape12is pulled from the body10, the rotational sensor54senses the rotation orientational features56passing by the sensor and sends information back to the processing unit on the electronic module6. The processing unit receives the information and computes the amount or length of tape that has been deployed. When the length of tape deployed is equivalent to the target length, an audible alert sounds through the speaker36and a visual alert is displayed through the user interface and/or the forward visual indicator. In some embodiments, dynamic alerts exist corresponding to the difference between the length of tape that has been deployed and the target length. An example of a dynamic alert may include tones that traverse in and out of phase. To say it another way, as the distance approaches the desired length, the speaker36begins emitting a periodic tone. As the distance continues to get closer, the frequency increases until a constant tone is achieved at the desired distance. As the tape then proceeds to shorten/lengthen from the target distance, the frequency decreases. FIG.4shows a cut-away view of the tape measure100to illustrate functional blocks of internal components.FIG.4is not intended to show scale, but rather to show the interconnections between components either presented previously or as alternative embodiments. Mechanical elements which are common in a retractable tape measure include the retractable tape and reel system in which a tape12is attached to a spool18. The spool18shares an axis with a concentric hub20which protrudes from a fixed position in the body assembly. In some embodiments, a spring22engages the spool18and the body assembly such that the spring tension increases as the tape12is deployed from the tape measure100. Other embodiments may include mechanical means such as a lever or handle in communication with the spool to manually retract or wind the tape. A novel method for winding and deploying the tape may include a DC motor. A mechanical braking system including a braking lever24may be present in some embodiments.
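The dynamic alert described above can be sketched as a mapping from the remaining distance to a beep period. The specific mapping and band width are illustrative choices, not taken from the source.

def beep_period_s(deployed, target, alert_band=12.0, min_period=0.1, max_period=1.0):
    # Map the remaining distance to a beep period: silent outside the alert band,
    # faster beeps as the target nears, and a constant tone (0.0) at the target.
    error = abs(target - deployed)
    if error >= alert_band:
        return None
    if error < 0.01:
        return 0.0
    return min_period + (max_period - min_period) * (error / alert_band)

for d in (75.0, 85.0, 89.0, 90.0):
    print(d, beep_period_s(d, target=90.0))
# 75.0 None | 85.0 ~0.475 | 89.0 ~0.175 | 90.0 0.0 (constant tone)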
A control circuit30includes a logic device32(e.g., microprocessor, microcontroller, programmable logic device, etc.). The logic device32communicates with various functional blocks shown inFIG.4. These functional blocks include the user interface previously presented and the internal electro-mechanical devices necessary to carry out the desired requirements. The user interface, as shown inFIG.4, includes the display34, the speaker36, and buttons38. The user interface may also include other visual indicators such as the forward visual indicator. As presented within this disclosure, elements of the user interface may reside on multiple faces of the body. In some embodiments, the logic device may communicate directly with or incorporate specific features including wireless communication (e.g., Bluetooth or wi-fi capability), GPS capability, an inertial measurement unit, and/or a magnetometer. In the case where these features are not integrated into the logic device, these features may be supported through an ASIC31or a plurality of ASICs in communication with the logic device on the control circuit30. In embodiments configured with wireless communication capability, the logic device may communicate wirelessly (e.g., Bluetooth, wi-fi, Zigbee, etc.) with external devices, for example, a smart phone or remote user interface. The wireless communication may be bi-directional to the external device. In some embodiments, the control circuit may wirelessly broadcast information at specific intervals (similar to a Bluetooth low-energy sensor or beacon) to be received by the external device. Information broadcast wirelessly may include length of tape deployed, battery charge status, compass orientation, measurement angles relative to a fixed position, or GPS positioning information. In embodiments configured with an inertial measurement unit (IMU), axis orientation information may aid in waking up the unit. Another feature that is supported by the IMU includes the ability to orient information on the user interface display such that it is easily read by the user. As an example, the text appearing on the user interface may be presented at a rotation of 0°, 90°, 180°, or 270° to provide improved readability of information to the user. In embodiments wherein the ASIC31includes a magnetometer, positional information including angular measurements relating to points of a compass (i.e., compass orientation relative to the earth's magnetic field) is available to the logic device32. The benefit of the angular measurement in combination with length is presented later in the disclosure through an example where a single user is tasked with laying out a sports field. In embodiments wherein the ASIC31includes a GPS chip or module, positional information is available to the logic device32. While for some applications this positional information may be too coarse relative to measurement requirements, it may be used for logging or evidence purposes. In other applications, and given the improvements in accuracy through technological advance, the GPS positional information may supplement the accuracy of the tape measurement. The control circuit30is powered by a battery40. In one embodiment, the battery may be replaceable by the user (e.g., removable AA, AAA, coin cell, etc.). In other embodiments, the battery40may be rechargeable via a connector port42(e.g., USB or another type of connector) configured to receive a mating plug capable of providing sufficient power to charge the battery.
In the case of a rechargeable battery, the control circuit30may also include circuitry to enable communication with the logic device for monitoring and regulating the recharging event; in other words, the logic device has knowledge of the battery charge. A key requirement of the present invention is that the logic device is aware of how much of the tape has been deployed. To say it another way, the logic device must know the measurement of the tape. Various sensors may be employed to accomplish this task. The control circuit30is shown as having sensor receiving circuitry50which is in communication with the logic device32. Non-limiting examples of sensor receiving circuitry50include analog-to-digital converters, current or voltage measurement circuitry, encoder circuitry, circuitry to support hall-effect sensors, or circuitry to send and receive optical encoder signals. Two configurations are shown inFIG.4which enable measurement, and thereby computation by the logic device, of the length of tape deployed from the body of the tape measure. The first configuration of measurement includes measuring the rotation of the spool18in relation to the hub20or body. The second configuration of measurement includes a linear measurement of the tape moving or traversing across a sensor (herein linear sensor52) as it is being deployed or retracted. In the rotational measurement configuration, one type of sensor (herein referred to as the rotational sensor54) which supports rotational measurement would be an encoder. An alternate embodiment for rotational measurement may utilize the spool18itself to act as the disk of the encoder. In this alternate embodiment, the sensor may comprise a series of holes, magnets, or visual indicia as rotation orientational features56on the spool. In such an embodiment, a corresponding sensor58may serve as the rotational sensor, such as an optical sensor for holes or visual indicia or a hall effect sensor for magnets. In the linear measurement configuration, a linear sensor52may interact directly with the tape as it traverses across the sensor. One embodiment for the linear sensor may comprise a wheel or gear which is in communication with the surface of the tape, whereby the length of the tape may be computed by the logic device32as being in proportion to the number of turns of the wheel or gear. In an alternate embodiment, the linear sensor52may be a reflective optical sensor configured to detect either indicators or markings on the surface of the tape. In yet another embodiment, the linear sensor52may be an optical sensor which detects light passing through indicator holes in the tape. In order to calibrate either the linear or rotational measurements, the logic device must know a zero-reference point. In one embodiment, this may be accomplished through the inputs of the user interface, such as a power button, a zero button, a specific sequence such as holding a button down for a preset amount of time, or detection of activity from the linear or rotational sensors (52and54). In an alternate embodiment, the zero-reference may be established by a zero-reference sensor60as shown inFIG.4. The zero-reference sensor is in communication with the logic device32. The zero-reference sensor may be a momentary switch with a button62that transitions states when in contact with the leader14(e.g., the state of the switch changes when the leader of the tape comes in contact or departs from the body of the tape measure).
Alternatively, the zero-reference sensor may be an optical sensor which interacts with a unique arrangement on the tape (such as a hole). Utilizing a combination of the aforementioned sensors, the length of tape12which has been deployed from the body may be computed by the logic device. Initially, the zero reference is established either by a reset through the user interface or by a state change of the zero-reference sensor60. Using a rotational sensor54, the length of tape deployed is equivalent to the spiral circumference of the tape on the spool multiplied by the number of rotations. The term spiral circumference is used specifically in this application to account for the circumference and diameter of the tape increasing with every wind around the spool. Likewise, it should be understood that the circumference and diameter of the tape wrapped around the spool decreases as the length of deployed tape increases. Using the linear sensor52and indicators on the tape, the length of the tape deployed is equivalent to the linear length between indicators multiplied by the number of indicators read. Using a wheeled or geared linear sensor52in contact with the tape, the length is equivalent to the rotations of the wheel or gear multiplied by the circumference of the wheel or gear. InFIG.4, a final element is an electronic braking mechanism70. The purpose of the electronic braking mechanism is to restrict movement or hold the tape so that it may not traverse further. An example of the electronic braking mechanism70may include a solenoid with a plunger or an electrically actuated clutch plate which presses against the tape. Another example of the electronic braking mechanism70may include a motor control which drives a latching mechanism. Yet another example may include actuation of an electro-magnetic coil which secures either the metallic tape or a ferrous material (e.g., a permanent magnet or metal bar) positioned opposite the coil, thereby pinching a non-metallic tape. The electronic braking mechanism may alternatively be embodied as a motor connected to the spool by a motor shaft. The motor may be a DC brush motor where the spool is held in position by providing a series of alternating polarities from the control board to essentially lock the spool from rotating forward or backward. In another embodiment, the DC brush motor may be driven in just one direction to retract the tape if the user extends past the preset target length. In another embodiment, a step motor may be used to lock the spool between poles (i.e., holding torque). As shown inFIG.5, an independent or complementary configuration is to have a display or remote user interface on the leading tip of the tape. As an example of functionality, this display can communicate with the body (e.g., Bluetooth) and display the current measurement data. Also included in the remote user interface80are UI buttons84for remote distance setting from the leading tip.FIG.5illustrates the remote user interface80positioned at the end of the tape12opposite the body. The remote user interface80provides the same functionality as the traditional leader (i.e., prevents the tape from being retracted into the body and provides a leading edge86for the measurement). The remote user interface adds further functionality by providing the user information regarding the current measurement on the secondary display82. The remote user interface may comprise elements identical to the previously presented user interface.
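The rotational computation above (spiral circumference multiplied by rotations, with the circumference shrinking as tape pays out) can be sketched by treating the wound tape as an approximately Archimedean spiral; the spool radius and tape thickness used here are hypothetical.

import math

def deployed_length_mm(rotations, full_radius_mm=30.0, tape_thickness_mm=0.15):
    # Each full rotation pays out one wrap's circumference; the effective radius
    # shrinks by one tape thickness per wrap as the spool empties.
    length, radius = 0.0, full_radius_mm
    whole = int(rotations)
    for _ in range(whole):
        length += 2.0 * math.pi * radius
        radius -= tape_thickness_mm
    length += (rotations - whole) * 2.0 * math.pi * radius  # partial final turn
    return length

print(round(deployed_length_mm(10.25), 1))  # ~1887.3 mm after 10.25 rotations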
As shown inFIG.5, the remote user interface may include a secondary display82and secondary button inputs84. Alternatively, the user interface may include a single LED or a plurality of LEDs as a visual indicator in lieu of the display. In the primary embodiment, communication between the control circuit and the remote user interface80is achieved via a wireless link. FIG.6Ashows a bottom isometric view of the remote user interface80, the body10of the tape measure configured to receive the remote user interface, and the tape12; similar elements are shown inFIG.6B. A secondary battery exists to power the remote user interface. As the primary battery exists in the body of the tape measure,FIGS.6A and6Billustrate an arrangement to charge the secondary battery from the primary battery. As shown, the remote user interface80includes a pair of charging tabs88(e.g., power and ground) which provide the differential voltage necessary to charge the secondary battery. The charging tabs88engage a pair of charging buttons90which are mounted on the body of the tape measure. The charging buttons90are connected to the primary battery inside the body of the tape measure.FIG.6Bshows a remote user interface receiving dock92which is configured to receive the remote user interface80. FIG.7provides a bottom view of the charging arrangement and the secondary battery64residing inside the remote user interface80. As indicated by arrow A, charging is enabled when the remote user interface80is docked in the body10of the tape measure and the charging tabs88contact the charging buttons90. Control circuitry may exist within the body of the tape measure to allow the logic device to further enable or disable charging of the remote user interface. FIGS.8A and8Billustrate the ability for the display34to orient towards a right-hand or left-hand readout. For the purpose of orienting information on the display, the display is purposefully shown on the top of the unit. Specifically to the drawing,FIG.8Ais shown as being in left-hand orientation with the readout being preferably readable from the back housing4facing the user;FIG.8Bis shown as being in right-hand orientation with the readout44being preferably readable from the front housing2facing the user. The orientation may be set through a user interface or in response to the orientation detected through the IMU. FIGS.8A and8Balso illustrate the ability for the tape measure display to switch between imperial unit mode and metric unit mode. FIG.9shows a flow chart of the man-machine interaction when using the tape measure. User steps are presented on the left side of the figure and the machine or tape measure steps are presented on the right side. Steps are individually numbered, and bi-directional arrows crossing the dotted line indicate the machine response. It should be understood that this is a non-limiting example, as alternative embodiments for similar operations have been presented herein. Reference to the user interface for this figure may include the user interface on the body of the tape measure, the remote user interface positioned at the distal end of the tape opposite the body, a forward visual indicator, or a wirelessly connected UI existing on an external device. Initially, at block110, the user powers on the device. The tape measure responds by powering up in block112. At block114, the user enters a target length through a user interface. The tape measure responds by storing the target length in memory of the logic device at block116.
At block118, the user begins extending the tape by positioning the tape body at a fixed position and pulling on the leader. If the device has not been zeroed by other means, the extended length measurement is cleared at block120in response to the tape being extracted. As the user continues to extend the leading edge of the tape at block122, the tape measure enters a decision loop142. Within the decision loop, the tape measure continually computes the extended length by using the linear or rotational sensors at block124. The computed extended length is compared to the target length at block128. If the computed extended length is equal to the target length, the tape measure exits the decision loop142. If the computed length is not equal to the target length, the tape measure may respond by emitting alerts such as tones or flashing visual indicators at specified intervals at block126and continues to block124to compute the extended length. As the user continues to extend the tape to a point where the user has reached the target length, block130, the tape measure may take any combination of actions. One action, block132, is for the tape measure to emit a constant audio tone or a single audio event such as a chime. Another action, block134, is for the forward visual indicator or display to turn solid or present some other visual indicia that the target measurement has been reached. In embodiments including an electromechanical brake (e.g., clamp, latch, motor, etc.), the brake may be activated as shown in block136. In block138, the user may choose a variety of next steps including powering the unit off, modifying the target length, or interacting with a user interface to release the brake. The tape measure responds appropriately by either powering off or releasing the brake to prepare for the next measurement as indicated by block140. FIG.10illustrates a non-limiting method of use where a user is required to lay out a baseball diamond without assistance. In the preferred embodiment, the user will utilize the distance and compass angular measurements provided by the magnetometer. The initial position for home plate200is chosen and the body of the tape measure is secured such that the location is fixed at home plate, but the base of the body is able to rotate about a longitudinal axis. To identify the location of first base201, the user presets a target length of90feet, establishes the initial rotation around the longitudinal axis as zero, and extends the tape in a linear direction along path206. When the extended length equals the target length, the tape measure provides an alert, sets a brake or latch, and the position of first base is established. To establish third base203, the user may then walk with the tape still extended to90feet until the user interface identifies the rotational information (angle heading216) as being 90° (i.e., the angle between vector206and210). The user may change the target length to 60′6″, causing the brake to momentarily release and allowing the tape to retract to 60′6″. Once the tape has retracted to the newly set target length, the user may transition to the point where the rotational information is an angle heading214of 45° (vector208) to establish the pitcher mound204. To establish second base202, the user once again changes the preset value to127feet and continues along path212while maintaining an angular heading of 45° until the extended length equals the target length.
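Decision loop142 fromFIG.9can be sketched as follows; the callback names and tolerance are illustrative, standing in for the sensors, speaker, indicator, and electromechanical brake described above.

def measure_to_target(target, read_extended_length, alert, engage_brake, tol=0.01):
    # Decision loop142: poll the sensor (block124), compare to the target
    # (block128), alert at intervals on the way (block126), then alert steadily
    # (blocks132/134) and set the brake (block136) once the target is reached.
    while True:
        if abs(read_extended_length() - target) <= tol:
            break
        alert(steady=False)
    alert(steady=True)
    engage_brake()

readings = iter([88.0, 89.2, 90.0])  # canned values standing in for a pulled tape
measure_to_target(90.0, lambda: next(readings),
                  alert=lambda steady: print("steady tone" if steady else "blip"),
                  engage_brake=lambda: print("brake set"))
# blip, blip, steady tone, brake set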
While length variation due to wear and tear is uncommon in the steel tape of a tape measure, it may be present in other measurement and marking devices such as nylon fabric tapes or the string of a chalk line. In this example, an established distance is known in advance and marked by a fixture, such as the distance between cross-field soccer goal posts. Before marking or measuring any additional features, the user may first extend the tape or chalk line between posts of opposing soccer goals as indicated by L1. For this example, the distance is known in advance to be 90 meters. The user may then enter calibration mode and enter the known distance into the unit via the user interface. The logic device is then able to interpolate and thereby recalibrate to account for any stretch or other non-conforming properties of the tape. | 25,378
11859969 | DESCRIPTION OF THE EMBODIMENTS Hereinafter, embodiments of a measurement device according to the presently disclosed subject matter are explained with reference to the attached drawings. First Embodiment (Measurement Device) First, a configuration of the measurement device according to the first embodiment of the presently disclosed subject matter is explained with reference toFIG.1.FIG.1is a diagram illustrating the measurement device according to the first embodiment of the presently disclosed subject matter. In the following explanation, a three-dimensional orthogonal coordinate system for which an XY plane is a horizontal plane and a Z direction is a vertical direction (perpendicular direction) is used. A measurement device10is a device for measuring a shape, roughness, a contour or the like of a surface of an object W to be measured. The measurement device10is attached to a column (not illustrated) and is made movable in the XYZ directions relative to the column by an actuator (not illustrated) provided on the column. The column to which the measurement device10is attached is fixed to a table (not illustrated) on which the object W to be measured is to be mounted. As illustrated inFIG.1, the measurement device10includes a probe part14, an arm part16, a swing shaft20, a scale22, a swing shaft fixing part24and a scale head26. Here, illustrations of the exterior (casing or the like) of the measurement device10are omitted. The probe part14is fixed to the arm part16so that the two form a roughly straight line. The probe part14and the arm part16are attached so as to integrally swing around the swing shaft20fixed to the swing shaft fixing part24. The attaching angle of the swing shaft20to the column of the measurement device10is adjusted so that the swing shaft20is roughly parallel to the XY plane. Hereinafter, the probe part14and the arm part16are referred to as a swing part18. Here, the configuration of the swing part18is not limited to the roughly straight example illustrated inFIG.1. For example, the probe part14or the arm part16may have an L-shaped bend part, and the probe part14and the arm part16may be attached so as to be roughly parallel. On a distal end of the probe part14, a probe12is provided. The probe12extends in a lower direction (−Z direction) in the figure. When the probe12is brought into contact with the surface of the object W to be measured, which is mounted on the table, with a predetermined pressure, the swing part18swings around the swing shaft20according to the height and ruggedness of the surface of the object W to be measured at the contact position. Note that the configuration of the probe part14is not limited to the example illustrated inFIG.1. For example, the configuration may be a T-shaped stylus for which a probe is provided on the probe part14in an up-down direction in the figure, or an L-shaped stylus for which a projection amount of the probe in a lower direction in the figure is longer than that in the example illustrated inFIG.1. To a scale attaching position16B on a proximal end part side of the arm part16, the scale22is attached, and the scale22is displaced according to swinging of the swing part18. The arm part16is a member which connects the swing center20C of the swing shaft20and the scale head26(which defines a distance between the swing center20C of the swing shaft20and the scale head26).
The scale22is a circular arc scale (angle scale) formed in a circular arc shape along a swing direction of the arm part16, and scale markers indicating a rotation angle (corresponding to a scale head detection angle ϕ inFIG.2) of the scale22are formed along a circular arc direction of the scale22. The scale22is attached so that a center (zero point) of the scale markers of the scale22coincides with a scale head read point (read position) to be read by the scale head26in the case where the swing part18is horizontal (hereinafter referred to as a reference position). The scale head26is a device which reads displacement of the scale22according to the swinging of the swing part18. While the kind of the scale head26is not particularly limited, as the scale head26, for example, a photoelectric sensor or a non-contact type sensor including an imaging element for reading the scale markers may be used. In the present embodiment, materials of the individual members are selected so as to satisfy a condition of β=α+γ, where the thermal expansion coefficients (linear thermal expansion coefficients) of the probe part14, the arm part16and the scale22are α, β and γ respectively (details are to be described later). To the measurement device10, a control device50is connected, and the displacement of the scale22read by the scale head26is outputted to the control device50. The control device50controls the actuator provided on the column, and acquires a detection signal for the displacement at each position on the surface of the object W to be measured while relatively moving the object W to be measured and the probe12of the measurement device10. Thus, the shape, roughness, contour or the like of the surface of the object W to be measured can be measured. As illustrated inFIG.1, the control device50includes a controller52, an input unit54and a display56. As the control device50, for example, a personal computer or a workstation may be used. The controller52includes a CPU (Central Processing Unit) for controlling the individual units of the control device50, a memory (for example, a ROM (Read Only Memory)) where a control program for the control device50or the like is stored, and a storage (for example, an HDD (Hard Disk Drive)) where various kinds of data are stored. The controller52outputs control signals for controlling the individual units of the control device50according to operation input from the input unit54, and outputs control signals for controlling the measurement device10and control signals for controlling the actuator or the like for moving the measurement device10. The input unit54is a device for receiving the operation input from an operator, and includes a keyboard, a mouse and a touch panel, for example. The display56is a device for displaying images, and is an LCD (Liquid Crystal Display), for example. The display56displays, for example, a GUI (Graphical User Interface) for operations of the control device50, the measurement device10and the actuator, and measurement results of the shape, roughness, contour or the like of the surface of the object W to be measured. (Influence Exerted on Measurement Result by Ambient Temperature) (Case where thermal expansion coefficients of arm part16and scale22are equal (β=γ)) Next, the configuration for suppressing the influence exerted on the measurement result by the ambient temperature is explained.
First, the case where the thermal expansion coefficients of the arm part16and the scale22are equal (β=γ), that is, the example of not satisfying the condition β=α+γ of the present embodiment, is explained with reference toFIG.2. FIG.2is a diagram illustrating the case where the thermal expansion coefficients of the arm part16and the scale22are equal (β=γ). InFIG.2, the movements of the individual parts of the measurement device10are simplified and illustrated. Portion (a) ofFIG.2illustrates a state where an axis AX of the swing part18is horizontal (reference position θ=0), and portions (b) and (c) ofFIG.2illustrate the state where the swing part18is inclined by an angle θ from the reference position. Portion (c) ofFIG.2illustrates the state where the arm part16and the swing part18of portion (b) ofFIG.2are thermally expanded. When an ambient temperature T is a reference temperature T0, a distance L from a distal end part14E (position corresponding to a distal end position of the probe12in contact with the surface of the object W to be measured) of the probe part14to the swing center20C of the swing part18is defined as L0, and a distance M from the swing center20C of the swing part18to the scale attaching position16B of the arm part16is defined as M0. As illustrated in portion (b) ofFIG.2, in the case where the ambient temperature T is the reference temperature T0(the case where there is no thermal expansion), when the swing part18is inclined by the angle θ from the reference position and the probe part14, the arm part16and the scale22move to the positions of signs14R,16R and22R respectively, the scale head detection angle ϕ is equal to the rotation angle θ from the reference position of the arm part16. In this case, displacement x1of the distal end part14E of the probe part14is expressed by an expression (1) below. x1=L0·sin θ=L0·sin ϕ (1) When the expression (1) is generalized without considering the thermal expansion, a computation expression of displacement xFof the distal end part14E of the probe part14is expressed by an expression (2) below. xF=L0·sin ϕ (2) As illustrated in portion (c) ofFIG.2, when the ambient temperature T changes to T=T0+ΔT, the probe part14, the arm part16and the scale22are designated by signs14RE,16RE and22RE respectively due to the thermal expansion. In this case, the distance L from the distal end part14E to the swing center20C of the swing part18changes as in an expression (3) below. L=L0(1+αΔT) (3) At this time, the actual displacement xTof the distal end part14E of the probe part14is expressed by an expression (4) below. xT=L·sin θ=L0·sin θ(1+αΔT) (4) From the expression (2) and the expression (4), the error xerrbetween the true value xTand the calculated value xFof the displacement of the distal end part14E of the probe part14due to the change of the ambient temperature T to T=T0+ΔT is expressed by an expression (5) below. xerr=xT−xF=L0·αΔT·sin θ (5) (Case where thermal expansion coefficients of arm part16and scale22are different (β≠γ)) Next, the influence of the thermal expansion in the case where the thermal expansion coefficients of the arm part16and the scale22are different (β≠γ) is explained with reference toFIG.3.FIG.3is a diagram for explaining the influence of the thermal expansion in the case where the thermal expansion coefficients of the arm part16and the scale22are different (β≠γ).
As illustrated inFIG.3, the thermal expansion coefficient β of the arm part16and the thermal expansion coefficient γ of the scale22are different (β≠γ) and the scale22is attached to a proximal end part of the arm part16. Therefore, an angle reference center22C to be a reference of the angle of the scale22is shifted from the swing center20C. That is, ϕ≠θ. When the thermal expansion is taken into consideration, the distance (≈the length of the arm part16) M from the swing center20C of the swing part18to the scale attaching position16B of the arm part16is expressed by an expression (6) below. M=M0(1+βΔT) (6) On the other hand, the position of the scale22is expanded by the thermal expansion coefficient γ of the scale22with the attaching position of the scale22as the reference. Thus, a distance R from the scale attaching position16B of the arm part16to the angle reference center22C is expressed by an expression (7) below. R=M0(1+γΔT) (7) When a distance between the swing center20C and the angle reference center22C is ΔM, an expression (8) below is obtained from the expression (6) and the expression (7). ΔM=M−R=M0(β−γ)ΔT (8) As illustrated inFIG.3, when an angle ρ is defined, the angles θ, ϕ and ρ satisfy a relation of ϕ=θ+ρ. Since M1=M−ΔM·cos θ, an expression (9) below is obtained. tan ρ=ΔM·sin θ/M1=ΔM·sin θ/(M−ΔM·cos θ) (9) When approximation for which ρ and θ are minute angles is used, an expression (10) below is obtained. ρ≈ΔM·sin θ/M0=(β−γ)ΔT·sin θ (10) When the computation expression (2) for the displacement xFof the distal end part14E of the probe part14is transformed using the expression (10), it is transformed as follows. xF=L0·sin ϕ=L0·sin(θ+ρ)=L0(sin θ·cos ρ+cos θ·sin ρ) When the approximation for which ρ is a minute angle is used, an expression (11) below is obtained. xF≈L0(sin θ+ρ·cos θ)≈L0·sin θ{1+(β−γ)ΔT·cos θ} (11) On the other hand, since the actual displacement xTis obtained by the expression (4), the error xerris expressed by an expression (12) below. xerr=xT−xF=L0·sin θ(1+αΔT)−L0·sin θ{1+(β−γ)ΔT·cos θ}=L0ΔT·sin θ{α−(β−γ)cos θ} (12) Here, when the condition of β=α+γ is satisfied, an expression (13) below is obtained. xerr=L0ΔTα·sin θ{1−cos θ} (13) Thus, while the error xerrin the case of not satisfying the condition β=α+γ of the present embodiment (expression (5), the case of β=γ) is xerr=L0·αΔT·sin θ, the error xerrin the case of satisfying the condition described above is xerr=L0ΔTα·sin θ{1−cos θ}. Generally, the detection range in the measurement device10is near θ=0°, where (1−cos θ)<<1. Thus, by selecting the materials of the individual members so as to satisfy the condition of β=α+γ, the error xerrbetween the true value xTand the calculated value xFof the displacement of the distal end part14E of the probe part14can be substantially reduced. Accordingly, the influence exerted on the measurement result of the measurement device10by the ambient temperature T can be suppressed. EXAMPLE In the case of using carbon fiber (CFRP: Carbon Fiber Reinforced Plastics) as the material of the probe part14, iron as the material of the arm part16and glass as the material of the scale22, the thermal expansion coefficients α, β and γ are α=3.6×10⁻⁶, β=12.1×10⁻⁶ and γ=8.5×10⁻⁶. With this combination of materials, the condition of β=α+γ can be satisfied.
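As a numerical check of expressions (5) and (13), the following Python sketch uses the coefficient values from the EXAMPLE above; L0, ΔT and θ are assumed sample values, not values given in the text:

```python
import math

# Thermal expansion coefficients from the EXAMPLE (per kelvin)
alpha = 3.6e-6   # probe part 14 (CFRP)
beta  = 12.1e-6  # arm part 16 (iron)
gamma = 8.5e-6   # scale 22 (glass)

L0 = 100.0               # assumed probe-tip-to-swing-center distance, mm
dT = 10.0                # assumed ambient temperature change, K
theta = math.radians(5)  # the detection range is near theta = 0

# Expression (5): error when beta = gamma (condition not satisfied)
x_err_unmatched = L0 * alpha * dT * math.sin(theta)

# Expression (13): error when beta = alpha + gamma (condition satisfied)
assert abs(beta - (alpha + gamma)) < 1e-12
x_err_matched = L0 * dT * alpha * math.sin(theta) * (1 - math.cos(theta))

print(x_err_unmatched)  # ~3.1e-4 mm
print(x_err_matched)    # ~1.2e-6 mm, reduced by the factor (1 - cos(theta))
```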
(Modification 1) While the thermal expansion coefficients α, β and γ of the probe part14, the arm part16and the scale22satisfy the condition of β=α+γ in the present embodiment, the presently disclosed subject matter is not limited thereto. When the expression (12) is transformed, an expression (14) below is obtained. xerr=L0ΔTα·sin θ{1−{(β−γ)/α}cos θ} (14) When comparing the expression (5) and the expression (14), the error xerrin the expression (14) is the value of the expression (5) multiplied by {1−{(β−γ)/α}cos θ}. Practically, when the error xerrdue to the change of the ambient temperature T can be reduced to ½ or less of the error in the expression (5), it can be said that there is significant resistance to the change of the ambient temperature T. A condition of practically useful thermal expansion coefficients is therefore expressed by an expression (15a) below. |1−{(β−γ)/α}cos θ|≤1/2 (15a) Here, since the detection range in the measurement device10is near θ=0°, when approximation to cos θ≈1 is performed, an expression (15b) below is obtained. |1−(β−γ)/α|≤1/2 (15b) When the expression (15b) is solved for β, an expression (16) below is obtained. (α+γ)−(1/2)α≤β≤(α+γ)+(1/2)α (16) Thus, when the thermal expansion coefficient β of the arm part16is within a range of ±½α with (α+γ) as the reference, it can be said that there is practically significant resistance to the change of the ambient temperature T. (Modification 2) While the probe part14, the arm part16and the scale22are each formed of a single material in the present embodiment, it is also possible to adjust the thermal expansion coefficients α, β and γ by combining a plurality of materials in each part. FIG.4is a diagram illustrating a measurement device according to the modification 2. In a measurement device10-1illustrated inFIG.4, the probe part14is formed by joining members14A,14B and14C formed of three different materials having different thermal expansion coefficients. When the thermal expansion coefficients of the members14A,14B and14C are α1, α2and α3and the lengths are l1, l2and l3respectively, the thermal expansion coefficient α of the entire probe part14is expressed by an expression (17) below. α=(α1l1+α2l2+α3l3)/(l1+l2+l3) (17) Generally, the thermal expansion coefficient is a value intrinsic to the material, and it is difficult to adjust it to an arbitrary value. By combining a plurality of materials and adjusting the lengths of the individual materials, however, it becomes possible to adjust the thermal expansion coefficients of the probe part14, the arm part16and the scale22to arbitrary values. Thus, a measurement device which satisfies the condition of β=α+γ is easily created. Second Embodiment FIG.5is a diagram illustrating a measurement device according to the second embodiment of the presently disclosed subject matter. In the following explanation, the same signs are attached for the configuration similar to the embodiment described above and the explanation is omitted. A measurement device10-2according to the present embodiment includes an exchangeable probe30attachable to and detachable from the measurement device10-2, instead of the probe part14. The exchangeable probe30includes a first member30A provided with the probe12and a second member30B. A proximal end part of the second member30B has such a shape that attachment (for example, engagement and fitting) to a probe attaching base part16A is possible.
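A minimal sketch of the length-weighted average in expression (17), in Python; the member coefficients and lengths below are assumed sample values, and the same relation reappears as expression (18) for the exchangeable probe described next:

```python
def composite_coefficient(coeffs, lengths):
    """Effective linear thermal expansion coefficient of members joined
    end to end, per expression (17): a length-weighted average."""
    if len(coeffs) != len(lengths):
        raise ValueError("one coefficient is required per member")
    return sum(a * l for a, l in zip(coeffs, lengths)) / sum(lengths)

# Hypothetical three-member probe part (members 14A, 14B, 14C)
alphas  = [0.6e-6, 3.6e-6, 11.8e-6]  # per kelvin
lengths = [40.0, 30.0, 30.0]         # mm

alpha_eff = composite_coefficient(alphas, lengths)
print(alpha_eff)  # effective alpha, to be matched against beta - gamma
```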
In the present embodiment, the exchangeable probe30and the probe attaching base part16A together form a probe part32, and the probe part32and the arm part16together form a swing part34. The thermal expansion coefficients of the first member30A and the second member30B are α1and α2respectively, and the lengths are l1and l2; the thermal expansion coefficient of the probe attaching base part16A is α3and the length (the length between a left end part in the figure and the swing center20C of the swing shaft20) is l3. In this case, similarly to the modification 2, the thermal expansion coefficient α of the entire probe part32formed of the exchangeable probe30and the probe attaching base part16A is expressed by an expression (18) below. α=(α1l1+α2l2+α3l3)/(l1+l2+l3) (18) Thus, when the condition to be satisfied is β=α+γ, a condition of an expression (19) below should be satisfied. (α1l1+α2l2+α3l3)/(l1+l2+l3)=β−γ (19) According to the present embodiment, similarly to the modification 2, the thermal expansion coefficient α can be adjusted to an arbitrary value by the combination of the members configuring the exchangeable probe30and their lengths. In addition, according to the present embodiment, since the thermal expansion coefficient α can be adjusted only by the exchangeable probe30, the influence exerted on the measurement result by the ambient temperature T can be suppressed even in an existing measurement device (a measurement device for which γ and β are not adjusted). Furthermore, in the present embodiment, a margin may be given to β similarly to the modification 1. In the present embodiment, it is preferable that the probe attaching base part16A is formed into such a shape (for example, a diameter and a fitting hole shape) that only an exchangeable probe30satisfying the condition of the expression (19) is attachable. Thus, an exchangeable probe not suitable for suppressing the influence exerted on the measurement result by the ambient temperature T can be prevented from being attached to the measurement device. Third Embodiment While the condition that the thermal expansion coefficient α of the probe part14, the thermal expansion coefficient β of the arm part16and the thermal expansion coefficient γ of the scale22should satisfy is obtained in order to suppress the influence exerted on the measurement result by the change of the ambient temperature in the embodiments described above, it is also possible to measure the temperature change amount ΔT of the ambient temperature T and correct the measurement result using the temperature change amount ΔT. FIG.6is a diagram illustrating a measurement device according to the third embodiment of the presently disclosed subject matter. In the following explanation, the same signs are attached for the configuration similar to the embodiments described above and the explanation is omitted. A measurement device10-3according to the present embodiment includes a temperature sensor60. The temperature sensor60is for measuring the ambient temperature (air temperature) of an environment where measurement is performed using the measurement device10-3, and is provided on a surface of a casing of the measurement device10-3, for example. Here, as the temperature sensor60, it is also possible to use a contact type or non-contact type temperature sensor (for example, a radiation thermometer or a thermistor) for measuring the temperature (for example, a surface temperature) of at least one of the probe part14and the arm part16as the ambient temperature.
In the present embodiment, the controller52acquires a measured value of the ambient temperature T from the temperature sensor60when measuring the displacement xFof the distal end part14E of the probe part14, and stores the displacement xFand the ambient temperature T in association with each other in the storage. Then, the controller52calculates the actual displacement xTof the distal end part14E of the probe part14based on the displacement xF(measured value) and the temperature change amount ΔT of the ambient temperature T. Specifically, the actual displacement xTof the distal end part14E of the probe part14is calculated from the displacement xF(measured value) using a correction coefficient c indicated in an expression (20) below. c·xF=xT (20) As already described, the displacement xFof the distal end part14E of the probe part14in the case of taking the thermal expansion into consideration is obtained by the expression (11). xF≈L0·sin θ{1+(β−γ)ΔT·cos θ} (11) On the other hand, the actual displacement xTof the distal end part14E of the probe part14is obtained by the expression (4). xT=L·sin θ=L0·sin θ(1+αΔT) (4) When the expression (11) and the expression (4) are substituted into the expression (20) and the approximation (cos θ≈1) for which θ is a minute angle is used, an expression (21) below is obtained. c=xT/xF=(1+αΔT)/{1+(β−γ)ΔT·cos θ}≈(1+αΔT)/{1+(β−γ)ΔT} (21) That is, when the approximation for which θ is a minute angle is used, the correction coefficient c is obtained from the thermal expansion coefficient α of the probe part14, the thermal expansion coefficient β of the arm part16, the thermal expansion coefficient γ of the scale22and the temperature change amount ΔT of the ambient temperature T. By substituting the correction coefficient c expressed by the expression (21) into the expression (20) and correcting the displacement xF(measured value) of the distal end part14E of the probe part14, the actual displacement xTof the distal end part14E can be calculated. Thus, the influence exerted on the measurement result by the change of the ambient temperature can be suppressed. REFERENCE SIGNS LIST 10,10-1,10-2,10-3. . . measurement device,12. . . probe,14. . . probe part,16. . . arm part,18. . . swing part,20. . . swing shaft,22. . . scale,24. . . swing shaft fixing part,26. . . scale head,26P . . . scale head read point,30. . . exchangeable probe,32. . . probe part,34. . . swing part,50. . . control device,52. . . controller,54. . . input unit,56. . . display,60. . . temperature sensor | 23,155
11859970 | The reference signs are used in the figures as follows: 1—bracket;2—first drive mechanism;21—first support shaft;211—threaded portion;212—smooth portion;22—mounting slide block;23—support slider;24—first knob;25—first measuring assembly;251—first fixing frame;252—offset measurement mark;3—detection mechanism;31—swing frame;32—second drive mechanism;321—drive rod;322—second support shaft;323—transmission assembly;3231—first gear;3232—second gear;324—second knob;33—support arm;34—conductive probe;35—second measuring assembly;351—second fixing frame;352—scale;4—drive assembly;41—connecting shaft;42—drive cylinder;43—transmission block. DETAILED DESCRIPTION In order to make the technical problems to be solved, the technical solutions, and the beneficial effects of the present application clearer, the following further describes the present application in detail with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application, but not to limit the present application. It should be noted that when an element is referred to as being "fixed to" or "arranged on" another element, it can be directly on the other element or indirectly on the other element. When an element is said to be "connected to" another element, it can be directly connected to the other element or indirectly connected to the other element. It should be understood that the terms "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for the convenience of describing the present application and simplifying the description; they do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore cannot be understood as limiting the present application. In addition, the terms "first" and "second" are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Thus, the features defined with "first" and "second" may explicitly or implicitly include one or more of these features. In the description of the present application, "a plurality of" means two or more, unless otherwise specifically defined. Please refer toFIGS.1-3together. A biasing device for detecting a conductor position provided by this embodiment includes a bracket1, a first drive mechanism2, and a detection mechanism3. Here the specific structures and connection methods of the bracket1, the first drive mechanism2, and the detection mechanism3are not limited. The first drive mechanism2includes a first support shaft21rotatably arranged on the bracket1and a mounting slide block22arranged on the first support shaft21and fixedly connected to the detection mechanism3. When the first support shaft21rotates, the mounting slide block22moves relative to the bracket1along the extension direction of the first support shaft21.
The detection mechanism3is fixedly connected to the mounting slide block22and is movably arranged relative to the bracket1along the extension direction of the first support shaft21; it is used to determine whether the measured conductor is located in the detection area at the designated position. Optionally, a first knob24is fixedly arranged on the first support shaft21to drive the first support shaft21to rotate relative to the bracket1. The operator turns the first knob24to rotate the first support shaft21, which drives the mounting slide block22to move relative to the bracket1along the extension direction of the first support shaft21, thereby adjusting the detection mechanism3to a designated detection area. Optionally, the detection device also includes a detection circuit, and the detection circuit is electrically connected to the conductive probes34. When the measured conductor of the wire is located at the designated position, the plurality of conductive probes34are in contact with the measured conductor and can be electrically connected through the measured conductor, so the detection circuit is turned on; if the measured conductor is not at the designated position, the conductive probes34cannot all contact the measured conductor, and the detection circuit is turned off. Therefore, according to whether the detection circuit is turned on, it can be judged whether the measured conductor is at the designated position. When the cross-sectional area of the exposed conductor of the spliced wire is different, the position of the detection mechanism3is biased (offset) in advance to compensate for the offset of the heat shrinkable tube, thereby ensuring that the heat shrinkable tube is accurately wrapped on the exposed conductor surface at the designated position of the wire in the subsequent process, which increases the scope of application. Referring toFIG.4, in this embodiment, the detection mechanism3also includes a swing frame31and a first measuring assembly25. The first measuring assembly25includes a first fixing frame251fixedly connected to the swing frame31and an offset measurement mark252arranged on the first fixing frame251. The offset measurement mark252cooperates with a reference mark fixed on the bracket1to measure the offset of the detection mechanism3. The offset measurement mark252is firmly installed at the prearranged position on the swing frame31through the first fixing frame251. When the detection mechanism3is offset, the offset measurement mark252on the swing frame31has an equal offset relative to the reference mark on the bracket1, which enables accurate measurement of the offset of the detection mechanism3and further ensures that the heat shrinkable tube can be accurately wrapped on the exposed conductor surface of the wire. Please refer toFIGS.2-4together. In this embodiment, the detection mechanism3includes a swing frame31and a second drive mechanism32, and the swing frame31is fixedly connected to the mounting slide block22. The detection device includes a drive assembly4for driving the swing frame31to rotate relative to the bracket1. The drive assembly4drives the detection mechanism3to rotate relative to the bracket1to swing the conductive probes34into the detection area, and to move them away from the detection area after the detection is completed, so as to prevent the conductive probes34from affecting the next processing operation.
The drive assembly4includes a connecting shaft41, a drive cylinder42arranged on the bracket1, and a transmission block43rotatably arranged on the first support shaft21. The connecting shaft41penetrates between the transmission block43and the mounting slide block22and links the transmission block43and the mounting slide block22in a circumferential direction, so that the transmission block43and the mounting slide block22are arranged in circumferential linkage. The drive cylinder42drives the transmission block43on the first support shaft21to rotate relative to the first support shaft21. The drive cylinder42has a telescopic piston rod, and the piston rod is rotatably connected to the transmission block43. When the piston rod telescopes, the transmission block43is pulled to rotate around the first support shaft21. When the transmission block43rotates, the connecting shaft41drives the mounting slide block22to rotate around the first support shaft21, and drives the detection mechanism3to swing to the detection area. Optionally, the first support shaft21is provided with a plurality of mounting slide blocks22for installing the detection mechanism3, and the plurality of mounting slide blocks22are respectively arranged on both sides of the transmission block43, so that the detection mechanism3is connected more firmly and, at the same time, the swing frame31bears a more uniform force during swinging, which improves the reliability of the detection device. In one embodiment, whether the detection circuit is turned on can be prompted by a light or sound device; for example, a light or a sound alarm is arranged in the detection circuit. If the detection circuit is turned on, the light will glow or the audible alarm will sound to remind the operator that the detection circuit is turned on. In addition, the electrical signal after the detection circuit is turned on can be used as the condition for the next processing step: when the detection circuit is turned on, the electrical signal is output, and the host computer controls the next processing equipment to continue working according to the electrical signal; if the detection circuit is not turned on, no electrical signal is output, and the host computer controls the next processing equipment not to work. Please refer toFIGS.3-5together. As a specific implementation of the biasing device for detecting a conductor position provided in this embodiment, the detection mechanism3includes a swing frame31and a second drive mechanism32, and the first drive mechanism2and the second drive mechanism32are independent of each other. The second drive mechanism32includes a hollow drive rod321sleeved on one end of the first support shaft21and rotatably arranged on the bracket1, a second support shaft322rotatably arranged on the swing frame31, and a transmission assembly323connected between the drive rod321and the second support shaft322. The operator drives the drive rod321to rotate, which rotates the transmission assembly323and in turn drives the second support shaft322to rotate. Optionally, a second knob324is fixedly arranged on the drive rod321to drive the drive rod321to rotate relative to the bracket1.
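Returning to the detection-circuit behavior described above, a minimal Python sketch of the on/off decision logic; the probe-contact inputs and the processing/alarm callbacks are hypothetical placeholders for the actual circuit and host-computer interface:

```python
def conductor_at_position(probe_contacts):
    """The detection circuit is closed only when every conductive probe 34
    contacts the measured conductor."""
    return all(probe_contacts)

def gate_next_process(probe_contacts, start_processing, raise_alarm):
    """Gate the next processing step on the circuit state: an output signal
    (here, a callback) is produced only when the circuit is on."""
    if conductor_at_position(probe_contacts):
        start_processing()   # host computer lets the next equipment continue
    else:
        raise_alarm()        # light/sound prompt; next equipment does not work

# Example: one of three probes misses the conductor, so processing is blocked
gate_next_process([True, False, True],
                  start_processing=lambda: print("heat shrink process enabled"),
                  raise_alarm=lambda: print("conductor not at designated position"))
```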
One end of the drive rod321is connected to the first support shaft21, and the other end is fixedly connected to the second knob324. Optionally, the first support shaft21includes a threaded portion211and a smooth portion212fixedly connected to the threaded portion211; the end of the smooth portion212away from the threaded portion211is inserted into the drive rod321, and the mounting slide block22for fixed connection to the detection mechanism3is installed on the threaded portion211. When the first support shaft21rotates, it drives the mounting slide block22to move relative to the bracket1along the extension direction of the first support shaft21, and then drives the detection mechanism3to move to the designated detection area. Inserting the smooth portion212into the hollow drive rod321prevents the rotating first support shaft21from driving the drive rod321to rotate, which would affect the detection accuracy; it also reduces the external resistance encountered when the first support shaft21rotates, which is convenient for operators. Optionally, the first drive mechanism2further includes a support slider23for fixedly connecting the swing frame31. The support slider23is slidably arranged on the smooth portion212, so that the detection mechanism3is connected more firmly and, at the same time, the swing frame31receives a more uniform force during the swing, which improves the reliability of the detection device. Optionally, the transmission assembly323includes a first gear3231fixedly connected to the outer wall of the drive rod321and a second gear3232meshed with the first gear3231and fixedly connected to the second support shaft322. The operator turns the second knob324to rotate the drive rod321, which drives the first gear3231to rotate; the first gear3231drives the meshed second gear3232to rotate, thereby rotating the second support shaft322fixedly connected to the second gear3232, and thus adjusting the gap between the conductive probes34and the length of the detection area. Please further refer toFIG.1,FIG.3andFIG.4. In this embodiment, the detection mechanism3also includes support arms33arranged on the second support shaft322and conductive probes34rotatably arranged on the support arms33, so that the conductive probes34are firmly connected to the second support shaft322and a detection area is formed between the conductive probes34to determine whether the measured conductor is located at a designated position. The support arm33is a rod-shaped structure; one end of the support arm33is threadedly connected to the second support shaft322, and the conductive probe34is rotatably arranged on the other end. Optionally, a U-shaped frame is arranged at the end of the support arm33connected to the conductive probe34, and a pin shaft for connecting the conductive probe34is arranged on the U-shaped frame, so that the conductive probe34can rotate around the pin shaft. Optionally, the detection mechanism3also includes an elastic reset member. One end of the elastic reset member is connected to the support arm33, and the other end is connected to the conductive probe34. The elastic reset member is used to keep the conductive probe34in the initial position state.
When the conductive probe34rotates relative to the support arm33and leaves the initial position state, the elastic reset member is used to reset the conductive probe34. Optionally, the elastic reset member is a tension spring; when the conductive probe34rotates relative to the support arm33, the elastic reset member is elongated to generate an elastic force which helps the conductive probe34to reset. During the detection process, if the conductive probe34exerts too much pressure on the conductor, the conductive probe34can move relative to the support arm33, which not only keeps the conductive probe34and the conductor in close contact, but also prevents the conductive probe34from applying excessive force on the conductor and causing damage to the conductor or the conductive probe34. The elastic reset member resets the conductive probe34so that a conductive probe34that has moved relative to the support arm33does not remain displaced and unable to continue detection. Optionally, a first fixing frame251is fixedly arranged on the swing frame31, and a centering reference line is arranged on the first fixing frame251. The centering reference line is used to align the conductive probes34, realizing centering adjustment of the positions of the conductive probes34and improving the positional accuracy of the conductive probes34, which helps to ensure that the heat shrinkable tube can be accurately wrapped on the exposed conductor surface of the wire. In one embodiment, the second support shaft322includes a first threaded portion and a second threaded portion. The thread direction of the first threaded portion is opposite to the thread direction of the second threaded portion. When the second support shaft322rotates, the support arm33located on the first threaded portion and the support arm33located on the second threaded portion move toward or away from each other, so as to adjust the distance between two adjacent conductive probes34and increase the scope of application. Optionally, the detection mechanism3also includes a second measuring assembly35, so that the operator can more intuitively obtain the distance between two adjacent conductive probes34. The second measuring assembly35includes a second fixing frame351fixedly connected to the swing frame31and a scale352arranged on the second fixing frame351and used to measure the distance between two adjacent conductive probes34. The scale352is firmly installed at the prearranged position through the second fixing frame351, so that the distance between two adjacent conductive probes34can be accurately measured, further ensuring that the heat shrinkable tube can be accurately wrapped on the exposed conductor surface of the wire. Optionally, a compression spring is sleeved on the second support shaft322; one end of the compression spring is pressed against the swing frame31, and the other end is pressed against the support arm33, which can eliminate the thread gap, keep the position of the support arm33stable, and prevent shaking during the detection process from affecting the detection effect. The present application also provides wire processing equipment, including a wire processing device and the above-mentioned biasing device for detecting a conductor position; the detection device is used for detecting the position of the conductor before the wire is processed.
The biasing device for detecting a conductor position is provided with a first drive mechanism2on the bracket1, and the first drive mechanism2includes a first support shaft21rotatably arranged on the bracket1and a mounting slide block22arranged on the first support shaft21. When the first support shaft21rotates, the mounting slide block22moves relative to the bracket1along the extension direction of the first support shaft21. The detection mechanism3is fixedly connected to the mounting slide block22and is movably arranged relative to the bracket1along the extension direction of the first support shaft21to determine whether the measured conductor is located at a designated position. The rotation of the first support shaft21drives the mounting slide block22to move relative to the bracket1along the extension direction of the first support shaft21, and then drives the detection mechanism3to be adjusted to the designated detection area. When the measured conductor is at the designated position, the plurality of conductive probes34are in contact with the measured conductor, the plurality of conductive probes34can be electrically connected through the measured conductor, and the detection circuit is turned on; if the measured conductor is not at the designated position, the conductive probes34cannot all contact the measured conductor, and the detection circuit is turned off. Therefore, according to whether the detection circuit is turned on, it can be judged whether the measured conductor is at the designated position. When the cross-sectional area of the exposed conductor of the spliced wire is different, the position of the detection mechanism3is biased (offset) in advance to compensate for the offset of the heat shrinkable tube, thereby ensuring that the heat shrinkable tube is accurately wrapped on the exposed conductor surface at the designated position of the wire in the subsequent process, which increases the scope of application. The specific structure of the wire processing device is not limited here. Optionally, the wire processing device is a wire heat shrinking machine; after the detection device determines that the measured conductor is located at the designated position, the conductive probes34move away from the detection area so that the wire processing device can subject the wire to processing operations, in which the insulating heat shrinkable tube is sleeved on the wire and covers the surface of the exposed conductor. In one embodiment, the wire processing equipment further includes a cover fixedly connected to the bracket1. A reference mark is arranged on the cover, a centering reference line is arranged on the first fixing frame251of the swing frame31, and the reference mark on the cover cooperates with the centering reference line on the first fixing frame251to measure the offset of the detection mechanism3. When the detection mechanism3is offset, the centering reference line on the first fixing frame251will have the same amount of offset relative to the reference mark on the cover, realizing accurate measurement of the offset of the detection mechanism3and further ensuring that the heat shrinkable tube can be accurately wrapped on the exposed conductor surface of the wire. The above description is only a preferred embodiment of the present application, and is not intended to limit the present application. Any modification, equivalent replacement and improvement made within the spirit and principle of the present application shall be included within the protection scope of the present application. | 20,237
11859971 | DETAILED DESCRIPTION The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. An angle sensor may be designed to determine an angular position of a target object (e.g., a rotatable object) in a given application. For example, an angle sensor may be used in an electronic power steering (EPS) application to determine an angular position of a steering column. In some applications, it may be necessary to ensure functional safety of the angle sensor. In general, functional safety can be defined as an absence of unreasonable risk (e.g., to a system, to an environment, to people, and/or the like) due to hazards caused by malfunctioning behavior (e.g., a systematic failure, a random failure, or the like) of the angle sensor. In the automotive context, an Automotive Safety Integrity Level (ASIL) scheme is used to dictate functional safety requirements for an angle sensor. The ASIL scheme is a risk classification scheme defined by the International Organization for Standardization (ISO) 26262 standard (titled Functional Safety for Road Vehicles), which provides a standard for functional safety of electrical and/or electronic systems in production automobiles. An ASIL classification defines safety requirements necessary to be in line with the ISO 26262 standard. An ASIL is established by performing a risk analysis of a potential hazard by looking at severity, exposure, and controllability of a vehicle operating scenario. A safety goal for that hazard in turn guides the ASIL requirements. There are four ASILs identified by the standard: ASIL A, ASIL B, ASIL C, ASIL D. ASIL D dictates the highest integrity requirements, while ASIL A dictates the lowest. A hazard with a risk that is low (and, therefore, does not require safety measures in accordance with ISO 26262) is identified as quality management (QM). In some cases, it is desirable or required that an angle sensor achieves a high ASIL. For example, it may be desirable or required that an angle sensor used in a given application achieves ASIL B, ASIL C, or ASIL D. To ensure functional safety in an angle sensor, a safety mechanism that allows malfunctioning behavior to be identified and signaled should be implemented. Some implementations described herein provide a safety mechanism for an angle sensor. In some implementations, the angle sensor includes a first angle measurement path to determine an angular position based on sensor values from a first set of angle sensing elements, and a second angle measurement path to determine the angular position based on sensor values from a second set of angle sensing elements. The first and second sets of angle sensing elements may be different types of sensing elements. For example, the first set of angle sensing elements may be a set of magnetoresistive (MR) sensing elements (e.g., a set of anisotropic magnetoresistance (AMR) elements, giant magnetoresistance (GMR) elements, tunnel magnetoresistance (TMR) elements, or the like) and the second set of angle sensing elements may be a set of Hall-based sensing elements (e.g., a set of angle sensing elements that operate based on the Hall effect). In some implementations, the measurement range provided by the first set of angle sensing elements may be different from the measurement range provided by the second set of angle sensing elements. 
The measurement range provided by the first set of angle sensing elements may be 360 degrees (°). For example, the first set of angle sensing elements202may include a set of GMR sensing elements, a set of TMR sensing elements, a set of Hall-based sensing elements, or the like. The measurement range provided by the second set of angle sensing elements204may be 180°. For example, the second set of angle sensing elements204may include a set of AMR sensing elements. Each of the first and second sets of angle sensing elements may include one or more components configured to obtain respective sets of sensor values for determining an angular position of a target object. A set of sensor values may include a value of a signal indicating a y-component of the angular position (also referred to as a sine value) and a value of a signal indicating an x-component of the angular position (also referred to as a cosine value). The angle sensor includes a safety path to perform a set of safety checks associated with the first angle measurement path and/or the second angle measurement path based on the sine values and the cosine values measured by the first and second sets of angle sensing elements. The set of safety checks may include a segment comparison check. The safety path may segment the measurement range provided by the first set of angle sensing elements and the measurement range provided by the second set of angle sensing elements based on the intersections of the respective x-component signals and y-component signals. For example, the safety path may segment the 360° measurement range associated with the first set of angle sensing elements into 45° segments and may segment the 180° measurement range associated with the second set of angle sensing elements into 22.5° segments. The safety path may perform a safety check based on determining whether a range of angles associated with a segment of the 180° measurement range is within a range of angles associated with a segment of the 360° measurement range. The range of angles associated with the segment of the 180° measurement range may include the x-component and y-component determined by the second set of angle sensing elements. The range of angles associated with the segment of the 360° measurement range may include the x-component and y-component determined by the first set of angle sensing elements. The safety path may thus enable a failure (e.g., in the first angle measurement path or in the second angle measurement path) to be detected based on whether the range of angles associated with a segment of the 180° measurement range that includes the x-component and y-component determined by the second set of angle sensing elements is within a range of angles associated with a segment of the 360° measurement range that includes the x-component and y-component determined by the first set of angle sensing elements. By utilizing the segmentation of the measurement ranges, the safety path may perform the safety check without any compensation of temperature and magnetic field strength variation, since those effects can be assumed to affect the x and y channels with sufficient matching accuracy. FIGS.1A and1Bare diagrams associated with example operations of a system100comprising a safety mechanism for an angle sensor102, as described herein. As shown inFIG.1A, the system100includes the angle sensor102comprising an angle measurement path104, an angle measurement path106, a safety path108, and a digital output component110. As further shown, the system100includes a controller112.
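A minimal Python sketch of the segment comparison just described; it assumes equal-width segments anchored at 0° and lifts the 180°-range segment by 0° and 180° to account for the half-range ambiguity, which is one plausible reading of the containment check:

```python
SEG_A = 45.0   # segment width of the 360° measurement range
SEG_B = 22.5   # segment width of the 180° measurement range

def segment_index(angle_deg, width, modulo):
    # Index of the segment containing the angle, within the given range
    return int((angle_deg % modulo) // width)

def segment_check_ok(theta_a, theta_b):
    """True when the 22.5° segment holding the 180°-path reading lies
    inside the 45° segment holding the 360°-path reading.

    theta_a: angle from the 360° path (e.g., GMR/TMR/Hall), degrees
    theta_b: angle from the 180° path (e.g., AMR), degrees
    """
    a_seg = segment_index(theta_a, SEG_A, 360.0)
    a_lo, a_hi = a_seg * SEG_A, (a_seg + 1) * SEG_A

    b_seg = segment_index(theta_b, SEG_B, 180.0)
    # The 180° reading is ambiguous by 180°, so test both candidate lifts.
    for offset in (0.0, 180.0):
        b_lo = b_seg * SEG_B + offset
        if a_lo <= b_lo and b_lo + SEG_B <= a_hi:
            return True
    return False

print(segment_check_ok(30.0, 30.0))   # True: [22.5, 45) lies inside [0, 45)
print(segment_check_ok(30.0, 100.0))  # False: segments disagree, flag a failure
```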
The components of the system100are described below, followed by a description of an example operation of the system100. In some implementations, the angle measurement path104, the angle measurement path106, and the safety path108are integrated on a monolithic semiconductor device (e.g., a single chip). An angle measurement path (e.g., the angle measurement path104, the angle measurement path106) includes one or more components associated with determining an angular position θ (theta) of a target object (not shown) based on a set of sensor values. For example, the set of sensor values can include a value of a signal indicating a y-component of the angular position θ (also referred to as a sine value) and a value of a signal indicating an x-component of the angular position θ (also referred to as a cosine value). Here, a given angle measurement path may determine an angular position θ of the target object based on the y-component and the x-component (e.g., by calculating an arctangent of the y-component divided by the x-component). In some implementations, the angle measurement path104and the angle measurement path106utilize the same type of sensing elements. In some implementations, the angle measurement path104and the angle measurement path106utilize different types of sensing elements, meaning that the angle measurement path104and the angle measurement path106are diverse measurement paths. In some implementations, a measurement range on the angle measurement path104is different from a measurement range on the angle measurement path106. The safety path108includes one or more components associated with performing one or more safety checks associated with the angle sensor102. In some implementations, the one or more safety checks include a segment comparison check. Additional details regarding example implementations of the segment comparison check are provided below with respect toFIGS.2A-2D. In some implementations, the one or more safety checks include a vector length check associated with the angle measurement path104. In some implementations, the one or more safety checks include a vector length check associated with the angle measurement path106. In some implementations, the one or more safety checks include a comparison check associated with the angular position θ as determined on the angle measurement path104and the angular position θ as determined on the angle measurement path106. In some implementations, as shown inFIG.1A, the safety path108is configured to receive sensor values (e.g., a sine value and a cosine value) from the angle measurement path104, sensor values from the angle measurement path106, information associated with a vector length raassociated with the sensor values from the angle measurement path104, information associated with a vector length rbassociated with the sensor values from the angle measurement path106, information associated with the angular position θadetermined on the angle measurement path104, information associated with the angular position θbdetermined on the angle measurement path106, and/or one or more items of information in association with performing the one or more safety checks, as described herein. In some implementations, the safety path108is configured to provide a safety indication (e.g., a failure indication, an error indication, a deactivation indication, an OK indication, or the like) to the digital output component110.
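A minimal Python sketch of the arctangent computation described above; using atan2 rather than a plain arctangent is an assumption made here to resolve the quadrant and the x = 0 case:

```python
import math

def angle_from_components(y_sine, x_cosine):
    """Angular position (degrees in [0, 360)) from the sine (y) and
    cosine (x) sensor values of an angle measurement path."""
    return math.degrees(math.atan2(y_sine, x_cosine)) % 360.0

print(angle_from_components(0.5, -0.866))  # ~150.0 degrees
```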
The digital output component110includes one or more components associated with generating and transmitting one or more outputs (e.g., an output carrying sensor data, an output carrying an indication of a result of the one or more safety checks, or the like). In some implementations, as shown inFIG.1A, the digital output component110may receive one or more signals from the angle measurement path104, the angle measurement path106, and the safety path108, and may generate and transmit the one or more outputs accordingly. In some implementations, the digital output component110transmits the one or more outputs to the controller112. The controller112includes one or more components associated with controlling one or more electrical systems and/or electrical subsystems based on information provided by the sensor102. The controller112may include, for example, a microcontroller (μC) or an electronic control unit (ECU), among other examples. In some implementations, the controller112may be capable of calibrating, controlling, adjusting, and/or the like, the one or more electrical systems and/or electrical subsystems based on information received from the sensor102. For example, in some implementations, the controller112may be configured to determine an angular position θ of the target object and/or one or more other items of information (e.g., a rotational speed of the target object, a rotational direction of the target object, or the like), determine information associated with the one or more safety checks for the sensor102, and/or provide such information or perform one or more operations in association with controlling the one or more electrical systems and/or electrical subsystems based on such information. In some implementations, the controller112is connected to the sensor102such that the controller112can receive information (e.g., one or more signals) from the sensor102via one or more transmission interfaces and/or via one or more output terminals. An example operation of the system100is illustrated inFIG.1A. As shown by reference150, the angle measurement path104determines an angular position θa. In some implementations, the angle measurement path104determines the angular position θabased on sensor values provided by the set of angle sensing elements on the angle measurement path104(e.g., a set of MR sensing elements, such as a set of AMR sensing elements). In some implementations, the angle measurement path104provides one or more signals to the safety path108. The one or more signals provided by the angle measurement path104to the safety path108may include, for example, one or more signals indicating the sensor values from the angle measurement path104(e.g., an x-component value xaand a y-component value ya), a vector length racomputed from the sensor values (e.g., when the angle measurement path104is configured to compute the vector length ra), and/or the angular position θa. Further, in some implementations, the angle measurement path104provides a signal indicating the angular position θato the digital output component110. As shown by reference152, the angle measurement path106determines an angular position θb. In some implementations, the angle measurement path106determines the angular position θbbased on sensor values provided by the set of angle sensing elements on the angle measurement path106(e.g., a set of Hall-based sensing elements or a set of MR sensing elements, such as a set of GMR sensing elements or TMR sensing elements).
In some implementations, the angle measurement path 106 provides one or more signals to the safety path 108. The one or more signals provided by the angle measurement path 106 to the safety path 108 may include, for example, one or more signals indicating the sensor values from the angle measurement path 106 (e.g., an x-component value x_b and a y-component value y_b), a vector length r_b computed from the sensor values (e.g., when the angle measurement path 106 is configured to compute the vector length r_b), and/or the angular position θ_b. Further, in some implementations, the angle measurement path 106 provides a signal indicating the angular position θ_b to the digital output component 110.

As shown by reference 154, the safety path 108 determines the vector length r_a associated with the angle measurement path 104 and the vector length r_b associated with the angle measurement path 106. In some implementations, the safety path 108 determines the vector length r_a by receiving an indication of the vector length r_a from the angle measurement path 104, as described above (e.g., when the angle measurement path 104 is configured to compute the vector length r_a). Alternatively, in some implementations, the safety path 108 determines the vector length r_a by computing the vector length r_a based on the sensor values received from the angle measurement path 104. Similarly, the safety path 108 determines the vector length r_b by receiving an indication of the vector length r_b from the angle measurement path 106, as described above (e.g., when the angle measurement path 106 is configured to compute the vector length r_b). Alternatively, in some implementations, the safety path 108 determines the vector length r_b by computing the vector length r_b based on the sensor values received from the angle measurement path 106. In some implementations, a given vector length r (e.g., the vector length r_a, the vector length r_b) is determined using the following equation:

r = √(X² + Y²)

where X is the x-component of the angular position θ, and Y is the y-component of the angular position θ. That is, the vector length r corresponds to a magnitude of the electrical vector, whose elements are given by the x-component (cosine) channel and the y-component (sine) channel of a given angle measurement path. Notably, the vector length r is independent of the angular position θ.

As shown by reference number 156, the safety path 108 may perform segment mapping. The safety path 108 may perform segment mapping as described below with respect to FIGS. 2A-2D. As shown by reference number 158, the safety path 108 performs one or more safety checks. In some implementations, a safety check performed by the safety path 108 is based on the segment mapping as described below with respect to FIGS. 2A-2D. In some implementations, a safety check performed by the safety path 108 is based on the vector length r_a, the angular position θ_a, the vector length r_b, and/or the angular position θ_b. In some implementations, the one or more safety checks include one or more vector length checks. For example, the one or more safety checks may include a vector length check associated with the vector length r_a and/or a vector length check associated with the vector length r_b. In an ideal scenario, a given vector length r (e.g., the vector length r_a, the vector length r_b) remains constant during operation of the sensor 102 (e.g., due to the identity cos²θ + sin²θ = 1).
If, for example, a sensor channel (e.g., the x-component channel or the y-component channel) of a given angle measurement path (e.g., the angle measurement path 104 or the angle measurement path 106) experiences a stuck-at fault, the vector length r will change as a function of the angle θ. This change in the vector length r can be detected by the vector length check performed by the safety path 108. Therefore, when performing the vector length check, the safety path 108 determines whether the vector length r stays within an allowable vector length range (e.g., a vector length range defined by a minimum vector length r_min and a maximum vector length r_max). FIG. 1B is a diagram illustrating a visualization of a vector length check. In the visualization shown in FIG. 1B, the vector length check is a determination of whether the vector length r is within the shaded region defined by the minimum vector length r_min and the maximum vector length r_max.

Notably, digital signal processing performed by the sensor 102 can provide compensation for imperfections of components of the sensor 102 (e.g., the set of angle sensing elements, one or more analog-to-digital converters (ADCs), or the like). For example, a digital signal processor (DSP) of the sensor 102 may receive raw (i.e., uncompensated) sensor values as input, perform compensation, and output compensated sensor values. Parameters for this compensation can be based on calibration and/or autocalibration. For example, offsets of the raw sensor values can drift with temperature. Here, relevant parameters to compensate such drifts can be determined during end-of-line testing (i.e., calibration) and stored in a memory (e.g., a non-volatile memory (NVM)) of the sensor 102. These parameters can then be used during operation of the sensor 102 for providing compensation, which leads to reduced offsets of the compensated sensor values over temperature. Notably, a well-compensated angle measurement path shows negligible variation in the amplitude of sensor values and, therefore, the vector length r associated with the given angle measurement path may be independent of temperature. Additionally, for saturated sensing elements (e.g., MR sensing elements), the vector length r does not depend significantly on a magnitude of the magnetic field. In some implementations, the minimum vector length r_min and the maximum vector length r_max can be determined by taking such variations and margins into consideration. That is, the allowable vector length range can be smaller for a well-compensated sensor 102 (e.g., as compared to an angle sensor with no or poor compensation), thereby improving functional safety of the sensor 102.

In some implementations, the minimum vector length r_min and the maximum vector length r_max are stored in the memory of the sensor 102 (e.g., after calibration). During operation, the safety path 108 compares the computed vector length r to the stored minimum vector length r_min and maximum vector length r_max. Here, if the vector length r is not within the allowable vector length range (i.e., if the computed vector length is less than the minimum vector length r_min or is greater than the maximum vector length r_max), then the safety path 108 may, for example, signal an error to the digital output component 110. In some implementations, the safety path 108 performs a vector length check associated with the angle measurement path 104. That is, the safety path 108 may determine whether the vector length r_a is within an allowable vector length range.
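A minimal sketch of the vector length check described above, assuming placeholder bounds in place of the calibrated r_min and r_max values that would be stored in NVM:

```python
import math

R_MIN = 0.9  # placeholder for the stored minimum vector length
R_MAX = 1.1  # placeholder for the stored maximum vector length

def vector_length(x: float, y: float) -> float:
    """r = sqrt(x^2 + y^2); ideally constant, since cos^2 + sin^2 = 1."""
    return math.hypot(x, y)

def vector_length_check(x: float, y: float) -> bool:
    """Pass (True) if r lies within the allowable range [R_MIN, R_MAX]."""
    return R_MIN <= vector_length(x, y) <= R_MAX

# A stuck-at fault on one channel drags r out of range for most angles:
print(vector_length_check(math.cos(1.0), math.sin(1.0)))  # True (healthy)
print(vector_length_check(math.cos(1.0), 0.0))            # False (y stuck at 0)
```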
Additionally, or alternatively, in some implementations, the safety path 108 performs a vector length check associated with the angle measurement path 106. That is, the safety path 108 may determine whether the vector length r_b is within an allowable vector length range (e.g., the same allowable vector length range as used for the check of the vector length r_a or a different allowable vector length range than that used for the check of the vector length r_a).

In some implementations, the one or more safety checks include a comparison check associated with the angular position θ_a and the angular position θ_b. In some implementations, the safety path 108 performs the comparison check by determining whether the angular position θ_a (e.g., the angular position determined on the angle measurement path 104) matches the angular position θ_b (e.g., the angular position determined on the angle measurement path 106). That is, the safety path 108 may perform the comparison check by determining whether a difference between the angular position θ_a and the angular position θ_b is less than a threshold value (e.g., a tolerance value). In some implementations, information indicating the threshold may be stored in the memory of the sensor 102. During operation, the safety path 108 compares the computed difference between the angular position θ_a and the angular position θ_b to the threshold. Here, if the difference does not satisfy (e.g., is greater than) the threshold, then the safety path 108 may, for example, signal an error to the digital output component 110.

In some implementations, the safety path 108 provides information indicating a result of the one or more safety checks to the digital output component 110. For example, as indicated above, the safety path 108 may provide an indication of an error associated with the x-component check, an error associated with the y-component check, an error associated with the vector length check associated with the angle measurement path 104, an error associated with the vector length check associated with the angle measurement path 106, and/or an error associated with the comparison check. As another example, the safety path 108 may provide an indication that a given safety check has passed (e.g., an indication that the angle measurement path 104 and/or the angle measurement path 106 has passed the x-component check, an indication that the angle measurement path 104 and/or the angle measurement path 106 has passed the y-component check, an indication that the angle measurement path 104 has passed the vector length check, an indication that the angle measurement path 106 has passed the vector length check, and/or an indication that the angle measurement paths 104/106 have passed the comparison check).

Returning to FIG. 1A, as shown by reference numbers 160 and 162, the digital output component 110 may provide angle data and an indication of a result of the one or more safety checks to the controller 112. In some implementations, the angle data includes an indication of the angular position θ_a and/or an indication of the angular position θ_b. In some implementations, the indication of the result of the one or more safety checks may include an indication of whether a given safety check has failed or passed. Alternatively, in some implementations, the indication of the result of the safety check may include an indication that a given safety check has failed (i.e., the digital output component 110 may provide an indication for the given safety check only when the given safety check fails).
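The comparison check between θ_a and θ_b might be sketched as follows; the tolerance value and the wrap-around handling at the 0°/360° boundary are illustrative assumptions (and a 180°-range path would first need its ambiguity resolved, which is not shown):

```python
def angle_comparison_check(theta_a: float, theta_b: float,
                           tolerance_deg: float = 2.0) -> bool:
    """Pass if two independently determined angles (degrees, 0-360) agree
    within a tolerance, taking the shortest distance around the circle so
    that, e.g., 359.5 and 0.3 degrees count as 0.8 degrees apart."""
    diff = abs(theta_a - theta_b) % 360.0
    diff = min(diff, 360.0 - diff)
    return diff <= tolerance_deg

print(angle_comparison_check(359.5, 0.3))  # True
print(angle_comparison_check(10.0, 25.0))  # False: would signal an error
```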
As indicated above, FIGS. 1A and 1B are provided as examples. Other examples may differ from what is described with regard to FIGS. 1A and 1B. Further, the number and arrangement of components shown in FIG. 1A are provided as an example. In practice, there may be additional components, fewer components, different components, or differently arranged components than those shown in FIG. 1A. Furthermore, two or more components shown in FIG. 1A may be implemented within a single component, or a single component shown in FIG. 1A may be implemented as multiple, distributed components. Additionally, or alternatively, a set of components (e.g., one or more components) shown in FIG. 1A may perform one or more functions described as being performed by another set of components shown in FIG. 1A.

FIGS. 2A-2D are diagrams of example implementations of the system 100 comprising a safety mechanism (e.g., the safety path 108) for an angle sensor (e.g., the angle sensor 102) using segmentation, as described herein. In FIGS. 2A and 2B, components of the angle measurement path 104 are indicated by a white color, components of the angle measurement path 106 are indicated by a hatched pattern, components of the safety path 108 are indicated by a light gray color, and the digital output component 110 is indicated by a dark gray color. Additionally, the safety path 108 includes one or more vector length check components 212 (e.g., a vector length check component 212a, a vector length check component 212b), a segment mapping component 214a, a segment mapping component 214b, a segment comparison component 216, and an angle comparison component 218.

Generally, as illustrated in FIGS. 2A-2D for example, the angle measurement path 104 includes a set of angle sensing elements 202 (e.g., a sensing element 202x for sensing the x-component of a magnetic field and a sensing element 202y for sensing the y-component of the magnetic field), a set of measuring elements 206 (e.g., a measuring element 206x1 for measuring the x-component sensed by the sensing element 202x and a measuring element 206y1 for measuring the y-component sensed by the sensing element 202y), and an angle calculation component 210a. Similarly, the angle measurement path 106 includes a set of angle sensing elements 204 (e.g., a sensing element 204x for sensing the x-component of a magnetic field and a sensing element 204y for sensing the y-component of the magnetic field), a set of measuring elements 206 (e.g., a measuring element 206x2 for measuring the x-component sensed by the sensing element 204x and a measuring element 206y2 for measuring the y-component sensed by the sensing element 204y), and an angle calculation component 210b.

A set of angle sensing elements (e.g., the set of angle sensing elements 202 or the set of angle sensing elements 204) is a set of components for sensing a magnetic field at the angle sensor 102. In some implementations, as described above, each set of angle sensing elements 202/204 includes a sensing element 202/204 configured to sense an x-component of the magnetic field and a sensing element 202/204 configured to sense a y-component of the magnetic field. In some implementations, a given set of angle sensing elements 202/204 may include MR sensing elements, which are elements comprised of a magnetoresistive material (e.g., nickel-iron (NiFe)), where an electrical resistance of the magnetoresistive material depends on a strength and/or a direction of the magnetic field present at the magnetoresistive material.
Here, the given set of angle sensing elements 202/204 may operate based on an AMR effect, a GMR effect, or a TMR effect, among other examples. Further, in some implementations, a given set of angle sensing elements 202/204 may include a set of Hall-based sensing elements that operate based on the Hall effect. In some implementations, a given sensing element 202/204 may provide an analog signal, corresponding to a strength of a component of the magnetic field, to a measuring element 206. A measuring element 206 may include an ADC that converts analog signals from a set of angle sensing elements 202/204 to a digital signal. For example, the measuring element 206x1 may include an ADC that converts analog signals, received from the set of angle sensing elements 202, into digital signals to be processed by a DSP of the measuring element 206x1.

In some implementations, as shown in FIG. 2A, the angle measurement path 104 and the angle measurement path 106 are diverse measurement paths. Thus, the set of angle sensing elements 202 on the angle measurement path 104 may in some implementations include a set of MR sensing elements, while the set of angle sensing elements 204 on the angle measurement path 106 may include a set of Hall-based sensing elements. As another example, the set of angle sensing elements 202 on the angle measurement path 104 may in some implementations include a first set of MR sensing elements (e.g., a set of AMR elements), while the set of angle sensing elements 204 on the angle measurement path 106 may include a second set of MR elements (e.g., a set of GMR elements or a set of TMR elements, among other examples). In some implementations, the measurement range provided by the set of angle sensing elements 202 is different from the measurement range provided by the set of angle sensing elements 204. As shown in FIG. 2A, the measurement range provided by the set of angle sensing elements 202 may be 360 degrees (°). For example, the set of angle sensing elements 202 may include a set of GMR sensing elements, a set of TMR sensing elements, a set of Hall-based sensing elements, or the like. The measurement range provided by the set of angle sensing elements 204 may be 180°. For example, the set of angle sensing elements 204 may include a set of AMR sensing elements. The use of the diverse angle measurement paths 104/106 provided by the sets of angle sensing elements 202/204 provides both redundancy of the angle measurement and diversity of the sensing principle, thereby enhancing the functional safety of the angle sensor 102.

In some implementations, gain and offset calibration, including temperature compensation, may be integrated into the angle measurement paths 104/106 to account for fabrication spread, nonlinearities, aging dependencies, and/or temperature dependencies of the sets of angle sensing elements 202/204, which may result from the use of different types of sensors. In some implementations, the angle measurement paths 104/106 may also compensate harmonic components of the x-component signal and the y-component signal in order to achieve high accuracy of the angle measurement and high coverage of the safety path 108. The calibration and compensation of the angle measurement paths 104/106 can be done using parameters stored in an NVM based on end-of-line measurements and/or may utilize autocalibration algorithms, as discussed above.
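In simplified form, the gain/offset calibration with temperature compensation described above might look like the following sketch; the linear drift model, the parameter names, and the numeric values are assumptions for illustration rather than the disclosed parameterization:

```python
from dataclasses import dataclass

@dataclass
class ChannelCalibration:
    """Per-channel parameters, e.g., determined at end-of-line testing and
    stored in non-volatile memory (the values used below are made up)."""
    offset: float      # channel offset at the reference temperature
    gain: float        # channel gain (amplitude normalization)
    offset_tc: float   # assumed linear offset drift per degree Celsius

def compensate(raw: float, cal: ChannelCalibration, temp_c: float,
               temp_ref_c: float = 25.0) -> float:
    """Remove the (temperature-dependent) offset and normalize the gain."""
    offset = cal.offset + cal.offset_tc * (temp_c - temp_ref_c)
    return (raw - offset) / cal.gain

x_cal = ChannelCalibration(offset=0.02, gain=1.05, offset_tc=1e-4)
print(compensate(0.58, x_cal, temp_c=85.0))  # compensated x-channel sample
```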
As shown in FIG. 2A, the angle calculation component 210a receives the x-component value and the y-component value measured by the measuring elements 206x1 and 206y1, respectively, and calculates the angular position θ_a by calculating an arctangent of the y-component divided by the x-component. In some implementations, the angle calculation component 210a calculates a vector length r_a based on the x-component value and the y-component value, as described above. The angle calculation component 210a may provide one or more signals indicating the vector length r_a computed from the x-component value and the y-component value (e.g., when the angle calculation component 210a is configured to compute the vector length r_a) to the vector length check component 212a and/or the one or more signals indicating the angular position θ_a to the angle comparison component 218. Similarly, the angle calculation component 210b receives the x-component value and the y-component value measured by the measuring elements 206x2 and 206y2, respectively, and calculates the angular position θ_b by calculating an arctangent of the y-component divided by the x-component. In some implementations, the angle calculation component 210b calculates a vector length r_b based on the x-component value and the y-component value, as described above. The angle calculation component 210b may provide one or more signals indicating the vector length r_b computed from the x-component value and the y-component value (e.g., when the angle calculation component 210b is configured to compute the vector length r_b) to the vector length check component 212b and/or one or more signals indicating the angular position θ_b to the angle comparison component 218.

The angle comparison component 218 may receive one or more signals indicating the angular position θ_a and one or more signals indicating the angular position θ_b from the angle calculation component 210a and the angle calculation component 210b, respectively. The angle comparison component 218 may perform an angle comparison check based on the angular position θ_a and the angular position θ_b. In some implementations, the angle comparison component 218 may perform the angle comparison check based on the angular position θ_a and the angular position θ_b in a manner similar to that described above with respect to FIG. 1A. The angle comparison component 218 may output a signal indicating a result of the angle comparison check to the digital output component 110.

As also shown in FIG. 2A, the vector length check component 212b receives the x-component value and the y-component value measured by the measuring element 206x2 and the measuring element 206y2, respectively, and performs a vector length check based on the x-component value and the y-component value. In some implementations, the vector length check component 212b performs the vector length check in a manner similar to that described above with respect to FIGS. 1A and 1B. The vector length check component 212b may output a signal indicating a result of the vector length check to the digital output component 110.

As shown in FIG. 2A, the segment mapping component 214a receives the x-component value x₁ from the measuring element 206x1 and receives the y-component value y₁ from the measuring element 206y1. Similarly, the segment mapping component 214b receives the x-component value x₂ from the measuring element 206x2 and receives the y-component value y₂ from the measuring element 206y2. The segment mapping component 214a may segment the measurement range of the set of angle sensing elements 202 to generate a plurality of segments.
Each segment may be associated with a range of angles corresponding to the measurement range of the set of angle sensing elements 202. In some implementations, the segment mapping component 214a segments the measurement range associated with the sensing elements 202 based on detecting a zero crossing of the x-component signal (e.g., x₁ = 0), a zero crossing of the y-component signal (e.g., y₁ = 0), and one or more points corresponding to the absolute value of the x-component signal being equal to the absolute value of the y-component signal. For example, as shown in FIG. 2B, the segment mapping component 214a may segment the 360° measurement range of the sensing elements 202 into a first 45° segment based on detecting a zero crossing of the y-component signal at point 220 and a point corresponding to the absolute value of the x-component signal being equal to the absolute value of the y-component signal at point 224. The segment mapping component 214a may segment the 360° measurement range of the sensing elements 202 into a second 45° segment based on detecting the point corresponding to the absolute value of the x-component signal being equal to the absolute value of the y-component signal at point 224 and a zero crossing of the x-component signal at point 228. The segment mapping component 214a may continue in a similar manner to segment the 360° measurement range of the sensing elements 202 into a series of 45° segments.

The segment mapping component 214b may segment the measurement range of the set of angle sensing elements 204 to generate a plurality of segments. Each segment may be associated with a range of angles corresponding to the measurement range of the set of angle sensing elements 204. In some implementations, the segment mapping component 214b segments the measurement range associated with the sensing elements 204 based on detecting a zero crossing of the x-component signal (e.g., x₂ = 0), a zero crossing of the y-component signal (e.g., y₂ = 0), and one or more points corresponding to the absolute value of the x-component signal being equal to the absolute value of the y-component signal. For example, as shown in FIG. 2B, the segment mapping component 214b may segment the 180° measurement range of the sensing elements 204 into a first 22.5° segment based on detecting a zero crossing of the y-component signal at point 220 and a point corresponding to the absolute value of the x-component signal being equal to the absolute value of the y-component signal at point 222. The segment mapping component 214b may segment the 180° measurement range of the sensing elements 204 into a second 22.5° segment based on detecting the point corresponding to the absolute value of the x-component signal being equal to the absolute value of the y-component signal at point 222 and a zero crossing of the x-component signal at point 226. The segment mapping component 214b may continue in a similar manner to segment the 180° measurement range of the sensing elements 204 into a series of 22.5° segments.

The segment mapping component 214a may determine a segment that includes the x-component value x₁ and the y-component value y₁. The segment mapping component 214a may determine the segment based on a relationship between the x-component value x₁ and the y-component value y₁ (the full set of comparisons is enumerated below and illustrated in the sketch following this enumeration). The segment mapping component 214a may determine that the x-component value x₁ and the y-component value y₁ are included in a first segment corresponding to 0° through 45° when the x-component value x₁ is greater than the y-component value y₁ and the y-component value y₁ is greater than zero.
The segment mapping component 214a may determine that the x-component value x₁ and the y-component value y₁ are included in a second segment corresponding to 45° through 90° when the y-component value y₁ is greater than the x-component value x₁ and the x-component value x₁ is greater than zero. The segment mapping component 214a may determine that the x-component value x₁ and the y-component value y₁ are included in a third segment corresponding to 90° through 135° when the y-component value y₁ is greater than the negative of the x-component value x₁ (e.g., y₁ > −x₁) and the negative of the x-component value x₁ is greater than zero. The segment mapping component 214a may determine that the x-component value x₁ and the y-component value y₁ are included in a fourth segment corresponding to 135° through 180° when the negative of the x-component value x₁ is greater than the y-component value y₁ and the y-component value y₁ is greater than zero. The segment mapping component 214a may determine that the x-component value x₁ and the y-component value y₁ are included in a fifth segment corresponding to 180° through 225° when the y-component value y₁ is less than zero and the x-component value x₁ is less than the y-component value y₁. The segment mapping component 214a may determine that the x-component value x₁ and the y-component value y₁ are included in a sixth segment corresponding to 225° through 270° when the x-component value x₁ is less than zero and the y-component value y₁ is less than the x-component value x₁. The segment mapping component 214a may determine that the x-component value x₁ and the y-component value y₁ are included in a seventh segment corresponding to 270° through 315° when the negative of the y-component value y₁ is greater than the x-component value x₁ and the x-component value x₁ is greater than zero. The segment mapping component 214a may determine that the x-component value x₁ and the y-component value y₁ are included in an eighth segment corresponding to 315° through 360° when the x-component value x₁ is greater than the negative of the y-component value y₁ and the negative of the y-component value y₁ is greater than zero.

The segment mapping component 214b may determine a segment that includes the x-component value x₂ and the y-component value y₂. The segment mapping component 214b may determine the segment based on a relationship between the x-component value x₂ and the y-component value y₂. The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in a first segment corresponding to 0° through 22.5° when the x-component value x₂ is greater than the y-component value y₂ and the y-component value y₂ is greater than zero. The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in a second segment corresponding to 22.5° through 45° when the y-component value y₂ is greater than the x-component value x₂ and the x-component value x₂ is greater than zero. The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in a third segment corresponding to 45° through 67.5° when the y-component value y₂ is greater than the negative of the x-component value x₂ and the negative of the x-component value x₂ is greater than zero.
The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in a fourth segment corresponding to 67.5° through 90° when the negative of the x-component value x₂ is greater than the y-component value y₂ and the y-component value y₂ is greater than zero. The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in a fifth segment corresponding to 90° through 112.5° when the y-component value y₂ is less than zero and the x-component value x₂ is less than the y-component value y₂. The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in a sixth segment corresponding to 112.5° through 135° when the x-component value x₂ is less than zero and the y-component value y₂ is less than the x-component value x₂. The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in a seventh segment corresponding to 135° through 157.5° when the negative of the y-component value y₂ is greater than the x-component value x₂ and the x-component value x₂ is greater than zero. The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in an eighth segment corresponding to 157.5° through 180° when the x-component value x₂ is greater than the negative of the y-component value y₂ and the negative of the y-component value y₂ is greater than zero. The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in a ninth segment corresponding to 180° through 202.5° when the x-component value x₂ is greater than the y-component value y₂ and the y-component value y₂ is greater than zero. The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in a tenth segment corresponding to 202.5° through 225° when the y-component value y₂ is greater than the x-component value x₂ and the x-component value x₂ is greater than zero. The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in an eleventh segment corresponding to 225° through 247.5° when the y-component value y₂ is greater than the negative of the x-component value x₂ and the negative of the x-component value x₂ is greater than zero. The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in a twelfth segment corresponding to 247.5° through 270° when the negative of the x-component value x₂ is greater than the y-component value y₂ and the y-component value y₂ is greater than zero. The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in a thirteenth segment corresponding to 270° through 292.5° when the y-component value y₂ is less than zero and the x-component value x₂ is less than the y-component value y₂. The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in a fourteenth segment corresponding to 292.5° through 315° when the x-component value x₂ is less than zero and the y-component value y₂ is less than the x-component value x₂.
The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in a fifteenth segment corresponding to 315° through 337.5° when the negative of the y-component value y₂ is greater than the x-component value x₂ and the x-component value x₂ is greater than zero. The segment mapping component 214b may determine that the x-component value x₂ and the y-component value y₂ are included in a sixteenth segment corresponding to 337.5° through 360° when the x-component value x₂ is greater than the negative of the y-component value y₂ and the negative of the y-component value y₂ is greater than zero.

The segment mapping components 214a/214b may provide one or more signals indicating the determined segments to the segment comparison component 216. The segment comparison component 216 may determine whether the segment determined by the segment mapping component 214b is included within the segment determined by the segment mapping component 214a. For example, the segment comparison component 216 may determine whether a range of angles associated with the segment determined by the segment mapping component 214b is included within a range of angles associated with the segment determined by the segment mapping component 214a. The segment comparison component 216 may provide one or more signals indicating a positive result of the comparison to the digital output component 110 when the segment determined by the segment mapping component 214b is included within the segment determined by the segment mapping component 214a. In some implementations, the segment comparison component 216 may provide one or more signals indicating a negative result of the comparison to the digital output component 110 when the segment determined by the segment mapping component 214b is not included within the segment determined by the segment mapping component 214a. In some implementations, the segment comparison component 216 may perform one or more additional safety checks when the segment determined by the segment mapping component 214b is not included within the segment determined by the segment mapping component 214a.

In some implementations, the borders of the segments for the 180° measurement range and the borders of the segments for the 360° measurement range may align every 45°. The alignment of the borders and minor inaccuracies may cause the segment determined by the segment mapping component 214b not to be included within the segment determined by the segment mapping component 214a when the x-component values and/or the y-component values are close to (e.g., within a threshold number of degrees of) aligned borders. For example, as shown in FIG. 2C, the x-component value x₁ and the y-component value y₁ are adjacent to the 90° border of the second segment of the 360° measurement range, and the x-component value x₂ and the y-component value y₂ are adjacent to the 90° border of the fifth segment of the 180° measurement range. The segment comparison component 216 may perform a safety check to determine whether the segment determined by the segment mapping component 214b is adjacent to the segment determined by the segment mapping component 214a. The segment comparison component 216 may provide one or more signals indicating a positive result of the comparison to the digital output component 110 when the segment determined by the segment mapping component 214b is adjacent to the segment determined by the segment mapping component 214a.
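To make the mapping and comparison concrete, the following sketch maps sensor values to 45° segments using only the sign and magnitude comparisons enumerated above, and then performs the subset check; treating the second path's segment as ambiguous by 180° (and accepting either candidate) is an illustrative assumption:

```python
def segment_360(x: float, y: float) -> int:
    """Map (x, y) to one of eight 45-degree segments (index 0..7) of the
    360-degree range without computing an arctangent."""
    if y > 0 and x > y:   return 0  # 0-45 deg:    x > y > 0
    if x > 0 and y >= x:  return 1  # 45-90 deg:   y > x > 0
    if x <= 0 and y > -x: return 2  # 90-135 deg:  y > -x > 0
    if y > 0 and -x >= y: return 3  # 135-180 deg: -x > y > 0
    if y <= 0 and x < y:  return 4  # 180-225 deg: x < y < 0
    if x < 0 and y <= x:  return 5  # 225-270 deg: y < x < 0
    if x > 0 and -y >= x: return 6  # 270-315 deg: -y > x > 0
    return 7                        # 315-360 deg: x > -y > 0

def segments_consistent(seg_a: int, seg_b: int) -> bool:
    """seg_a: 45-degree segment (0..7) from the 360-degree path.
    seg_b: 22.5-degree segment (0..15) from the 180-degree path, whose
    conditions repeat every 180 degrees, so seg_b and seg_b + 8 are both
    candidates. Each 45-degree segment k contains the 22.5-degree segments
    2k and 2k + 1, so the check passes if either candidate matches."""
    return any(c // 2 == seg_a for c in (seg_b, (seg_b + 8) % 16))
```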
In some implementations, the segment comparison component 216 may determine whether the common border of the identified segments is a multiple of 90° (e.g., 0°, 90°, 180°, 270°, or 360°). For example, as shown in FIG. 2C, the x-components and the y-components may be adjacent to the 90° border. The segment comparison component 216 may perform an additional safety check based on the common border of the identified segments being a multiple of 90°. In some implementations, the segment comparison component 216 may determine whether the x-component value x₁ and the y-component value y₁ are close to the common border (e.g., within a particular quantity of degrees). The segment comparison component 216 may determine that the x-component value x₁ and the y-component value y₁ are close to the common border when the one of the absolute values of the x-component value x₁ and the y-component value y₁ that is closer to zero is smaller than the other by a factor m. The factor m may be a multiple of 2 within the range of 2 through 64. A higher value of the factor m may provide a higher accuracy relative to a lower value of the factor m. In some implementations, the common border is at 90° or 270°. The segment comparison component 216 may determine that the x-component value x₁ and the y-component value y₁ are close to the common 90° or 270° border when the absolute value of the x-component value x₁ multiplied by the factor m is less than the absolute value of the y-component value y₁. In some implementations, the common border is at 0°, 180°, or 360°. The segment comparison component 216 may determine that the x-component value x₁ and the y-component value y₁ are close to the common 0°, 180°, or 360° border when the absolute value of the y-component value y₁ multiplied by the factor m is less than the absolute value of the x-component value x₁.

In some implementations, the segment comparison component 216 may determine whether the common border of the identified segments is a multiple of 90° plus 45° (e.g., 45°, 135°, 225°, or 315°). The segment comparison component 216 may perform an additional safety check based on the common border of the identified segments being a multiple of 90° plus 45°. In some implementations, the segment comparison component 216 may determine whether the x-component value x₁ and the y-component value y₁ are close to the common border (e.g., within a particular quantity of degrees). The segment comparison component 216 may determine that the x-component value x₁ and the y-component value y₁ are close to the common border when the absolute value of the x-component value x₁ minus the absolute value of the y-component value y₁ is less than a value r. The value r may be the absolute value of the x-component value x₁ divided by a factor k, the absolute value of the y-component value y₁ divided by the factor k, or the sum of the absolute values of the x-component value x₁ and the y-component value y₁ divided by two times the factor k. The factor k may be a multiple of 2 and may be within the range of 2 through 64. A higher value of the factor k may provide a higher accuracy relative to a lower value of the factor k. In some implementations, the factor k is the same as the factor m. In some implementations, the factor k is different from the factor m.
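A sketch of the two border-closeness tests just described; m = k = 16 are arbitrary picks from the stated range of 2 through 64, and the tolerance r uses one of the variants listed above:

```python
def near_cardinal_border(x: float, y: float, m: int = 16) -> bool:
    """Close to a border at a multiple of 90 degrees: the component closest
    to zero is smaller than the other by at least a factor m. Near 90/270
    degrees the x-component is small; near 0/180/360 the y-component is."""
    return abs(x) * m < abs(y) or abs(y) * m < abs(x)

def near_diagonal_border(x: float, y: float, k: int = 16) -> bool:
    """Close to a border at 45, 135, 225, or 315 degrees: |x| and |y| are
    nearly equal, with tolerance r = (|x| + |y|) / (2 * k)."""
    r = (abs(x) + abs(y)) / (2 * k)
    return abs(abs(x) - abs(y)) < r
```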
In some implementations, the segment comparison component 216 further segments the segments of the 360° measurement range based on the segment identified by the segment mapping component 214b not being within the segment identified by the segment mapping component 214a. For example, as shown in FIG. 2D, the segment comparison component 216 may further segment each 45° segment into three additional segments. The segment comparison component 216 may determine whether the x-component value x₂ and the y-component value y₂ are in a segment within, and/or adjacent to, an additional segment that includes the x-component value x₁ and the y-component value y₁. In some implementations, the segment comparison component 216 may determine whether the x-component value x₂ and the y-component value y₂ are in a segment within, and/or adjacent to, an additional segment that includes the x-component value x₁ and the y-component value y₁ in a manner similar to that described above.

The number and arrangement of elements shown in FIGS. 2A-2D are provided as an example. In practice, there may be additional elements, fewer elements, different elements, or differently arranged elements than those shown in FIGS. 2A-2D.

FIG. 3 is a diagram illustrating example hardware elements of the angle sensor 102. As shown, the angle sensor 102 may include sensing elements 310 (e.g., comprising at least two sets of elements), an ADC 320, a DSP 330, a memory element 340, and/or a digital interface 350. Sensing element 310 includes an element for sensing a magnetic field present at the sensing element 310. For example, sensing element 310 may include one or more Hall-based sensing elements that operate based on a Hall effect. As another example, sensing element 310 may include one or more magnetoresistive (MR) based sensing elements, where the electrical resistance of the magnetoresistive material may depend on a strength and/or a direction of the magnetic field present at the magnetoresistive material. Here, sensing element 310 may operate based on an anisotropic magnetoresistance (AMR) effect, a giant magnetoresistance (GMR) effect, a tunnel magnetoresistance (TMR) effect, and/or the like. As an additional example, sensing element 310 may include one or more variable reluctance (VR) based sensing elements that operate based on induction. In some implementations, a set of angle sensing elements 202 (e.g., the sensing element 202x and the sensing element 202y) and/or a set of angle sensing elements 204 (e.g., the sensing element 204x and the sensing element 204y) comprise one or more sensing elements 310. ADC 320 includes one or more analog-to-digital converters that convert analog signals from the sensing elements 310 to digital signals. For example, ADC 320 may convert an analog signal received from a set of angle sensing elements 310 to a digital signal to be processed by DSP 330. In some implementations, ADC 320 may provide a digital signal to DSP 330.
In some implementations, the angle sensor 102 may include one or more ADCs 320. DSP 330 may include a digital signal processing device or a collection of digital signal processing devices. In some implementations, DSP 330 may receive digital signals from ADC 320 and may process the digital signals in association with selective performance of one or more safety checks, as described herein. In some implementations, DSP 330 may process the digital signals in order to form output signals, such as output signals associated with an angular position of a target object. Memory element 340 includes a read-only memory (ROM) (e.g., an EEPROM), a random access memory (RAM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, an optical memory, etc.) that stores information and/or instructions for use by the angle sensor 102, as described herein. In some implementations, memory element 340 may store information associated with processing performed by DSP 330. Additionally, or alternatively, memory element 340 may store configurational values or parameters for sensing element 310 and/or information for one or more other elements of the angle sensor 102, such as ADC 320 or digital interface 350. Digital interface 350 may include an interface via which the angle sensor 102 may receive and/or provide information from and/or to another device, such as the controller 112. For example, digital interface 350 may provide the output signal determined by DSP 330 to the controller 112 and may receive information from the controller 112.

The number and arrangement of elements shown in FIG. 3 are provided as an example. In practice, there may be additional elements, fewer elements, different elements, or differently arranged elements than those shown in FIG. 3. For example, the angle sensor 102 may include one or more elements not shown in FIG. 3, such as a clock, an analog regulator, a digital regulator, a protection element, a temperature sensor, a stress sensor, and/or the like.

FIG. 4 is a flowchart of an example process 400 associated with a safety mechanism for angle sensors using segmentation. In some implementations, one or more process blocks of FIG. 4 may be performed by an angle sensor (e.g., the sensor 102). In some implementations, one or more process blocks of FIG. 4 may be performed by another device or a group of devices separate from or including the angle sensor, such as a controller (e.g., the controller 112). As shown in FIG. 4, process 400 may include receiving a first x-component value and a first y-component value from a first set of sensing elements (block 410). For example, the angle sensor may receive a first x-component value and a first y-component value from a first set of sensing elements, as described above. As further shown in FIG. 4, process 400 may include receiving a second x-component value and a second y-component value from a second set of angle sensing elements (block 420). For example, the angle sensor may receive a second x-component value and a second y-component value from a second set of angle sensing elements, as described above. As further shown in FIG. 4, process 400 may include performing a safety check including determining a first range of angles associated with a target object based on a relationship between a magnitude of the first x-component value and a magnitude of the first y-component value (block 430).
For example, the angle sensor may perform a safety check including determining a first range of angles associated with a target object based on a relationship between a magnitude of the first x-component value and a magnitude of the first y-component value, as described above. As further shown in FIG. 4, process 400 may include determining a second range of angles associated with the target object based on a relationship between a magnitude of the second x-component value and a magnitude of the second y-component value (block 440). For example, the angle sensor may determine a second range of angles associated with the target object based on a relationship between a magnitude of the second x-component value and a magnitude of the second y-component value, as described above. As further shown in FIG. 4, process 400 may include determining whether the second range of angles is a subset of the first range of angles (block 450). For example, the angle sensor may determine whether the second range of angles is a subset of the first range of angles, as described above. As further shown in FIG. 4, process 400 may include outputting an indication of a result of performing the safety check based on whether the second range of angles is a subset of the first range of angles (block 460). For example, the angle sensor may output an indication of a result of performing the safety check based on whether the second range of angles is a subset of the first range of angles, as described above.

Process 400 may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein. In a first implementation, determining the first range of angles comprises determining that the first range of angles is a range of angles from zero degrees to forty-five degrees when the first x-component value is greater than the first y-component value and the first y-component value is greater than zero, determining that the first range of angles is a range of angles from forty-five degrees to ninety degrees when the first y-component value is greater than the first x-component value and the first x-component value is greater than zero, determining that the first range of angles is a range of angles from ninety degrees to 135 degrees when the first y-component value is greater than a negative of the first x-component value and the negative of the first x-component value is greater than zero, determining that the first range of angles is a range of angles from 135 degrees to 180 degrees when the negative of the first x-component value is greater than the first y-component value and the first y-component value is greater than zero, determining that the first range of angles is a range of angles from 180 degrees to 225 degrees when the first y-component value is less than zero and the first x-component value is less than the first y-component value, determining that the first range of angles is a range of angles from 225 degrees to 270 degrees when the first x-component value is less than zero and the first y-component value is less than the first x-component value, determining that the first range of angles is a range of angles from 270 degrees to 315 degrees when the negative of the first y-component value is greater than the first x-component value and the first x-component value is greater than zero, and determining that the first range of angles is a range of angles from 315 degrees to 360 degrees when the
first x-component value is greater than the negative of the first y-component value and the negative of the first y-component value is greater than zero. In a second implementation, alone or in combination with the first implementation, determining the second range of angles comprises determining that the second range of angles is a range of angles from zero degrees to 22.5 degrees when the second x-component value is greater than the second y-component value and the second y-component value is greater than zero, determining that the second range of angles is a range of angles from 22.5 degrees to forty-five degrees when the second y-component value is greater than the second x-component value and the second x-component value is greater than zero, determining that the second range of angles is a range of angles from forty-five degrees to 67.5 degrees when the second y-component value is greater than a negative of the second x-component value and the negative of the second x-component value is greater than zero, determining that the second range of angles is a range of angles from 67.5 degrees to ninety degrees when the negative of the second x-component value is greater than the second y-component value and the second y-component value is greater than zero, determining that the second range of angles is a range of angles from ninety degrees to 112.5 degrees when the second y-component value is less than zero and the second x-component value is less than the second y-component value, determining that the second range of angles is a range of angles from 112.5 degrees to 135 degrees when the second x-component value is less than zero and the second y-component value is less than the second x-component value, determining that the second range of angles is a range of angles from 135 degrees to 157.5 degrees when the negative of the second y-component value is greater than the second x-component value and the second x-component value is greater than zero, and determining that the second range of angles is a range of angles from 157.5 degrees to 180 degrees when the second x-component value is greater than the negative of the second y-component value and the negative of the second y-component value is greater than zero. 
In a third implementation, alone or in combination with one or more of the first and second implementations, determining the second range of angles comprises determining that the second range of angles is a range of angles from 180 degrees to 202.5 degrees when the second x-component value is greater than the second y-component value and the second y-component value is greater than zero, determining that the second range of angles is a range of angles from 202.5 degrees to 225 degrees when the second y-component value is greater than the second x-component value and the second x-component value is greater than zero, determining that the second range of angles is a range of angles from 225 degrees to 247.5 degrees when the second y-component value is greater than a negative of the second x-component value and the negative of the second x-component value is greater than zero, determining that the second range of angles is a range of angles from 247.5 degrees to 270 degrees when the negative of the second x-component value is greater than the second y-component value and the second y-component value is greater than zero, determining that the second range of angles is a range of angles from 270 degrees to 292.5 degrees when the second y-component value is less than zero and the second x-component value is less than the second y-component value, determining that the second range of angles is a range of angles from 292.5 degrees to 315 degrees when the second x-component value is less than zero and the second y-component value is less than the second x-component value, determining that the second range of angles is a range of angles from 315 degrees to 337.5 degrees when the negative of the second y-component value is greater than the second x-component value and the second x-component value is greater than zero, and determining that the second range of angles is a range of angles from 337.5 degrees to 360 degrees when the second x-component value is greater than the negative of the second y-component value and the negative of the second y-component value is greater than zero. In a fourth implementation, alone or in combination with one or more of the first through third implementations, performing the safety check further includes determining that a largest angle included in the first range of angles is equal to a smallest angle included in the second range of angles, wherein the indication of the result includes an indication of a positive result of performing the safety check based on the largest angle included in the first range of angles being equal to the smallest angle included in the second range of angles. In a fifth implementation, alone or in combination with one or more of the first through fourth implementations, performing the safety check further includes determining that a smallest angle included in the first range of angles is equal to a largest angle included in the second range of angles, wherein the indication of the result includes an indication of a positive result of performing the safety check based on the smallest angle included in the first range of angles being equal to the largest angle included in the second range of angles. 
In a sixth implementation, alone or in combination with one or more of the first through fifth implementations, performing the safety check further includes determining that the first range of angles includes ninety degrees or 270 degrees, and determining whether the first x-component value, multiplied by a first factor, is less than the first y-component value, wherein the indication of the result includes an indication of a positive result of performing the safety check when the first x-component value, multiplied by the first factor, is less than the first y-component value. In a seventh implementation, alone or in combination with one or more of the first through sixth implementations, performing the safety check further includes determining that the first range of angles includes zero degrees, 180 degrees, or 360 degrees, and determining whether the first y-component value, multiplied by a first factor, is less than the first x-component value, wherein the indication of the result includes an indication of a positive result of performing the safety check when the first y-component value, multiplied by the first factor, is less than the first x-component value. In an eighth implementation, alone or in combination with one or more of the first through seventh implementations, performing the safety check further includes determining that the first range of angles includes 45 degrees or 135 degrees, and determining whether a difference between an absolute value of the first x-component value and an absolute value of the first y-component value satisfies a threshold, wherein the indication of the result includes an indication of a positive result of performing the safety check when the difference between the absolute value of the first x-component value and the absolute value of the first y-component value satisfies the threshold.

Although FIG. 4 shows example blocks of process 400, in some implementations, process 400 may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in FIG. 4. Additionally, or alternatively, two or more of the blocks of process 400 may be performed in parallel.

The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise forms disclosed. Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the implementations. As used herein, the term "component" is intended to be broadly construed as hardware, firmware, and/or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code, it being understood that software and hardware can be designed to implement the systems and/or methods based on the description herein. As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like.
Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiple of the same item. No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, or a combination of related and unrelated items), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”). | 73,099 |
11859972 | DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS An exemplary embodiment of a method and device for providing a microresonator frequency comb platform is described below with respect toFIGS.1-9. Those skilled in the art will appreciate that the steps and devices described are for exemplary purposes only and are not limited to the specific processes or devices described. Overview Achieving microresonator combs with a small line spacing may be desirable for OCT applications. In OCT, a spectrometer with a finite spectral resolution is generally used to sample the interferograms, which are Fourier transformed to reconstruct images. If the comb lines are too sparse, the Fourier-transformed image quality deteriorates and the imaging range is reduced due to spectrometer resampling requirements. Generating frequency combs with a small line spacing is thus desired for OCT. However, it is challenging to achieve small line spacing due to the large size of the resonator required and the consequently increased parametric oscillation threshold. Also, the presence of mode crossings makes it difficult to achieve a small line spacing while maintaining a smooth spectrum. The present disclosure may address one or more shortcomings of the prior art. As described, a microresonator platform may be configured to generate frequency combs with a broad bandwidth of 110 nm and a small line spacing of 38 GHz around the long-wavelength OCT imaging window of 1300 nm, based on high-Q silicon nitride resonators. Other configurations may be used. In certain aspects, favorable results have been achieved using silicon nitride as a material platform to generate broadband micro-resonator frequency combs. Silicon nitride combines the beneficial properties of a wide transparency range covering the entire OCT imaging window, a high nonlinear refractive index (n₂ = 2.4 × 10⁻¹⁹ m²/W), and CMOS compatibility. A majority of the research in silicon nitride broadband frequency combs has been based on devices with a line spacing of 200 GHz or 1 THz, which is too large for many applications including OCT. Other materials besides silicon nitride may be used. For example, other materials such as silica, silicon, aluminum nitride, crystalline fluorides, diamond, AlGaAs, and the like may be used to create the microresonator frequency comb platform. Additionally, most microresonator combs have been generated with a pump wavelength around 1550 nm, where water absorption is very high, making such combs unsuitable for OCT of biological samples. It will be appreciated by those skilled in the art that the microresonator-based frequency comb platform described herein may also be used to generate frequency combs at 800 nm and 1700 nm, in addition to 1550 nm, to cover the full range of OCT operating wavelengths that have the potential to be used widely to replace the current light sources for OCT and to provide better resolution. In certain aspects, ultra-high-Q resonators may be achieved by minimizing surface roughness of Si3N4, which not only mitigates undesirable mode crossings but also enables small-line-spacing combs. As an illustrative example, Q may depend on the line spacing. For large line spacing (200 GHz), the present disclosure may achieve Q up to 37 million (loss less than 1 dB/m). For small line spacing (38 GHz) (e.g., using SiN), example results have been a Q of about 8 million (loss of about 3 dB/m).
As used herein, ultra-high-Q may comprise a Q greater than 800,000, 900,000, 1 million, 2 million, 3 million, 4 million, 5 million, 6 million, 7 million, or 8 million (or intervening end points) using a small line spacing of 38 GHz; with a large line spacing (200 GHz), losses of less than 3 dB/m, less than 2 dB/m, less than 1 dB/m, or intervening end points may be achieved. Other Q factors may be achieved. In addition, traditional frequency combs are generated based on tabletop ultrafast mode-locked lasers. Microresonators, on the other hand, offer the unique possibility of chip-scale comb sources pumped by stable continuous-wave lasers. Usually these pump lasers are low-noise tunable external cavity diode lasers (ECDLs), which are very expensive, bulky and cumbersome. Due to the low coherence requirement in OCT, the inventors avoid working in the phase-locked states, which makes it possible to use a simple, inexpensive DFB laser to pump the rings. The waveguide cross section is designed to be 1500 nm×730 nm to achieve anomalous dispersion at the pump wavelength. Also, using integrated micro-heaters on top of the devices enables control of the cavity resonance by temperature tuning, which enables using a fixed-wavelength pump laser to generate frequency combs. The platform not only significantly reduces the cost and size of the setups, but also reduces the laser power to a safe range for clinical applications. Real-time spectral-domain (SD) OCT imaging using on-chip frequency combs is demonstrated with a measured axial resolution of 18 μm at 1312 nm, which is better than that achievable with a commercial single superluminescent diode source. The interferogram generated based on the frequency comb source is recorded by a spectrometer. In an exemplary setup of a microresonator frequency comb platform, with the frequency comb source, the axial resolution, measured by the full-width half maximum (FWHM) of the axial point spread function (PSF), is around 18 μm, which matches well with the theoretical estimation of 16.3 μm. This resolution is better than that of a single external commercial SLD, which has a measured axial resolution of 24 μm. Moreover, the frequency combs described herein have the potential to reach resolution down to the sub-micrometer scale where the pump power needed can be in a clinically safe regime. No other known platform can provide both of these features. The sensitivity of the OCT system may be defined by the minimal sample reflectivity at which the signal-to-noise ratio reaches unity. In certain aspects, it is measured to be 100 dB at an A-line rate of 28 kHz for the OCT source. The OCT images are reconstructed from the raw spectral data generated by the system following standard OCT signal processing steps, including background subtraction, linear-k interpolation, apodization, and dispersion compensation. In other aspects, a platform based on chip-scale lithographically-defined microresonators breaks the tradeoff between bandwidth and output power of conventional SLD sources and could enable OCT with deep tissue penetration and sub-micrometer axial resolution. These microresonators may be fabricated using traditional microelectronic processes. When optically pumped with a single continuous-wave laser source they can generate broadband frequency combs, comprising discrete lines with a frequency spacing determined by the geometry of the resonator.
Such frequency combs have been demonstrated in numerous chip-scale platforms including silica, silicon, silicon nitride, aluminum nitride, crystalline fluorides, diamond and AlGaAs. The parametric gain in these photonic structures enables ultra-broad optical bandwidths (up to an octave) in contrast to traditional gain materials and is not limited by the gain-bandwidth tradeoff. The platforms of the present disclosure may be based on ultra-low-loss silicon nitride resonator-generated frequency combs with a small line spacing (38 GHz) which is compatible with current OCT spectrometers.FIG.1ashows the 3D schematics of the resonator andFIG.1bshows the fabricated on-chip resonator. Silicon nitride has been used extensively to generate broadband microresonator frequency combs. It combines the beneficial properties of a wide transparency range covering the entire OCT imaging window, a high nonlinear refractive index (n₂ = 2.4 × 10⁻¹⁹ m²/W), and semiconductor mass manufacturing compatibility. The resonators may be configured with a large cavity length L=1.9 mm so that the frequency comb has a line spacing (0.21 nm) comparable to the spectral sampling interval of the OCT spectrometer (approximately 0.17 nm): δλ = λ₀²/(2·ng·L),   (1) where λ₀ is the center wavelength, δλ is the spectral sampling interval, and ng is the group index. In order to achieve the low pump threshold Pth given by equation (2), Pth ≈ 1.54·(π/2)·(Qc/(2·QL))·(n²·L·A)/(n₂·λ·QL²),   (2) where λ is the pump wavelength, n and n₂ are the linear and nonlinear refractive indices, and A is the cross-sectional modal area, resonators may be used with ultra-high coupling and loaded quality factors, Qc and QL, respectively, to demonstrate frequency combs with a 38-GHz frequency spacing generated using optical pump powers as low as 117 mW. In order to generate a comb with a flat top and adjustable output power, ideal for OCT imaging, the comb generation process may be configured, by tuning the cavity resonance relative to the pump frequency using a microheater co-fabricated with the resonator, not to induce soliton states, whose characteristic hyperbolic secant spectrum limits the achievable FWHM.FIG.2ashows a generated frequency comb spectrum with a FWHM of 47 nm, using a ring resonator based on waveguides with a 730×1500 nm cross section. This FWHM corresponds to a predicted axial resolution of 16.3 μm (in air) in good agreement with our measured FWHM of the axial point spread function (PSF) of 18 μm (in air). The inset shows the small line spacing of the frequency combs. To show our ability to tailor the spectral shape of these combs by tailoring the dispersion in these resonators,FIG.2bshows another generated frequency comb spectrum with a FWHM of 92 nm, using a ring resonator based on waveguides with slightly different geometries. This FWHM corresponds to a predicted axial resolution of 7.9 μm (in air), which is comparable with the resolution achievable using a single state-of-the-art SLD.FIG.2cshows the spectrum of a single commercial SLD with a FWHM of 28 nm and a measured PSF FWHM of 24 μm (in air). In principle, one can multiplex the comb sources in the same way that multiplexing of SLDs is done. However, other multiplexing methods may be used. Using the microresonator platform, OCT images of human tissue may be acquired with chip-based frequency combs, showing that the platform is compatible with a standard commercial SD-OCT system.
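For concreteness, equation (1) and the axial-resolution figure quoted above can be checked numerically, as in the short sketch below. The cavity length, center wavelength, and comb FWHM are the values stated in the text; the group index is an assumed representative value for this waveguide geometry, and the final line uses the standard low-coherence relation (2·ln2/π)·λ₀²/Δλ, which the disclosure does not write out explicitly.

```python
import math

lam0 = 1312e-9   # center wavelength (m), from the text
L = 1.9e-3       # cavity length (m), from the text
n_g = 2.2        # group index: an assumed representative value

# Equation (1): comb line spacing / spectral sampling interval.
delta_lam = lam0**2 / (2 * n_g * L)
print(f"line spacing ~ {delta_lam * 1e9:.2f} nm")   # ~0.21 nm

# Axial resolution implied by the 47-nm comb FWHM (standard relation).
fwhm = 47e-9
z_res = (2 * math.log(2) / math.pi) * lam0**2 / fwhm
print(f"axial resolution ~ {z_res * 1e6:.1f} um")   # ~16.2 um, cf. 16.3 um
```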
These human tissue images were achieved using a standard SD-OCT system (Thorlabs Telesto I), where the SLD was simply replaced by the chip-based frequency comb. Since the system is not optimized for our combs, the imaging capability shown represents a lower bound.FIGS.3-4show ex vivo OCT images of human breast and coronary artery samples imaged with our microresonator frequency comb source using a commercial SD-OCT system. The human breast tissue was obtained from the Columbia University Tissue Bank, and the human heart was obtained via the National Disease Research Interchange.FIG.3compares images recorded using our microresonator frequency comb and a commercial SLD which has a bandwidth similar to that of the generated combs. The Hematoxylin and Eosin (H&E) stained histology is provided as the reference for both the breast and coronary artery tissue structures. Different tissue types, including stromal tissue, adipose tissue and milk duct, are delineated in both B-scans by comparing with the corresponding histology analysis.FIG.4ashows a stitched frequency-comb-based OCT image of a human left anterior descending artery (LAD) in comparison with the H&E histology inFIG.4b. OCT B-scans were stitched using the method previously used in cervical imaging. In the red inset, a gradually decreasing trend of backscattering can be visualized within the transition region from a fibrous region to the media. The blue inset inFIG.4reveals a typical pattern of a fibrocalcific plaque, where a layer of signal-rich fibrous cap is on the top of calcium, a signal-poor region with a sharply delineated border. Importantly, overlying the fibrocalcific plaque region, a transition can be seen from a dense fibrous cap, characteristic of a stable plaque structure, to a thinner fibrous cap, characteristic of an unstable plaque structure, the latter of which is highly correlated with acute coronary syndrome and acute myocardial infarction.FIG.4indicates the potential to visualize critical features in human coronary arteries by integrating the chip-based frequency combs into an OCT system. OCT based on microresonator frequency combs has the capability of achieving resolution below 1 μm simply by engineering the waveguide dimensions of the resonator. In order to tune the waveguide dispersion and achieve wide spectral combs, the geometry of the waveguide may be configured to compensate for higher-order waveguide dispersion effects. A flat, uniform spectrum with an even broader span of several hundred nanometers could be generated from a single frequency comb (a simulation is shown inFIG.5). InFIG.6, combs generated around 1600 nm with a FWHM of 154 nm, achieved using such compensation, are shown. The efficiency of these combs can be up to 30%, enabling simultaneous high output power and broad bandwidth, in contrast to the traditional SLD sources which suffer from the power-bandwidth tradeoff. In conclusion, a microresonator frequency comb platform is shown that has the potential to pave the way for UHR-OCT in clinical settings. The frequency comb is generated using high-Q silicon nitride resonators. The capability of frequency comb OCT imaging is illustrated by comparing with the histology analysis. The platform has the potential to be cost-effective. In order to generate these frequency combs, a pump source may be used based on a low-cost distributed feedback (DFB) laser in contrast to the high-power, high-stability wavelength-tunable sources usually required for generating phase-locked frequency combs.
The integration of such a laser with our microresonator platform could enable inexpensive sources for clinically safe OCT and allow for miniaturization of OCT systems. Device Fabrication Starting from a silicon wafer, a 4-μm-thick oxide layer is grown for the bottom cladding. Silicon nitride (Si3N4) is deposited using low-pressure chemical vapor deposition (LPCVD) in steps. After Si3N4deposition, a silicon dioxide (SiO2) hard mask may be deposited using plasma enhanced chemical vapor deposition (PECVD). Patterning may be accomplished using JEOL 9500 electron beam lithography. Ma-N 2403 electron-beam resist is used to write the pattern and the nitride film is etched in an inductively coupled plasma reactive ion etcher (ICP RIE) using a combination of CHF3, N2, and O2gases. After stripping the resist and oxide mask, the devices may be annealed at 1200° C. in an argon atmosphere for 3 hours to remove residual N—H bonds in the Si3N4film. The devices may be clad with 500 nm of high temperature silicon dioxide (HTO), deposited at 800° C., and followed by 2.5 μm of SiO2using PECVD. Chemical-mechanical polishing (CMP) and multipass lithography techniques can be applied to further reduce sidewall scattering losses. Above the waveguide cladding, integrated microheaters may be fabricated by sputtering platinum and using a lift-off approach. Micro-heaters may be integrated on our device to control the cavity resonance by temperature tuning, which enables the use of a simple compact single-frequency pump laser diode to generate frequency combs. Measurements As the presence of the pump within the comb spectrum limits the dynamic range of the detection, a filtering setup may be used based on a free-space grating and pinhole to fully attenuate the pump power. The setup is shown inFIG.7. This filtering setup can be replaced by a customized fiber-based filter to miniaturize the size of the setup in the future. The output power of our optical frequency comb source may be adjusted to be similar to the output power of the SLD being used. The comb source may be coupled directly into a commercial system (Thorlabs Telesto I) to acquire images. The schematic of the OCT system is shown inFIG.8. An optical circulator with an isolation of −40 dB is added to protect the commercial console. The incident light from the comb source is routed to the Michelson interferometer, and the backscattered signals from both interferometer arms are directed back to the spectrometer. Using the frequency combs combined with the commercial SD-OCT system, OCT images may be acquired. The images are reconstructed in real-time from the raw spectral data generated by the system, following standard OCT signal processing steps, including background subtraction, linear-k interpolation, apodization, and dispersion compensation. The acquisition rate is 28 kHz, currently limited by the CCD line rate. The total acquisition time of an image for the SLD and the chip comb images is the same (35 msec). The sensitivity of the OCT system is defined by the minimal sample reflectivity at which the signal-to-noise ratio reaches unity. It is measured to be 100 dB at an A-line rate of 28 kHz for the frequency comb source. The sensitivity can be further increased by suppressing the noise due to the laser-chip coupling via packaging. Ultra-Broad Frequency Comb Spectrum Simulation FIG.5shows a simulated frequency comb spectrum generated from silicon nitride microresonators.
It shows the potential for generating a flat uniform spectrum over several hundred nanometers by engineering the geometry of the waveguide to compensate for higher-order waveguide dispersion effects. Frequency Comb Spectrum Generated Around 1600 nm FIG.6shows a measured frequency comb spectrum generated around 1600 nm with a FWHM of 154 nm. This frequency comb is generated using the same silicon nitride microresonator platform. System Setup FIG.7shows the system setup for comb generation and pump filtering. A DFB laser is amplified and coupled to the Si3N4micro-chip. An amplifier may be used to compensate for coupling loss from the setup. A grating is used to filter out the pump laser before the comb is sent to the OCT system. This filtering setup can be replaced by a customized fiber-based filter to miniaturize the size of the setup in the future. FIG.8shows the schematic of the comb-based OCT setup. Note that the comb source is directly coupled into the commercial system (Thorlabs Telesto I) to acquire images. The optical circulator is added to protect the commercial console. It shows that our platform is compatible with a standard commercial SD-OCT system. A-Line Analysis and Noise Discussion FIG.9shows the A-line signals extracted from OCT B-scan images of a mirror surface, taken with an SLD and the comb source, respectively. From the figure, one can see that the discrete nature of combs does not deteriorate the OCT image quality. The SLD and comb source have the same acquisition rate of 28 kHz (limited by the CCD line rate). One can see that the noise level of the comb source is comparable to that of the SLD, indicating that the extra noise expected in the comb setup from the fiber coupling, known to induce noise due to fiber fluctuations, has a minimal effect on the OCT imaging capabilities. If needed, any extra noise from the coupling can be significantly suppressed by packaging the source with the chip. The techniques disclosed herein demonstrate a microresonator frequency comb platform for OCT applications. Those skilled in the art will appreciate that the techniques described herein may be used for other applications as well. For example, the platform may be used for replacing commercial supercontinuum sources. Such sources use microstructured fibers (such as photonic crystal fibers) for supercontinuum generation, which is a result of a complex interplay of various nonlinear processes, including stimulated Raman scattering, self-phase modulation, four-wave mixing and others. | 19,736 |
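To make the OCT signal processing chain referenced above concrete (background subtraction, linear-k interpolation, apodization, and Fourier transform), a minimal NumPy sketch of single A-line reconstruction follows. The array names and the Hann apodization window are illustrative assumptions, and dispersion compensation is omitted for brevity.

```python
import numpy as np

def reconstruct_a_line(spectrum, background, wavelengths):
    """Sketch of standard SD-OCT A-line reconstruction from one raw
    spectrometer readout; `wavelengths` is assumed ascending."""
    fringes = spectrum - background               # background subtraction
    k = 2.0 * np.pi / wavelengths                 # wavenumber axis (descending)
    k_lin = np.linspace(k[-1], k[0], k.size)      # uniform (linear) k grid
    fringes_k = np.interp(k_lin, k[::-1], fringes[::-1])  # linear-k resampling
    window = np.hanning(k_lin.size)               # apodization
    depth_profile = np.abs(np.fft.fft(fringes_k * window))
    return depth_profile[: k_lin.size // 2]       # single-sided A-line
```

Repeating this per spectrometer readout while scanning the beam yields the B-scan images discussed above.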
11859973 | DESCRIPTION OF EMBODIMENTS CNN regression is a technique that can enable fine-grained global localization and can be used for computer vision system re-localization. Typically, the training data for CNN regression is obtained via a visual simultaneous localization and mapping (VSLAM) technique. However, CNN-regression-based re-localization using VSLAM is not as effective for mapping large-scale structures. First, buildings are normally divided into different parts segmented by “tunnels” (e.g., long corridors with single-color walls), which provide limited visual information. The limited visual input makes the use of VSLAM more difficult. Second, even with sufficient visual information, the VSLAM optimization for large areas is time consuming and memory intensive. Third, the drift error in VSLAM can produce degraded results. Fourth, the large amount of glass in modern buildings can degrade the results generated via VSLAM. Furthermore, the coordinate system of VSLAM differs from human-used maps. Therefore, humans cannot easily use the localization results produced for robots. The techniques described herein resolve the above problems and provide a method of CNN-regression-based re-localization that is suitable for use in large-scale buildings. The techniques described herein also compare favorably to other localization methods such as Lidar, ultra-wideband (UWB), Wi-Fi, and inertial measurement unit (IMU) localization. The techniques described herein are lower cost and have a higher re-localization call-back rate relative to Lidar, have a lower drift error relative to IMU, and do not require the installation of new instruments, as UWB and Wi-Fi do. In one embodiment, CNN regression for interior localization within large-scale structures is performed as follows. First, the interior of a large-scale structure is divided into multiple parts. Next, sufficient visual data of the interior of the structure is gathered to enable camera pose estimation via VSLAM. A point cloud of each part is constructed and matched to a two-dimensional (2D) map. The estimated camera poses associated with the visual data of the structure are transformed into coordinates in the 2D map. CNN regression can then be trained using pairs of visual and camera pose data (a sketch of this training step is given below). The CNN regression can then be used to predict coordinates of newly captured visual data within the 2D map. In addition to service robot re-localization, these techniques have direct application for real-time localization, autonomous navigation, and enhanced navigation and localization functionality for mobile device users. For example, re-localization techniques described herein can be used to enable indoor positioning to assist human navigation. For the purposes of explanation, numerous specific details are set forth to provide a thorough understanding of the various embodiments described below. However, it will be apparent to a skilled practitioner in the art that the embodiments may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles, and to provide a more thorough understanding of embodiments. Although some of the following embodiments are described with reference to a graphics processor, the techniques and teachings described herein may be applied to various types of circuits or semiconductor devices, including general purpose processing devices or graphics processing devices.
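As a minimal sketch of the training step noted above, the following PyTorch-style code regresses 2D map coordinates from images. The ResNet-18 backbone, Adam optimizer, and mean-squared-error loss are assumed illustrative choices and are not mandated by the embodiments.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class PoseRegressor(nn.Module):
    """CNN that regresses 2D map coordinates from an input image."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # assumed backbone choice
        backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # (x, y) on map
        self.backbone = backbone

    def forward(self, images):          # images: (N, 3, H, W) tensor
        return self.backbone(images)    # (N, 2) predicted map coordinates

model = PoseRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(images, map_xy):
    """One optimization step on a batch of (image, map-coordinate) pairs
    produced by the VSLAM-to-2D-map alignment described above."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), map_xy)
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time, a newly captured image is passed through the trained model to predict its coordinates in the 2D map directly.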
Reference herein to “one embodiment” or “an embodiment” indicates that a particular feature, structure, or characteristic described in connection or association with the embodiment can be included in at least one of such embodiments. However, the appearances of the phrase “in one embodiment” in various places in the specification do not necessarily all refer to the same embodiment. In the following description and claims, the terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. “Coupled” is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. “Connected” is used to indicate the establishment of communication between two or more elements that are coupled with each other. Embodiments may be implemented as any one or a combination of: one or more microchips or integrated circuits interconnected using a parent-board, hardwired logic, software stored by a memory device and executed by a microprocessor, firmware, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA). The term “logic” may include, by way of example, software or hardware and/or combinations of software and hardware. Embodiments may be provided, for example, as a computer program product which may include one or more machine-readable media having stored thereon machine-executable instructions that, when executed by one or more machines such as a computer, network of computers, or other electronic devices, may result in the one or more machines carrying out operations in accordance with embodiments described herein. A machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs (Compact Disc-Read Only Memories), and magneto-optical disks, ROMs, RAMs, EPROMs (Erasable Programmable Read Only Memories), EEPROMs (Electrically Erasable Programmable Read Only Memories), magnetic or optical cards, flash memory, or other types of non-transitory machine-readable media suitable for storing machine-executable instructions. Moreover, embodiments may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of one or more data signals embodied in and/or modulated by a carrier wave or other propagation medium via a communication link (e.g., a modem and/or network connection). In the description that follows,FIGS.1-14provide an overview of an exemplary data processing system and graphics processor logic that incorporates or relates to the various embodiments.FIGS.15-21provide specific details of the various embodiments. Some aspects of the following embodiments are described with reference to a graphics processor, while other aspects are described with respect to a general-purpose processor, such as a central processing unit (CPU). Similar techniques and teachings can be applied to other types of circuits or semiconductor devices, including but not limited to a many integrated core processor, a GPU cluster, or one or more instances of a field programmable gate array (FPGA). In general, the teachings are applicable to any processor or machine that manipulates or processes image data (e.g., samples, pixels), vertex data, or geometry data. System Overview FIG.1is a block diagram of a processing system100, according to an embodiment.
In various embodiments, the system100includes one or more processors102and one or more graphics processors108, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors102or processor cores107. In one embodiment, the system100is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices. An embodiment of system100can include, or be incorporated within, a server-based gaming platform, a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In some embodiments, system100is a mobile phone, smart phone, tablet computing device or mobile Internet device. Data processing system100can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In some embodiments, data processing system100is a television or set top box device having one or more processors102and a graphical interface generated by one or more graphics processors108. In some embodiments, the one or more processors102each include one or more processor cores107to process instructions which, when executed, perform operations for system and user software. In some embodiments, each of the one or more processor cores107is configured to process a specific instruction set109. In some embodiments, instruction set109may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). Multiple processor cores107may each process a different instruction set109, which may include instructions to facilitate the emulation of other instruction sets. Processor core107may also include other processing devices, such as a Digital Signal Processor (DSP). In some embodiments, the processor102includes cache memory104. Depending on the architecture, the processor102can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor102. In some embodiments, the processor102also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores107using known cache coherency techniques. A register file106is additionally included in processor102which may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor102. In some embodiments, processor102is coupled to a processor bus110to transmit communication signals such as address, data, or control signals between processor102and other components in system100. In one embodiment the system100uses an exemplary ‘hub’ system architecture, including a memory controller hub116and an Input Output (I/O) controller hub130. A memory controller hub116facilitates communication between a memory device and other components of system100, while an I/O Controller Hub (ICH)130provides connections to I/O devices via a local I/O bus. In one embodiment, the logic of the memory controller hub116is integrated within the processor.
Memory device120can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment the memory device120can operate as system memory for the system100, to store data122and instructions121for use when the one or more processors102executes an application or process. Memory controller hub116also couples with an optional external graphics processor112, which may communicate with the one or more graphics processors108in processors102to perform graphics and media operations. In some embodiments, ICH130enables peripherals to connect to memory device120and processor102via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller146, a firmware interface128, a wireless transceiver126(e.g., Wi-Fi, Bluetooth), a data storage device124(e.g., hard disk drive, flash memory, etc.), and a legacy I/O controller140for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. One or more Universal Serial Bus (USB) controllers142connect input devices, such as keyboard and mouse144combinations. A network controller134may also couple to ICH130. In some embodiments, a high-performance network controller (not shown) couples to processor bus110. It will be appreciated that the system100shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, the I/O controller hub130may be integrated within the one or more processors102, or the memory controller hub116and I/O controller hub130may be integrated into a discrete external graphics processor, such as the external graphics processor112. FIG.2is a block diagram of an embodiment of a processor200having one or more processor cores202A-202N, an integrated memory controller214, and an integrated graphics processor208. Those elements ofFIG.2having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. Processor200can include additional cores up to and including additional core202N represented by the dashed lined boxes. Each of processor cores202A-202N includes one or more internal cache units204A-204N. In some embodiments each processor core also has access to one or more shared cached units206. The internal cache units204A-204N and shared cache units206represent a cache memory hierarchy within the processor200. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units206and204A-204N. In some embodiments, processor200may also include a set of one or more bus controller units216and a system agent core210. The one or more bus controller units216manage a set of peripheral buses, such as one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express). System agent core210provides management functionality for the various processor components.
In some embodiments, system agent core210includes one or more integrated memory controllers214to manage access to various external memory devices (not shown). In some embodiments, one or more of the processor cores202A-202N include support for simultaneous multi-threading. In such an embodiment, the system agent core210includes components for coordinating and operating cores202A-202N during multi-threaded processing. System agent core210may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores202A-202N and graphics processor208. In some embodiments, processor200additionally includes graphics processor208to execute graphics processing operations. In some embodiments, the graphics processor208couples with the set of shared cache units206, and the system agent core210, including the one or more integrated memory controllers214. In some embodiments, a display controller211is coupled with the graphics processor208to drive graphics processor output to one or more coupled displays. In some embodiments, display controller211may be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor208or system agent core210. In some embodiments, a ring-based interconnect unit212is used to couple the internal components of the processor200. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some embodiments, graphics processor208couples with the ring interconnect212via an I/O link213. The exemplary I/O link213represents at least one of multiple varieties of I/O interconnects, including an on package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module218, such as an eDRAM module. In some embodiments, each of the processor cores202A-202N and graphics processor208use embedded memory modules218as a shared Last Level Cache. In some embodiments, processor cores202A-202N are homogeneous cores executing the same instruction set architecture. In another embodiment, processor cores202A-202N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores202A-202N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment, processor cores202A-202N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. Additionally, processor200can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components. FIG.3is a block diagram of a graphics processor300, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores. In some embodiments, the graphics processor communicates via a memory mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. In some embodiments, graphics processor300includes a memory interface314to access memory. Memory interface314can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.
In some embodiments, graphics processor300also includes a display controller302to drive display output data to a display device320. Display controller302includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. In some embodiments, graphics processor300includes a video codec engine306to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, as well as the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG and Motion JPEG (MJPEG). In some embodiments, graphics processor300includes a block image transfer (BLIT) engine304to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of graphics processing engine (GPE)310. In some embodiments, graphics processing engine310is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations. In some embodiments, GPE310includes a 3D pipeline312for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline312includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/Media sub-system315. While 3D pipeline312can be used to perform media operations, an embodiment of GPE310also includes a media pipeline316that is specifically used to perform media operations, such as video post-processing and image enhancement. In some embodiments, media pipeline316includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of, video codec engine306. In some embodiments, media pipeline316additionally includes a thread spawning unit to spawn threads for execution on 3D/Media sub-system315. The spawned threads perform computations for the media operations on one or more graphics execution units included in 3D/Media sub-system315. In some embodiments, 3D/Media subsystem315includes logic for executing threads spawned by 3D pipeline312and media pipeline316. In one embodiment, the pipelines send thread execution requests to 3D/Media subsystem315, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. In some embodiments, 3D/Media subsystem315includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data. 3D/Media Processing FIG.4is a block diagram of a graphics processing engine410of a graphics processor in accordance with some embodiments. In one embodiment, the graphics processing engine (GPE)410is a version of the GPE310shown inFIG.3.
Elements ofFIG.4having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. For example, the 3D pipeline312and media pipeline316ofFIG.3are illustrated. The media pipeline316is optional in some embodiments of the GPE410and may not be explicitly included within the GPE410. For example, in at least one embodiment, a separate media and/or image processor is coupled to the GPE410. In some embodiments, GPE410couples with or includes a command streamer403, which provides a command stream to the 3D pipeline312and/or media pipelines316. In some embodiments, command streamer403is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, command streamer403receives commands from the memory and sends the commands to 3D pipeline312and/or media pipeline316. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline312and media pipeline316. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline312can also include references to data stored in memory, such as but not limited to vertex and geometry data for the 3D pipeline312and/or image data and memory objects for the media pipeline316. The 3D pipeline312and media pipeline316process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to a graphics core array414. In various embodiments the 3D pipeline312can execute one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing the instructions and dispatching execution threads to the graphics core array414. The graphics core array414provides a unified block of execution resources. Multi-purpose execution logic (e.g., execution units) within the graphics core array414includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders. In some embodiments the graphics core array414also includes execution logic to perform media functions, such as video and/or image processing. In one embodiment, the execution units additionally include general-purpose logic that is programmable to perform parallel general purpose computational operations, in addition to graphics processing operations. The general purpose logic can perform processing operations in parallel or in conjunction with general purpose logic within the processor core(s)107ofFIG.1or core202A-202N as inFIG.2. Threads executing on the graphics core array414can output data to memory in a unified return buffer (URB)418. The URB418can store data for multiple threads. In some embodiments the URB418may be used to send data between different threads executing on the graphics core array414. In some embodiments the URB418may additionally be used for synchronization between threads on the graphics core array and fixed function logic within the shared function logic420. In some embodiments, graphics core array414is scalable, such that the array includes a variable number of graphics cores, each having a variable number of execution units based on the target power and performance level of GPE410.
In one embodiment the execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed. The graphics core array414couples with shared function logic420that includes multiple resources that are shared between the graphics cores in the graphics core array. The shared functions within the shared function logic420are hardware logic units that provide specialized supplemental functionality to the graphics core array414. In various embodiments, shared function logic420includes but is not limited to sampler421, math422, and inter-thread communication (ITC)423logic. Additionally, some embodiments implement one or more cache(s)425within the shared function logic420. A shared function is implemented where the demand for a given specialized function is insufficient for inclusion within the graphics core array414. Instead, a single instantiation of that specialized function is implemented as a stand-alone entity in the shared function logic420and shared among the execution resources within the graphics core array414. The precise set of functions that are shared between the graphics core array414and included within the graphics core array414varies between embodiments. FIG.5is a block diagram of another embodiment of a graphics processor500. Elements ofFIG.5having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. In some embodiments, graphics processor500includes a ring interconnect502, a pipeline front-end504, a media engine537, and graphics cores580A-580N. In some embodiments, ring interconnect502couples the graphics processor to other processing units, including other graphics processors or one or more general-purpose processor cores. In some embodiments, the graphics processor is one of many processors integrated within a multi-core processing system. In some embodiments, graphics processor500receives batches of commands via ring interconnect502. The incoming commands are interpreted by a command streamer503in the pipeline front-end504. In some embodiments, graphics processor500includes scalable execution logic to perform 3D geometry processing and media processing via the graphics core(s)580A-580N. For 3D geometry processing commands, command streamer503supplies commands to geometry pipeline536. For at least some media processing commands, command streamer503supplies the commands to a video front end534, which couples with a media engine537. In some embodiments, media engine537includes a Video Quality Engine (VQE)530for video and image post-processing and a multi-format encode/decode (MFX)533engine to provide hardware-accelerated media data encode and decode. In some embodiments, geometry pipeline536and media engine537each generate execution threads for the thread execution resources provided by at least one graphics core580A. In some embodiments, graphics processor500includes scalable thread execution resources featuring modular cores580A-580N (sometimes referred to as core slices), each having multiple sub-cores550A-550N,560A-560N (sometimes referred to as core sub-slices). In some embodiments, graphics processor500can have any number of graphics cores580A through580N. In some embodiments, graphics processor500includes a graphics core580A having at least a first sub-core550A and a second sub-core560A. In other embodiments, the graphics processor is a low power processor with a single sub-core (e.g.,550A).
In some embodiments, graphics processor500includes multiple graphics cores580A-580N, each including a set of first sub-cores550A-550N and a set of second sub-cores560A-560N. Each sub-core in the set of first sub-cores550A-550N includes at least a first set of execution units552A-552N and media/texture samplers554A-554N. Each sub-core in the set of second sub-cores560A-560N includes at least a second set of execution units562A-562N and samplers564A-564N. In some embodiments, each sub-core550A-550N,560A-560N shares a set of shared resources570A-570N. In some embodiments, the shared resources include shared cache memory and pixel operation logic. Other shared resources may also be included in the various embodiments of the graphics processor. Execution Units FIG.6illustrates thread execution logic600including an array of processing elements employed in some embodiments of a GPE. Elements ofFIG.6having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. In some embodiments, thread execution logic600includes a pixel shader602, a thread dispatcher604, instruction cache606, a scalable execution unit array including a plurality of execution units608A-608N, a sampler610, a data cache612, and a data port614. In one embodiment the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, thread execution logic600includes one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache606, data port614, sampler610, and execution unit array608A-608N. In some embodiments, each execution unit (e.g.608A) is an individual vector processor capable of executing multiple simultaneous threads and processing multiple data elements in parallel for each thread. In some embodiments, execution unit array608A-608N includes any number of individual execution units. In some embodiments, execution unit array608A-608N is primarily used to execute “shader” programs. In some embodiments, the execution units in array608A-608N execute an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders) and general-purpose processing (e.g., compute and media shaders). Each execution unit in execution unit array608A-608N operates on arrays of data elements. The number of data elements is the “execution size,” or the number of channels for the instruction. An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs) or Floating Point Units (FPUs) for a particular graphics processor. In some embodiments, execution units608A-608N support integer and floating-point data types. The execution unit instruction set includes single instruction multiple data (SIMD) or single instruction multiple thread (SIMT) instructions. The various data elements can be stored as a packed data type in a register and the execution unit will process the various elements based on the data size of the elements.
For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible. One or more internal instruction caches (e.g.,606) are included in the thread execution logic600to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g.,612) are included to cache thread data during thread execution. In some embodiments, sampler610is included to provide texture sampling for 3D operations and media sampling for media operations. In some embodiments, sampler610includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit. During execution, the graphics and media pipelines send thread initiation requests to thread execution logic600via thread spawning and dispatch logic. In some embodiments, thread execution logic600includes a local thread dispatcher604that arbitrates thread initiation requests from the graphics and media pipelines and instantiates the requested threads on one or more execution units608A-608N. For example, the geometry pipeline (e.g.,536ofFIG.5) dispatches vertex processing, tessellation, or geometry processing threads to thread execution logic600(FIG.6). In some embodiments, thread dispatcher604can also process runtime thread spawning requests from the executing shader programs. Once a group of geometric objects has been processed and rasterized into pixel data, pixel shader602is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, pixel shader602calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, pixel shader602then executes an application programming interface (API)-supplied pixel shader program. To execute the pixel shader program, pixel shader602dispatches threads to an execution unit (e.g.,608A) via thread dispatcher604. In some embodiments, pixel shader602uses texture sampling logic in sampler610to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing. In some embodiments, the data port614provides a memory access mechanism for the thread execution logic600to output processed data to memory for processing on a graphics processor output pipeline. In some embodiments, the data port614includes or couples to one or more cache memories (e.g., data cache612) to cache data for memory access via the data port. FIG.7is a block diagram illustrating graphics processor instruction formats700according to some embodiments. In one or more embodiments, the graphics processor execution units support an instruction set having instructions in multiple formats.
The solid lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a sub-set of the instructions. In some embodiments, the instruction formats700described and illustrated are macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed. In some embodiments, the graphics processor execution units natively support instructions in a 128-bit instruction format710. A 64-bit compacted instruction format730is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit instruction format710provides access to all instruction options, while some options and operations are restricted in the 64-bit instruction format730. The native instructions available in the 64-bit instruction format730vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field713. The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit instruction format710. For each format, instruction opcode712defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. In some embodiments, instruction control field714enables control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For 128-bit instructions710, an exec-size field716limits the number of data channels that will be executed in parallel. In some embodiments, exec-size field716is not available for use in the 64-bit compact instruction format730. Some execution unit instructions have up to three operands including two source operands, src0720, src1722, and one destination718. In some embodiments, the execution units support dual destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2724), where the instruction opcode712determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction. In some embodiments, the 128-bit instruction format710includes access/address mode information726specifying, for example, whether direct register addressing mode or indirect register addressing mode is used. When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction710. In some embodiments, the 128-bit instruction format710includes an access/address mode field726, which specifies an address mode and/or an access mode for the instruction. In one embodiment, the access mode defines a data access alignment for the instruction.
Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction710may use byte-aligned addressing for source and destination operands and when in a second mode, the instruction710may use 16-byte-aligned addressing for all source and destination operands. In one embodiment, the address mode portion of the access/address mode field726determines whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used, bits in the instruction710directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction. In some embodiments, instructions are grouped based on opcode712bit-fields to simplify opcode decode740. For an 8-bit opcode, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. In some embodiments, a move and logic opcode group742includes data movement and logic instructions (e.g., move (mov), compare (cmp)). In some embodiments, move and logic group742shares the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group744(e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group746includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group748includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math group748performs the arithmetic operations in parallel across data channels. The vector math group750includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands. Graphics Pipeline FIG.8is a block diagram of another embodiment of a graphics processor800. Elements ofFIG.8having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. In some embodiments, graphics processor800includes a graphics pipeline820, a media pipeline830, a display engine840, thread execution logic850, and a render output pipeline870. In some embodiments, graphics processor800is a graphics processor within a multi-core processing system that includes one or more general-purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor800via a ring interconnect802. In some embodiments, ring interconnect802couples graphics processor800to other processing components, such as other graphics processors or general-purpose processors. Commands from ring interconnect802are interpreted by a command streamer803, which supplies instructions to individual components of graphics pipeline820or media pipeline830.
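Returning to the opcode grouping ofFIG.7described above, the group selection from bits 4, 5, and 6 of an 8-bit opcode admits a very small sketch. Only the bit patterns quoted in the text are used; the decode function itself is illustrative.

```python
# Sketch of opcode group selection from bits 4-6 of an 8-bit opcode,
# mirroring the example bit patterns given in the text.

GROUPS = {
    0b000: "move/logic (mov, 0000xxxxb)",
    0b001: "move/logic (logic, 0001xxxxb)",
    0b010: "flow control (0010xxxxb, e.g. 0x20)",
    0b011: "miscellaneous (0011xxxxb, e.g. 0x30)",
    0b100: "parallel math (0100xxxxb, e.g. 0x40)",
    0b101: "vector math (0101xxxxb, e.g. 0x50)",
}

def opcode_group(opcode: int) -> str:
    return GROUPS.get((opcode >> 4) & 0b111, "reserved/unknown")

assert opcode_group(0x20).startswith("flow control")
assert opcode_group(0x40).startswith("parallel math")
assert opcode_group(0x50).startswith("vector math")
```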
In some embodiments, command streamer803directs the operation of a vertex fetcher805that reads vertex data from memory and executes vertex-processing commands provided by command streamer803. In some embodiments, vertex fetcher805provides vertex data to a vertex shader807, which performs coordinate space transformation and lighting operations on each vertex. In some embodiments, vertex fetcher805and vertex shader807execute vertex-processing instructions by dispatching execution threads to execution units852A,852B via a thread dispatcher831. In some embodiments, execution units852A,852B are an array of vector processors having an instruction set for performing graphics and media operations. In some embodiments, execution units852A,852B have an attached L1 cache851that is specific for each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions. In some embodiments, graphics pipeline820includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader811configures the tessellation operations. A programmable domain shader817provides back-end evaluation of tessellation output. A tessellator813operates at the direction of hull shader811and contains special purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to graphics pipeline820. In some embodiments, if tessellation is not used, tessellation components811,813,817can be bypassed. In some embodiments, complete geometric objects can be processed by a geometry shader819via one or more threads dispatched to execution units852A,852B, or can proceed directly to the clipper829. In some embodiments, the geometry shader operates on entire geometric objects, rather than vertices or patches of vertices as in previous stages of the graphics pipeline. If tessellation is disabled, the geometry shader819receives input from the vertex shader807. In some embodiments, geometry shader819is programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled. Before rasterization, a clipper829processes vertex data. The clipper829may be a fixed function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer and depth test component873in the render output pipeline870dispatches pixel shaders to convert the geometric objects into their per-pixel representations. In some embodiments, pixel shader logic is included in thread execution logic850. In some embodiments, an application can bypass rasterization and access un-rasterized vertex data via a stream out unit823. The graphics processor800has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing amongst the major components of the processor. In some embodiments, execution units852A,852B and associated cache(s)851, texture and media sampler854, and texture/sampler cache858interconnect via a data port856to perform memory access and communicate with render output pipeline components of the processor. In some embodiments, sampler854, caches851,858and execution units852A,852B each have separate memory access paths.
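The optional tessellation and geometry shader staging described above can be summarized in a short control-flow sketch. The stage functions are placeholders for the hardware blocks, not real APIs.

```python
# Placeholder stages; each hardware block is modeled as an identity function.
def _stage(data):
    return data

vertex_fetcher = vertex_shader = hull_shader = _stage   # 805, 807, 811
tessellator = domain_shader = geometry_shader = _stage  # 813, 817, 819
clipper = _stage                                        # 829

def geometry_front_end(vertices, tessellation_enabled, geometry_enabled):
    data = vertex_shader(vertex_fetcher(vertices))      # 805 -> 807
    if tessellation_enabled:                            # else 811/813/817 bypassed
        data = domain_shader(tessellator(hull_shader(data)))
    if geometry_enabled:                                # 819 takes the 807 output
        data = geometry_shader(data)                    # when tessellation is off
    return clipper(data)                                # 829, before rasterization
```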
In some embodiments, render output pipeline870contains a rasterizer and depth test component873that converts vertex-based objects into an associated pixel-based representation. In some embodiments, the render output pipeline870includes a windower/masker unit to perform fixed function triangle and line rasterization. An associated render cache878and depth cache879are also available in some embodiments. A pixel operations component877performs pixel-based operations on the data, though in some instances, pixel operations associated with 2D operations (e.g., bit block image transfers with blending) are performed by the 2D engine841, or substituted at display time by the display controller843using overlay display planes. In some embodiments, a shared L3 cache875is available to all graphics components, allowing the sharing of data without the use of main system memory. In some embodiments, graphics processor media pipeline830includes a media engine837and a video front end834. In some embodiments, video front end834receives pipeline commands from the command streamer803. In some embodiments, media pipeline830includes a separate command streamer. In some embodiments, video front end834processes media commands before sending the command to the media engine837. In some embodiments, media engine837includes thread spawning functionality to spawn threads for dispatch to thread execution logic850via thread dispatcher831. In some embodiments, graphics processor800includes a display engine840. In some embodiments, display engine840is external to processor800and couples with the graphics processor via the ring interconnect802, or some other interconnect bus or fabric. In some embodiments, display engine840includes a 2D engine841and a display controller843. In some embodiments, display engine840contains special purpose logic capable of operating independently of the 3D pipeline. In some embodiments, display controller843couples with a display device (not shown), which may be a system integrated display device, as in a laptop computer, or an external display device attached via a display device connector. In some embodiments, graphics pipeline820and media pipeline830are configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. In some embodiments, support is provided for the Open Graphics Library (OpenGL) and Open Computing Language (OpenCL) from the Khronos Group, the Direct3D library from the Microsoft Corporation, or support may be provided to both OpenGL and D3D. Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor. Graphics Pipeline Programming FIG.9Ais a block diagram illustrating a graphics processor command format900according to some embodiments.FIG.9Bis a block diagram illustrating a graphics processor command sequence910according to an embodiment.
The solid lined boxes inFIG.9Aillustrate the components that are generally included in a graphics command while the dashed lines include components that are optional or that are only included in a sub-set of the graphics commands. The exemplary graphics processor command format900ofFIG.9Aincludes data fields to identify a target client902of the command, a command operation code (opcode)904, and the relevant data906for the command. A sub-opcode905and a command size908are also included in some commands. In some embodiments, client902specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit has a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode904and, if present, sub-opcode905to determine the operation to perform. The client unit performs the command using information in data field906. For some commands, an explicit command size908is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments, commands are aligned via multiples of a double word. The flow diagram inFIG.9Bshows an exemplary graphics processor command sequence910. In some embodiments, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only as embodiments are not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands at least partially concurrently. In some embodiments, the graphics processor command sequence910may begin with a pipeline flush command912to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, the 3D pipeline922and the media pipeline924do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked ‘dirty’ can be flushed to memory. In some embodiments, pipeline flush command912can be used for pipeline synchronization or before placing the graphics processor into a low power state. In some embodiments, a pipeline select command913is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, a pipeline select command913is required only once within an execution context before issuing pipeline commands unless the context is to issue commands for both pipelines.
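A hedged sketch of parsing the command fields described above (client902, opcode904, sub-opcode905, data906, command size908) follows. The bit layout is invented for illustration; only the field roles come from the text.

```python
# Hypothetical command-header layout; field positions are assumptions.
from dataclasses import dataclass

@dataclass
class GfxCommand:
    client: int      # which client unit processes the command data (902)
    opcode: int      # operation to perform (904)
    sub_opcode: int  # optional refinement of the operation (905)
    size: int        # command size, when explicitly encoded (908)
    data: bytes      # relevant data for the command (906)

def parse_command(header: int, payload: bytes) -> GfxCommand:
    return GfxCommand(
        client=(header >> 29) & 0x7,
        opcode=(header >> 23) & 0x3F,
        sub_opcode=(header >> 16) & 0x7F,
        size=header & 0xFF,   # a parser may instead infer size from the opcode
        data=payload,
    )
```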
In some embodiments, a pipeline flush command912is required immediately before a pipeline switch via the pipeline select command913. In some embodiments, a pipeline control command914configures a graphics pipeline for operation and is used to program the 3D pipeline922and the media pipeline924. In some embodiments, pipeline control command914configures the pipeline state for the active pipeline. In one embodiment, the pipeline control command914is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands. In some embodiments, commands for the return buffer state916are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross thread communication. In some embodiments, configuring the return buffer state916includes selecting the size and number of return buffers to use for a set of pipeline operations. The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination920, the command sequence is tailored to the 3D pipeline922beginning with the 3D pipeline state930, or the media pipeline924beginning at the media pipeline state940. The commands for the 3D pipeline state930include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. In some embodiments, 3D pipeline state930commands are also able to selectively disable or bypass certain pipeline elements if those elements will not be used. In some embodiments, 3D primitive932command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive932command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive932command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, 3D primitive932command is used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, 3D pipeline922dispatches shader execution threads to graphics processor execution units. In some embodiments, 3D pipeline922is triggered via an execute934command or event. In some embodiments, a register write triggers command execution. In some embodiments, execution is triggered via a ‘go’ or ‘kick’ command in the command sequence. In one embodiment, command execution is triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back end operations may also be included for those operations.
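The 3D-path ordering of the command sequence910described above can be sketched as follows. The tuples are symbolic stand-ins; only the ordering follows the text.

```python
# Illustrative ordering of command sequence 910 for the 3D path; encodings
# are symbolic placeholders, not real command formats.
def build_3d_command_sequence(state, primitives):
    seq = []
    seq.append(("PIPELINE_FLUSH", None))        # 912: drain pending commands
    seq.append(("PIPELINE_SELECT", "3D"))       # 913: once per context switch
    seq.append(("PIPELINE_CONTROL", state))     # 914: sync + cache clears
    seq.append(("RETURN_BUFFER_STATE", state))  # 916: size/number of buffers
    seq.append(("3D_STATE", state))             # 930: vertex/depth/color state
    for prim in primitives:
        seq.append(("3D_PRIMITIVE", prim))      # 932: feeds the vertex fetch
    seq.append(("EXECUTE", "go"))               # 934: register write or 'go'
    return seq
```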
In some embodiments, the graphics processor command sequence910follows the media pipeline924path when performing media operations. In general, the specific use and manner of programming for the media pipeline924depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. In some embodiments, the media pipeline can also be bypassed and media decode can be performed in whole or in part using resources provided by one or more general-purpose processing cores. In one embodiment, the media pipeline also includes elements for general-purpose graphics processing unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives. In some embodiments, media pipeline924is configured in a similar manner as the 3D pipeline922. A set of commands to configure the media pipeline state940is dispatched or placed into a command queue before the media object commands942. In some embodiments, commands for the media pipeline state940include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. In some embodiments, commands for the media pipeline state940also support the use of one or more pointers to “indirect” state elements that contain a batch of state settings. In some embodiments, media object commands942supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. In some embodiments, all media pipeline states must be valid before issuing a media object command942. Once the pipeline state is configured and media object commands942are queued, the media pipeline924is triggered via an execute command944or an equivalent execute event (e.g., register write). Output from media pipeline924may then be post-processed by operations provided by the 3D pipeline922or the media pipeline924. In some embodiments, GPGPU operations are configured and executed in a similar manner as media operations. Graphics Software Architecture FIG.10illustrates exemplary graphics software architecture for a data processing system1000according to some embodiments. In some embodiments, software architecture includes a 3D graphics application1010, an operating system1020, and at least one processor1030. In some embodiments, processor1030includes a graphics processor1032and one or more general-purpose processor core(s)1034. The graphics application1010and operating system1020each execute in the system memory1050of the data processing system. In some embodiments, 3D graphics application1010contains one or more shader programs including shader instructions1012. The shader language instructions may be in a high-level shader language, such as the High Level Shader Language (HLSL) or the OpenGL Shader Language (GLSL). The application also includes executable instructions1014in a machine language suitable for execution by the general-purpose processor core(s)1034. The application also includes graphics objects1016defined by vertex data.
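Returning to the media path programming above, a companion sketch shows the ordering: pipeline state940precedes the media object commands942, and execute944triggers processing. The tuples remain symbolic stand-ins.

```python
# Symbolic sketch of the media path ordering described in the text.
def build_media_command_sequence(decode_state, media_buffers):
    seq = [("MEDIA_PIPELINE_STATE", decode_state)]  # 940: codec/format setup
    for buf in media_buffers:
        seq.append(("MEDIA_OBJECT", buf))           # 942: pointers to video data
    seq.append(("EXECUTE", "register-write"))       # 944: or equivalent event
    return seq
```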
In some embodiments, operating system1020is a Microsoft® Windows® operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel. The operating system1020can support a graphics API1022such as the Direct3D API or the OpenGL API. When the Direct3D API is in use, the operating system1020uses a front-end shader compiler1024to compile any shader instructions1012in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation or the application can perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application1010. In some embodiments, user mode graphics driver1026contains a back-end shader compiler1027to convert the shader instructions1012into a hardware specific representation. When the OpenGL API is in use, shader instructions1012in the GLSL high-level language are passed to a user mode graphics driver1026for compilation. In some embodiments, user mode graphics driver1026uses operating system kernel mode functions1028to communicate with a kernel mode graphics driver1029. In some embodiments, kernel mode graphics driver1029communicates with graphics processor1032to dispatch commands and instructions. IP Core Implementations One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as “IP cores,” are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein. FIG.11is a block diagram illustrating an IP core development system1100that may be used to manufacture an integrated circuit to perform operations according to an embodiment. The IP core development system1100may be used to generate modular, re-usable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility1130can generate a software simulation1110of an IP core design in a high level programming language (e.g., C/C++). The software simulation1110can be used to design, test, and verify the behavior of the IP core using a simulation model1112. The simulation model1112may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design1115can then be created or synthesized from the simulation model1112. The RTL design1115is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. 
In addition to an RTL design1115, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary. The RTL design1115or equivalent may be further synthesized by the design facility into a hardware model1120, which may be in a hardware description language (HDL), or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a 3rd party fabrication facility1165using non-volatile memory1140(e.g., hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection1150or wireless connection1160. The fabrication facility1165may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein. Exemplary System on a Chip Integrated Circuit FIGS.12-14illustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included, including additional graphics processors/cores, peripheral interface controllers, or general purpose processor cores. FIG.12is a block diagram illustrating an exemplary system on a chip integrated circuit1200that may be fabricated using one or more IP cores, according to an embodiment. Exemplary integrated circuit1200includes one or more application processor(s)1205(e.g., CPUs), at least one graphics processor1210, and may additionally include an image processor1215and/or a video processor1220, any of which may be a modular IP core from the same or multiple different design facilities. Integrated circuit1200includes peripheral or bus logic including a USB controller1225, UART controller1230, an SPI/SDIO controller1235, and an I2S/I2C controller1240. Additionally, the integrated circuit can include a display device1245coupled to one or more of a high-definition multimedia interface (HDMI) controller1250and a mobile industry processor interface (MIPI) display interface1255. Storage may be provided by a flash memory subsystem1260including flash memory and a flash memory controller. Memory interface may be provided via a memory controller1265for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine1270. FIG.13is a block diagram illustrating an exemplary graphics processor1310of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. Graphics processor1310can be a variant of the graphics processor1210ofFIG.12. Graphics processor1310includes a vertex processor1305and one or more fragment processor(s)1315A-1315N (e.g.,1315A,1315B,1315C,1315D, through1315N−1, and1315N). Graphics processor1310can execute different shader programs via separate logic, such that the vertex processor1305is optimized to execute operations for vertex shader programs, while the one or more fragment processor(s)1315A-1315N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. The vertex processor1305performs the vertex processing stage of the 3D graphics pipeline and generates primitives and vertex data.
The fragment processor(s)1315A-1315N use the primitive and vertex data generated by the vertex processor1305to produce a framebuffer that is displayed on a display device. In one embodiment, the fragment processor(s)1315A-1315N are optimized to execute fragment shader programs as provided for in the OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in the Direct 3D API. Graphics processor1310additionally includes one or more memory management units (MMUs)1320A-1320B, cache(s)1325A-1325B, and circuit interconnect(s)1330A-1330B. The one or more MMU(s)1320A-1320B provide for virtual to physical address mapping for graphics processor1310, including for the vertex processor1305and/or fragment processor(s)1315A-1315N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in the one or more cache(s)1325A-1325B. In one embodiment the one or more MMU(s)1320A-1320B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processor(s)1205, image processor1215, and/or video processor1220ofFIG.12, such that each processor1205-1220can participate in a shared or unified virtual memory system. The one or more circuit interconnect(s)1330A-1330B enable graphics processor1310to interface with other IP cores within the SoC, either via an internal bus of the SoC or via a direct connection, according to embodiments. FIG.14is a block diagram illustrating an additional exemplary graphics processor1410of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. Graphics processor1410can be a variant of the graphics processor1210ofFIG.12. Graphics processor1410includes the one or more MMU(s)1320A-1320B, cache(s)1325A-1325B, and circuit interconnect(s)1330A-1330B of the integrated circuit1300ofFIG.13. Graphics processor1410includes one or more shader core(s)1415A-1415N (e.g.,1415A,1415B,1415C,1415D,1415E,1415F, through1415N−1, and1415N), which provides for a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. The exact number of shader cores present can vary among embodiments and implementations. Additionally, graphics processor1410includes an inter-core task manager1405, which acts as a thread dispatcher to dispatch execution threads to one or more shader core(s)1415A-1415N. Graphics processor1410additionally includes a tiling unit1418to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space. Tile-based rendering can be used to exploit local spatial coherence within a scene or to optimize use of internal caches. Large Scale CNN Regression Based Localization Via Two-Dimensional Map Embodiments described herein provide techniques to enable CNN regression for interior localization within large-scale structures. The techniques described herein resolve several issues with existing SLAM-based re-localization techniques and provide a method of CNN regression based re-localization that is suitable for use in large-scale buildings. The techniques described herein also compare favorably to other localization methods such as Lidar, ultra wide band (UWB), Wi-Fi, and inertial measurement unit (IMU) localization.
The techniques described herein are lower cost and have a higher re-localization call back rate relative to Lidar, have a lower drift error relative to IMU, and do not require the installation of new instruments, as in UWB and Wi-Fi. In addition to service robot re-localization, these techniques have direct application for real time localization, autonomous navigation, and to enhance navigation and localization functionality for mobile device users. FIG.15is a flow diagram of logic1500to perform localization within the interior of a large-scale structure, according to an embodiment. In one embodiment, the logic1500can perform localization using CNN-based regression analysis. To acquire training data for the CNN model, the logic1500can divide the interior of the large-scale structure into multiple parts, as shown at1502. Dividing the structure into multiple parts is illustrated inFIG.17, in which the interior of a large structure1702is segmented into multiple parts, with a single one of the parts1704illustrated surrounded by a dotted line. The logic1500can then gather visual data of the interior of the structure, as shown at1504. In one embodiment the visual data is image data that is gathered via a camera on a robot as the robot traverses the building. In one embodiment the visual data is video data that is gathered by a camera and de-constructed into multiple images. The image data (e.g., image, video, etc.) can be used to perform camera pose estimation, as shown at1506. Camera pose estimation data includes a physical position of a camera within the structure, as well as the pose (e.g., orientation) of the camera. The camera pose can be estimated in multiple degrees of freedom. In one embodiment, six degree-of-freedom (6-DoF) camera pose estimation is used. In some embodiments, VSLAM techniques are used to perform camera pose estimation, although embodiments are not limited to any particular camera pose estimation algorithm, technique, or system. In one embodiment the VSLAM technique used for camera pose estimation is a variant of the Visual Simultaneous Localization and Mapping (vSLAM™) system from Evolution Robotics, but embodiments are not limited to any specific system or algorithm. The VSLAM camera pose estimation makes use of visual data, camera pose data, and robot location or odometer data, to determine a camera pose based on input visual data, such as an RGB-based photograph. However, the VSLAM data alone is insufficient to train the CNN regression model for proper re-localization. As shown at1508, the logic1500can generate a 3D point cloud of each of the multiple parts. The 3D point cloud can be generated using one or more 3D point cloud generation techniques known in the art based on captured visual data. At1510, the logic1500can correlate the 3D point cloud data with a two-dimensional (2D) map, such as a human-readable 2D map.FIG.18illustrates a screen shot1820of an exemplary 3D point cloud that can be correlated with a related portion of a 2D human readable map1810. Using the correlation between the 3D point cloud and the 2D map, a transformation matrix can be generated that enables the logic1500to transform the positions of the estimated camera poses associated with the visual data of the structure into positions having 2D map coordinates, as shown at1512. The logic1500can then train a CNN regression model using pairs of visual and camera pose data, as shown at1514.
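The flow of logic1500(1502-1516) can be summarized in a pseudocode-style sketch. Every helper below is a trivial stub standing in for an operation described in the text, not a real API; `pose_xy` denotes extracting the horizontal components of a pose.

```python
# High-level sketch of logic 1500; all helpers are illustrative stubs.
import numpy as np

def divide_structure(structure):        # 1502: segment the interior
    return structure["parts"]

def gather_visual_data(part):           # 1504: camera images for the part
    return part["images"]

def estimate_camera_poses(images):      # 1506: e.g. 6-DoF VSLAM (stubbed)
    return [np.zeros(6) for _ in images]

def pose_xy(pose):                      # horizontal components [u1, u3]
    return np.array([pose[0], pose[2]])

def build_training_pairs(structure, M_ptc2hu):
    """1508-1512 condensed: the point-cloud-to-map transform M_ptc2hu maps
    each estimated pose position into 2D map coordinates."""
    pairs = []
    for part in divide_structure(structure):
        images = gather_visual_data(part)
        poses = estimate_camera_poses(images)
        positions = [M_ptc2hu @ pose_xy(p) for p in poses]
        pairs += list(zip(images, positions))   # training input for 1514
    return pairs
```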
The camera pose data used for training has been transformed from a 3D point cloud position to a 2D map position (e.g., the 2D map coordinates of1512). The CNN regression can then be used to predict coordinates of newly captured visual data within the 2D map, as shown at1516. FIG.16A-Bis a flow diagram illustrating logic1600for performing CNN regression re-localization for large structures, according to an embodiment. Re-localization logic can be configured as software or firmware to manage the hardware to perform the image gathering, camera pose estimation, coordinate transformation, CNN regression training, and CNN regression prediction operations that are performed by embodiments described herein. In one embodiment the computational logic can be integrated into computational hardware such as a heterogeneous GPGPU processing system100as inFIG.1. FIG.16Aillustrates that operations for the logic1600include capturing visual data for a portion of a segmented structure, as shown at1602. The segmented structure can be a human readable 2D map of a building, such as the building1702ofFIG.17. The map can be segmented into multiple parts, such as part1704. The logic can designate the human readable map data as HM, with each of the multiple parts designated as $P_i, i = 1, \ldots, I$. The visual data can include video or images and can be continuous data or, in one embodiment, include only key frame data. Various embodiments can be configured to use various types of image data, including but not limited to red-green-blue-depth (RGBD) data, binocular images, and monocular images. For each of the multiple parts $P_i$, sufficient visual data (e.g., images $Ima_{i,j}, j = 1, \ldots, J'_i$) is captured at1602to enable camera pose estimation at1604. The logic1600estimates the poses $P_{i,j}$ of the images $Ima_{i,j}$ using a visual SLAM technique, although the specifics of the pose estimation techniques can vary among embodiments. The type of visual data that is captured is related to the camera pose estimation technique that is used by the logic1600, as differing camera pose estimation techniques may rely on specific types of input data. In one embodiment, monocular SLAM techniques can be performed on single images from which depth data cannot be derived. Where depth data is present in the image or binocular images are used, different camera pose estimation techniques may be employed. In one embodiment, if poses cannot be obtained for all images, only the images for which poses are obtained are used for further processing by the logic1600. As shown at1606, the logic can generate a 3D point cloud $PC_i$ based on the camera pose data. The 3D point cloud is a volumetric representation of the space in which localization and/or re-localization is to be enabled. The 3D point cloud can be generated using shape from motion techniques based on the six-dimensional pose estimates and associated visual data, for example, using camera pose estimations and analysis of the spatial and temporal changes associated with a captured image sequence. The 3D point cloud $PC_i$ includes point cloud data for each of the multiple parts $P_i$ of the structure. Once the 3D point cloud $PC_i$ is generated, a viewer pose $VP_i$ is selected for a screenshot of the 3D point cloud, as shown at1608. The screenshot of the point cloud is a 2D cross section of the point cloud as viewed from the viewer pose $VP_i$. The viewer pose $VP_i$ is a pose from which the screenshot (e.g., cross section) of the 3D point cloud $PC_i$ is to be generated.
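A minimal sketch of generating a "screenshot" (2D cross section) of a point cloud follows, assuming an overhead orthographic projection with a slab filter around the viewer height. The slab height and grid resolution are assumptions; the text does not fix a projection method.

```python
# Assumed projection: keep points within a vertical slab, rasterize to a grid.
import numpy as np

def point_cloud_screenshot(points, viewer_height, slab=2.0, res=0.05):
    """points: (N, 3) array of [x, y, z]; returns a 2D occupancy image."""
    near = points[np.abs(points[:, 2] - viewer_height) < slab]  # slab filter
    if len(near) == 0:
        return np.zeros((1, 1), dtype=np.uint8)
    cells = np.floor(near[:, :2] / res).astype(int)
    cells -= cells.min(axis=0)                       # shift cells to the origin
    image = np.zeros(cells.max(axis=0) + 1, dtype=np.uint8)
    image[cells[:, 0], cells[:, 1]] = 255            # mark occupied grid cells
    return image
```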
In one embodiment the logic1600is configured to use known empirical data for a scene to determine a proper viewer pose for a screenshot. In one embodiment, a viewer pose for each segment may be explicitly selected via a human supervisor of the training process during supervised learning. The viewer pose to be selected is the pose from which a 2D screenshot generated from the 3D point cloud most closely corresponds with the known 2D map, for example, the human readable 2D map HM. An exemplary instance of viewer pose $VP_i$ is the pose associated with the screen shot1820of a 3D point cloud generated for the part of the structure associated with the portion of the 2D human readable map1810, each inFIG.18. Once the logic1600selects a viewer pose $VP_i$ for each point cloud $PC_i$ of each of the multiple parts $P_i$ of the structure at1608, the logic can perform an operation to generate a screenshot for each of the 3D point clouds $PC_i$, as shown at1630. The screen shot generation of the 3D point clouds can be performed in a parallel process or thread relative to other portions of the logic1600. The generated screenshots at1630can then be supplied to an operation at1610that generates a mark on the screen shot to represent the viewer pose point for each screenshot. The mark added at1610can be a point color that is easily recognizable by the logic1600, such as the point1822illustrated in the screenshot1820ofFIG.18. The logic1600can then generate a correspondence between the viewer pose point and the screenshot position, as shown at1612. The logic1600repeats the operations at1610and1612until sufficient pose points are acquired, as determined at1613. The pose points are image-pose pairs $IPP_{i,k} \sim (Ima_{i,k}, P_{i,k}), k = 1, \ldots, K_i$ that can be used to enable the computation of a point cloud to screen shot transformation matrix ($M_{ptc2ss}$) from a 3D pose position to a specific position within the 2D screenshot, as shown at1614. For each screenshot $SS_{i,k}$ output from1610, a mark position $Mk_{i,k}$ in $SS_{i,k}$ is detected by methods such as, for example, thresholding and clustering on color values. The two-dimensional coordinates of the point in $SS_{i,k}$ are $PP_{i,k} = [x_{PP}^{(i,k)}, y_{PP}^{(i,k)}]$. These coordinates correspond to the horizontal motion components of the pose $P_{i,k} = [u_{P1}^{(i,k)}, \ldots, u_{P6}^{(i,k)}]$, where $u_{Pt}^{(i,k)}, t = 1, \ldots, 6$ are respectively the left-right, up-down, and forward-backward translation distances and the pitch, yaw, and roll rotation angles; the horizontal components are extracted to form $PH_{i,k} = [x_{PH}^{(i,k)}, y_{PH}^{(i,k)}] = [u_{P1}^{(i,k)}, u_{P3}^{(i,k)}]$. The transformation matrix $M_{ptc2ss}$ is obtained at1614by solving the equation $[x_{PP}^{(i,k)}, y_{PP}^{(i,k)}]^T = M_{ptc2ss} \cdot [x_{PH}^{(i,k)}, y_{PH}^{(i,k)}]^T$ using the least-squares method. As shown at1616, the logic1600can then generate a screen shot to human coordinate transformation matrix ($M_{ss2hu}$) that enables the translation of a screen shot position to a position in a human readable 2D map using 2D map data input at1632. This transformation is based on a correlation of corresponding points between the screenshot of the 3D point cloud generated at1630and the 2D map data input at1632. In one embodiment the logic1600can then identify corresponding points between the screenshots $SS_{i,k}$ and the input 2D map (e.g., human readable map data HM). In one embodiment, the correspondence between the 3D point cloud screenshots and the 2D map data is identified via the interaction of a supervisor of the logic1600, and the logic1600receives the correspondence data as input at1634.
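The mark detection at1610-1614(thresholding and clustering on color values) admits a very small sketch; the mark color and tolerance below are assumptions.

```python
# Detect the viewer-pose mark by color threshold, then take the centroid
# of the matching pixel cluster. Mark color and tolerance are assumed.
import numpy as np

def find_mark(screenshot_rgb, mark_color=(255, 0, 0), tol=30):
    """screenshot_rgb: (H, W, 3) uint8 image; returns (x, y) of the mark."""
    diff = np.abs(screenshot_rgb.astype(int) - np.array(mark_color))
    mask = diff.max(axis=2) < tol             # pixels close to the mark color
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                           # no mark found in this screenshot
    return float(xs.mean()), float(ys.mean()) # centroid of the detected cluster
```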
As shown at1634, the logic1600continues to identify or receive identification of corresponding points of correlation between the screenshots $SS_{i,k}$ and the input 2D map data HM until enough points are identified, as determined at1635, to enable the computation of a transformation matrix from screenshot position to human (e.g., 2D) map position at1616. As shown at1616, the logic1600can use the accumulated data to generate the transformation matrix $M_{ss2hu}$. For $P_i$, the transformation matrix $M_{ss2hu}$ from screenshot to HM is computed from the corresponding points identified at1634. A set of points on the 2D map HM can be identified as $HMP_{i,s} = [x_{HMP}^{(i,s)}, y_{HMP}^{(i,s)}], s = 1, \ldots, S_i$. The corresponding points on the screenshots $SS_i$ can be identified as $SSP_{i,s} = [x_{SSP}^{(i,s)}, y_{SSP}^{(i,s)}]$. Using these points, the logic1600can compute $M_{ss2hu}$ by solving the equation $[x_{HMP}^{(i,s)}, y_{HMP}^{(i,s)}]^T = M_{ss2hu} \cdot [x_{SSP}^{(i,s)}, y_{SSP}^{(i,s)}]^T$ using the least-squares method. As shown at1618, the logic1600can compute a point cloud to human (e.g., 2D map) coordinate transformation matrix ($M_{ptc2hu}$) as $M_{ptc2hu} = M_{ss2hu} \cdot M_{ptc2ss}$. Using the transformation matrix $M_{ptc2hu}$, the logic1600can determine “new” 2D map coordinates based on “old” point cloud positions at1620using the equation $P_{new} = M_{ptc2hu} \cdot P_{old}$. The logic1600can combine the 2D map coordinates determined at1620with the visual data captured at1602to determine image-position pairs at1622. The operations described above can be repeated for each of the multiple parts $P_i$ of the structure until the logic1600determines at1623that image-position pairs for all parts of the structure have been determined. The logic1600can then reorder the image-position pairs to list pairs at1624. The image-position pairs $(Ima_{i,j}, [x_{HMP}^{(i,j)}, 0, y_{HMP}^{(i,j)}, 0, 0, 0])$, where $[x_{HMP}^{(i,j)}, y_{HMP}^{(i,j)}] = M_{ptc2hu} \cdot [x_{PH}^{(i,j)}, y_{PH}^{(i,j)}]$, are reordered to list pairs $(Ima_{i',j'}, [x_{HMP}^{(i',j')}, 0, y_{HMP}^{(i',j')}, 0, 0, 0])$. The methods of reordering include, but are not limited to, random reordering, extracting odd/even rows, and/or composition of the rows. FIG.16Billustrates training and prediction of a CNN regression model using the re-ordered image-position pairs. The image-position pairs $(Ima_{i',j'}, [x_{HMP}^{(i',j')}, 0, y_{HMP}^{(i',j')}, 0, 0, 0])$ output from1624ofFIG.16Aare used to train a CNN configured to perform regression (1642). The trained CNN can use a regression model to perform a re-localization prediction (1644) when new images are provided as input (1646). The CNN regression prediction (1644) can then be used to predict the coordinates of the new images (1648). For a newly captured image, the position of the image on the 2D map HM is predicted from the CNN regression (CNNR) result $[u_1, \ldots, u_6]$ to be $[x, y] = [u_1, u_3]$. FIG.19is an illustration of a screenshot1910of a 3D point cloud of a structure showing a reference position1922in comparison with the predicted coordinates1924based on a new, untrained image. Experimental results applied to one embodiment revealed an average localization error of 1.9 meters within a total area of 600 m², although these results are exemplary and not limiting as to any particular embodiment. FIG.20illustrates a block diagram of a re-localization processor2000, according to an embodiment. The re-localization processor2000is configured to perform and/or accelerate logic operations to perform CNN regression based re-localization and can be integrated within a data processing system as described herein. In one embodiment the re-localization processor2000includes an image processor2002and a GPGPU engine2010. The image processor is configured to process visual data received via a sensor.
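Both least-squares fits above, and their composition into $M_{ptc2hu}$, can be sketched with NumPy. This assumes a pure 2x2 linear map, as in the equations above; a practical implementation might add a translation term, which the text does not specify.

```python
# Least-squares fit of a 2x2 matrix M with dst ~= M @ src over matched points.
import numpy as np

def fit_transform(src_pts, dst_pts):
    """src_pts, dst_pts: (N, 2) arrays of corresponding points."""
    M_T, *_ = np.linalg.lstsq(src_pts, dst_pts, rcond=None)  # src @ M_T = dst
    return M_T.T

# PH: pose positions [u1, u3]; PP: marked points in screenshots;
# SSP/HMP: matched screenshot / 2D-map points (all illustrative arrays).
def compose_ptc2hu(PH, PP, SSP, HMP):
    M_ptc2ss = fit_transform(PH, PP)    # 1614: point cloud -> screenshot
    M_ss2hu = fit_transform(SSP, HMP)   # 1616: screenshot -> 2D map
    return M_ss2hu @ M_ptc2ss           # 1618: P_new = M_ptc2hu @ P_old
```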
The sensor can be an image sensor such as a charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) sensor and can be configured to capture visual data in a still or video format. In one embodiment the image processor2002is configured to receive binocular data from multiple image sensors. In one embodiment the image processor2002is configured to process visual data having integrated depth data (e.g., RGBD data). In one embodiment the image processor2002is coupled with a general purpose graphics processing unit (GPGPU) engine2010including execution logic configured to perform graphics processing and general purpose computational operations. The GPGPU engine2010includes fixed function logic as well as programmable execution logic such as the execution logic600illustrated inFIG.6. The fixed function and programmable execution logic of the GPGPU engine2010can be configured to enable a camera pose estimator2012, point cloud generator2014, position transform logic2016, CNN training logic2018, and CNN regression logic2020, which can each be a hardware logic unit such as an ASIC or FPGA, or can be a shader-based logic module executed by programmable execution logic of the GPGPU engine2010. In one embodiment the camera pose estimator2012is configured to estimate a camera pose for a unit of visual data. The point cloud generator2014is configured to generate a point cloud based on a set of estimated camera poses. The position transform logic2016is configured to transform a position within the 3D point cloud to a position within a 2D map of the location. The CNN training logic2018is configured to train the CNN to predict coordinates on the map for an image. The CNN training logic2018can train the CNN using an image and position pair, where the image includes the unit of visual data and the position is the position of the unit of visual data within the map of the location. The CNN regression logic2020can use the trained CNN model to perform re-localization operations based on newly acquired visual data. FIG.21is a block diagram of a computing device2100including a graphics processor2104, according to an embodiment. The computing device2100can be a computing device such as the data processing system100ofFIG.1. The computing device2100may also be or be included within a communication device such as a set-top box (e.g., Internet-based cable television set-top boxes, etc.), global positioning system (GPS)-based devices, etc. The computing device2100may also be or be included within mobile computing devices such as cellular phones, smartphones, personal digital assistants (PDAs), tablet computers, laptop computers, e-readers, smart televisions, television platforms, wearable devices (e.g., glasses, watches, bracelets, smartcards, jewelry, clothing items, etc.), media players, etc. For example, in one embodiment, the computing device2100includes a mobile computing device employing an integrated circuit (“IC”), such as system on a chip (“SoC” or “SOC”), integrating various hardware and/or software components of computing device2100on a single chip. The computing device2100includes a graphics processor2104. The graphics processor2104represents any graphics processor described herein. The graphics processor includes one or more graphics engine(s), graphics processor cores, and other graphics execution resources as described herein.
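A hedged sketch of the regression stage enabled by the CNN training logic2018and CNN regression logic2020follows: a small network regresses the six-component target, and the 2D map position is read from the first and third components ($[x, y] = [u_1, u_3]$). PyTorch and this architecture are used purely for illustration; the embodiments do not prescribe a framework or network design.

```python
# Illustrative CNN regression head; architecture and framework are assumptions.
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),       # collapse spatial dims to 1x1
        )
        self.head = nn.Linear(32, 6)       # regress [u1 ... u6]

    def forward(self, image):
        return self.head(self.features(image).flatten(1))

def predict_map_position(model, image):
    """image: (3, H, W) tensor; returns the predicted 2D map coordinates."""
    u = model(image.unsqueeze(0))[0]
    return u[0].item(), u[2].item()        # [x, y] = [u1, u3]
```

Training would minimize a regression loss (e.g., mean squared error) between the predicted six-vector and the reordered image-position targets of1624, which place $x$ and $y$ in the first and third slots with zeros elsewhere.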
Such graphics execution resources can be presented in the forms including but not limited to execution units, shader engines, fragment processors, vertex processors, streaming multiprocessors, graphics processor clusters, or any collection of computing resources suitable for the processing of graphics and image resources. In one embodiment the graphics processor2104includes a cache2114, which can be a single cache or divided into multiple segments of cache memory, including but not limited to any number of L1, L2, L3, or L4 caches, render caches, depth caches, sampler caches, and/or shader unit caches. In one embodiment the graphics processor2104can be configured as a re-localization processor2000as inFIG.20. In such an embodiment the graphics processor2104includes a GPGPU engine2124, an image processor2134, CNN logic2144, and display logic2154. The GPGPU engine2124and image processor2134can be variants of the GPGPU engine2010and image processor2002ofFIG.20. The CNN logic2144can include the CNN training logic2018and the CNN regression logic2020ofFIG.20. The display logic2154can be configured to output location and/or mapping data to a display coupled to or integrated within the computing device2100. The image processor2134, in one embodiment, is additionally configured to process newly captured images from an image sensor or camera device and perform re-localization operations using the newly captured images via the CNN logic2144. As illustrated, in one embodiment, and in addition to the graphics processor2104, the computing device2100may further include any number and type of hardware components and/or software components, including, but not limited to an application processor2106, memory2108, and input/output (I/O) sources2110. The application processor2106can interact with a hardware graphics pipeline, as illustrated with reference toFIG.3, to share graphics pipeline functionality. Processed data is stored in a buffer in the hardware graphics pipeline and state information is stored in memory2108. The resulting data can be transferred to a display controller (e.g., display logic2154) for output via a display device, such as the display device320ofFIG.3. The display device may be of various types, such as Cathode Ray Tube (CRT), Thin Film Transistor (TFT), Liquid Crystal Display (LCD), Organic Light Emitting Diode (OLED) array, etc., and may be configured to display information to a user via a graphical user interface. The application processor2106can include one or more processors, such as processor(s)102ofFIG.1, and may be the central processing unit (CPU) that is used at least in part to execute an operating system (OS)2102for the computing device2100. The OS2102can serve as an interface between hardware and/or physical resources of the computing device2100and one or more users. The OS2102can include driver logic2122for various hardware devices in the computing device2100. The driver logic2122can include graphics driver logic2123such as the user mode graphics driver1026and/or kernel mode graphics driver1029ofFIG.10. It is contemplated that in some embodiments the graphics processor2104may exist as part of the application processor2106(such as part of a physical CPU package) in which case, at least a portion of the memory2108may be shared by the application processor2106and graphics processor2104, although at least a portion of the memory2108may be exclusive to the graphics processor2104, or the graphics processor2104may have a separate store of memory.
The memory2108may comprise a pre-allocated region of a buffer (e.g., framebuffer); however, it should be understood by one of ordinary skill in the art that the embodiments are not so limited, and that any memory accessible to the lower graphics pipeline may be used. The memory2108may include various forms of random access memory (RAM) (e.g., SDRAM, SRAM, etc.) comprising an application that makes use of the graphics processor2104to render a desktop or 3D graphics scene. A memory controller hub, such as memory controller hub116ofFIG.1, may access data in the memory2108and forward it to graphics processor2104for graphics pipeline processing. The memory2108may be made available to other components within the computing device2100. For example, any data (e.g., input graphics data) received from various I/O sources2110of the computing device2100can be temporarily queued into memory2108prior to their being operated upon by one or more processor(s) (e.g., application processor2106) in the implementation of a software program or application. Similarly, data that a software program determines should be sent from the computing device2100to an outside entity through one of the computing system interfaces, or stored into an internal storage element, is often temporarily queued in memory2108prior to its being transmitted or stored. The I/O sources can include devices such as touchscreens, touch panels, touch pads, virtual or regular keyboards, virtual or regular mice, ports, connectors, network devices, or the like, and can attach via an input/output (I/O) control hub (ICH)130as referenced inFIG.1. Additionally, the I/O sources2110may include one or more I/O devices that are implemented for transferring data to and/or from the computing device2100(e.g., a networking adapter); or, for a large-scale non-volatile storage within the computing device2100(e.g., hard disk drive). User input devices, including alphanumeric and other keys, may be used to communicate information and command selections to graphics processor2104. Another type of user input device is cursor control, such as a mouse, a trackball, a touchscreen, a touchpad, or cursor direction keys to communicate direction information and command selections to GPU and to control cursor movement on the display device. Camera and microphone arrays of the computing device2100may be employed to observe gestures, record audio and video and to receive and transmit visual and audio commands. I/O sources2110configured as network interfaces can provide access to a network, such as a LAN, a wide area network (WAN), a metropolitan area network (MAN), a personal area network (PAN), Bluetooth, a cloud network, a cellular or mobile network (e.g., 3rd Generation (3G), 4th Generation (4G), etc.), an intranet, the Internet, etc. Network interface(s) may include, for example, a wireless network interface having one or more antenna(e). Network interface(s) may also include, for example, a wired network interface to communicate with remote devices via network cable, which may be, for example, an Ethernet cable, a coaxial cable, a fiber optic cable, a serial cable, or a parallel cable. Network interface(s) may provide access to a LAN, for example, by conforming to IEEE 802.11 standards, and/or the wireless network interface may provide access to a personal area network, for example, by conforming to Bluetooth standards. Other wireless network interfaces and/or protocols, including previous and subsequent versions of the standards, may also be supported.
In addition to, or instead of, communication via the wireless LAN standards, network interface(s) may provide wireless communication using, for example, Time Division, Multiple Access (TDMA) protocols, Global Systems for Mobile Communications (GSM) protocols, Code Division, Multiple Access (CDMA) protocols, and/or any other type of wireless communications protocols. It is to be appreciated that a lesser or more equipped system than the example described above may be preferred for certain implementations. Therefore, the configuration of the computing device2100may vary from implementation to implementation depending upon numerous factors, such as price constraints, performance requirements, technological improvements, or other circumstances. Examples include (without limitation) a mobile device, a personal digital assistant, a mobile computing device, a smartphone, a cellular telephone, a handset, a one-way pager, a two-way pager, a messaging device, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a handheld computer, a tablet computer, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a main frame computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, multiprocessor systems, processor-based systems, consumer electronics, programmable consumer electronics, television, digital television, set top box, wireless access point, base station, subscriber station, mobile subscriber center, radio network controller, router, hub, gateway, bridge, switch, machine, or combinations thereof. The following clauses and/or examples pertain to specific embodiments or examples thereof. Specifics in the examples may be used anywhere in one or more embodiments. The various features of the different embodiments or examples may be variously combined with some features included and others excluded to suit a variety of different applications. Examples may include subject matter such as a method, means for performing acts of the method, at least one machine-readable medium including instructions that, when performed by a machine, cause the machine to perform acts of the method, or of an apparatus or system according to embodiments and examples described herein. Various components can be a means for performing the operations or functions described. Embodiments described herein provide a processing apparatus comprising compute logic to train a convolutional neural network (CNN) to perform autonomous re-localization for a service robot or mobile device. In one embodiment the apparatus comprises an image processor to process visual data received via a sensor and a general purpose graphics processing engine to perform camera pose estimation for image data and generate a transformation matrix to transform positions of camera pose estimations to positions within a human readable map of the location. The images and transformed positions are used to train the CNN to perform re-localization. One embodiment provides a processing apparatus to configure a convolutional neural network (CNN) to perform autonomous re-localization within a structure. The processing apparatus comprises an image processor to process visual data received via a sensor and a general purpose graphics processing engine.
In one embodiment the general purpose graphics processing engine includes a camera pose estimator to estimate a camera pose for a unit of visual data; a point cloud generator to generate a point cloud based on a set of estimated camera poses; position transformation logic to transform a position within the point cloud to a position within a map of the structure; and CNN training logic to train the CNN to predict coordinates on the map for an image, the CNN training logic to train the CNN using an image and position pair, wherein the image includes the unit of visual data and the position includes the position of the unit of visual data within the map of the structure. One embodiment provides a data processing system comprising a storage device to store data for a convolutional neural network (CNN) configured to perform regression-based re-localization for a structure; a display device to display results of the re-localization; and a re-localization processor including CNN regression logic to perform regression-based re-localization, wherein the CNN regression logic is trained via CNN training logic configured to train the CNN to predict coordinates on a map of the structure for an image, wherein the CNN is trained using a set of image and position pairs, wherein the image includes a unit of visual data for the structure and the position includes the position of the unit of visual data within the map of the structure. Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited, since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims. | 99,059 |
11859974 | Reference numerals: 1: deformable coupling pipeline; 2: unmanned trajectory tracer; 3: wireless power transmitting coil; 4: first NFC terminal; 5: motor wheel; 6: support link; 7: wireless coupling coil; 8: second NFC terminal; 9: single chip microcomputer; 10: inertial sensor; 11: cavity; 12: solar cell; 13: GPS device; 14: control box; 15: monitoring pier; 16: simply-supported hinge type; 17: cantilever type; 18: floating hinge type; 19: landslide mass; 20: initial measuring line; 21: deformation measuring line; 22: connecting rod; 22a: ring-shaped protrusion portion; 23: spring; 24: lantern ring; and 25: battery. DETAILED DESCRIPTION In order to make the objectives, technical solutions, and advantages of the present disclosure clearer, the implementations of the present disclosure are described in more detail below with reference to the accompanying drawings. Refer to FIG. 1 to FIG. 4. An embodiment of the present disclosure provides an unmanned system for monitoring lateral deformation of a landslide based on inertial measurement. The system includes a deformable coupling pipeline 1, an unmanned trajectory tracer 2, and two monitoring piers 15. The deformable coupling pipeline 1 is disposed in a landslide mass 19 and at an upper part of a sliding surface. The deformable coupling pipeline 1 may be buried under a shallow surface of the landslide mass 19 by manually excavating a trench, or may be disposed in a deep part of the landslide mass 19 through drilling. In a sliding process of the landslide mass 19, the deformable coupling pipeline 1 needs to slide with the landslide mass 19. Therefore, the deformable coupling pipeline 1 should have low flexural rigidity to avoid uncoordinated pipeline-soil coupling deformation. The deformable coupling pipeline 1 may be a polyvinyl chloride (PVC) wired hose or the like, and should be buried at a soft place or another soil region that can be easily excavated. The unmanned trajectory tracer 2 is disposed in the deformable coupling pipeline 1. The unmanned trajectory tracer 2 is provided with a battery 25, a plurality of motor wheels 5, an inertial sensor 10, and a single chip microcomputer 9 that are electrically connected. The battery 25 supplies power. The motor wheel 5 is configured to make contact with an inner wall of the deformable coupling pipeline 1. Owing to the force of friction between the motor wheel 5 and the inner wall of the deformable coupling pipeline 1, when the motor wheel 5 is powered on, the single chip microcomputer 9 controls the motor wheel 5 to rotate, to drive the unmanned trajectory tracer 2 to move back and forth in the deformable coupling pipeline 1. In a moving process of the unmanned trajectory tracer 2, the single chip microcomputer 9 controls the inertial sensor 10 to measure a shape of the deformable coupling pipeline 1, realizing the set monitoring frequency for the deformable coupling pipeline 1. The two monitoring piers 15 are securely connected to the two ends of the deformable coupling pipeline 1 respectively. The monitoring pier 15 is provided with a GPS device 13 and a communication device. The GPS device 13 is configured to obtain a position, namely, spatial coordinates, of the monitoring pier 15 in real time. The communication device is in communication connection with the single chip microcomputer 9, and the single chip microcomputer 9 obtains the shape of the deformable coupling pipeline 1 and sends it to the communication device. The communication device is configured to upload the shape of the deformable coupling pipeline 1 to a network or a mobile terminal.
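The patent does not spell out how the inertial samples become a pipeline shape, but one standard approach (used with conventional in-place inclinometer probes) is to treat each sample as a tilt measurement over a travelled segment and accumulate segment vectors. The sketch below illustrates that assumption; all names and the segment-length value are illustrative.

```python
# Hedged sketch of turning inertial tilt samples into a pipeline shape:
# each sample is a tilt angle over one travelled segment, and segment
# vectors are accumulated into a 2D profile. Not taken from the patent.
import math

def pipeline_shape(tilt_angles_rad, segment_len_m=0.5):
    """Integrate per-segment tilt angles into a 2D profile (x: along the
    measuring line, y: lateral deflection). Returns (x, y) vertices."""
    x, y = 0.0, 0.0
    shape = [(x, y)]
    for a in tilt_angles_rad:
        x += segment_len_m * math.cos(a)   # advance along the measuring line
        y += segment_len_m * math.sin(a)   # accumulate lateral offset
        shape.append((x, y))
    return shape

# e.g. a gentle bulge in the middle of the line:
profile = pipeline_shape([0.0, 0.01, 0.03, 0.01, 0.0, -0.02, -0.03])
```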
Further, the monitoring pier 15 is provided with a solar cell 12, the deformable coupling pipeline 1 is wound with a wireless power transmitting coil 3, the solar cell 12 is electrically connected to the wireless power transmitting coil 3, the unmanned trajectory tracer 2 is wound with a wireless coupling coil 7, the wireless power transmitting coil 3 is wirelessly coupled with the wireless coupling coil 7, and the battery 25 is electrically connected to the wireless coupling coil 7. The solar cell 12 can absorb light energy from sunlight and convert it into electric energy, use the wireless power transmitting coil 3 and the wireless coupling coil 7 to wirelessly transmit the electric energy through electromagnetic induction, and store the electric energy in the battery 25, so that the single chip microcomputer 9, the inertial sensor 10, and the motor wheel 5 can be charged wirelessly. This makes wiring easy and convenient, and avoids the wiring troubles and safety problems that wired charging would cause when the unmanned trajectory tracer 2 moves back and forth. The deformable coupling pipeline 1 is provided with a first NFC terminal 4, the unmanned trajectory tracer 2 is provided with a second NFC terminal 8 that corresponds to the first NFC terminal 4, the first NFC terminal 4 is in communication connection with the second NFC terminal 8, and the first NFC terminal 4 is electrically connected to the communication device. The second NFC terminal 8 and the first NFC terminal 4 can realize near-field communication and transfer monitoring and control information. The second NFC terminal 8 is electrically connected to the single chip microcomputer 9. The wireless coupling coil 7 is electrically connected to the second NFC terminal 8, to wirelessly charge the second NFC terminal 8. Specifically, the unmanned trajectory tracer 2 includes a hollowed-out cavity 11, and the inertial sensor 10, the single chip microcomputer 9, and the second NFC terminal 8 are secured in the cavity 11. The wireless coupling coil 7 is wound inside the cavity 11 and corresponds to the wireless power transmitting coil 3. The single chip microcomputer 9 controls the second NFC terminal 8 and the inertial sensor 10. The motor wheel 5 may be directly secured on the unmanned trajectory tracer 2. In this embodiment, the unmanned trajectory tracer 2 further includes two connecting rods 22 and support links 6. The two connecting rods 22 extend along an extension direction of the deformable coupling pipeline 1, and are respectively secured at two ends of the cavity 11. The connecting rods 22 are each connected to a plurality of support links 6. One end of the support link 6 is securely connected to an end portion of the cavity 11, and the other end of the support link is connected to an end, far away from the cavity 11, of the connecting rod 22. The motor wheel 5 is secured on each support link 6. This can improve stability of the unmanned trajectory tracer 2 during movement. Specifically, there are three evenly spaced support links 6, and the unmanned trajectory tracer 2 is in contact with and spaced from the deformable coupling pipeline 1 by using the motor wheel 5. This can reduce the force of friction when the unmanned trajectory tracer 2 moves in the deformable coupling pipeline 1, and enhance the stability of the unmanned trajectory tracer 2 during movement. The other end of the support link 6 is slidably mounted on the connecting rod 22, and a spring 23 is connected between the other end of the support link 6 and the end, far away from the cavity 11, of the connecting rod 22.
In this embodiment, the connecting rod 22 is sleeved with a lantern ring 24, the other end of the support link 6 is securely connected to the lantern ring 24, the end, far away from the cavity 11, of the connecting rod 22 protrudes outward to form a ring-shaped protrusion portion 22a, the spring 23 is connected between the lantern ring 24 and the ring-shaped protrusion portion 22a, and the spring 23 is in a compressed state. Because the return action of the spring 23 generates thrust on the support link 6, the motor wheel 5 abuts against the inner wall of the deformable coupling pipeline 1. The other end of the support link 6 can slide on the connecting rod 22, and a distance between the motor wheel 5 and the connecting rod 22 can be adjusted. In this way, the unmanned trajectory tracer 2 can be applied to deformable coupling pipelines 1 with different pipeline diameters. The monitoring pier 15 may be provided with a control device. The control device is electrically connected to the solar cell 12 to control the solar cell 12 and the wireless power transmitting coil 3 to cooperate with each other to supply power. The control device is electrically connected to the wireless power transmitting coil 3 and the first NFC terminal 4 to control power supply of the wireless power transmitting coil 3 and information collection of the first NFC terminal 4. The communication device and the control device are mounted in a control box 14. Based on the above unmanned system for monitoring lateral deformation of a landslide based on inertial measurement, an embodiment of the present disclosure further provides a monitoring method. As shown in FIG. 5, the monitoring method includes the following steps.
S1: Determine a position of an initial measuring line 20 of a landslide mass 19 based on existing geological exploration data. Determining the initial measuring line 20 includes determining a depth, an elevation, and a disposing manner within the landslide mass 19, and needs to comprehensively consider geological and topographical conditions, the deformable coupling form of the pipeline, and the purpose of and demand for displacement distribution measurement. The initial measuring line 20 needs to be perpendicular to a sliding direction of the landslide mass 19, and is generally located near a front edge of a surface of the landslide mass 19.
S2: Dispose a deformable coupling pipeline 1 in the landslide mass 19 along a direction of the initial measuring line 20. To bury the deformable coupling pipeline 1 under a shallow surface, excavate a trench with a width larger than that of the deformable coupling pipeline 1, spread and place the deformable coupling pipeline 1 in the trench properly, and then cover the deformable coupling pipeline 1 with soil. To bury the deformable coupling pipeline 1 deeply, drill a borehole by using a drilling machine, dispose the deformable coupling pipeline 1 in the borehole, and place an unmanned trajectory tracer 2 at one end of the deformable coupling pipeline 1 after burying the deformable coupling pipeline 1.
S3: Build monitoring piers 15 on two sides of the deformable coupling pipeline 1, and securely connect the monitoring piers 15 to the two ends of the deformable coupling pipeline 1.
The monitoring pier 15 is made of concrete, and the buried depth of the concrete pier should be enough to prevent the two ends of the deformable coupling pipeline 1 from moving.
S4: Before monitoring, use a wireless power transmitting coil 3 and a wireless coupling coil 7 to transmit, wirelessly through electromagnetic induction, electric energy generated by a solar cell 12 to a battery 25 for storage, so that a single chip microcomputer 9, an inertial sensor 10, and a motor wheel 5 can be charged wirelessly; and use a control device to initialize a monitoring frequency and other information of the unmanned trajectory tracer, and transfer, by using a first NFC terminal 4 and a second NFC terminal 8, the monitoring frequency and other information to the single chip microcomputer 9 for storage. After the monitoring starts, the battery 25 and the motor wheel 5 are powered on. The motor wheel 5 rotates to drive the unmanned trajectory tracer 2 to move back and forth in the deformable coupling pipeline 1. The single chip microcomputer 9 controls the inertial sensor 10 and the motor wheel 5, to measure the disposed deformable coupling pipeline 1 regularly. Positioning is performed by using a GPS device 13, and a shape of the deformable coupling pipeline 1 is obtained by using the inertial sensor 10, to obtain a deformation measuring line 21 of the deformable coupling pipeline 1. Each measurement needs to be repeated a plurality of times to obtain an average value. The initial measuring line 20 of the deformable coupling pipeline 1 is used as a zero displacement. In a subsequent monitoring process, a displacement distribution curve, along a direction of the measuring line, of the landslide mass 19 is obtained by subtracting the curve of the initial measuring line 20 from each measured curve of the deformation measuring line 21. Specifically, the single chip microcomputer 9 controls the motor wheel 5 to drive the unmanned trajectory tracer 2 to move back and forth once in the deformable coupling pipeline 1, and meanwhile, the single chip microcomputer 9 controls the inertial sensor 10 to measure a current shape of the deformable coupling pipeline 1 as the deformation measuring line 21. This is referred to as one monitoring process. Positioning is performed by using the GPS device 13, and the shape of the deformable coupling pipeline 1 is obtained by using the inertial sensor 10. The single chip microcomputer 9 obtains the shape of the deformable coupling pipeline 1 and sends it to the communication device. In this embodiment, the single chip microcomputer 9 transmits information measured by the inertial sensor 10 back to the communication device on the monitoring pier 15 through cooperation between the second NFC terminal 8 and the first NFC terminal 4. The communication device uploads the shape of the deformable coupling pipeline 1 to a network or a mobile terminal, to obtain the deformation measuring line 21 of the deformable coupling pipeline 1. Refer to FIG. 2. The deformable coupling pipeline 1 may be disposed in three manners. When the two sides of the landslide mass 19 each have a solid ground surface, the deformable coupling pipeline 1 runs through the whole landslide mass 19 along a cross section of the landslide mass 19, the two ends of the deformable coupling pipeline 1 are outside the landslide mass 19, the monitoring piers 15 are secured on the solid ground surface, and the positions of the ends of the deformable coupling pipeline 1 are absolutely fixed and will not change with deformation of the deformable coupling pipeline 1, forming a simply-supported hinge type 16.
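The displacement bookkeeping described above is simple point-wise arithmetic: the initial measuring line 20 is taken as zero displacement, repeated passes are averaged into one deformation measuring line 21, and the displacement distribution is the difference of the two curves. A minimal sketch follows; function and variable names are illustrative, not from the patent.

```python
# Minimal sketch of the displacement computation: average repeated runs,
# then subtract the initial measuring line point-wise.
def average_runs(runs):
    """Average several repeated measuring-line runs (lists of equal length)."""
    n = len(runs)
    return [sum(vals) / n for vals in zip(*runs)]

def displacement_distribution(initial_line, deformation_runs):
    """Point-wise lateral displacement along the measuring line, in the same
    units as the input deflections (e.g. metres)."""
    deformed = average_runs(deformation_runs)
    return [d - d0 for d, d0 in zip(deformed, initial_line)]

initial = [0.000, 0.000, 0.000, 0.000, 0.000]
runs = [[0.001, 0.012, 0.031, 0.011, 0.002],
        [0.001, 0.010, 0.029, 0.009, 0.000]]
print(displacement_distribution(initial, runs))  # peak displacement mid-line
```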
When the shape of the deformable coupling pipeline 1 is measured in a time-sharing manner to calculate a displacement, an absolute displacement distribution curve, along a direction of the measuring line, of the landslide mass 19 can be calculated only by aligning the positions of the two ends. When only one side of the landslide mass 19 has a solid ground surface, one end of the deformable coupling pipeline 1 runs through a boundary of the landslide mass 19, and the other end is located in the landslide mass 19, forming a cantilever type 17. One monitoring pier 15 is secured on the solid ground surface, and the other monitoring pier 15 is secured on the landslide mass 19. In this way, single-ended (GPS monitoring pier 15) positioning can be performed only through a fixed end point outside the boundary of the landslide mass 19, and positioning precision is slightly worse than that of the two-ended positioning in the simply-supported hinge type 16. When a smaller landslide mass 19 is formed within a larger landslide mass, the two ends of the deformable coupling pipeline 1 are secured in the landslide mass 19 to form a floating hinge type 18, and the two monitoring piers 15 are secured on the landslide mass 19. In this case, the monitoring relies only on local deformation of the landslide mass 19 and a relative displacement distribution curve based on the positions of the two ends, resulting in the weakest monitoring effect of the landslide mass 19 among the three manners. According to the technical solutions provided in the present disclosure, the system can work at any time and any place under any weather condition. It has a mature technology and reasonable design, and can be widely applied. It is applicable to monitoring of surface, underground, and even underwater deformation of a landslide. The unmanned trajectory tracer 2 has a high update rate of measurement data, and desirable short-term precision and stability. The inertial sensor 10 can provide data on the spatial position, moving speed and direction, and spatial posture of a monitored object, and the generated measurement information has excellent continuity and low noise. With an unmanned design, the monitoring device is economically advantageous and can be easily popularized. The solar cell 12 is used for power collection, storage, and supply of the whole system. The solar cell 12 can obtain electric energy in an energy-saving and environmentally friendly way, and can also wirelessly charge the unmanned trajectory tracer 2. In this specification, the terms "front", "back", "upper", and "lower" are defined based on the positions of the components or parts in the figures and the relative positions of the components or parts, to merely express the technical solutions clearly and conveniently. It should be understood that these terms are not used to limit the protection scope of the present disclosure. The embodiments of the present disclosure and the features in the embodiments may be combined with each other in a non-conflicting situation. The above-mentioned are merely preferred embodiments of the present disclosure, and are not intended to limit the present disclosure. Any modifications, equivalent replacements, and improvements made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure. | 16,992 |
11859975 | DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS An exemplary embodiment according to the present application is shown in FIGS. 1-15. As shown in FIG. 1, the exemplary embodiment shows a rotary laser level 10. Rotary laser levels are known, for example, as shown in U.S. Pat. Nos. 4,854,703; 4,751,782; and 6,338,681, which are herein incorporated by reference in their entirety. Another rotary laser level is shown in US Patent Application Publication No. 2014/0203172, which is hereby incorporated by reference. The present application may also be applicable to other types of lasers such as U.S. Pat. Nos. 7,665,217; 7,076,880; 6,964,106; 7,481,002; 7,027,480; 8,640,350; 6,606,798; 7,013,571; 7,111,406; 7,296,360; and 7,571,546, which are herein incorporated by reference in their entirety. FIGS. 1-8 illustrate the exemplary embodiment of the invention with the removable battery pack 200 attached. FIGS. 9-15 illustrate the exemplary embodiment of the invention without the battery pack. As shown in the FIGS., there is a laser level 10. The laser level 10 includes a control mechanism housing 20. A laser projector 30 extends from the control mechanism housing and is configured to project a laser onto a surface. In the case of the rotary laser shown, the laser projection can be a 360 degree rotary projection. In other instances, the projector 30 may project one or more dots, one or more lines, or a combination of lines and dots. The control mechanism housing 20 includes a control mechanism which provides for projection of one or more laser beams or dots by the laser projector 30. The control mechanism may include, among other things: an LED or other light source; one or more lenses; one or more mirrors; a motor; and a microprocessor configured to control the laser level 10. The control mechanism may be a control mechanism shown in one of U.S. Pat. Nos. 4,854,703; 4,751,782; 6,338,681; US 2014/0203172; U.S. Pat. Nos. 7,665,217; 7,076,880; 6,964,106; 7,481,002; 7,027,480; 8,640,350; 6,606,798; 7,013,571; 7,111,406; 7,296,360; and 7,571,546, all of which have been incorporated by reference. For example, the control mechanism housing 20 may include the control mechanism housed in the upper casing part shown and described in U.S. Pat. No. 4,854,703. Alternatively, the control mechanism housing 20 may include the control mechanism shown and described in U.S. Pat. No. 8,640,350. In various embodiments, the projector 30 may be disposed at different places along the control mechanism housing 20. For example, the projector 30 may be on a front surface or may be internal to the control mechanism housing 20 with beams projecting out from the control mechanism housing 20. The control mechanism housing 20 has a substantially cubical shape. Accordingly, it has a top surface 21, a bottom surface 22, a left surface 23, a right surface 24, a front surface 25, and a back surface 26. The removable battery pack 200 is provided at the back surface. The removable battery pack 200 provides power for the laser level 10. The removable battery pack 200 may be a power tool battery pack such that it can be removed and mounted to a variety of power tools, outdoor power tools, cleaning tools, or other tools or products. As shown in FIG. 11, the laser level 10 includes a receptacle 201 for receiving the battery pack. The receptacle 201 can be one of many known designs for receiving a battery pack, including those for receiving a power tool battery pack.
The control mechanism housing 20 may be made of a rigid material such as acrylonitrile butadiene styrene (ABS), high impact polypropylene, or high impact polystyrene (HIPS). In an embodiment, the control mechanism housing 20 may be made of a material having a Rockwell R hardness of 60 to 140. In other embodiments, the control mechanism housing may be made of a material having a Rockwell R hardness of 70 to 130, 80 to 120, or 80 to 114. As shown in FIGS. 1-15, the laser level 10 has a protective structure 50. The protective structure 50 comprises an upper protective structure 100 and a lower protective structure 150. The upper protective structure 100 projects upwardly and outwardly from the top surface 21 and the lower protective structure 150 projects downwardly and outwardly from the bottom surface 22. The protective structure 50 in the exemplary embodiment is made of a shock absorbing material which can deform on impact at a controlled rate to dissipate the impact energy over a longer period. In the exemplary embodiment, the material of the protective structure 50 is designed to absorb shocks better than the material of the control mechanism housing 20. The shock absorbing material may be a material such as rubber, foam, or a shock absorbing plastic. As can be appreciated, because the protective structure extends beyond the control mechanism housing 20, it protects the control mechanism housing 20 from impact when dropped from a variety of orientations. The flanges and the legs may be made of different durometers of rubber. For example, the rubber used for one or more legs may have a higher durometer than the rubber used for one or more flanges. The rubber used for one or more flanges may have a higher durometer than one or more legs. In other exemplary embodiments, it is contemplated that the legs may be made from metal, such as spring steel, for example. In other exemplary embodiments, the protective structure may comprise a molded skeleton made from an impact resistant polymer that is overmolded in rubber or foamed rubber. The upper protective structure 100 has four flanges 101, 102, 103, and 104. The flanges run roughly parallel to upper edges of the control mechanism housing 20. The flanges 101, 102, 103, and 104 are connected to the control mechanism housing 20 by upper legs 110. The lower protective structure 150 has four flanges 151, 152, 153, and 154. These flanges run roughly parallel to lower edges of the control mechanism housing 20. The flanges 151, 152, 153, and 154 are connected to the control mechanism housing 20 by lower legs 160. In the exemplary embodiment, each upper corner includes a pair of legs 110 and each lower corner includes a pair of legs 160. In other embodiments, there may be additional or fewer legs. For example, each corner may include only one leg. The legs can also be dimensioned differently. For example, they could be made thinner or thicker than shown in the exemplary embodiment. In the exemplary embodiment, the legs 110, 160 are made of the same material as the flanges 101, 102, 103, 104, 151, 152, 153, and 154. In other embodiments, the legs 110, 160 may be made of a different material than the flanges. The legs 110, 160 may be the same or differ in various ways. The upper protective structure 100 is configured so that the flanges 101, 102, 103, and 104 rise above the projector 30. In this way, the projector 30 is particularly protected against impact and the flanges 101, 102, 103, and 104 do not block lasers projecting from the projector 30. In the shown exemplary embodiment, the upper legs 110 are longer than the lower legs 160.
This allows the upper legs 110 to provide sufficient clearance for the projector 30 so that the flanges 101, 102, 103, and 104 do not block any projection from the projector 30. The legs create spaces 111, 112, 113, 114, 161, 162, 163, and 164 between the control mechanism housing 20 and the flanges 101, 102, 103, 104, 151, 152, 153, and 154. The spaces allow for a decreased weight. Additionally, this spaced construction provides better impact protection. Also, as can be appreciated, the flanges 101, 102, 103, 104, 151, 152, 153, and 154 can serve as grab handles so the laser level tool 10 can be carried or re-positioned. Additionally, the flanges 101, 102, 103, 104, 151, 152, 153, and 154 have at least one set of aligned flats which allows the laser level 10 to be accurately re-positioned on its side to project a vertical beam. That is, when sitting upright, the laser level 10 projects a beam in a horizontal plane. The flats allow the laser level 10 to be placed on its sides so that a beam can be projected vertically. The flanges are also designed so as to not interfere with mounting of the tool on a tripod either vertically or horizontally. In the exemplary embodiment, the laser level 10 can be stably positioned on a flat surface in at least six orientations. The at least six orientations correspond to the six sides of the cube-shaped control mechanism housing 20. That is, the laser level 10 can be positioned on a flat surface upright, upside-down, or on any of its four sides. In any of these orientations, the laser level 10 will sit stably with only the protective structure 50 resting on the flat surface. As shown in, for example, FIG. 3, the upper flanges 101, 102, 103, 104 form a closed shape that is generally square. The structure is open inside the closed square shape, allowing access to the laser projector 30. Similarly, as is shown in FIG. 8, the lower flanges 151, 152, 153, 154 form a closed, generally square shape. The structure is open inside the closed square shape, allowing access to the bottom surface 22 of the control mechanism housing 20. FIG. 16 shows another exemplary embodiment of a laser level 210. Unless otherwise stated, the features of this exemplary embodiment are the same as in the previous exemplary embodiment. For example, similar materials can be used in this exemplary embodiment and similar laser mechanisms are also possible. In this instance, the laser projector 230 is disposed at a center of a control mechanism housing 220. There is a protective structure 250 which includes an upper protective structure 300 and a lower protective structure 350. The upper protective structure 300 includes flanges 301, 302, 303, 304 which provide a generally square shape. The lower protective structure 350 includes flanges 351, 352 (and two other flanges not shown) which also provide a generally square shape. In this exemplary embodiment, the upper legs 310 include a single leg 310 at each corner of the control mechanism housing 220. As can be appreciated, in this embodiment, laser beams projected by the projector 230 do not have to pass through the upper legs 310. Accordingly, the upper legs 310 can be made thicker without impeding the projection of a laser beam. In this exemplary embodiment, the lower legs 360 also comprise a single leg 360 at each corner. As shown in FIG. 16, this exemplary embodiment includes support posts 240. The support posts 240 connect portions of the control mechanism housing 220. As can be appreciated, laser beams projected from the laser projector 230 may pass through this section of the housing 220.
Accordingly, having relatively thin support posts 240 lessens the amount of any disruption of a laser beam. Another exemplary embodiment of a laser level is shown in FIG. 17. As shown in FIG. 17, the laser level 410 of this exemplary embodiment is generally cylindrically shaped. In particular, the control mechanism housing 420 is roughly cylindrically shaped and the overall laser level 410 is roughly cylindrically shaped. As shown in FIG. 17, the projector 430 in this embodiment is at a top end of the housing 420. As with the previous embodiments, the laser level 410 of FIG. 17 has a protective structure 450 consisting of an upper protective structure 500 and a lower protective structure 550. The upper protective structure 500 includes a generally circular flange 501 which is supported by four upper legs 510. The flange 501 includes a series of bumpers 511. In the exemplary embodiment, there are four bumpers 511 which are disposed at each of the legs 510. In other embodiments, there may be a greater number of bumpers 511 or fewer bumpers 511. The lower protective structure 550 likewise includes a generally circular flange 551 which is supported by four lower legs 560. The lower protective structure 550 also includes a series of bumpers 561. The bumpers 511, 561 make it so that if the laser level 410 is placed on its side on a flat surface, the protective structure 450 contacts the flat surface and the control mechanism housing 420 does not. In this exemplary embodiment, because of the shape of the laser level 410, the laser level 410 can be rolled when on its side if pushed by a user. As with other embodiments, it can also be placed upside down. The bumpers 511, 561 also allow it to rest on each of its sides, as each bumper will resist further rolling. In other embodiments, the shape of the laser level could be different. For example, in FIG. 17, the flanges 501, 551 are generally circular and the control mechanism housing 420 is generally cylindrical. The shape could be modified so that the flanges are generally elliptical and the control mechanism housing is a generally elliptical cylinder. The major axis of the elliptical shape can be different in different embodiments. Simplified schematics for operation of a laser level are shown in FIGS. 18A and 18B. As shown in FIG. 18A, battery pack 200 provides power to a light source 201, a controller 202, and motor 203. The light source 201 may be, for example, a laser diode. The controller 202 may be, for example, a microcontroller or microprocessor. As shown in FIG. 18B, there is a light source 201. The light source 201 projects light through a collimator lens 204, and the light then travels through a prism 205 and finally is reflected off of mirror 206 disposed in the projector 30. Motor 203 rotates the projector 30. Alternatively, the motor 203 may rotate the mirror 206 directly. As previously discussed, other mechanisms for the laser are possible and part of the present application. In an exemplary embodiment, the upper protective structure 100 may be made of a material having a Shore A hardness of 40 to 100; 50 to 100; 60 to 100; 70 to 100; 70 to 90; 60 to 90; 50 to 90; 40 to 90; or 40 to 80. In an exemplary embodiment, the lower protective structure may be made of a material having a Shore A hardness of 40 to 100; 50 to 100; 60 to 100; 70 to 100; 70 to 90; 60 to 90; 50 to 90; 40 to 90; or 40 to 80. In an exemplary embodiment, the flanges of the upper protective structure may be made of a material having a Shore A hardness of 40 to 100; 50 to 100; 60 to 100; 70 to 100; 70 to 90; 60 to 90; 50 to 90; 40 to 90; or 40 to 80.
In an exemplary embodiment, the flanges of the lower protective structure may be made of a material having a Shore A hardness of 40 to 100; 50 to 100; 60 to 100; 70 to 100; 70 to 90; 60 to 90; 50 to 90; 40 to 90; or 40 to 80. The legs of the upper protective structure may be made of a material having a higher hardness than the material of the flanges of the upper protective structure. The legs of the upper protective structure may be made of a material having a lower hardness than the material of the flanges of the upper protective structure. The legs of the lower protective structure may be made of a material having a higher hardness than the material of the flanges of the lower protective structure. The legs of the lower protective structure may be made of a material having a lower hardness than the material of the flanges of the lower protective structure. In another embodiment, the flanges of the protective structure can form a triangular shape and the control mechanism housing can be generally shaped as a triangular prism. In other embodiments, the flanges could form a shape with more sides, such as five sides (pentagon), six sides (hexagon), seven sides (heptagon), eight sides (octagon), etc., and the control mechanism housing can be shaped with a corresponding structure (i.e., having a cross-section that corresponds to the shape formed by the flanges). In other embodiments, the shape formed by the flanges and the control mechanism housing may not correspond. For example, the flanges may form a hexagon shape while the control mechanism housing is generally cube shaped. Various different features have been shown and described with respect to different embodiments. It is contemplated that the features of the embodiments could be combined or used in other embodiments. For example, a centrally located projector as shown in FIG. 16 could also be used with a cylindrical laser level of the type shown in FIG. 17. While the invention has been described by way of exemplary embodiments, it is understood that the words which have been used herein are words of description, rather than words of limitation. Changes may be made within the purview of the appended claims, without departing from the scope and spirit of the invention in its broader aspects. | 16,249 |
11859976 | DETAILED DESCRIPTION The illustrations in the figures are used solely for illustration and, if not explicitly indicated otherwise, are not to be considered to be exactly to scale. Identical or functionally similar features are provided, if practical, with the same reference signs throughout and are differentiated if necessary with a letter as an index. The illustrated schemes each show the basic technical structure, which can be supplemented or modified by a person skilled in the art in accordance with general principles. The terms essentially, substantially, or at least approximately express in this case that a feature is preferably formed, but does not necessarily have to be 100% exact or exactly as literally described, but rather that minor deviations are also permissible—not only with respect to unavoidable practical inaccuracies and tolerances, but rather especially, for example, insofar as the technical effect essential for the invention is substantially maintained in this case. FIG. 1 schematically shows one possible example of an application using an embodiment of an automatic target search device 14s according to the invention in a geodetic surveying device 14. In this case, one or more surveying reflectors—in the example shown, for example, a reflector 10 as a measurement point in the surroundings of the device 14—are to be found and surveyed automatically using the positioned device 14. The reflector 10 can be formed as a surveying reflector—for example, as a triple prism on a surveyor's rod or pole (also having an operating unit 18 connected via a radio signal 16 here)—which is moved by a user 17 to different measuring positions. In this case, the device according to the invention in or on the surveying device 14 scans its surroundings for light reflections 9 to locate the reflector 10. The reflector 10 can therefore be targeted using the target axis 15 of the surveying device 14 to exactly survey the reflector 10 using a laser distance meter in the direction of the target axis 15. In this case, the locating of the reflector 10 to be surveyed is preferably to take place as rapidly and reliably as possible. During the searching (also referred to as scanning) of the surroundings, however, in general not only the reflection 9 from the desired surveyor's rod 19 is acquired, but rather also other reflections 9 from reflective objects possibly located in the surroundings, such as further reflectors in use, buildings 12, vehicles 60, headlights, cat's eyes, boundary posts, traffic signs, safety vests of workers, machines, etc. The scanning or searching for reflections 9 is performed in this case using a modulated, preferably pulsed emission of a fan 13 of optical radiation in the visible or invisible spectral range and an acquisition and analysis of reflections 9 of a part of this optical radiation from the location of the reflector 10. For example, a light fan 13 in the form of a straight laser line having an aperture angle, for example, of approximately +/−20° is emitted by the surveying device 14, and this light fan 13 is rotated over the spatial region 22 to be searched. The emission fan 13 is in this case preferably an essentially continuous and essentially homogeneous light fan 13, for example, by a light beam being emitted as a fan preferably spanning a plane, which spans a straight line in the viewing direction on the object.
In one embodiment, the light fan 13 can be, for example, aligned essentially vertically and can be rotated or pivoted, preferably by up to 360°, for example, jointly with a targeting or telescope unit of the surveying device 14, essentially horizontally around the standing axis of the device 14 by a motorized drive unit over a spatial region 22 to be searched. The invention can, however, also be designed in other embodiments having other alignments of the light fan 13 and/or pivot planes of the movement of the light fan 13, also by less than 360°. Using a preferably pulsed emission of the light fan 13, higher optical peak powers can be emitted in this case while maintaining ocular safety, whereby stronger reflections 9 are also obtained and an improvement of the SNR is achievable. In this case, a bundle of multiple emission fans 13 or a family of fan bundles more or less results, which cover the spatial region 22 to be searched. In particular in consideration of an emission of the emission fan 13 in the form of temporal light pulses having pulse widths of a few nanoseconds, the spatial region 22 is thus scanned using a bundle of discrete fans. With correspondingly faster scanning and/or with a corresponding ratio of the scanning to the movement, however, in this case a quasi-continuous scanning of the spatial region 22 to be searched can also be performed. According to the invention, in the emission of the emission fan 13, not only simple single pulse sequences, but rather also more complex types of modulation, for example, coded pulse sequences, pulse amplitude modulation, phase modulation, etc., are applicable. As already mentioned above with respect to the coverage of the search space 22, in this case the rate of the distance measurements is preferably at least 50 kHz, for example, also 500 kHz or even more, to achieve both an acceptable rotational velocity of the movement of the emission fan 13 and also a sufficient coverage of the search space 22. Jointly with the emission fan 13, in this case a fan-shaped reception region 20 of the device according to the invention is also moved along, which preferably essentially corresponds in size and spatial location approximately to the emission fan 13 or comprises it. In one embodiment, the reception region 20 of the reception fan can be formed approximately 3 to 10 times wider than the width of the emission fan 13 transversely to the fan direction, for example. In this case, according to the invention the reception region 20 is formed having a position-resolving optoelectronic sensor element or detector 11, so that a first location of the reflection 9 can be acquired or analyzed along the reception region fan 20—i.e., in the vertical direction in the above example. The fan-shaped reception region or reception fan 20 is preferably designed in this case in such a way that it essentially covers the same angle range as the emission fan 13, in particular for greater distances of several meters, which are typical in matters of surveying, or several dozen or even hundreds of meters and more, in particular in road construction. Using the position resolution according to the invention of the optical receiver or detector, in this case the reception fan 20 is divided during the analysis into multiple fan segments 20a, 20b, 20c, etc. In other words, the position resolution is achieved using segmenting of the reception fan 20 along its alignment into multiple regions or segments 20a, 20b, 20c, . . .
, by a position-resolving electro-optical detector 11 being formed having a linear arrangement of a plurality of pixels 1, in particular more than two pixels. In the present invention, however, the number of the pixels 1 is in this case especially less than 100, in particular less than 64 or 32 pixels. For example, an embodiment of a detector 11 according to the invention can comprise approximately 5 to 16 pixels. These pixels are juxtaposed along a preferably straight line in the fan direction, wherein a distance between the sensitivity surfaces of the pixels should as much as possible be kept at zero, or at least relatively small in relation to the size of the sensitivity surface of the pixel, preferably not greater than approximately 10%. According to the invention, in this case the arrangement of an optical system of the receiving unit and the position-resolving electro-optical detector 11 is formed in such a way that the regions or fan segments 20a, 20b, . . . , which are each acquired by one pixel, overlap. These segments 20a, 20b, . . . are at least essentially equally large in this case. In other words—as also symbolically shown in the beam path—the projections of the sensitivity surfaces of the pixels overlap in the object plane in this case. The pixels according to the invention are formed in this case in such a way that each of the pixels is formed as an SPAD array. Such an SPAD array comprises in this case a plurality of single photon avalanche photodiodes (SPADs) operated in the Geiger mode, which are interconnected to form a common output signal for the pixel, in particular using a parallel circuit of SPADs each provided with a series resistor—which form the SPAD array. In other words, the pixels according to the invention thus each have only a single output signal per pixel but are internally constructed having multiple photodiodes per pixel. Instead of a juxtaposition of multiple individual pixels, a single, specially designed producer-specific SPM diode pixel array can also be applied, for example, in the form of an Original Equipment Manufacturer (OEM) product. According to the invention, in this case the electrical and digital signal analysis of the position resolution is performed not solely using an analysis of a single pixel as such, i.e., using a simple association of a reflection 9 with a single one of the pixels; rather, the position resolution in the fan direction is greater according to the invention than the physical resolution of the detector 11 on the basis of the number of the existing pixels 1. Using the overlap according to the invention of the sectors or regions 20a, 20b, . . . , a reflection 9 from a reflector 10 within the acquisition region is at least partially received by a plurality of pixels, wherein an intermediate position dependent on the reception intensity between the pixels is ascertained, especially by a relative ratio of the reception intensity from the pixels being analyzed to yield an intermediate position, or a location of a barycenter or a maximum value of the reflection 9 being physically formed and analyzed accordingly as an intermediate position. In one embodiment according to the invention, in this case the analysis of the output signals of the pixels can take place in parallel, i.e., over multiple acquisition channels, especially having one channel per pixel. In particular, in this case each of the pixels can be analyzed using a separate A/D converter channel, wherein the cycling of the A/D converter channels is preferably synchronized.
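The sub-pixel analysis just described reduces to an intensity-weighted centroid over the per-pixel analog amplitudes: because the pixel fields of view overlap, a reflection illuminates several adjacent SPAD-array pixels, and the ratio of their amplitudes yields an intermediate position finer than the pixel pitch. A hedged sketch follows; function names, the fan angle range, and the example amplitudes are illustrative, not from the patent.

```python
# Hedged sketch of sub-pixel position interpolation along the reception fan:
# the centroid of the per-pixel analog amplitudes is mapped linearly onto
# the fan angle range.
def fan_position(pixel_amplitudes, fan_min_deg=-20.0, fan_max_deg=20.0):
    """Return an interpolated angular position along the reception fan.

    pixel_amplitudes: per-pixel analog reception amplitudes, index 0 at the
    start of the fan.
    """
    total = sum(pixel_amplitudes)
    if total == 0:
        return None  # no reflection received
    centroid = sum(i * a for i, a in enumerate(pixel_amplitudes)) / total
    n = len(pixel_amplitudes)
    return fan_min_deg + (centroid / (n - 1)) * (fan_max_deg - fan_min_deg)

# A reflection centered between pixels 3 and 4 of a 10-pixel detector:
print(fan_position([0, 0, 1, 7, 7, 1, 0, 0, 0, 0]))  # approx. -4.4 deg
```

A barycenter is only one of the options the text names; interpolation against a calibrated spot model or a trained estimator could replace the centroid without changing the surrounding logic.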
A first location 41, as a first coordinate of the first direction from which the reflection 9 was received, is determinable—at least roughly—as described above using the analysis of the position-resolving detector. A second location 42 of the object 10 causing or triggering the reflection 9, as a second coordinate in the direction from which the reflection was received, results from the location of the movement of the emission fan 13 in the spatial region 22 at the appointed time of the respective light fan pulse 13a, 13b, 13c, . . . , as illustrated by way of example in FIG. 2. In this case, for example, an angle encoder in the device 14 can acquire the alignment of the movement of the emission fan 13, especially at the point in time of the emission of the emission fan 13, at the point in time of the reception of the reflection 9, or at the point in time of the reflection (i.e., approximately half of the time between emission and reception). In the example shown, the drive and angle encoder 14h of the surveying device 14, which are formed to rotate the device 14 in relation to its deployment 14u around the standing axis H (or alternatively possibly also around the tilt axis V), can be used for the movement and determination of the second location. The target acquisition device 14s according to the invention can be formed in this case, for example, in the support 14b of the surveying device 14, which is only movable in a single axis, or also in the telescope body 14t, which is also movable in two axes with the target axis 15. Using the detector according to the invention having SPAD arrays, in some embodiments it is also possible to dispense with varying the emission power of the pulses of the emission fan 13, as often has to be applied in the prior art, for example, in the form of an emission of a double pulse—for example, a weaker pulse directly followed by a stronger pulse—to manage the restricted signal dynamic range of the prior art system. Therefore, according to the invention, position coordinates of the object 10 causing or triggering the reflection 9 are ascertained in two dimensions, in the above example thus in a horizontal location (as a second location 42 of the present alignment of the emission/reception fan) and a vertical location (as a first location 41 in or between the sectors 20a, 20b, 20c, . . . of the reception fan) of the reflection 9 in the coordinate system of the device 14. In embodiments having non-vertical alignment of the target search fans and/or non-horizontal movement of the fans, the position coordinates can be converted accordingly on the basis of the geometrical relationships provided in this case. Using the modulated pulse emission, not only can the above-mentioned higher peak power be achieved. A discrete point in time for the emission of the light fan 13 also results in this case, and therefore a discrete light fan in a discrete direction of the movement in the second direction. Moreover, a distance 43 between device 14 and reflection object 9 can be derived using a distance measuring unit on the basis of a runtime and/or phasing of the light fan light pulse 13a, 13b, . . . , from the emission at the device 14, to the reflection 9 at the object 10, and back to the device 14, on the basis of the propagation speed of the light—or multiple distances 43, if multiple reflections occur. The above-mentioned position coordinates of the reflection 9 can thus be supplemented by a third dimension in the form of a distance value 43.
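The arithmetic behind this three-dimensional locating result is compact: the round-trip runtime gives the distance, and together with the encoder angle of the fan movement (second location) and the interpolated fan angle (first location) it fixes polar coordinates of the reflection, which can be converted to Cartesian device coordinates. A minimal sketch under those assumptions follows; the refractive-index constant and all names are illustrative.

```python
# Hedged sketch of pulse time-of-flight ranging plus polar-to-Cartesian
# assembly of the locating result. Not taken from the patent text.
import math

C_AIR_M_S = 299_792_458 / 1.0003  # speed of light with a rough air index

def tof_distance_m(t_emit_s, t_receive_s):
    """Round-trip time of flight -> one-way distance."""
    return 0.5 * (t_receive_s - t_emit_s) * C_AIR_M_S

def reflection_xyz(encoder_deg, fan_deg, dist_m):
    """Polar locating result -> Cartesian device coordinates, with
    encoder_deg the horizontal fan direction (second location) and fan_deg
    the elevation within the vertical fan (first location)."""
    az, el = math.radians(encoder_deg), math.radians(fan_deg)
    horiz = dist_m * math.cos(el)
    return (horiz * math.sin(az), horiz * math.cos(az), dist_m * math.sin(el))

d = tof_distance_m(0.0, 433.5e-9)          # approx. 65 m one-way
print(d, reflection_xyz(encoder_deg=41.2, fan_deg=-4.4, dist_m=d))
```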
The distance measuring unit 23 can especially be formed in this case in such a way that it can also accordingly analyze multiple distances 43 in a multi-target case, in which, for a single emission light pulse, multiple reflections staggered in their distance are received from multiple targets. In one embodiment, in this case especially a parallel analysis can be formed using one respective dedicated distance measuring unit 23 per pixel. A comparatively shorter pulse duration can often effectuate a comparatively more discrete or accurate determination of the position, in particular of the radial position in the distance direction, wherein a minimum required emitted pulse energy, the peak power of the emitting element, etc., are often limiting here in a known manner. In practical embodiments, an engineering consideration of all parameters and effects on effectiveness, costs, utility, etc. has to be carried out here during the design. In the case of an analog acquisition, and/or an acquisition digitized with sufficient resolution (of more than two value steps), of an intensity or amplitude of the reflection 9 on the optoelectronic detector, in addition to the three position coordinates, a fourth characteristic value can be associated with a respective reflection 9 and/or its source 10. In one embodiment according to the invention, furthermore, in addition to the position, a spatial extension of a coherent reflection 9 can be ascertained, for example, approximately in the form of a height in the first direction and a width in the second direction, and/or one or two extensions of a reflection 9 in another spatial direction. On the basis of such an extension, for example, essentially punctiform reflections 9 due to surveying prisms 10 can be differentiated from, for example, oblong-shaped reflections 9b of reflectors on warning vests of workers 17, or from large-area reflections on windowpanes or the like, automatically on the basis of the extensions by a correspondingly formed analysis unit. In the ascertainment of the extension, the distance information of the distance measuring unit can preferably also be taken into consideration in this case, whereby a differentiation of reflections can also be performed on the basis of their depth staggering. For example, in this case a triple prism 10 can also be automatically recognized by an analysis unit in front of a reflective glass pane in the background. In one example of an embodiment of an analysis of the target finder according to the invention, for example, in a locating unit 27, an item of more than three-dimensional information can thus be acquired for an acquired reflection in this case. For example, an item of four-dimensional information having vertical position 41, horizontal position 42, distance 43, and intensity and, derivable therefrom, a reflectance of the target object 10. In this case, a reflection 9 acquired here in many cases is not only associated with one single discrete, two-dimensional or three-dimensional spatial coordinate, which may be determined, for example, in an intensity center or an intensity barycenter of the reflection 9. Rather, in one embodiment of the invention a positional or spatial reflection profile of the reflection 9 can be ascertained and analyzed.
For example, a clustering of intensity values of reflections 9 can be performed in the first and second directions, or also a clustering of the intensity or the reflectance in three dimensions—having first direction, second direction, and distance—wherein such an intensity cluster is ascertained with at least one (2D or 3D) position, preferably also with a (2D or 3D) extension, by an analysis unit. Therefore, for example, a reflection profile in the positional neighborhood of a potential reflector 10 can be ascertained, especially, for example, in the case of a reflection source 10 of non-negligible extension, for example, a reflector strip on clothing, a window, etc. Such an analysis in a target search unit according to the invention also provides advantages in the case of non-negligible optical influences of the air such as flickers, mist, fog, etc.—or in the case of partially diffuse reflections, blooming, etc.—especially also with respect to a recognition and differentiation of discrete, specific surveying reflectors 10 in greatly varying surroundings and surrounding conditions. A four-dimensional or higher-dimensional profile of the surroundings of the surveying device 14 can thus be derived, which can be processed by corresponding algorithms—designed in a classic manner and/or with incorporation of machine learning and artificial intelligence according to the rules of the art—which provide an analysis of the profile, formed in such a way that it recognizes potential target reflectors 10, differentiates them from potential interference signals and spurious signals, and locates them in a known coordinate system. For example, with such a profile or cluster, an interpolation of a reflection center as the location of the reflection can also be performed, for example, in the form of a computed center point, barycenter, centroid, expected value, etc. During the analysis, a spatial extension of the profile or cluster of a reflection 9 can in this case also be taken into consideration to differentiate external targets (such as security vests, traffic signs, glass or painted surfaces, headlights, cat's eyes, etc.) and exclude spurious targets, wherein preferably a possible depth staggering of reflections 9 can also be taken into consideration to suppress spurious reflections. A user 17 having the reflector 10 can in this case also operate the surveying device 14 remotely using an operating unit 18 via a radio connection 16. In an optional embodiment, in this case the device 14 can carry out a determination of a rough direction to the reflector 10 by means of the radio connection, for example, to restrict the search region 22 of the automatic target finder according to the invention to defined surroundings with respect to this rough direction, and/or to identify the reflector 10 and/or to differentiate it from other reflections 9, which could potentially be confused with the reflector 10 of the user 17. To determine this rough direction, for example, rough radio locating of the mobile operating unit 18 can be performed, for example, using diversity reception with multiple antennas, from which a probable direction to the received radio emitter may be derived. Such approaches are also known, inter alia, in the (planned) specifications of radio connections such as Bluetooth (for example, >5.0), WLAN, mobile radio, etc.
A consideration of a directional characteristic of a radio antenna moved along during the movement of the device 14, for example, also during the movement of the light fan 13 (for example, in the second direction), can also be used to derive a rough direction to the operating unit, for example, on the basis of a directional dependence of the reception signal strength and/or phasing. This rough direction can then be refined by means of the target finder according to the invention, and/or a reflection 9 from the reflector 10 can thus be differentiated from other reflections 9. The operating unit 18 can also determine its rough position itself by means of a GPS receiver or locating in a mobile radio network, and can provide this via the radio connection as the starting point for the target search according to the invention, from the exact or at least roughly known deployment of the device 14. FIG. 3 shows an example of a schematic sectional illustration of an embodiment of a target reflector search device 14s according to the invention, in which a position of the object 10 triggering a reflection 9 is determinable in a locating unit 27 using an analysis of the receiving unit 40 and the second direction 42 and preferably a distance 43. In this case, the emitting unit 8 emits an emission fan 13 (located in the plane of the sheet). This light fan 13 is projected by means of a light source 12 and an emission optical unit into the surroundings to be searched, and can be moved between or during the emission of the light fan 13, preferably motorized and equipped with an encoder 14m. Light-reflecting targets 10 encountered in the surroundings in this case, for example, the surveying reflector 10 shown, reflect a component 21 of the light of the emission fan 13 back to the target reflector search device 14s according to the invention. A receiving unit 40 is formed there in such a way that this light 21 reflected from the targets 10 is acquired by the photosensitive detector 11, which provides corresponding electrical signals for further analysis. In this case, the receiving unit 40 is formed in such a way that it also covers a fan-shaped reception region 20, which preferably essentially overlaps with the emission fan 13. According to the invention, the receiving unit 40 comprises an imaging optical unit 4 in this case, which is preferably designed as a fixed-focus optical unit. A correspondingly formed and arranged aperture 7, for example, formed as a slit aperture, restricts a field of view of the receiving unit 40 in this case to the reception fan 20, for example, wherein the aperture 7 can be arranged in the focal plane of the fixed-focus optical unit 4. The photosensitive detector 11 is formed and arranged in this case in such a way that it is designed to be position-resolving along the reception fan 20. In this case, multiple discrete pixels 1a, 1b, 1c, 1d, 1e are arranged along an image of the reception fan 20 from the object space toward the image space—i.e., behind the optical unit 4 viewed from the outside—so that the reception fan 20 is resolved along the fan into multiple reception segments 20a, 20b, 20c, 20d, 20e. According to the invention, an imaging optical unit 4 is applied in this case, but the photosensitive detector 11 is intentionally arranged outside the optimum imaging depth of field of the optical unit 4, as it is with the blurry image 21b of the reflected light component 21.
According to the invention, this is thus precisely not as it would be, for example, in the case of an image sensor of an imaging camera or in classic applications of an imaging optical unit4. In one embodiment according to the invention, in this case the number of the pixels1, i.e., the number of the individually readable photosensitive sensors for the position resolution along the fan20, can be kept small. According to the invention, hundreds of pixels1are thus not necessarily provided for the position resolution, but rather only a few pixels1—for example, approximately 5 to 25 pixels, especially approximately 5 to 15, or, for example, approximately 10 pixels are sufficient—wherein each of these pixels1is formed in this case as an SPAD array, however. Such a pixel1in the form of a single SPAD array comprises in this case a plurality (for example, approximately 100 to 10,000 units) of photosensitive cells operated in the Geiger mode, which are interconnected to form a single, common output of the pixel1. According to the invention, an analog analysis of the respective outputs of the pixels1is performed in this case, preferably a parallel analysis of all pixels1, which can be executed in one embodiment as a simultaneous or at least essentially simultaneous or quasi-simultaneous analysis. The analog analysis can also be performed with application of an analog-to-digital converter in this case, which provides a value-discrete representation of the analog signal, in particular having a resolution of more than two, especially at least 16, at least 128, or preferably even more value quantification steps. The individual pixels1, as SPAD arrays1a,1b,1c,1d,1e, also each comprise in this case a correspondingly larger sensitivity surface than a single semiconductor photodiode of comparably high electronic signal bandwidth. For example, an SPAD array having a sensitivity surface of approximately 1×1 mm can have a signal bandwidth in the gigahertz range (GHz), which is not achieved in classic photodiodes of comparable size. In one embodiment according to the invention, the analysis of the outputs of the pixels is performed in this case using a distance measuring unit23having a sampling frequency which is sufficiently high to carry out a runtime measurement of emitted light pulses of the emission fan13. This distance measuring unit23has in this case a distance resolution at least in the decimeter range, preferably in the centimeter range, for example, at least having a sampling rate or sampling frequency of greater than 1 MHz, for example, in the range of approximately 80 MHz or more. The position resolution and/or angle resolution of the reception fan in the first direction41, achieved according to the invention using the reception fan longitudinal direction position determination unit24on the basis of the signals of the few pixels1a,1b,1c,1d,1e, exceeds in this case the fundamental resolution provided by the number of the pixels1, which results as the total acquisition fan angle divided by the number of pixels1. Using the defocusing, beamforming or beam expansion, and parallel analog analysis according to the invention, a position resolution along the reception fan20can be achieved in this case which exceeds that given by the number of the provided pixels1.
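As a worked illustration of the runtime measurement mentioned above, the following hedged sketch converts a sampled echo position to a distance; the parabolic sub-sample peak fit is an assumed refinement for reaching centimeter resolution at an assumed 80 MHz sampling rate.

```python
# Hedged sketch of the pulse runtime measurement: at an assumed 80 MHz
# sampling rate one sample spans c/(2*f) ~ 1.87 m of distance, so the
# centimeter range mentioned above requires sub-sample refinement of the
# echo position, e.g. the parabolic peak fit used here (an assumption).
C = 299_792_458.0  # speed of light in m/s

def sample_to_distance(sample_index: float, f_sample: float = 80e6) -> float:
    """Distance for a (possibly fractional) echo sample index."""
    return 0.5 * C * sample_index / f_sample

def parabolic_peak_offset(y_prev: float, y_peak: float, y_next: float) -> float:
    """Fractional peak offset in [-0.5, 0.5] from three samples around a maximum."""
    denom = y_prev - 2.0 * y_peak + y_next
    return 0.0 if denom == 0.0 else 0.5 * (y_prev - y_next) / denom
```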
In this case, the analog signals, which correspond to the reception intensity per pixel1, are analyzed as weightings over multiple juxtaposed pixels, wherein intermediate positions of the point of incidence of the reflected light between the pixels1are concluded on the basis of the ratios of the intensities, which intermediate positions improve the position resolution. Such an intermediate position can be ascertained in this case, for example, by interpolation, barycenter formation, expected value computation, trained artificial intelligence systems, model formations—in any case also specifically for different types or classes of measurement targets or interfering reflections—etc. In addition to the intermediate position, a possible extension of the received reflection over multiple pixels can also be acquired in this case. During the signal analysis of the pixels1, not only the value of the analog outputs of the pixels1, but rather, using the distance measuring unit23, additionally also the incidence time—or in other words the runtime, or the distance43dependent on this runtime, to the reflection target10—can be taken into consideration, whereby a distance staggering of reflections9is ascertainable. It is thus possible to prevent, for example, a measurement reflector10in front of a mirrored glass pane from being acquired as a single reflection9; rather, the reflection9and the background reflection can be differentiated by the analysis unit as separate reflections9using the system according to the invention on the basis of this depth staggering. Multiple reflections9can thus not only be differentiated on the basis of the depth staggering thereof, optionally also assisted by intensity profiles of the reflections9(for example, a bar graph of a frequency density of the analog values over direction and/or time) and/or on the basis of a reflectance of the source of the reflection9derived therefrom, but rather optionally also identified and/or classified, for example, as a measurement reflector10. The intensity profiles can in this case not only be analyzed along the first direction41of the reception fan20, but rather also additionally or alternatively in the second direction42of the movement of the reception fan20over a spatial region22, for example, on the basis of a position measurement of a movement of the target search device14saccording to the invention. The locating unit27is preferably formed in this case in such a way that it ascertains and analyzes an intensity profile of the reflections9in two dimensions—i.e., for example, in the first direction41and second direction42, in the first direction41and distance43, and/or in the second direction42and distance43—or also in three dimensions, thus in the first direction41, second direction42, and distance43. The analysis can in particular be performed in this case in such a way that the analog values obtained from the pixels1a,1b,1c,1d,1e, more or less as a fourth dimension, are analyzed as a spatial profile—for example, over first direction41, second direction42, and/or distance43(or runtime). In this case, spatially coherent intensity clusters can especially be identified and delimited from one another. In this case, these intensity clusters can be formed in particular in such a way that these clusters represent reflectances, especially in that a distance influence is subtracted from the intensity values, or the reception signals are scaled over the respective associated distance values thereof.
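A minimal sketch of the distance scaling just described might look as follows; the inverse-square range model and the reference distance are illustrative assumptions, since the actual range dependence is device- and target-specific.

```python
# Illustrative sketch of the distance scaling described above: received
# intensities are converted to range-independent reflectance values so
# that clusters become comparable.  The inverse-square model and the
# reference distance of 100 m are assumptions.
def to_reflectance(intensity: float, distance_m: float,
                   ref_distance_m: float = 100.0) -> float:
    """Scale a received intensity to a nominal reference distance."""
    return intensity * (distance_m / ref_distance_m) ** 2
```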
Such a reflectance usually represents a characteristic target object value in this case, on the basis of which a target object10can be identified and/or classified. For these intensity and/or reflectivity clusters, a location (for example, in the form of a position of a barycenter or an expected value of the cluster) and/or a spatial extension (for example, in the form of an extension, dimension, or standard deviation of the cluster), and/or a geometric shape of the cluster (for example, a point, a line, a surface, and the position thereof and possibly a location of the shape in space) can be ascertained by the analysis unit23, on the basis of which the reflections9associated with the clusters can be recognized and possibly also identified and/or classified. In addition to analytical and modeling analyses, approaches from the field of machine learning and/or artificial intelligence can also be implemented in this case. For this, training data from typically occurring reflectors such as measurement reflectors10, but also spurious reflections, for example, from safety vests, headlights, glass panes, mirrors, cat's eyes, etc., can be recorded and/or virtually simulated, in particular under variable surrounding conditions, etc. A detail of a further embodiment according to the invention is shown in a simplified manner inFIG.4, which uses, for example, an optical diffuser or a microlens array3a,3bin the beam path of the reception fan20. The exemplary illustration of only two pixels1is not to be understood as restrictive here, but rather is to be viewed as a detail, since a device according to the invention preferably comprises more than two pixels1(but also not hundreds of pixels1). In this case, the pixels1aand1bare again formed in the form of SPAD arrays2, which are symbolized by way of example by the schematic illustration2with the surface pattern of the SPAD array as an array grid and one common electrical output per pixel. Of the light21of the reflection incident on the reception unit within the reception fan20of a fan-shaped light pulse emitted by the emitting unit, a first component21aof A % is received by the first pixel1ain this case, which results at the pixel output in the analog intensity value5a, at a point in time which corresponds to the distance-dependent runtime of the light pulse. A second component21bof B % is received by the second pixel1b, which results in the analog intensity value5bat the pixel output, likewise at the point in time which corresponds to the distance-dependent runtime of the light pulse21. In this case, the optical unit4of the receiving unit is formed and arranged according to the invention in such a way that a point within the reception fan20in the object space is imaged on a non-negligibly small surface on the arrangement of the photosensitive pixels1a,1b. This can be performed, as explained above, using an intentionally defocused arrangement of the pixels1a,1b; alternatively or additionally, however, a diffuser or microlens array3a,3bas shown here can also be arranged behind the imaging optical unit4, in particular in front of the pixels1a,1b. In addition to a single continuous microlens array or diffuser3a,3bfor all pixels1a,1b, a separate diffuser3aor3bcan also be applied in this case for each of the pixels1aor1b, respectively, which distributes incident light21a,21bessentially uniformly—in particular more uniformly than without this diffuser—over the sensitivity surface of the respective pixel1a,1b.
Light21aor21b, which is incident on a portion of the pixel1aor1b, is preferably distributed in this case by the diffuser3aor3bover a larger region of the pixel1aor1b, in particular over the entire SPAD array of the pixel1aor1b, respectively. The respective segments20aand20bof the reception fan20associated with the pixels1aand1bpartially overlap in this case. In this case, the location of the incident reflection21between the two pixels1aand1bcan be ascertained by an analysis on the basis of the intensity percentages A % and B %. In the example shown, for example, the spatial position6of the incidence of the imaged reflection21along the juxtaposition of the pixels (of which1aand1bare partially shown here) can be ascertained, for example, by an interpolation being performed on the basis of the intensity distribution5aand5b. In addition to a solely computational interpolation, a lookup table, machine learning, etc. can also be applied in this case. A location of the reflection21within the reception fan20is thus ascertained with a positional resolution which is greater than that which would result solely from the number of the pixels1for this acquisition region20. In this case, a chronological separation of positionally overlapping reception signals5a,5bcan particularly advantageously be applied during the analysis. During this, only those signals of adjoining pixels1a,1bare interpolated which were received practically simultaneously on the time axis (for example, with a time delay of <10 ns)—and thus (at least highly probably) originate from the same reflective object10. In this manner, different reflective objects10—even if they are closely staggered in the direction of viewing—may be automatically differentiated in the scope of the analysis in a comparably simple manner by the locating unit27described elsewhere. In other words, in a receiving unit40according to the invention, the fan-shaped reception region20is divided, using a position-resolving detector11having a row of pixels1, into a plurality of segments20a,20b, . . . . In this case, each of the pixels1a,1b, . . . is associated with one of the segments20a,20b, . . . . An acquisition region20a,20bof one of the pixels1a,1bthus forms one of the segments20a,20bin each case. According to the invention, the receiving unit40is formed in this case in such a way that these acquisition regions20a,20bof adjacent pixels1a,1bpartially overlap in the object space in front of the imaging optical unit4. A reflection9in the overlapping region of these acquisition regions20a,20bof two pixels1a,1bresults in this case in an output signal at both participating pixels1a,1b. In this case, the light21is divided differently between the two pixels1a,1bin accordance with the position of the light reflection21in the overlap region; in particular, the received intensity21a,21bof the light at the pixel1a,1bis dependent on the surface component of the reflection21which is acquired by the respective pixel1a,1b. Using an analog output signal of the pixel1a,1b, which is dependent on the received intensity5, a position6of the reflection21from the target object10between the two adjacent pixels1a,1bis in this case ascertainable according to the invention. Moreover, an item of information about the reflectance and/or extension of the object10, from which the light21reflected by its reflection9originates, can be derived in an overall view of the intensity over both pixels1a,1b.
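The following sketch condenses the two-pixel analysis described above into code: pulses are combined only if they are practically simultaneous (the <10 ns gate taken from the text), and the intermediate position6is interpolated from the intensity split A %/B %. Function and parameter names are illustrative.

```python
# Sketch of the two-pixel analysis above: only near-simultaneous pulses
# (<10 ns apart, the figure from the text) are combined, and the
# position 6 between the pixels follows from the intensity split.
def subpixel_position(pix_a: int, t_a: float, i_a: float,
                      pix_b: int, t_b: float, i_b: float,
                      gate_s: float = 10e-9):
    """Interpolated position along the pixel row, or None for
    depth-staggered pulses that stem from different objects."""
    if abs(t_a - t_b) >= gate_s:
        return None
    return (pix_a * i_a + pix_b * i_b) / (i_a + i_b)
```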
This item of information can then be used to derive whether and/or with which probability the reflection originates from a surveying target mark10having specific and/or known reflectance, or whether it is another type of undesired interfering reflection. Using the specific pulsed emission of the emission fan13, in this case optionally not only can a runtime of the light signals from the target finder14sto the reflection9and back be ascertained. This also enables an evaluation—and therefore also an electrical and/or numeric suppression or subtraction—of possible bias or interference signals, for example, due to ambient light, possible active light sources in the acquisition region, base emissions of the surroundings, etc., which are received outside the time of the pulse reception and/or are received essentially constantly. A differential image analysis of the acquired spatial region20can thus also be performed, for example. Thus, for example, a chronological derivative of the reception signals can be analyzed, for example, by corresponding numeric filtering or analysis of the analog, preferably digitized signals, and/or using an electrical high-pass filtering of the analog output signals of the pixels, a dynamic bias control, etc. An example of an acquisition device according to the invention is shown once again inFIG.5. The optoelectronic detector11is formed as a position-resolving optoelectronic detector11having a line of SPAD array pixels1a,1b,1c,1d,1e,1f, and is arranged and formed jointly with an imaging optical unit4in this case in such a way that a point from the fan-shaped reception region20(which is also indicated here in its cross section) is imaged blurred on the pixels1a,1b,1c,1d,1e,1f. The detector11(or the output signals of its pixels1) is analyzed by a position determination unit24for the position resolution in the fan longitudinal direction, a distance measuring unit23, and a locating unit27as described. Reflective targets10, thus in particular a retroreflective target mark to be located, are located in this case especially within the hyperfocal distance, i.e., in the range of finite object distances at which—in accordance with the typical fixed-focus design of the optical system—objects10located at infinity can also still just be imaged with acceptable blurriness. The so-called depth of field then extends from half the hyperfocal distance up to infinity. According to the invention, however, sharp imaging of reflections9from the objects10which are located in the range of the depth of field is intentionally omitted. Instead, for example, using a defined axial displacement38of the detector11out of the actually sharply imaged focal plane, a blurry image of a reflection from the target object is generated. The above-mentioned lower limit of the object distance often does not represent a significant obstacle in the scope of the first aspect of the present invention, since the latter is per se designed for an analysis of a blurry image, and therefore an even greater level of blurriness only displays minor negative effects with respect to reception signal strength and directional resolution. Moreover, at close range a greater part of the divergent, fan-shaped acquisition region is typically used than at long range, whereby the relative resolution can be at least partially compensated for again. Such a close range (for example, of a few meters) is also not used, or is only rarely used, in many surveying devices.
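For orientation, the standard hyperfocal relation H = f²/(N·c) + f (focal length f, f-number N, circle of confusion c) quantifies the depth-of-field statement above; the numeric values in the following sketch are illustrative assumptions, not parameters of the device.

```python
# Worked illustration of the hyperfocal relation; all numeric values
# (focal length, f-number, circle of confusion) are assumptions.
def hyperfocal_m(f_mm: float, f_number: float, coc_mm: float) -> float:
    """Hyperfocal distance H = f^2 / (N * c) + f, returned in meters."""
    return (f_mm ** 2 / (f_number * coc_mm) + f_mm) / 1000.0

H = hyperfocal_m(f_mm=50.0, f_number=2.0, coc_mm=0.03)  # ~41.7 m
print(f"depth of field: {H / 2:.1f} m to infinity")     # ~20.9 m to infinity
```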
In one example of an embodiment, for example, as shown for imaging from infinity (or imaging equivalent thereto within the depth-of-field range of a fixed-focus optical unit), the image distance can be set approximately identical to the focal length f of the optical unit. According to the invention, in this example an imaging optical unit4can thus be used, in relation to which the optoelectronic detector11is intentionally arranged offset by a defined distance38in relation to the focal length f, i.e., in a defined manner in the back focus (or alternatively also in the front focus). In an example shown hereafter, an intentionally dimensioned blurriness, which is preferably different in the first and second directions, is explicitly introduced in a comparable manner by means of an optical diffuser3in the beam path—which can be arranged in front of, in, or behind the image plane—which exceeds the minimal blurriness which would actually be achievable using the imaging optical unit4, especially by a multiple of the technically achievable minimal blurriness. In the design of the imaging optical unit4, the requirement for the imaging optical unit4can thus also be shifted away from the sharpest possible imaging and more toward positionally correct imaging. In another embodiment, the positionally accurate imaging can also be achieved using a corresponding position calibration of the receiving unit, in particular an arithmetic calibration of the analysis of the detector11. Additionally or alternatively, in this case an optical beam expander or diffuser3—indicated in this figure by way of example in front of one of the pixels—can be attached in front of the detector11or in front of its pixels1a,1b,1c,1d,1e,1f, which causes a blurry distribution of the light incident from the imaging optical unit4over a larger surface than its point of incidence. In one embodiment, in this case the optical system39can preferably be designed in such a way that this surface of the light bundle has a greater extension in the first direction (in the fan direction) than transversely thereto. The detector and the optical system39associated with it are thus designed and arranged in such a way that, according to the invention, a bundle cross section at the point of incidence in the detector plane is used which is larger than a minimal circle of confusion, as would classically be used in imaging or photographic systems. In one embodiment, for example, a circle of confusion or, more generally, a beam bundle cross section can be formed in this case which is in particular larger in the first direction than one of the pixels1a,1b,1c,1d,1e,1f. In one embodiment, the optical beam expander3can be formed having a microlens array. This microlens array can be formed, for example, using many cylindrical lens rods, typically approximately 100 μm wide, and can be arranged in such a way that it asymmetrically expands the light bundle incident from the reflector target10and shaped by the receiving objective4directly before the array made of pixels1. For example, the light spot in the detector plane can thus be expanded to a size of approximately 2.2 mm×0.9 mm on the sensitivity surface of the detector11, wherein the SPAD pixels1of the detector11, having a size of approximately 1 mm×1 mm, are arranged without spacing in a line.
Using the blurry, expanded imaging according to the invention, not only can a subpixel interpolation be carried out to increase the resolution during the analysis of the detector11(by which the technology-related larger pixels1, and the accordingly limited resolution of a detector11of reasonable structural size, may be at least partially compensated for). Using the blurry, expanded imaging, according to the invention it is also possible that all microcells of an SPAD pixel1are illuminated at least essentially homogeneously—whereby, for example, a collapse of the effectively usable dynamics of the SPAD pixel1due to illumination of only a part of the available microcells can also be prevented or at least reduced. Especially in this case, using an embodiment according to the invention having an optical system with asymmetrical beam expansion, both a loss of received light can be avoided and a pixel interpolation over more than one single pixel1can advantageously be enabled. In this case, the field of view of the detector11can be limited further to the desired reception fan20using a slit aperture7. For example, in an embodiment having back focus, a slit aperture7restricting the field of view of the detector11can in this case be arranged in the beam path between imaging optical unit4and detector11, especially, for example, approximately in the range of the focal length f. FIG.6shows an example of an embodiment according to the invention having an optical beam expander3comprising a diffuser plate, hologram, micro-optical lens array, etc., which causes blurry imaging of the object space in spite of the imaging optical unit4, so that light reflected from an object10in the object space is expanded in such a way that it is acquired by more than one single pixel1. In this case, as shown here, a continuous diffuser3can be applied for multiple pixels1a,1b,1c,1d,1e,1f,1gof the detector11, or alternatively an individual diffuser can be applied for each of the pixels1a,1b,1c,1d,1e,1f,1g. Especially in the case of a solely randomly scattering diffuser3for multiple pixels1, this diffuser3can also be arranged in the sharply imaged focal plane of the optical unit4, in that the optical properties of the diffuser3effectuate the blurry imaging toward the detector11. A front focus or back focus arrangement is thus not absolutely required, but is optionally also additionally possible—especially if the diffuser3is used to distribute the received light more homogeneously over the sensitivity surface of a pixel1. An advantageous expansion or forming of the light bundle reflected from the object space and imaged can also be achieved by means of specially adapted holograms or astigmatic microlens arrays, for example, using a corresponding light forming plate introduced into the beam path. To keep the structural length of the receiving unit small, the optical light forming plate can in this case be arranged in a front focus arrangement. The light forming of an optical system according to the invention can especially be designed in such a way that the light bundles assume an oblong elliptical (as indicated in this figure) or rectangular extension on the pixel plane. By means of such micro-optical components, with corresponding design, the radiation reflected from the target object can be used optimally, i.e., essentially without loss.
FIG.7shows a sketch of two adjacent emission fans13aand13b(or the respective associated reception fans, respectively), wherein the angular distance thereof in the second direction42is shown disproportionately wide for the sake of clarity (see, for example, values typical in practice mentioned elsewhere). The light fans13aand13bshown comprise in this case reflections9a,9b,9c,9d, which are acquired by the position-resolving detector of the target search device according to the invention in a blurry image, and of which in each case a first location is ascertained in the first direction41as described, for example, as an angular position within the fan13in relation to its optical axis43—as also indicated, for example, with the segmenting of the fan13b. Furthermore, a distance measuring device is connected to the detector, which determines a distance43for each of the reflections9a,9b,9c,9don the basis of a runtime measurement between emission and reception of the light of the emission fan13. The second location, in the direction42, of a reflection9results from the association of a reflection9with one of the emission fans13a,13bduring the movement of the emission direction of the emission fans13. For example, in this case spherical coordinates result for the position or object at which the reflection9occurs, for example, having polar angle (=first location in41), azimuth angle (=second location in42), and radius (=distance in43), which can also be converted in a known manner to other coordinate systems, however, in particular to a coordinate system of the surveying device. On the basis of these three coordinates, the reflections9may generally be differentiated well, so that a differentiation of different reflection objects (also referred to here as clusters) may be carried out automatically by a locating unit on the basis of these locations, in particular on the basis of the distance information (or in other words the point in time of the incidence of the reflection9). In addition to this position, a geometrical extension or size of the reflection9can optionally also be determined: for example, if the reflection9occurs in the three dimensions in a geometrically coherent manner (as a cluster) over a plurality of emission fans13or over multiple pixels of the detector, a size and/or shape of the reflecting object can be determined therefrom. An actual extension of the reflective object can be determined from a measured apparent height in the first direction and a corresponding apparent width in the second direction of a reflection9or cluster with the aid of the measurement distance43. In addition to these geometrical considerations, an intensity of the reflection9can also be determined, which can likewise be incorporated into the observations. In this case, for example, a location of the position of the reflection9can be ascertained on the basis of a maximum, barycenter, or another evaluation function of the intensity of the reflection9, in first location, second location, and/or distance. Using the intensity of the reflection9, especially in combination with the other above-mentioned analyses, a reflectance of the reflective object10can also be ascertained as already described, which can represent a further criterion for differentiating the reflections9a,9b,9c,9dand also especially for classifying the object10triggering the reflection9, especially for differentiating a surveying reflector10from an interference reflection9c,9dor for automatically selecting a specific reflector type.
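A minimal sketch of the coordinate conversion mentioned above follows; the angle conventions (V as polar angle from the vertical axis, Hz as azimuth) are assumptions and would have to match the surveying device's definitions.

```python
# Minimal sketch of the conversion from the spherical coordinates above
# (polar angle V, azimuth Hz, radius D) to Cartesian coordinates; the
# zenith-referenced V convention is an assumption.
import math

def polar_to_cartesian(v_rad: float, hz_rad: float, dist_m: float):
    """V = polar angle from the vertical axis, Hz = azimuth, D = radius."""
    x = dist_m * math.sin(v_rad) * math.cos(hz_rad)
    y = dist_m * math.sin(v_rad) * math.sin(hz_rad)
    z = dist_m * math.cos(v_rad)
    return x, y, z
```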
The reflectance is illustrated here by patterns of different brightness of the reflections9a,9b,9c,9d. The ascertained items of information with respect to position and/or extension can optionally also be taken into consideration in the classification of the objects10. In the example shown, the reflections9aand9bmay in this case be recognized as geometrically coherent (=cluster)—and thus originating from a single reflection object—on the basis of the location thereof in distance43and first direction41, and the proximity thereof in the emission fans13aand13bin the second direction42. Jointly with the high reflectance thereof (and/or the small geometrical extension of the reflection), this is in this case a surveying triple prism10(at least with very high probability). This surveying triple prism10is thus located at the distance43associated with the reflection9a,9b, in the first direction determined for this reflection9a,9band this distance on the basis of the analysis of the blurry imaging over a plurality of pixels of the detector. In the second direction42, the object10is to be located in the direction of the emission fan13a, since the reception signal for this reflection already becomes weaker again in the emission fan13band has thus already exceeded its maximum. Alternatively, the second direction42could also be interpolated between the emission fans, in particular with respect to the determined intensity. Especially upon use of emission fans which are comparatively narrow in the second direction42, however, such an interpolation is often not necessary to determine a sufficiently accurate location in the second direction as a transfer value for a subsequent fine targeting of the object10using the surveying device. This at least rough location of the position of the surveying reflector10ascertained by the target search device can then, for example, be transferred automatically to an automatic targeting device of the surveying device, which then automatically (exactly) targets it, surveys it, and provides its geodetic coordinates. In one embodiment according to the invention, the analysis can especially be performed in this case on the basis of the distance43, the location in the first direction41(=vertical angle), and the intensity (and/or the reflectance derived therefrom) of the reflection, in particular since all of these items of information are ascertained directly from the output signals of the SPAD array pixels1of the position-resolving detector11according to the invention. A rapid and optionally also at least partially parallel analysis of the pixel signals for these items of information can thus be performed, and especially also the combination thereof can be carried out to recognize, differentiate, and classify targets—preferably online, i.e., just-in-time for each of the fans directly upon reception of the reflection signals. A primary analysis in a distance (=time), vertical angle, and intensity space or diagram can thus more or less be carried out. The horizontal angle of the second direction42can then only be incorporated after completion of the above analysis, wherein the second direction42can also be supplied by the surveying device. InFIG.8a, an example of an acquisition and analysis according to the invention during the target search is shown—within the limited possibilities here of a two-dimensional black-and-white illustration. In this case, the searched spatial region is shown more or less as a panoramic image30.
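Purely for illustration, such a panoramic intensity image30could be assembled as in the following sketch, where each located reflection sample is binned into a coarse Hz×V grid whose cell brightness encodes the strongest reception intensity; grid resolution and data layout are assumptions.

```python
# Illustrative sketch of assembling a panoramic intensity image 30:
# located reflection samples (hz, v, intensity) are binned into a coarse
# grid whose brightness encodes the strongest reception intensity.
# Bin sizes and the angular ranges are assumptions.
import numpy as np

def panorama(samples, hz_bins: int = 360, v_bins: int = 60):
    """samples: iterable of (hz_deg, v_deg, intensity) with hz in [0, 360)."""
    img = np.zeros((v_bins, hz_bins))
    for hz_deg, v_deg, intensity in samples:
        row = int(v_deg * v_bins / 180.0) % v_bins   # map 0..180 deg V
        col = int(hz_deg) % hz_bins                  # 1 deg Hz bins
        img[row, col] = max(img[row, col], intensity)
    return img
```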
In this case, the resolution of the spatial region22in the first direction41, obtained by the described analysis along the longitudinal direction of the detector11, is represented in the image30in the vertical direction, and the resolution of the spatial region22in the second direction42, obtained by the adjacent emission fans13a,13b, . . . , is represented in the image30in the horizontal direction. In a non-orthogonal or non-levelled embodiment, the acquisition could if needed also be converted to the representation shown. In this case, the relatively low resolution in the form of the clearly recognizable individual grid (appearing square in this example, but generally not necessarily) of the image30obviously stands out. This low resolution may appear disadvantageous and inaccurate at first glance, but, inter alia, enables faster scanning of the spatial region22than would be the case at higher resolutions. The grid cells here represent the reception intensity by their brightness, wherein lighter regions represent stronger reflections and black regions represent no reflections. In this case, however, the distance43associated respectively with the reflection9and ascertained by the distance measuring unit is preferably also to be taken into consideration, since, for example, only a comparatively weak reflection will be received even from a very bright reflector10at long distances43. This can be performed, for example, by an analysis of a distance-scaled brightness of the reflector10, or in other words by an analysis of a reflector-specific reflectivity in 3D space. The simplified 2D representation can be considered here to be solely by way of example and/or as an already distance-scaled 2D representation of the reflectivity or reflectance. Spatial regions from which reflections were received are thus shown in the form of image regions or clusters9represented as light. Such a cluster9can be defined, for example, as a spatially coherent region (coherent not only in the two dimensions41,42shown, but rather additionally also in the distance43), in which a received reflection intensity and/or a reflectance ascertained in this direction is greater than a static or dynamically adapted threshold value. According to the invention, one—or in a multitarget case possibly also multiple—distances are ascertained for each reflection9region shown as light using the distance measuring unit, whereby the illustration30shown would actually also have a spatial depth, which cannot reasonably be represented here, however. The light regions would thus also be staggered into the plane of the sheet in this case. Therefore, for example, a region only visible here as a single cluster9could also result in multiple depth-staggered spatial clusters in the distance43, which are each to be considered as an independent cluster as such—for example, in the case of a reflective glass pane in the background of a measurement reflector or the like. Furthermore, on the basis of the measured extension33and/or34of a reflection9in one or over multiple emission fans, and also with incorporation of the respective associated distance43, an actual width33and/or height34of the target reflectors10can be ascertained. For example, on the basis of the geometrical relationships known in this case, a width and/or height (or, in general, a geometrical extension of the target reflector) can also be computed or at least estimated in specific measurement units.
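The width/height estimate described above reduces to simple trigonometry; in the following hedged sketch, an apparent angular extent and the measured distance43yield a metric size (names are illustrative):

```python
# Hedged sketch of the size estimate above: an apparent angular extent
# of a reflection or cluster plus its measured distance 43 give a
# metric width or height.  Names are illustrative.
import math

def extent_to_size_m(angular_extent_rad: float, distance_m: float) -> float:
    """Metric extension from angular size and range."""
    return 2.0 * distance_m * math.tan(angular_extent_rad / 2.0)
```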
Such a width and/or height are often characteristic features of the target objects to be located and represent relevant measured variables in the target mark recognition. During the data analysis, the depth staggering of the reflectors10and the interference reflections9is preferably taken into consideration. The distance measuring unit, especially if it is formed as a waveform digitizer (WFD), already generically provides depth-staggered brightness values of the reception signals; for example, one brightness value in the form of a signal amplitude is acquired and provided per pixel over a time axis (which, in accordance with the signal runtime, corresponds to a distance axis). If brightness values of adjacent pixels have an equal associated distance43in this case (and thus equivalent locations on the time axis), the location thereof in the first direction41(and/or in the second direction42) can be interpolated between the pixels, and thus a reception direction (or location) of the reflection9in the first direction41can be ascertained. In the case of differing distances43, it is to be presumed that the reflections9originate from different objects, so that an interpolation in the first direction41and/or second direction42is not expedient. Furthermore, an associated reflectance can be ascertained in each case for the target reflectors10or reflective objects on the basis of the received reflection intensities, especially in consideration of the respective associated distance. On the basis of these data, the analysis unit can identify and/or classify reflection targets; in this case, especially reflective foreign objects can be robustly differentiated from surveying targets, especially in consideration of respective characteristic reflectances. For the located reflectors, the location and thus the directions to the reflectors are determinable in this case in consideration of the intensity or brightness distribution. All of these computations can be performed in this case by the analysis unit27, preferably in real time during the scanning of the search space. In one embodiment, an explicit and complete computation of a 2D or 3D intensity image—as shown inFIG.8ato illustrate one embodiment variant of the analysis—can in this case also be omitted in the target recognition. An embodiment having an analysis carried out essentially online during the acquisition can thus be formed. Similarly as also explained in conjunction withFIG.8b, for example, for each of the emitted emission fans13, a plurality of the time axes (or distance axes, respectively), which are each generated with a distance measuring unit associated with one of the pixels1for this emission fan13, can be processed and analyzed. This analysis and computation are formed in this case in such a way that one or more signals of reflections9A,9B are located on the time axis. This can be performed continuously and from laser emission to laser emission, wherein corresponding signals are compared. In this case, especially the expected signal strength and simultaneously the target object width can be checked. Signal parameters of the received signals can be compared in this case to one (of multiple) configured reflector models for correspondence, in order to find a target object10. Multiple target objects can certainly be located in this case along the distance/time axis43, which are differentiable according to the invention on the basis of the different distance information and/or reflectivity, for example, a reflector in front of a traffic sign, etc.
On the basis of the plurality of detector pixels1, the targets can in this case also be located in the vertical direction, along the fan direction, and, in the case of multiple targets, also spatially separated during the analysis. In other words, the 3D space is more or less successively scanned for reflectors in a fan-like manner, including depth acquisition, continuously analyzed by computer during this, and located target objects10are classified and stored. A user can optionally establish in this case which target classes or target types are to be located, stored, or approached. The performed analysis can then optionally also be visualized for a user; for example, the found target objects can be overlaid as an overlay in a camera image, or the reflector targets can be marked on a tablet PC in a construction plan, a map, or a CAD model for the purpose of visualization. In one embodiment, such a 2D intensity image, or preferably a 3D intensity image, can also be provided to a local or remote user for visualization of a station overview and/or for interaction with a target selection unit, for example, on a display screen, but optionally also via an augmented-reality (AR) or virtual-reality (VR) display unit. According to the invention, the determination of a position or location31of a reflection9in the spatial region is in this case carried out not only with an analysis of a single pixel1; rather, the reception intensity of the pixels1located adjacent in the first and/or second direction is also considered. For example, an intensity barycenter or center point of a reflection9, i.e., of a spatially coherent cluster9over multiple pixels1, can be determined as the position31of the reflection, whereby the position31ascertained in this case has a resolution which is greater than the pixel resolution of the acquired image30. Therefore, with corresponding engineering design of the parameters of a target recognition system according to the invention, the position31of a target reflector10in the spatial region can be determined with sufficient accuracy that this reflector can be automatically targeted by the surveying device for subsequent surveying. Special embodiments of the invention can in this case also provide additional functionalities for an improved recognition of target reflectors10and the differentiation thereof from interference reflections9. In addition to the sensitivity advantages of the pixels1used according to the invention, which are each formed as an SPAD array (for example, with respect to sensitivity, overload behavior, time measurement properties, etc.), the reflection acquisition according to the invention—as is apparent from the illustration shown here—can also ascertain a two-dimensional extension33,34of the reflection9(and/or if needed also a three-dimensional extension of the reflection9in consideration of the distance43, which cannot be represented here). Inferences can thus be drawn about a probable type or class of reflection sources, for example, using a comparison of the cluster9,10to a modeling of a reflection to be expected of a known target reflector at the corresponding distance of the cluster9,10, or using a correspondingly trained artificial intelligence system, neural network, rule-based classification, or a mixed form thereof. Thus, for example, an oblong reflection9of a reflector strip on a sign or a large-area but weakly reflecting glass pane35can be differentiated from a punctiform triple prism10reflecting at high intensity.
A classification is performed in this case, for example, by comparing a reflectance, determined by the device according to the invention, of the object causing or triggering the reflection9with reflectances which are known from the target objects10used in surveying. Triple prisms in surveying have, for example—scaled to a scattering white surface—a reflectance of approximately 1 million, planar retroreflectors made of plastic (cat's eyes) have a reflectance of approximately 30,000, reflective films of approximately 1000, etc. The gradations of these reflectivities in the above examples thus comprise at least approximately one order of magnitude or more, whereby these can be acquired, resolved with sufficient accuracy, and also differentiated using the SPAD arrays used according to the invention. A reliable classification of the reflective markings and reflective targets typically used in surveying and in construction can therefore be carried out on the basis of the reflectances thus ascertained. In the example shown inFIG.8a, in addition to diverse interference reflections, inter alia, a surveying reflector10on a surveyor's rod and the warning vest9of the worker holding the surveyor's rod can be seen. For a recognition of triple prisms, in one embodiment a parallax between emission and reception fans in the target search unit, adapted to the dimension of the triple prism used, can especially also be exploited in this case. The signal of targets, the reflection of which is incident with a parallel offset in the receiver—which parallel offset essentially corresponds to the parallax typical in these triple prisms—is heightened in relation to other targets typically reflecting without parallax. With correspondingly formed embodiments of the devices having such a parallax between emission axis and reception axis, for example, simple reflective objects can only overcome the sensor parallax, and generate a reception signal at all, from approximately 20 m onward. Retroreflectors having beam offset, in contrast, overcome the parallax at all distances. Especially together with the significantly higher reflectances of triple prisms in contrast to interference reflections and/or the specific point shape thereof in contrast to often larger-area interference reflections, the analysis unit can thus also carry out a robust, automatic, specific recognition of surveying reflectors in the associated first and second directions thereof, and also ascertain the distance thereof. With multiple repeated scans of the spatial region, mechanical dithering can additionally be applied, for example, in the form of a geometrical offset of the emitted fans in the second direction to a location between two of the fans emitted during the prior pass and/or an offset of the acquisition regions of the pixels in relation to the prior pass in the first direction—whereby the achievable resolution may be further improved. For this purpose, for example, the movement axes of the surveying device and/or the emission points in time of the light fans can be controlled accordingly. For example, this can be performed, inter alia, after a first rough scan of the entire spatial region for a portion of potential interest of the entire spatial region. In modern geodetic surveying devices, such as total stations, theodolites, tachymeters, or laser trackers, the movement in the second direction42can usually take place at quite high speed, for example, a pivot around the standing axis at up to 120°/second.
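A hedged sketch of such a reflectance-based classification is given below; the reflectance values are the ones quoted in the passage above, while the tolerance factor and width limits are illustrative assumptions.

```python
# Sketch of a reflectance-based classification.  The reflectance values
# are the ones quoted above (scaled to a scattering white surface); the
# tolerance factor and the width limits are illustrative assumptions.
REFLECTOR_MODELS = {
    "triple_prism":    {"reflectance": 1e6, "max_width_m": 0.1},
    "plastic_cat_eye": {"reflectance": 3e4, "max_width_m": 0.1},
    "reflective_film": {"reflectance": 1e3, "max_width_m": 1.0},
}

def classify(reflectance: float, width_m: float, tol: float = 3.0) -> str:
    for name, model in REFLECTOR_MODELS.items():
        if (model["reflectance"] / tol <= reflectance <= model["reflectance"] * tol
                and width_m <= model["max_width_m"]):
            return name
    return "interference_reflection"
```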
A target search device according to the invention therefore has to have a correspondingly high laser firing rate and also a correspondingly high measurement rate on the reception side when emitting the emission fan, in order to be able to use these speeds and in this case scan the second direction continuously—or also with multiple overlaps—for surveying targets using the light fans. For example, at this rotation speed, an assumed measurement rate of 75 kHz divides a horizontal search region of 360° into sectors of the width 0.0016°. Using a typical width of the laser fan, assumed here to be 0.013° in the second direction, each reflection point9can thus be measured approximately 8 times. In one embodiment of the invention, these and even higher scanning speeds and analyses can certainly be processed in real time using current analysis means. However, in order to save electric power and current and/or avoid excessive heat development of the electronic processing unit—especially, for example, with battery-operated instruments—optionally or alternatively the above-described method of mechanical dithering using repeated scans can be used, a measurement rate aperiodic with respect to a revolution can be used, or first a rough scan at a lower measurement rate (for example, for approximately 1-fold coverage of the scanning region) can be performed, followed by a measurement—in particular focused only on potential target reflectors recognized in this case—at a higher measurement rate. A diagram having exemplary reception signals of a target reflector search device14saccording to the invention is shown inFIG.8b. Therein, multiple—by way of example five here—SPAD array pixels1a-eof the linear arrangement of the position-resolving detector11are plotted, which are arranged here in a vertical direction V (as the first direction41, in which the emission fan13is aligned) in the image region of the imaging optical unit4. A time axis D,(t)43is associated with each of these pixels1a-ein this case, along which, for each of the pixels1a-e, the received light intensity of possible reflections9of an emission fan light pulse emitted in this Hz direction42is plotted. The time axis D,(t)43is in this case, in accordance with the signal runtime of the light pulse plotted thereon, also simultaneously to be considered a distance axis, as a measure of a distance, proportional to the runtime, to the position of the reflection9. The second direction42is indicated in this case as a horizontal axis with Hz, in which the emission fan13is moved between the light pulses of the individual emission light fans. With each laser pulse, a diagram as shown thus results at a new Hz angle42, having the dimensions distance D43and V angle41. For the sake of clarity, however, only one diagram for one of the emission fans13of this emission fan family in the Hz direction42is shown here. The curves shown of the reflections9A and9B on the time axes D,(t)43of the individual pixels1a-ecan be acquired in this case using an analog-to-digital converter at the output signal of the respective pixel1as a waveform of the reception intensity over time.
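The scan-rate figures quoted above can be checked with a few lines; the values (120°/second rotation, 75 kHz measurement rate, 0.013° fan width) are taken from the text.

```python
# Check of the figures quoted above (values taken from the text).
rate_hz = 75_000.0      # measurement rate
speed_dps = 120.0       # rotation around the standing axis, deg/s
fan_deg = 0.013         # fan width in the second direction

sector_deg = speed_dps / rate_hz     # 0.0016 deg per measurement
hits = fan_deg / sector_deg          # ~8 measurements per reflection point
print(sector_deg, round(hits))       # 0.0016 8
```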
It can be seen in this case that, in accordance with the blurry imaging according to the invention, the acquisition regions of the individual pixels1a-epartially overlap in the object space, so that a reflection9A,9B from a location in the object space is (at least proportionally) received by more than one of the pixels1a-e. In the example shown, two reflections9A and9B occur, which have the same location Hz in the second direction42here—since they belong to the same emission fan13—but have both a different vertical location V in the first direction41and also a different distance D43. In this case, in one embodiment of the analysis, it can be presumed that intensity pulses A,B at different pixels1a-e, which have (at least substantially or essentially) the same location on the time axis D,(t)43, originate from the same reflection source10. Therefore, in the example shown, the pulses A2, A3, A4can be evaluated together, wherein a location of the reflection source in the V direction41is ascertainable on the basis of a distribution of the intensities (i.e., for example, the levels of the pulses) of the pulses A2, A3, A4occurring at this point in time at the pixels1b,1c,1d. Therefore, the V location41of the reflection source of the reflection9A is ascertained with a resolution which can also lie between the pixels1a-e(and/or the associated optical axes thereof), whereby the location of the reflection source is not solely associated with a single one of the pixels; rather, a resolution of the location of the reflection source in the direction V41is achievable which exceeds that given by the number of the pixels1a-e. In the example shown, a second reflection9B furthermore occurs at a distance RetB different from the above-mentioned distance RetA. This is also acquired according to the invention by a plurality of the pixels1a-e, wherein the analysis thereof results not only in a different distance D43but rather also in a different location in the direction V41—in this case between the pixels1c,1d, and1e, especially a V location which is approximately at one-fourth of the distance from pixel1din the direction of pixel1c. This analysis was already explained above. It is to be noted here with respect to the location Hz in the second direction42that it does not necessarily have to be identical in its final analysis for the reflections RetA, RetB shown; rather, the pulses A2, A3, A4and B3, B4, B5shown can in any case also be distributed over multiple Hz diagrams of multiple emission fans13in the second direction42, and the final location of the reflection in the second direction42is ascertained, as already explained, on the basis of a barycenter, peak value, or the like of the received intensity over multiple emission fans—so that in any case an intermediate position between two emission fans13can also be ascertained in the direction Hz42. In one embodiment, the diagram inFIG.8bcan also be considered as a side view or as a section orthogonal to the plane of the drawing ofFIG.8a. In this case, the V axis41and the Hz axis42correspond to the two image coordinates ofFIG.8a, and the D,(t) axis43extends orthogonally to the plane of the sheet.
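As a minimal sketch of the barycenter formation over multiple emission fans just described, the final Hz location can be computed as an intensity-weighted mean; array layout and names are assumptions.

```python
# Minimal sketch of the barycenter formation over multiple emission
# fans: the final Hz location of one reflection is the intensity-
# weighted mean of the fan directions in which it was received.
import numpy as np

def hz_barycenter(hz_angles_deg, intensities) -> float:
    hz = np.asarray(hz_angles_deg, dtype=float)
    w = np.asarray(intensities, dtype=float)
    return float((hz * w).sum() / w.sum())
```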
The heights of the waveforms A2, A3, A4and B3, B4, B5of the reflections9A,9B fromFIG.8bare represented by corresponding brightnesses of the pixels inFIG.8a(wherein, in the embodiments shown here, the number of the pixels1of the detector11is not equal in the different examples ofFIG.8aandFIG.8b). FIG.9shows a block diagram of an embodiment of a method according to the invention or a process according to the invention, especially to be executed automatically during targeting of target objects, for example, in a geodetic surveying device. In block50, an emission of an emission fan of optical radiation is performed, preferably in the form of a time-modulated, in particular pulsed projection of a laser line. In block51, a movement of the emission fan is performed in different directions over a spatial region to be searched, in particular a rotation of the emission fan around an axis, so that the spatial region is covered by a fan bundle thus resulting. In block52, a reception of a reflection of one of the emission fans is performed in a fan-shaped reception region. Using an imaging optical unit, in this case a projection of the reception region is performed on a position-resolving optical detector, which is formed using a linear arrangement of a plurality of pixels each formed as SPAD arrays. The projection is performed in this case using an optical system which is formed in such a way that blurry imaging of the object space, which is expanded in relation to a focused image, is performed on the position-resolving optical detector—especially wherein the reflection of light of the emission fan on reflective objects in the object space is acquired by more than one of the pixels. In block53, a determination of a distance, a signal strength, and a position of the reflection is performed on the basis of the direction of the emission fan in the spatial region using an analysis of the position-resolving optical detector, wherein the position determination is performed using a determination of a location of an intensity barycenter (or center of gravity or centroid) of the blurry image of the reflection on the detector over a plurality of the pixels. In this case, especially a runtime distance measuring unit can be formed for determining the distance for each of the reflections, preferably respectively individually for each of the pixels. The direction of the emission fan in the spatial region can be acquired using an angle encoder as the second direction. In particular, a reflectance of the source of the reflection can be determined in this case on the basis of the signal strength and the distance. The above steps can especially be performed continuously or quasi-continuously in this case, in any case also at least partially in parallel—in particular for each of the emission fans. In a further step, the ascertained values can be relayed to the surveying device—especially a location, rough on the geodetic scale, of the object which triggers the reflection, in the first direction, the second direction, and the distance. In this case, a recognition, classification, and/or filtering of the objects can preferably be performed, for example, on the basis of the reflectance, geometrical dimension, etc. thereof.
In particular, on the basis of this ascertained position of the target object, the surveying device can approach this position using a high-precision automatic target acquisition device and, as a result, automatically ascertain the coordinates of the target object with geodetic accuracy, i.e., in particular accuracy to seconds of an angle and millimeters, and provide them as the surveying result. In one example of an embodiment of an application of the first aspect of the present invention, the target search unit can, for example, ascertain rough coordinates (in the geodetic scale) for the searched target objects, for example, a rough direction in the first and second directions, for example, as two angles Hz and V in the coordinate system of the surveying device. This is performed by means of a distance measuring unit which is formed to receive transient signals of the emitting unit and to ascertain at least one amplitude (intensity) and a runtime for a reflection from a target. The amplitude can be converted in this case into a reflectivity, as a target property, and can be compared, for example, to a threshold value preconfigured for a searched reflector target. In this case, more than one reflection can certainly also occur on the distance and/or time axis for one of the pixels; however, using a distance measuring unit of the target search device which measures to an accuracy of at least approximately 1 to 5 cm, these reflections are well separated in the reception signal and are accordingly separable from one another during the analysis on the basis of their location on the time axis. In one embodiment, for example, the amplitudes of the incident reflections can be measured continuously, i.e., for each of the emission fans emitted in a different second direction, or from laser emission to laser emission. In this case, in a simply designed embodiment, for example, an angular position of the movement of the emission fan in the second direction can be acquired at which the intensity of a reflection decreases again after an increase—which represents only one example for ascertaining a maximum of the reflection in the second direction of a pivot of the emission fan. In this case, as a further criterion, a predetermined requirement for the reflectivity ascertained for this reflection can moreover be used as a condition for an acquisition of this reflection as a target. In the case of such an acquisition of a reflection of a target, the surveying instrument can then fix this angle, i.e., stop the movement of the axis system in the second direction and align the target axis of the surveying device on the target thus found. Using the multiple pixels of the position-resolving linear detector, which are provided according to the invention in the first direction, for example, the vertical direction—in particular in combination with multiple distance measuring units each associated with one pixel and operating in parallel, as described—an at least roughly resolved alignment in the first direction along the emission fan is also ascertainable. Therefore, for example, an angle coordinate of the reflection in the vertical direction can be ascertained, for example, from a set angle on the first axis system of the surveying instrument supporting the target acquisition unit and the offset measured by the target acquisition unit, which is ascertained via the irradiated pixels as described.
The surveying device can then roughly align its two axis systems on the target thus found, or on its coordinates in the first and second directions, respectively, and survey this target, for example, by the control being transferred to an automatic target recognition (ATR) of the surveying device.

FIG. 10 schematically shows a surveying device 101, e.g. a theodolite or a total station, with an automatic target marker locator 102 and two target markers 103 and 105, with which target points can be marked in the measuring environment, e.g. for geodetic surveying purposes. In the example, the target markers 103, 105 have retro-reflectors 113 for this purpose, which can be sighted and measured by the surveying device 101 using a measuring beam which is not shown, so that the direction (based on the sighting direction) and distance (e.g. by time-of-flight or phase difference measurement) to the retro-reflector 113 and thus to the target can be determined with high accuracy.

To perform the measurement, the measuring beam must first be aligned to the target marker 103, 105, i.e. the latter must be located. This can be carried out manually by a user, which is relatively time-consuming, or in some prior-art surveying devices automatically, e.g. using large-area illumination with target seeking radiation and detection thereof with an optical imaging unit with a large field of view. The optical imaging unit is designed as a camera, for example, which is either fixed relative to a telescope of the surveying device 101 or is pivotable freely about one or two axes, wherein the relative angles between the viewing direction of the camera and the telescope are measured. The central point here is that associated offset angles with respect to the viewing direction of the telescope can be calculated for each pixel of the camera.

One of the problems with this approach is that when multiple target markers 103, 105 are present in the measuring environment, as shown, the surveying device 101 cannot distinguish which target marker 103 or 105 has been located. In addition, under certain circumstances it is also possible that extraneous light sources 120, which emit extraneous radiation 121, are incorrectly registered as a target marker. Thus, with prior-art methods confusion arises, which causes measured points to be assigned incorrectly, for example. Although techniques for the unambiguous identification of target markers 103, 105 are known from the prior art, these are not sufficiently robust and/or require disproportionate additional effort.

The second aspect of the present invention proposes a method in which a respective target marker 103, 105 emits target marker radiation 104s, 106s, which is modulated in such a manner that it repeatedly exhibits a signature 104, 106 characteristic of the respective active target marker 103, 105, wherein a phase-coded signature 104, 106 is advantageously used in each case to minimize the effect of amplitude fluctuations and/or radiation interruptions. For this purpose, a respective target marker 103, 105, as shown, has e.g. a light source 122, for example a high-power LED with a wide emission angle. The wavelength of the target-marker radiation 104s, 106s is preferably in the near-infrared range, e.g. 850 nm. The emission of the target-marker radiation 104s, 106s is started at the target marker 103, 105 by a user or by remote control, e.g. from the surveying device 101, i.e. by an external communication signal for the target marker 103, 105.
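The patent specifies only that the signature is phase-coded and repeats; the concrete code, chip rate, and Manchester-style encoding in the following sketch are invented for illustration and are not the patented modulation scheme.

    import numpy as np

    def signature_waveform(code, chip_rate_hz, fs_hz, repeats=3):
        """Generate a repeating on/off LED drive waveform whose mid-chip
        phase transitions encode the bits of 'code' (Manchester-style:
        a high-to-low vs. low-to-high transition distinguishes bit values,
        which makes the code robust to slow amplitude fluctuations)."""
        samples_per_chip = int(fs_hz / chip_rate_hz)
        half = samples_per_chip // 2
        chips = []
        for bit in code:
            a, b = (1, 0) if bit else (0, 1)
            chips.extend([a] * half + [b] * (samples_per_chip - half))
        return np.tile(np.array(chips, dtype=float), repeats)

    # Hypothetical 8-bit marker code, 100 chips/s, sampled at 2 kHz.
    sig_104 = signature_waveform([1, 0, 1, 1, 0, 0, 1, 0], 100, 2000)

Because the information sits in the transitions rather than the absolute level, such a phase code tolerates the amplitude fluctuations and brief interruptions mentioned above.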
The target marker locator 102 is designed according to the invention in such a manner that ambient radiation is detected by means of a spatially-resolving optoelectronic sensor and an evaluation unit (not shown here). In one embodiment, for example, a commercially available camera chip 112 (e.g. with a CCD or CMOS matrix image sensor) can be used (or an RGB camera sensitive to NIR, if applicable), which provides a resolution of approximately 1.4 μm/pixel with approx. 10 megapixels and a field of view of approx. 4 mm by 5 mm. A target-seeking camera according to the invention is therefore e.g. a fast CCD or CMOS camera with or without a (modified) Bayer mask. Alternatively, the sensor is designed as a two-dimensional photodetector array or as a Dynamic Vision Sensor (event-based camera). As an alternative to the illustration, the target marker locator is a separate or separable unit. Optionally, the target marker locator 102 or a camera of the target marker locator 102 has a long-pass filter that can be switched off. As an additional option, the device 102 has a near-infrared corrected lens. Also, a target marker location camera can be an overview camera, e.g. the same overview camera as is already available in some prior-art surveying devices 101 anyway.

As a further option, in addition to the signature 104, 106 the target-marker radiation 104s, 106s also contains useful data (e.g. information about the target marker or sensor data), which can be read out by the surveying device 101. In other words, the radiation 104s, 106s can be used for data transmission in addition to the identification by means of a signature.

Radiation detected by the sensor is evaluated in such a way, e.g. by means of an image processing system or evaluation electronics, that a respective target-marker radiation 104s, 106s is reliably detected by means of the unique signature 104, 106 known to the evaluation unit, and thus the target-marker radiation 104s, 106s or target marker 103, 105 is reliably identified; extraneous light radiation 121 or a foreign object 120, for example, is thereby also reliably rejected. The method according to the invention is explained in further detail by way of example with reference to the following figures.

FIG. 11 shows an example of a sequence of the method 107 for locating a target marker 103, 105 on the basis of its radiation 104s, 106s or, more precisely, on the basis of the unique signature 104, 106 transmitted therewith (see FIG. 10).

In step 108, a first series of images is recorded at a first frame rate. For example, the first frame rate is 45 or 65 Hz, which is advantageous in terms of target-marker radiation that is generated by mains-powered light sources. Preferably, the first frame rate and the signature of the target marker radiation are tailored to each other by matching the modulation rate of the radiation to the first frame rate. Preferably, the recording of the first series of images takes long enough to ensure that the recurrently emitted signature is detected multiple times; the subsequent target marker location becomes more robust through multiple detections of the signal sequence.

Images from the first image sequence are analyzed in step 109. In this first evaluation stage, a statistical evaluation of pixels is carried out, wherein a quality function is determined with regard to a given known signature of the target marker radiation. The value of the quality function for a particular pixel indicates a probability that target-marker radiation is detected with this pixel.
Thus, in step 110, a statistical test is performed on the basis of a plurality of consecutively recorded images to determine whether one or more pixels have a signal characteristic corresponding to the stored signature. In the example, symbol 111 represents pixels that are unlikely to have detected target-marker radiation, and symbol 112 represents pixels for which the test result or the quality function value suggests that the temporal profile of the pixel signal is produced by the signature, and thus that target-marker radiation is detected with them. A relatively high degree of uncertainty is preferably permitted at this first test stage 110, i.e. pixels which have tested positive with a probability of only 50%, for example, are also admitted to the target-marker radiation class 112. The primary objective of this evaluation stage is that no target-marker radiation is “lost”. The quality threshold is therefore chosen to be low, so that all target markers in the measuring environment are detected. In this step, it is accepted that some artifacts, e.g. interference radiation 121 (see FIG. 10), will possibly also be incorrectly tested as “positive”, i.e. pixels will be identified as detecting target-marker radiation although this is actually not the case.

In order to classify the target markers robustly, i.e. to exclude artifacts classified as possible target markers, in step 113 a further image series is recorded with a different frame rate, preferably with a considerably higher frame rate (e.g. 10 times the first frame rate and/or in the range of several kilohertz). Here also, the frame rate and signature are preferably tailored to each other, for which purpose the signature, for example, has one component tailored to the first frame rate and another component tailored to the second frame rate.

Afterwards, an evaluation (step 114) of intensity signals of the second image sequence is performed (at least) for those pixels that were identified as target marker pixels in the first evaluation stage. In step 115, the intensity signal of the pixel is checked for correspondence with a stored signature; if the evaluation is positive, it is confirmed (field 117) that the pixel has actually detected target-marker radiation, and the corresponding target marker is thus identified and located. Otherwise, the pixel is classified as an artifact or discarded (field 116). A preferably high frame rate of several hundred or thousand hertz enables a detailed intensity signal to be created, so that in the second test 115 signature radiation can be distinguished from non-signature radiation with high certainty.

It is advantageous that, due to the first test stage 110, the recording of the second image sequence can be limited to the identified pixels. This means that the pixels considered as potential “target-marker radiation candidates”, which are determined in the first part of the method, are optionally used as a (center of a) region of interest for which (and only for which) the second image series is then recorded. The images from the second image sequence are recorded with a narrowed field of view, which is smaller than the first field of view of the first image sequence (the first field of view is preferably the maximum field of view of the image sensor or target locator, e.g. 5 megapixels, to ensure that the largest possible area of the environment is covered and/or that all existing target markers are detected as far as possible).
The position of the second field of view depends on a particular pixel selected on the basis of the first series of images. The identified pixels thus specify, for example, where in the image the segment is located (compared to the first images). The segment can be limited to the identified pixel or pixels, or else a pixel region around such pixels is recorded, e.g. a region (or region of interest) of 20×20 pixels. The advantage of such a small second field of view is that it results in considerably less data compared to a full image. This enables faster processing, and/or the frame rate for the second image series can be increased even further, which can be advantageous for robust detection of the target-marker radiation. For example, with regard to tailoring the modulation rate and frame rate, signatures can be implemented with “high” modulation, which enables more robust identification of signatures.

FIGS. 12a-12c show a further exemplary representation of a target location method according to the invention. FIG. 12a shows a first image sequence 118 created at a first frame rate R1 along a time axis t, the individual images 119 in the example being difference images, which are created pixel-by-pixel from two or more consecutively recorded images in each case. In the first difference images 119, which are produced by forming the difference between the images recorded with the image sensor, four pixels P1-P4 are identified as examples, the signals S of which will be examined in greater detail (wherein a black-to-white change is intended to illustrate a difference). The remaining pixels in the example have also been exposed, but for the sake of simplicity it is assumed that these other pixels do not show any changes at all during the period of the image series in question.

The lower part of FIG. 12a illustrates symbolically that the signals S of the pixels P1-P4 are analyzed with regard to the known signatures 104, 106, and a quality function G is derived for each pixel P1-P4. The quality function G is used to test whether a given pixel has (probably) detected target-marker radiation. In the example, this is done by testing whether the quality function exceeds a defined threshold value Gd. In the example, the detected signal for pixel P4 does not have sufficient resemblance (similarity) to either of the two signatures and is therefore classified as an artifact. The other three pixels P1-P3, on the other hand, are classified into the class “target marker radiation” by virtue of their value of the quality function G and are identified as pixels P1-P3 which have (presumably) acquired target marker radiation. As already described above, the threshold for the class “target marker radiation” is set to a low value in order to reliably include any possible target marker radiation, and the classification is therefore considered a “rough” or preliminary classification.

FIG. 12b illustrates how a second sequence of images 120 is created for the three pixel “candidates” P1-P3 which have entered the second “round”, in the example again in the form of difference images 121. In this case, as shown by the hatching, the maximum field of view is not used; rather, radiation is only detected selectively for the pixels P1-P3. The recording with a selectively restricted second field of view can be carried out sequentially for the individual pixels or regions of interest (i.e. first align the field of view on/around pixel P1, then on pixel P2, etc.).
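The first, deliberately permissive test stage can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented evaluation electronics: the quality function G is approximated here by the peak normalized cross-correlation of each pixel's difference-image signal with the known low-rate signature component, and the low threshold Gd is chosen freely.

    import numpy as np

    def quality_function(pixel_signal, signature):
        """Quality G of one pixel: peak normalized cross-correlation of the
        pixel's difference-image time series with the known signature
        (assumes the time series is at least as long as the signature)."""
        s = (pixel_signal - pixel_signal.mean()) / (pixel_signal.std() + 1e-12)
        k = (signature - signature.mean()) / (signature.std() + 1e-12)
        return np.correlate(s, k, mode="valid").max() / len(k)

    def stage_one_screen(frames, signature, gd=0.5):
        """frames: (T, H, W) stack of difference images at frame rate R1.
        Returns pixel coordinates admitted to the 'target-marker radiation'
        class; Gd is intentionally low so that no true marker is lost."""
        t, h, w = frames.shape
        signals = frames.reshape(t, h * w)
        g = np.array([quality_function(signals[:, i], signature)
                      for i in range(h * w)])
        return [tuple(rc) for rc in np.argwhere(g.reshape(h, w) >= gd)]

With Gd this low, artifacts like pixel P4's competitors may slip through, exactly as the text accepts; they are then weeded out in the second stage.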
The second frame rate R2 in this case is, as indicated, much higher than the first frame rate R1. The intensity I derived from the difference images 121 for each individual pixel P1-P3 is then analyzed with regard to the signatures 104, 106 (shown in the lower part of FIG. 12b). In the example, a correspondence of the intensity I of pixel P1 with the signature 104 is determined, and a correspondence of the intensity I of pixel P2 with the signature 106 (the value in each case lying above a defined threshold value T). P1 is therefore recognized as belonging to the signature 104, or the (total) signal of pixel P1 as being caused by the signature 104, and P2 is assigned to the signature 106 accordingly. Radiation of the target marker 103 is thus identified with the pixel P1 (cf. FIG. 10) and radiation of the target marker 105 with the pixel P2. By contrast, for pixel P3 no correspondence of the intensity signal I with either of the two stored signatures 104 or 106 is determined (the value with regard to both signature 104 and signature 106 lying below the threshold value T). Thus, with the second, “finer” test stage, which is based on the second image sequence, this sensor signal of pixel P3 was able to be “weeded out” as an artifact.

FIGS. 12a, 12b thus represent a further example of the procedure according to the invention, in which a large field of view is first used to cover a broad region of the measuring environment and a “pre-test” with regard to target-marker radiation is carried out with comparatively few images 119, in order then to perform a robust, targeted test on the remaining segments of the measuring environment with high temporal coverage using a dense image sequence 120.

The use of difference images 121 together with a phase-coded signature 104, 106 has the advantage that an analysis with regard to the target signature is thereby possible even without prior synchronization, wherein optionally a given signature 104, 106 has a start and/or end pointer which indicates the start and/or end of the signature, e.g. in the manner of so-called “framing bits”. For example, it is thus not necessary that the recording of the first and/or second image sequence and the emission of the target-marker radiation or the signatures 104, 106 are started synchronously; expressed more generally, the proposed method eliminates the need for complex communication between the target marker and the target marker locator. If, for example, the emission of the target-marker radiation is started manually as mentioned above, any communication at all between the target marker and the target marker locator can be omitted. In addition, it is thus possible to eliminate bit errors resulting from slightly different clock frequencies of the target marker and the receiver purely on the receiver side, without communication between the two.

As shown in FIGS. 12a and 12b at the bottom, according to the respective frame rate R1, R2, for the first test stage (FIG. 12a) the “low-frequency” component 104a, 106a of the signature 104, 106 is used primarily or exclusively for the test, whereas in the second test stage (FIG. 12b) the “high-frequency” component 104b, 106b of the signature 104, 106 is used.

As shown schematically in FIG. 12c, contrary to the greatly simplified representation of FIG. 12b, the second field of view Z is preferably not limited only to a pixel P1 identified using the first image sequence, but also comprises a region Z or a number of pixels around it. As shown in the example, the identified pixel P1 does not need to be located in the center of the region, and the second field of view Z does not need to be square.
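The second, “finer” test stage can likewise be sketched. Again this is a hedged illustration: the dictionary layout, the threshold T, and the use of a sliding correlation (which is what makes synchronization unnecessary) are assumptions consistent with, but not prescribed by, the text.

    import numpy as np

    def stage_two_confirm(intensity, signatures, t_threshold=0.8):
        """Match one candidate pixel's high-rate intensity signal against
        each stored signature; return the matching marker id, or None if
        the pixel is an artifact.

        intensity:  1-D intensity signal of the pixel from the dense image
                    sequence (difference images at frame rate R2).
        signatures: dict mapping a marker id to its high-frequency
                    signature component, e.g. {103: sig_104b, 105: sig_106b}.
        """
        s = (intensity - intensity.mean()) / (intensity.std() + 1e-12)
        best_id, best_score = None, t_threshold
        for marker_id, sig in signatures.items():
            k = (sig - sig.mean()) / (sig.std() + 1e-12)
            # Sliding correlation: no synchronization with the emitter needed.
            score = np.correlate(s, k, mode="valid").max() / len(k)
            if score >= best_score:
                best_id, best_score = marker_id, score
        return best_id  # e.g. 103 for pixel P1, 105 for P2, None for P3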
For example, in the case of moving target markers, the first image sequence is optionally used to determine a direction of motion and also a speed for a respective detected radiation, and the position and size (shape) of the field of view are adjusted on the basis of these parameters. If, for example, the determined target marker speed is relatively high, the field of view is also set relatively large; if the radiation on the sensor is moving “upwards”, the field of view around the pixel P1 is extended or shifted “upwards” as shown. Thus, the second field of view is optimally adjusted to the movement of the radiation source/target marker.

FIG. 13 shows a further extension of the method. In the example, the position of those respective pixels which are clearly identified in step 117 (see FIG. 11) as pixels of a target marker 103 is used to determine a direction to the respective target marker 103, e.g. in the form of polar and azimuthal angles (wherein the direction can be determined not just after step 117, but also, for example, directly after the identification of potential target marker pixels; see step 112 in FIG. 11). For example, the evaluation unit of the surveying device 101 (see FIG. 10) is thus designed to determine a direction to the target marker based on the position of the identified pixel. This is optionally used to control an alignment of the target marker locator 102 or surveying device 101, for example based on a deviation of the pixel from a central pixel/sensor position (a zero point of the image sensor), in such a manner that the deviation is corrected. In other words, using the optionally determined direction to the target 103, the surveying device 101 can be aligned centrally to the target marker 103.

Alternatively or in addition, in the case of a moving target marker 103, the position or positions of the relevant pixel on the sensor is/are used to continuously track the moving target marker 103, wherein, based on the temporal change of the pixel position, an (at least coarse) speed of the target marker, at least normal to the viewing direction, is also determined. The target locator 102 in these embodiments is therefore used not only for target location, but also for tracking the target marker 103. Especially in the case of a target locator 102 with a very large field of view (overview camera), it is also possible to track multiple target markers 103 at the same time, or to switch very quickly between the tracking of multiple target markers 103 even if they are moving in different directions.

The figures above represent only possible exemplary embodiments schematically. Unless otherwise noted, the different approaches can also be combined with each other as well as with known methods and devices.
11859977 | DETAILED DESCRIPTION

1. First Embodiment

Overview

FIG. 1 shows a situation in which a laser scanning apparatus 200 is set up at a site at which point cloud data is to be obtained. FIG. 1 illustrates a bridge 400 as an example of a target from which point cloud data is to be obtained. In addition, the Sun 100 appearing in the sky is also illustrated. Herein, the attitude of the laser scanning apparatus 200 is not yet known, but the laser scanning apparatus 200 is positioned as horizontally as possible.

In this example, first, the position of the laser scanning apparatus 200 is measured by a GNSS or the like. Then, 360-degree circumferential scanning is performed by the laser scanning apparatus 200 in order to measure the direction of the Sun 100. Laser scanning light contains pulses of light, whereas sunlight does not. Thus, the output waveform that sunlight produces at a light reception unit 202 of the laser scanning apparatus 200 differs from that produced by laser scanning light that is reflected back; that is, their detected waveforms differ from each other. By use of this difference between detected waveforms, the detected waveform of sunlight is distinguished, and the direction of the sunlight is obtained. On the other hand, the direction of the Sun as seen from the set-up position can be derived from astronomical data under the condition that the time is known. From this point of view, a measured value of the direction of the Sun 100 as seen from the laser scanning apparatus 200 is compared with the direction of the Sun 100 derived from astronomical data, and the attitude of the laser scanning apparatus 200 is calculated.

Structure of Hardware

FIG. 2 shows an external appearance of the laser scanning apparatus (laser scanner) 200. The laser scanning apparatus 200 includes a tripod 311, a base 312 that is fixed on top of the tripod 311, a horizontal rotation unit 313 that is a rotary body being horizontally rotatable on the base 312, and a vertical rotation unit 314 that is a rotary body being vertically rotatable relative to the horizontal rotation unit 313. In addition, a control panel (not shown) is disposed on a back side of the horizontal rotation unit 313.

The vertical rotation unit 314 includes an optical unit 315 that emits and receives laser scanning light. The optical unit 315 emits pulses of laser scanning light. The emission of pulses of laser scanning light is performed along a direction (vertical plane) orthogonal to a rotation axis (axis extending in the horizontal direction) of the vertical rotation unit 314 while the vertical rotation unit 314 rotates. That is, the optical unit 315 emits pulses of laser scanning light along a vertical angle direction (direction of an elevation angle and a depression angle).

Laser scanning is performed on the surrounding area as follows: pulses of laser scanning light are emitted from the optical unit 315 while the horizontal rotation unit 313 is rotated horizontally and the vertical rotation unit 314 is rotated vertically, and the laser scanning light that is reflected back from a target object is received by the optical unit 315. The horizontal rotation unit 313 is rotated horizontally while scanning along the vertical angle direction (upper-lower scanning) is performed, whereby a scanning line along the vertical angle direction (upper-lower scanning line) moves in such a manner as to slide along the horizontal angle (horizontal) direction.
Performing the horizontal rotation at the same time as the vertical rotation causes the scanning line along the vertical angle direction (upper-lower scanning line) to not be perfectly vertical but slightly slanted. Under the condition in which the horizontal rotation unit 313 is not rotated, scanning along the vertical angle direction (upper-lower scanning) is performed along the vertical direction. Rotation of each of the horizontal rotation unit 313 and the vertical rotation unit 314 is performed by a motor. Each of the horizontal rotation angle of the horizontal rotation unit 313 and the vertical rotation angle of the vertical rotation unit 314 is accurately measured by an encoder.

Each beam of laser scanning light is one pulse of distance measuring light. One pulse of the laser scanning light is emitted to a scanning target point that reflects it, and a distance to this point is thereby measured. On the basis of this measured distance value and the direction of emission of the laser scanning light, the position of the scanned point (the point that reflects the laser scanning light) is calculated relative to the laser scanning apparatus 200. In one case, the laser scanning apparatus 200 outputs a laser-scanned point cloud by providing data of a distance and a direction related to each point (each scanned point). In another case, the laser scanning apparatus 200 internally calculates a position of each point in a certain coordinate system, and a three-dimensional coordinate position of each point is output as point cloud data. Data of the laser-scanned point cloud also contains information on the luminance of each scanned point (the intensity of light that is reflected back from each scanned point).

FIG. 3 is a block diagram of the laser scanning apparatus 200. The laser scanning apparatus 200 includes a light emission unit 201, a light reception unit 202, a distance measurement unit 203, a direction acquisition unit 204, a light emission controller 205, a drive controller 206, a communication device 207, a storage 208, a laser scanning controller 209, a GNSS position measurement device 210, a sunlight incident direction measurement unit 211, a Sun direction acquisition unit 212, and an attitude calculator 213.

The laser scanning apparatus 200 includes a built-in computer having a central processing unit (CPU), a memory, a communication interface, a user interface, and a clock. This computer performs arithmetic calculations related to positioning. Moreover, this computer implements the functions of the sunlight incident direction measurement unit 211, the Sun direction acquisition unit 212, and the attitude calculator 213.

The light emission unit 201 includes a light emitting element that emits laser scanning light and also includes an optical system and peripheral circuits related to emission of light. The light reception unit 202 includes a light receiving element that receives laser scanning light and also includes an optical system and peripheral circuits related to reception of light. The distance measurement unit 203 calculates a distance from the laser scanning apparatus 200 to a point that reflects laser scanning light (scanned point), based on output of the light reception unit 202. In this example, a reference optical path is provided inside the laser scanning apparatus 200. The laser scanning light is output from the light emitting element and is split into two beams.
One beam is emitted from the optical unit 315 to a target object as laser scanning light, whereas the other beam is led to the reference optical path as reference light. The laser scanning light is reflected back from the target object and is received at the optical unit 315, whereas the reference light propagates in the reference optical path. Then, these two beams are combined together and enter the light reception unit 202. The propagation distances of the laser scanning light and the reference light differ from each other; therefore, the reference light is detected first by the light receiving element, and the laser scanning light is then detected by the light receiving element. In terms of the output waveform of the light receiving element, a detection waveform of the reference light is output first, and a detection waveform of the laser scanning light is then output after a time interval. The distance to the point that reflects the laser scanning light is calculated from the phase difference (time difference) between the two waveforms. In another case, the distance can also be calculated from a time-of-flight of the laser scanning light.

The direction acquisition unit 204 acquires a direction of the optical axis of the laser scanning light. The direction of the optical axis is obtained by measuring an angle (horizontal angle) of the optical axis in the horizontal direction and an angle (elevation angle or depression angle) of the optical axis in the vertical direction. The direction acquisition unit 204 has a horizontal angle measuring unit 204a and a vertical angle measuring unit 204b. The horizontal angle measuring unit 204a measures a horizontal rotation angle of the horizontal rotation unit 313. The horizontal rotation is rotation around the vertical direction. This angle is measured by an encoder. The vertical angle measuring unit 204b measures a vertical rotation angle (elevation angle or depression angle) of the vertical rotation unit 314. The vertical rotation is rotation around the horizontal direction. This angle is measured by an encoder. Measuring a horizontal rotation angle of the horizontal rotation unit 313 and a vertical rotation angle of the vertical rotation unit 314 provides a direction of the optical axis of the laser scanning light, that is, a direction of a laser-scanned point as seen from the laser scanning apparatus 200.

The light emission controller 205 controls timing of emission of laser scanning light of the light emission unit 201. The drive controller 206 includes a horizontal rotation drive controlling unit 206a for controlling driving to make the horizontal rotation unit 313 rotate horizontally and a vertical rotation drive controlling unit 206b for controlling driving to make the vertical rotation unit 314 rotate vertically. The driving is performed by motors. The communication device 207 communicates with other devices. The communication is performed by wired communication or by using a wireless local area network (LAN), a mobile phone network, or the like. The storage 208 is composed of a semiconductor memory or a hard disk drive and stores an operation program and data that are necessary to operate the laser scanning apparatus 200 and data that are obtained during processing and as a result of operation. The laser scanning controller 209 controls operation of the laser scanning apparatus 200. The GNSS position measurement device 210 performs positioning using a GNSS. In the case of requiring high accuracy, relative positioning is performed.
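The two measurements just described, the reference-to-measurement time difference and the two encoder angles, combine into a scanned-point position as sketched below. The function names, the calibration offset for the internal reference path, and the sign convention for the vertical angle are assumptions for illustration.

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def distance_from_time_difference(dt_seconds, path_offset_m=0.0):
        """Distance to the scanned point from the time difference between
        the reference-light pulse and the returned scanning-light pulse.
        The light travels to the target and back, hence the factor 1/2;
        path_offset_m is a hypothetical reference-path calibration term."""
        return 0.5 * C * dt_seconds - path_offset_m

    def scanned_point_xyz(distance_m, horizontal_angle_deg, vertical_angle_deg):
        """Position of a scanned point relative to the apparatus, from the
        measured distance and the encoder angles of units 204a and 204b
        (elevation positive, depression negative)."""
        hz = np.radians(horizontal_angle_deg)
        v = np.radians(vertical_angle_deg)
        x = distance_m * np.cos(v) * np.cos(hz)
        y = distance_m * np.cos(v) * np.sin(hz)
        z = distance_m * np.sin(v)
        return np.array([x, y, z])

    # 66.7 ns between reference and measurement pulses -> about 10 m range.
    print(distance_from_time_difference(66.7e-9))   # ~10.0
    print(scanned_point_xyz(10.0, 45.0, 30.0))      # point at 10 m range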
The sunlight incident direction measurement unit 211 measures an incident direction of sunlight that enters the laser scanning apparatus 200, based on a detected waveform of incident light entering the laser scanning apparatus 200. In this example, the sunlight incident direction measurement unit 211 obtains the incident direction, as seen from the laser scanning apparatus 200, of incident light that satisfies predetermined conditions and is thereby presumed to be sunlight. Herein, sunlight is determined based on the following conditions.

Sunlight does not include pulses of light, unlike scanning light (distance measuring light). In view of this, conditions for presuming an output waveform of the light receiving element to be a detected waveform of sunlight are set, and incident light satisfying these conditions is determined as being sunlight (first determination). FIG. 5 shows examples of a detected waveform of light reflected back from an object and a detected waveform of sunlight in laser scanning. Laser scanning light includes pulses of light, and therefore laser scanning light that is reflected back generates a single-peak waveform having a pulse shape, as shown in FIG. 5. On the other hand, a detected waveform of sunlight does not have a pulse-shaped waveform.

The pulse waveform of reflected laser scanning light shown in FIG. 5 is merely an example. The pulse width, the value of wave height, and the shape of the waveform can vary depending on the type and driving method of the light emitting element, the distance to a target, the output of the laser scanning light, the conditions and reflectivity of the reflecting surface of the target, etc. In consideration of this, characteristics specific to a detection waveform of sunlight are recognized to determine whether sunlight is detected. Specifically, peculiarities of a detected waveform as illustrated in FIG. 5 are used to specify conditions for identifying incident sunlight, and a detected waveform satisfying the conditions is determined as being of sunlight. The incident direction of sunlight entering the laser scanning apparatus 200 is obtained in terms of the direction (horizontal angle and vertical angle) of the optical axis at the time of receiving light that is determined as being incident sunlight.

It is determined whether a waveform is a detection waveform of incident sunlight based on a combination of two or more of the following criteria: for example, whether a peak is detected during a predetermined time, whether fluctuation of the amplitude of a waveform is detected during a predetermined time, whether a rise and a fall of a waveform are detected during a predetermined time, the length of duration of a peak, the width of fluctuation of a value of wave height during a predetermined time, and the slope of one or both of a rise and a fall of a waveform.

The following describes a specific example. In one example, a pulse width of laser scanning light is assumed to be 0.5 μs. This pulse width depends on the light emitting element and its drive circuit. In this case, when incident light exceeding a predetermined threshold is detected, the incident light is determined as being sunlight unless a single-peak waveform having a value of wave height exceeding a predetermined value is detected during a time of 2 μs. Moreover, in this example, a second determination is performed in addition to the first determination.
In the second determination, it is determined whether incident sunlight is detected by recognizing continuous reception of light over an angle range based on the viewing angle (apparent diameter) of the Sun. In one example, assuming that the vertical rotation unit 314 rotates 20 times per second and the frequency of emitting scanning light is 50 kHz, 50 × 10³ / 20 = 2500 pulses of scanning light are emitted over a range of 360 degrees along a vertical plane. In this case, the interval between points is 360 degrees / 2500 = 0.144 degrees. That is, in terms of viewing angle, the interval between adjacent scanned points is 0.144 degrees. As the rotation speed of the vertical rotation unit 314 decreases, this interval also decreases.

On the other hand, the apparent diameter of the Sun is approximately 0.5 degrees. Under these conditions, in the case in which incident light is continuously detected in an angle range of two or more times 0.144 degrees (or three or more times 0.144 degrees), the incident light is determined as being sunlight. An appropriate upper limit of the angle range is approximately 1 degree (in the case in which the angle is 1 degree or greater, other factors should be considered). Since the apparent diameter of the Sun is approximately 0.5 degrees, it is also possible that, for example, in a case in which light is continuously received in an angle range of 0.4 to 0.6 degrees, the detected light is determined as not being light that is reflected back from a scanned point but as being sunlight. This determination is performed with respect to the horizontal direction in addition to the vertical direction. As a result, the presence of the Sun is detected in terms of a two-dimensional plane.

Herein, in the case in which each of the first determination and the second determination results in true (YES), it is determined that sunlight is detected, and a center direction at the time of detection is obtained. In one example, light that is determined as being sunlight may be received in an elevation angle range of 80 to 80.5 degrees in vertical scanning. In this case, it is presumed that the Sun is present at an elevation angle of (80 degrees + 80.5 degrees) / 2 = 80.25 degrees. A similar process is performed on the horizontal angle. These processes are performed by the sunlight incident direction measurement unit 211.

Sunlight that is scattered by fog or clouds causes an increase in the apparent diameter of the Sun. In such a situation, the threshold range for determining the apparent diameter is changed from the above-described range of 0.4 to 0.6 degrees to a range of 0.4 to 1.0 degrees by raising the upper limit. For example, settings of multiple steps, such as “Weather mode 1” and “Weather mode 2,” may be used, and in response to selection of a certain mode, the threshold range for determining the apparent diameter may be changed, as described above. In one example, information on the weather or clouds of the area in which laser scanning is to be performed may be retrieved from the Internet, a database, or the like, and the above-described weather mode may be selected on the basis of the weather information. In a case of acquiring information showing weather with a small amount of sunlight (e.g., cloudy or rainy weather), it may be determined that the processes of determining detection of sunlight cannot be correctly performed due to no detection of sunlight.
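The two determinations can be sketched together as follows. This is a minimal illustration under stated assumptions: the thresholds, the sample layout, and the simplification that the hits form one contiguous run are all choices made for the example, not values from the embodiment.

    import numpy as np

    ANGULAR_STEP_DEG = 360.0 / 2500        # 0.144 degrees between scanned points
    APPARENT_DIAMETER_RANGE = (0.4, 0.6)   # degrees; widened to (0.4, 1.0) in haze

    def first_determination(waveform, level_threshold, pulse_threshold,
                            window_samples):
        """Incident light is presumed to be sunlight if it exceeds the
        detection level but shows no pulse peak above pulse_threshold within
        the observation window (cf. the 2 us window in the text)."""
        w = np.asarray(waveform[:window_samples], dtype=float)
        return w.max() > level_threshold and not (w > pulse_threshold).any()

    def second_determination(hit_flags):
        """hit_flags: booleans for consecutive scanned points along one scan
        line, True where the first determination fired. Returns the center
        index if the contiguous run matches the Sun's apparent diameter."""
        hits = np.flatnonzero(hit_flags)
        if hits.size == 0:
            return None
        run_deg = hits.size * ANGULAR_STEP_DEG
        lo, hi = APPARENT_DIAMETER_RANGE
        if lo <= run_deg <= hi:
            return (hits[0] + hits[-1]) / 2.0   # center index -> center direction
        return None

    # Example: hits spanning roughly 80.0-80.6 degrees of elevation.
    flags = np.zeros(2500, dtype=bool)
    start = int(80.0 / ANGULAR_STEP_DEG)
    flags[start:start + 4] = True               # 4 x 0.144 deg ~ 0.58 deg run
    center = second_determination(flags)
    print(center * ANGULAR_STEP_DEG if center is not None else None)  # ~80.1 deg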
It is also possible to perform the determination of detection of sunlight by using only one of the first determination and the second determination.

The Sun direction acquisition unit 212 acquires, from astronomical data, the direction of the Sun at the time the direction of sunlight was obtained. For example, there is software that calculates the direction of the Sun based on a position (latitude, longitude, and elevation) and a time that are input. The direction of the Sun at the corresponding time can be acquired by using such software.

The attitude calculator 213 calculates an attitude of the laser scanning apparatus 200 based on the direction of the detected sunlight, as seen from the laser scanning apparatus 200, and the direction of the Sun, as seen from the laser scanning apparatus 200, derived from astronomical data. In one example, it is assumed that the direction of the detected Sun is 10 degrees in horizontal angle and 80 degrees in vertical angle, relative to a reference direction of the laser scanning apparatus 200 at the time of initial set up. Herein, the horizontal angle is measured in a clockwise direction as seen from above in the vertical direction, whereas the vertical angle is measured in terms of elevation angle. On the other hand, it is also assumed that the corresponding direction of the Sun acquired from astronomical data is 180 degrees in azimuth (due south in the Northern Hemisphere) and 80.5 degrees in elevation angle. The azimuth herein is measured in a clockwise direction from 0 degrees at north, as seen from above in the vertical direction. Under these conditions, the horizontal angle of the reference direction of the laser scanning apparatus 200 in an absolute coordinate system is 170 degrees (an angle position at 10 degrees to the east from due south), and the elevation angle reference in this direction is inclined from the horizontal direction by 0.5 degrees. If the laser scanning apparatus 200 is completely horizontal, an error in the elevation angle direction does not occur. In this manner, the attitude of the laser scanning apparatus 200 in the absolute coordinate system is determined. This process is performed by the attitude calculator 213. It is noted that the absolute coordinate system is a coordinate system used in a GNSS and in a map.

One, some, or all of the laser scanning controller 209, the GNSS position measurement device 210, the sunlight incident direction measurement unit 211, the Sun direction acquisition unit 212, and the attitude calculator 213 may be implemented by an external device that is separate from the laser scanning apparatus 200, such as an external surveying data processing apparatus. This external device is composed of, for example, a personal computer (PC). It is necessary to preliminarily determine the positional relationship between the antenna of the GNSS position measurement device 210 and the optical origin of the laser scanning apparatus 200.

Example of Processing

FIG. 4 shows an example of a processing procedure. The program for executing the processing in FIG. 4 is stored in the storage of the built-in computer of the laser scanning apparatus 200 and is read and executed by the CPU of the computer. It is also possible to store this program in an appropriate storage medium and to read this program therefrom for use. Prior to the processing in FIG. 4, first, the laser scanning apparatus 200 is set up at a site at which laser scanning is to be performed.
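As context for step S105 of the procedure below, the attitude computation worked in the example above can be summarized in a few lines. The modulo-360 handling and the sign conventions are assumptions made consistent with that example.

    # A minimal sketch of the attitude calculation performed by the attitude
    # calculator 213, using the worked example from the text.

    def attitude_from_sun(measured_hz_deg, measured_elev_deg,
                          astro_azimuth_deg, astro_elev_deg):
        """Returns (azimuth of the apparatus reference direction, tilt of
        the elevation reference) in the absolute coordinate system.

        measured_*: Sun direction relative to the apparatus reference.
        astro_*:    Sun direction from astronomical data, same instant."""
        reference_azimuth = (astro_azimuth_deg - measured_hz_deg) % 360.0
        elevation_tilt = astro_elev_deg - measured_elev_deg
        return reference_azimuth, elevation_tilt

    # Measured (10 deg, 80 deg); astronomical (180 deg, 80.5 deg).
    print(attitude_from_sun(10.0, 80.0, 180.0, 80.5))  # (170.0, 0.5)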
After the laser scanning apparatus 200 is set up, positioning using a GNSS is performed to obtain the position of the laser scanning apparatus 200 (step S101). In the case of requiring high position accuracy, relative positioning is performed. Then, 360-degree circumferential scanning is performed (step S102). The 360-degree circumferential scanning may be performed under conditions for normal scanning, or it may be performed by inserting a light reduction filter into the optical path in order to avoid saturation of the light reception unit 202 by intense incident sunlight, which would make detection of the incident light itself difficult. Depending on the type of light receiving element used, the latter method may be employed in consideration of the effects of saturation. In a case in which an approximate direction of the Sun is already known, it is possible to perform scanning only in a limited direction.

The 360-degree circumferential scanning in step S102 provides light reception data. The light reception data includes a detected waveform of incident light received by the light reception unit 202 (the output waveform of the light receiving element) and a relationship between the detected waveform and time. The detected waveform of the incident light is obtained by digitizing the output of the light receiving element of the light reception unit 202 with the use of an analog-to-digital (A/D) converter, and the resultant digitized data is stored in the storage 208 in association with time. In the 360-degree circumferential scanning in step S102, in addition to the light reception data, point cloud data (measured distance values and directions of scanned points) may also be obtained.

Next, the incident direction of sunlight is measured based on the light reception data obtained in the 360-degree circumferential scanning in step S102 (step S103). In more detail, incident sunlight is identified based on a detected waveform, and the direction of the incident sunlight is measured. This process is performed by the sunlight incident direction measurement unit 211. Although an example of performing the process in step S103 after the 360-degree circumferential scanning in step S102 is described herein, the process in step S103 may be performed at the same time as the scanning or in parallel with the scanning with some delay.

Next, the direction of the Sun at the time of receiving the sunlight is acquired by using astronomical data (step S104). This process is performed by the Sun direction acquisition unit 212. Then, the attitude of the laser scanning apparatus 200 in the absolute coordinate system is calculated based on the results in steps S103 and S104 (step S105). This process is performed by the attitude calculator 213. In this manner, the attitude (direction) of the laser scanning apparatus 200 in the absolute coordinate system is acquired by using the Sun. This processing does not require a highly accurate compass or an IMU, which eliminates the need for operation of these devices.

2. Second Embodiment

Sunlight that is reflected back from a wall surface of a high-rise building may be detected. A measure for coping with this situation will be described. In this example, after incident light is determined as being sunlight in the determination of the first embodiment, it is further determined whether a surface is detected on each upper and lower side or each right and left side of the direction in which the Sun is determined to be present. This additional determination is performed based on scanning data.
In a state in which the Sun is actually present in this direction, there is no surface on either side of the direction of the Sun. This process is performed by the sunlight incident direction measurement unit 211. This additional determination may be a determination of whether a surface surrounding the direction in which the Sun is determined to be present is detected, or a determination of whether a surface around that direction is detected. The surface is not limited to a flat surface and may be a curved surface (when there are high-rise buildings with curved wall surfaces).

3. Other Matters

Other Matter 1

This invention can also be viewed as an invention of a system. In one example, the arithmetic calculations in the sunlight incident direction measurement unit 211, the Sun direction acquisition unit 212, and the attitude calculator 213 may be performed by cloud processing. In this case, laser scanning data is sent to a processing server, and the processing server executes the arithmetic calculations on behalf of the sunlight incident direction measurement unit 211, the Sun direction acquisition unit 212, and the attitude calculator 213. Then, data related to the attitude of the laser scanning apparatus 200 is sent from the processing server to the laser scanning apparatus 200 or a user.

Other Matter 2

Under the condition that the area and the time are determined, it is possible to narrow down the range in which the Sun can be seen from the ground. Thus, the range of initial scanning can be set by using this relationship. This enables shortening the time required to perform the scanning.

Other Matter 3

The position of the laser scanner 200 may be set approximately. In one example, the position of the laser scanner 200 may be determined based on position information on the level of a prefecture, city, town, village, or ward, and the determined position may be used as the position information in step S101. In this case, the error is large compared with the case of using a GNSS, but an approximate attitude of the laser scanner 200 can still be determined.

Other Matter 4

A reference detected waveform of laser scanning light that is reflected back is obtained in advance so as to be used as a reference for evaluating incident light that is received in actual measurement; this reference detected waveform may be a theoretical value.

Other Matter 5

There may be cases in which incident light that is not sunlight is erroneously determined as being sunlight. For example, light of a vehicle or a construction machine, or light of a searchlight, may be erroneously determined as being sunlight. In order to prevent such erroneous determination, the following algorithm is employed. This algorithm involves acquiring the position of the laser scanning apparatus and acquiring an approximate direction of the Sun at the time of measurement from astronomical data. Then, even when incident light has been determined as being sunlight based on a detected waveform and a time width of incidence, it is further determined whether the Sun can actually be present in the corresponding direction. In a case in which the corresponding direction is in an area in which the Sun cannot be present, the incident light is determined as not being sunlight. This method avoids erroneous calculation of the attitude of the laser scanning apparatus 200 based on incident light that is not sunlight.
11859978 | It will be appreciated that, for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn accurately or to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity, or several physical components may be included in one functional block or element. Reference numerals may be repeated among the figures to indicate corresponding or analogous elements.

DETAILED DESCRIPTION OF THE INVENTION

In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, modules, units, and/or circuits have not been described in detail so as not to obscure the invention. For the sake of clarity, discussion of the same or similar features or elements may not be repeated.

Although embodiments of the invention are not limited in this regard, discussions utilizing terms such as, for example, “processing,” “computing,” “calculating,” “determining,” “establishing,” “analyzing,” “checking,” or the like, may refer to operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing device that manipulates and/or transforms data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories or other information non-transitory storage medium that may store instructions to perform operations and/or processes. Although embodiments of the invention are not limited in this regard, the terms “plurality” and “a plurality” as used herein may include, for example, “multiple” or “two or more”. The term “set” when used herein may include one or more items. Unless explicitly stated, the method embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed simultaneously, at the same point in time, or concurrently.

Today, vehicles, cars, and other moving ground platforms, commonly referred to herein as vehicles, may use a speedometer to measure the velocity of the vehicle (e.g., the longitudinal velocity). A speedometer may measure the velocity of the vehicle by measuring the wheel rotation velocity and multiplying the wheel rotation velocity by the wheel radius. The speedometer may include sensors and a processing unit or module, which may be a part of or implemented by the vehicle computer. An accurate velocity estimation may be required by localization and navigation systems. However, in many applications, such localization and navigation systems do not have access to the computer of the vehicle, or to other extrinsic sensors such as global positioning system (GPS) receivers, cameras, LiDAR, RADAR, etc., and thus may not be able to obtain the velocity of the vehicle as measured by the speedometer. Embodiments of the invention may provide an accurate measurement of the velocity of the vehicle that is independent of the vehicle's speedometer. According to embodiments of the invention, the velocity of the vehicle may be estimated using one or more inertial sensors that may be physically coupled or installed in the vehicle.
Embodiments of the invention may obtain signals measured by the one or more inertial sensors installed in the vehicle and may derive the velocity of the vehicle by processing these signals. According to embodiments of the invention, the velocity of the vehicle may be estimated by processing signals generated by inertial sensors only, e.g., no readings of sensors other than the accelerometers and gyroscopes may be required.

An inertial sensor unit may be or may include an electronic device configured to measure at least one of the specific force, the angular velocity, and the orientation of a vehicle, typically using one or more accelerometers and/or gyroscopes. For example, an inertial sensor unit may be or may include an inertial measurement unit (IMU). An inertial sensor unit may include a three-dimensional accelerometer, to measure proper accelerations in the x, y, and z directions, and a three-dimensional gyroscope, to measure angular velocities in the x, y, and z directions, where x, y, and z define a Cartesian coordinate system in which the x and y axes are horizontal and the z axis is vertical.

Moving along surfaces, roads, and other terrains results in a dynamic change of the readings of the inertial sensors. As such, the sensor readings contain intrinsic knowledge regarding the changes in location, which may be used to calculate the velocity of the vehicle. Typical roads may include road or surface imperfections or defects, referred to herein as a geometrical signature of the road. The road or surface imperfections may include, for example, bumps, potholes, cave-ins, sinkholes, hummocks, defective street cuts, surface deterioration, edge failure, cracking, rutting, subsidence, etc. It is noted that road or surface imperfections may be generated as a natural process of road degradation, or intentionally generated for various reasons. Embodiments of the invention may take advantage of those road or surface imperfections.

According to embodiments of the invention, the road imperfections may be sensed by the one or more inertial sensors that are installed in the vehicle and may be manifested as changes in the accelerations and angular velocities measured by the one or more inertial sensors. The one or more inertial sensors may provide acceleration and angular velocity signals carrying information of the geometrical signature of the road. Analyzing at least a portion of these acceleration and angular velocity signals may provide an estimation of the velocity of the vehicle. In some embodiments, analyzing at least a portion of these acceleration and angular velocity signals may be performed using any applicable measure of similarity between one or two signals, e.g., autocorrelation and cross-correlation techniques.

According to some embodiments, two or more inertial sensors are installed on the vehicle along a longitudinal axis of the vehicle. For example, a first inertial sensor may be located at a front end of the vehicle and a second inertial sensor may be located at a rear end of the vehicle. In some embodiments, a first inertial sensor may be located at a front wheel axle of the vehicle and a second inertial sensor may be located at a rear wheel axle of the vehicle. Thus, when driving over a road imperfection, each inertial sensor may sense a change in the accelerations and angular velocities of the vehicle at a different time, depending on the distance between the front and rear wheels (e.g., the wheelbase).
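The basic relation behind this arrangement is simple; a minimal sketch follows, where the wheelbase and the measured lag are illustrative numbers rather than values from the embodiments.

    # Minimal sketch of the core relation: the same road imperfection excites
    # the front and rear axles a lag tau apart, so velocity = wheelbase / tau.
    # The wheelbase and lag values below are illustrative assumptions.

    def velocity_from_lag(wheelbase_m, lag_s):
        if lag_s <= 0:
            raise ValueError("lag must be positive (vehicle moving forward)")
        return wheelbase_m / lag_s

    wheelbase = 2.7   # meters, typical passenger car (assumption)
    lag = 0.1         # seconds, measured between front and rear sensors
    print(velocity_from_lag(wheelbase, lag))  # 27.0 m/s ~ 97 km/h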
According to embodiments of the invention, this time difference may be measured, e.g., by performing cross-correlation between signals of the two sensors. Using the time difference and the wheelbase, the velocity of the vehicle may be calculated.

According to some embodiments, one inertial sensor installed on the vehicle is sufficient for measuring the velocity of the vehicle. A single inertial sensor may capture road imperfections that are observed by both the front and rear wheels. For example, when driving forward, perturbations that originate at the front wheels reach the rear wheels after a period of time (e.g., a time difference or time lag) that depends on the distance between the front and rear wheels and the vehicle velocity. Thus, a single imperfection may be manifested twice in a single signal of an inertial sensor, successively, with a time lag between the two manifestations. When a certain imperfection is sufficiently large in magnitude and is not lost beneath the noise of the sensor (e.g., can be distinguished and detected), an autocorrelation of the single inertial sensor signal may reveal a distinct peak (other than the peak naturally located at time zero) at the time lag. The velocity of the vehicle may be estimated using the time lag and the wheelbase.

Embodiments of the invention may improve the technology of vehicle navigation and localization by providing a measurement of the velocity of the vehicle that is independent of the vehicle computing system. Embodiments of the invention may use as few as a single inertial sensor, or two inertial sensors, to measure the velocity of the vehicle. The velocity measured according to embodiments of the invention may be presented to the driver, may be used in navigation algorithms to obtain the position of the vehicle, e.g., by integrating the measurements with respect to time, and may be used to calculate acceleration by differentiating the measurements with respect to time. This velocity measurement may also be used as a back-up for other velocity measurements.

FIG. 1A depicts a system 100 for providing a velocity of a vehicle 110, according to some embodiments of the invention. According to one embodiment of the invention, system 100 may include a vehicle 110 equipped with one or more inertial sensor units 112, also referred to as sensor units 112, that may measure and provide data including at least one of the specific force, the angular velocity, and/or the orientation of a vehicle, typically using at least one of an accelerometer, a three-dimensional accelerometer, a gyroscope, and/or a three-dimensional gyroscope. For example, sensor unit 112 may be or may include one or more inertial sensors or IMUs, e.g., that may be physically attached to the body of vehicle 110. Vehicle 110 may further include a processor 114 and a communication module 116 for initial processing and transmitting of data measured by sensor unit 112 to navigation server 130. In the example provided in FIG. 1A, vehicle 110 may be a vehicle moving along a road, way, path, or route 120. This example is not limiting, and system 100 may include a vehicle moving in any area, such as a parking lot, a tunnel, a field, an urban canyon, or an indoor area.

Processor 114 may provide, via communication module 116, the data measured by sensor unit 112 or the velocity reading to a navigation server 130, directly or through networks 140. Networks 140 may include any type of network or combination of networks available for supporting communication between processor 114 and navigation server 130.
Networks140may include for example, a wired, wireless, fiber optic, cellular or any other type of connection, a local area network (LAN), a wide area network (WAN), the Internet and intranet networks, etc. Each of navigation server130and processor114may be or may include a computing device, such as computing device700depicted inFIG.11. One or more databases150may be or may include a storage device, such as storage device730. In some embodiments, navigation server130and database150may be implemented in a remote location, e.g., in a ‘cloud’ computing system. According to some embodiments of the invention, navigation server130may store in database150data obtained from processor114and other data, such as mapping of terrain and/or route120, computational results, and any other data as required by the application. According to some embodiments of the invention, navigation server130may be configured to obtain accelerations and angular velocities over time of vehicle110moving in route120and calculate the velocity of vehicle110based on the runtime acceleration and angular velocities. According to some embodiments of the invention, processor114may calculate the velocity of vehicle110based on the runtime acceleration and angular velocities locally and send the velocity to navigation server130.

FIG.1Bpresents a schematic illustration of sensor unit112with relation to a Cartesian coordinate system, helpful in demonstrating embodiments of the invention. Sensor unit112may measure accelerations in the x, y and z directions (referred to herein as ax, ay and az, respectively) and angular velocities in the x, y and z directions (referred to herein as wx, wy and wz, respectively), where the x-y plane is horizontal and the z-x and z-y planes are vertical.

Reference is now made toFIG.2which depicts a vehicle210, equipped with sensor units222and232, according to embodiments of the invention. Vehicle210may have a front end230with front wheels234and front wheel axle236, and a rear end220with rear wheels224and rear wheel axle226. Sensor units222and232(that may be similar to sensor unit112) may be physically attached to the body of vehicle210, and located collinearly on a line parallel to the vehicle longitudinal axis1, e.g., each of sensor units222and232may be placed at a different location along the longitudinal axis1of vehicle210. In some embodiments, a first sensor unit232, also referred to as front sensor unit232, is located or installed in front end230of vehicle210and a second sensor unit, also referred to as rear sensor unit222, is located or installed in rear end220of vehicle210. For example, front sensor unit232may be located or installed on the front wheel axle236, and rear sensor unit222may be located or installed on the rear wheel axle226. The readings of the front sensor unit232and rear sensor unit222may be synchronized in time. According to some embodiments, more than one front sensor unit232may be used, and may be located or installed at the front end of vehicle210, e.g., on the front wheel axle236, and more than one rear sensor unit222may be used, and may be located or installed at the rear end of vehicle210, e.g., on the rear wheel axle226. Vehicle210may further be equipped with processor240, that may be configured to obtain readings of first sensor unit232and second sensor unit222.
For example, processor240may be connected to front sensor unit232and rear sensor unit222, and may acquire readings of front sensor unit232and rear sensor unit222cyclically one after the other, such that the readings are matched or paired together on each acquisition cycle. Processor240may label each pair with a timestamp. Processor240may create the timestamp using an inner system clock of processor240or with an external clock. Processor240may calculate a time difference between a time when a first irregularity is sensed by first sensor unit232in a first location of the vehicle, e.g., in front end230, and a time when the irregularity is sensed in a second location of the vehicle, e.g., in rear end220, e.g., using the timestamps, and may calculate the velocity of vehicle210based on the time difference.

Reference is now made toFIGS.3A and3B, which depict vehicle210driving over a road imperfection310, according to embodiments of the invention.FIG.3Adepicts vehicle210when the front wheels234of vehicle210are crossing or driving over road imperfection310, in time tk. Vehicle210keeps moving andFIG.3Bdepicts vehicle210when the rear wheels224of vehicle210are crossing or driving over road imperfection310, in time tk+τ, where τ denotes the time difference or time lag between the point in time in which the front wheels234of vehicle210are crossing or driving over road imperfection310and the point in time the rear wheels224of vehicle210are crossing or driving over road imperfection310.

It may be assumed that when the front wheels234of vehicle210are crossing or driving over road imperfection310, as depicted inFIG.3A, a change in accelerations and angular rates of vehicle210may occur. This change in accelerations and angular rates of vehicle210may be sensed, picked up or measured by front sensor unit232. Thus, a change in accelerations and angular rates of vehicle210may be manifested in at least one signal measured by front sensor unit232, e.g., as an irregularity. Similarly, it may be assumed that when the rear wheels224of vehicle210are crossing or driving over road imperfection310, as depicted inFIG.3B, a change in accelerations and angular rates of vehicle210may occur. This change in accelerations and angular rates of vehicle210may be sensed, picked up or measured by rear sensor unit222. Thus, a change in accelerations and angular rates of vehicle210may be manifested in at least one signal measured by rear sensor unit222, e.g., as an irregularity. The time difference between the irregularities measured by front sensor unit232and rear sensor unit222may depend on the velocity of vehicle210and wheelbase280. Since wheelbase280is known, the velocity of vehicle210may be calculated by finding the time difference, and solving the equations of motion, e.g., dividing the wheelbase280by the time difference.

According to embodiments of the invention, the time difference between the irregularities measured by front sensor unit232and rear sensor unit222, referred to herein as the time lag, may be calculated using statistical methods, e.g., using cross-correlation between a signal measured by one or more front sensor units232and a corresponding signal (e.g., a signal of the same type) measured by one or more rear sensor units222.
For example, cross-correlation between the acceleration in the direction normal to the road (the direction of the z-axis) measured by front sensor unit232and the acceleration in the direction normal to the road (the direction of the z-axis) measured by rear sensor unit222may be performed, e.g., by processor114or by navigation server130. Performing cross-correlation between the two signals may reveal a peak at a time equal to the time lag τ*. The peak in the cross-correlation may be detected and the time lag τ* may be set to the time corresponding to this peak. For example, the cross-correlation may be calculated by:

C(τ) = E[x_{k−τ} · y_k^T]

C_k(τ) ≅ (1/ws) · Σ_{i=k−ws+1}^{k} x_{i−τ} · y_i

τ*_k = argmax_τ C_k(τ)

where k is a time index or time step, x_k ≜ a_z^{IMU1} is the acceleration in the direction of the z-axis measured by front sensor unit232, y_k ≜ a_z^{IMU2} is the acceleration in the direction of the z-axis measured by rear sensor unit222, ws is a window size, E indicates the expected value operation, and τ*_k is the time lag for time index or time step k. At each time step k the time lag for which the maximum value of the cross-correlation occurs, τ*_k, is found and used to compute an estimate for the speed of vehicle210according to, for example:

v̂_k = D / τ*_k

where D is the wheelbase280and v̂_k is the estimated velocity of vehicle210, which is positive when vehicle210is driving forward and negative when vehicle210is driving backwards. For example, if D is measured along the longitudinal axis1of vehicle210, then v̂_k is the estimated velocity of vehicle210in the longitudinal direction.

According to some embodiments, readings of front sensor unit232and rear sensor unit222may be filtered before cross-correlations are performed. In some embodiments, the cross-correlation C_k(τ) is normalized. Normalization may be performed by dividing the cross-correlation C_k(τ) by the product of the standard deviations of x_k and y_k. According to some embodiments, the window size ws may be selected depending on the smallest velocity of interest min(v) at a given time of driving according to, for example:

ws = α · D / min(v), α > 1

where α is a configurable constant greater than one.

In some embodiments, a plurality of front sensor units232are used. In this case, corresponding signals (e.g., signals of the same type) of the plurality of front sensor units232may be unified, e.g., averaged, to increase the signal to noise ratio. Similarly, corresponding signals of more than one rear sensor unit222, if used, may be unified. The cross-correlation may be performed on the unified signals. In some embodiments, front sensor units232and rear sensor units222may be arranged in pairs, where each pair includes a front sensor unit232and a rear sensor unit222; the calculation of the time lag may be repeated for each pair and the results averaged. Other methods may be used to unify readings of a plurality of front sensor units232and rear sensor units222.
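To make the windowed computation above concrete, the following Python sketch estimates the velocity from one window of synchronized z-axis acceleration samples from the two sensor units. This is a minimal sketch, not the patented implementation: the function and parameter names (estimate_velocity, fs, wheelbase), the use of numpy's correlate, and the crude normalization are all illustrative assumptions.

```python
import numpy as np

def estimate_velocity(front_az, rear_az, fs, wheelbase):
    """Estimate forward velocity from one window of z-axis accelerations.

    front_az, rear_az: 1-D arrays holding the current window (ws samples)
    from the front and rear sensor units, assumed time-synchronized and
    sampled at fs Hz with nonzero variance.
    """
    x = front_az - front_az.mean()
    y = rear_az - rear_az.mean()
    # Cross-correlation over all candidate lags, roughly normalized so
    # peak values are comparable across windows.
    corr = np.correlate(y, x, mode="full") / (len(x) * x.std() * y.std())
    lags = np.arange(-len(x) + 1, len(x))
    # The peak gives the time lag tau*; positive when the rear unit sees
    # the imperfection after the front unit (driving forward).
    tau = lags[np.argmax(corr)] / fs
    if tau == 0:
        return None  # no usable peak in this window
    return wheelbase / tau  # negative value corresponds to reversing
```

In line with the description above, a longer window would be needed to observe the peak at lower speeds, which is what the ws = α · D / min(v) rule expresses.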
According to some embodiments, one or more front sensor units232and one or more rear sensor units222are installed on the front and rear wheel axles of vehicle210, respectively, to decouple the measured accelerations and angular velocities from the dynamics of vehicle210. The body of vehicle210may be connected to the wheel axles of vehicle210through a system of springs and dampers. An inertial sensor (e.g., an accelerometer or a gyroscope) attached to the body of vehicle210may experience the same springs and dampers, which may change or damp the measured signal of accelerations and angular rates. An inertial sensor attached to the wheel axles of vehicle210may measure accelerations and angular rates that more closely represent the actual perturbations from the ground.

According to some embodiments, the front sensor unit232and rear sensor unit222(or pairs of a front sensor unit232and a rear sensor unit222) are installed collinearly on a line parallel to the longitudinal vehicle axis1to reduce errors associated with geometrical factors. For example, if not placed on a line parallel to the longitudinal vehicle axis1, the front sensor unit232and rear sensor unit222may sense different roll motion just as a result of the geometrical factors.

According to some embodiments, more than one channel (e.g., a measured signal over time, where a signal may be one of the accelerations in the x, y and z directions and the angular velocities in the x, y and z directions) of front sensor unit232and rear sensor unit222are used to compute some or all elements of a cross-correlation matrix according to, for example:

C(τ) = E[x_{k−τ} · y_k^T]

C_k(τ) ≅ (1/ws) · Σ_{i=k−ws+1}^{k} x_{i−τ} · y_i^T ∈ R^{6×6}

where x_k ≜ [ax, ay, az, wx, wy, wz]^T ∈ R^{6×1} is a vector of the accelerations in the x, y and z directions and angular velocities in the x, y and z directions measured by front sensor unit232, y_k ≜ [ax, ay, az, wx, wy, wz]^T ∈ R^{6×1} is a vector of the accelerations in the x, y and z directions and angular velocities in the x, y and z directions measured by rear sensor unit222, and C(τ) is a cross-correlation matrix. In some embodiments, the cross-correlation matrix includes one or more channels of one or more pairs of a front sensor unit232and a rear sensor unit222. The elements of the cross-correlation matrix over time k, C_k, may be used to extract the time lag τ*_k for which a matrix property is optimized. For example, the matrix property that is optimized may be the matrix 2-norm according to, for example:

τ*_k = argmax_τ ‖C_k(τ)‖_2^2

Other matrix properties may be optimized to extract the time lag τ*_k; for example, another norm or the eigenvalues of the cross-correlation matrix may be optimized. According to embodiments of the invention, using additional channels may improve the clarity of the cross-correlation peak and may enable an easier and more reliable detection of the peak.

Embodiments of the invention may be used to determine horizontal skid, understeer or oversteer of vehicle210, by installing two sensor units on the same axle, e.g., installing two sensor units on front wheel axle236or installing two sensor units on rear wheel axle226, calculating an estimated horizontal speed of the axle and comparing the estimated value with a threshold value. A skid may refer to an unintentional sliding of vehicle210, typically sideways, as a result of rapid braking or turning. Oversteer may refer to a situation in which vehicle210turns more than the driver intended, and understeer may refer to a situation in which vehicle210turns less than the driver intended. Oversteer and understeer may be detected by comparing the estimated horizontal speed to an average of past horizontal speeds of the driver.
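The multi-channel combination can be sketched in the same illustrative style. The hypothetical Python snippet below correlates only matching channel pairs (the diagonal terms of the matrix above) and combines them by summing squared correlations per lag, mirroring the per-channel combination used forFIG.6below; it is not the patent's exact matrix computation.

```python
import numpy as np

def lag_from_channels(front, rear, fs):
    """front, rear: (ws, 6) arrays of [ax, ay, az, wx, wy, wz] samples
    from the front and rear units. Returns the lag (seconds) that
    maximizes the combined squared correlation across channels."""
    n = front.shape[0]
    lags = np.arange(-n + 1, n)
    scores = np.zeros(len(lags))
    for ch in range(6):
        x = front[:, ch] - front[:, ch].mean()
        y = rear[:, ch] - rear[:, ch].mean()
        denom = n * x.std() * y.std()
        if denom == 0:
            continue  # skip a flat (uninformative) channel
        c = np.correlate(y, x, mode="full") / denom
        scores += c ** 2  # accumulate squared correlation per lag
    return lags[np.argmax(scores)] / fs
```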
FIG.4depicts real and simulated accelerations of a real-world vehicle in the vertical direction (z-axis) measured using an IMU during a 5 second drive at a speed of 10.2 meters per second. The top graph (labeled as IMU1) is a measurement done by an IMU installed in a front end of a vehicle, and the bottom graph (labeled as IMU2) is a simulation of signals measured by an IMU located at a rear end of the vehicle. The simulated signal was generated by shifting the signal measured by the IMU in time and adding noise (the signal to noise ratio in this simulation is equal to 5). The shift in time is equivalent to a horizontal distance of 1.5 meters between two sensors, where the second sensor also measures more noise, which may be due to proximity to the engine, for example. The rectangle on the top and bottom plots shows the time interval where the cross-correlation is computed. In this example, a GPS is used to estimate the true speed.

FIG.5depicts the normalized cross-correlation between the two acceleration signals depicted inFIG.4. A maximum is observed and highlighted with a circle at τ*=0.14 s. In this illustrative example, the estimated speed is 10.7 meters per second, which is within 5 percent error from the speed estimated using GPS. Negative values in the cross-correlation results presented inFIG.5may represent negative lag values which may be related to driving backwards (e.g., a peak with a negative time lag).

FIG.6depicts the combined normalized cross-correlation computed using all six channels provided by the IMU used inFIG.4. The three acceleration and three angular velocity components were shifted as described with relation toFIG.4and noise was added with a signal to noise ratio of five for each channel. Cross-correlation was computed for each pair of matching channels (acceleration in the x direction from IMU1 was used with acceleration in the x direction from IMU2, etc.) to produce a total of six cross-correlation curves versus lag time. The six curves were combined by taking the 2-norm at each lag time to produce a single curve (a one dimensional curve) that is presented inFIG.6. The combined graph clearly presents a distinct peak at the same lag time of 0.14 seconds. As can be seen by comparingFIG.6toFIG.5, incorporating additional channels substantially strengthens the clarity of the cross-correlation peak.

Reference is now made toFIG.7which depicts a vehicle810, equipped with a single sensor unit822(e.g., including a single sensor unit or a plurality of sensor units whose measurements are unified or averaged), according to embodiments of the invention. Sensor unit822(that may be similar to sensor unit112) may be physically attached to the body of vehicle810. In some embodiments, sensor unit822is located or installed in front end230of vehicle810; however this is not limiting and sensor unit822may be located anywhere in vehicle810. Sensor unit822may include an accurate clock or may be provided with an accurate clock signal. Vehicle810may further be equipped with processor840, that may be configured to obtain readings of sensor unit822, calculate a time difference between a time when a first irregularity is sensed by sensor unit822in a first location of the vehicle, e.g., in front end230, and a time when the irregularity is sensed by the same sensor unit822in a second location of the vehicle, e.g., in rear end220, and to calculate the velocity of vehicle810based on the time difference.

Reference is now made toFIGS.8A and8B, which depict vehicle810driving over a road imperfection310, according to embodiments of the invention.FIG.8Adepicts vehicle810when the front wheels234of vehicle810are crossing or driving over road imperfection310, in time tk.
Vehicle810keeps moving andFIG.8Bdepicts vehicle810when the rear wheels224of vehicle810are crossing or driving over road imperfection310, in time tk+τ, where τ denotes the time difference or time lag between the point in time in which the front wheels234of vehicle810are crossing or driving over road imperfection310and the point in time the rear wheels224of vehicle810are crossing or driving over road imperfection310. According to embodiments of the invention, a single sensor unit822installed on vehicle810may be used for measuring the velocity of vehicle810. Sensor unit822may capture road imperfection310as it is observed by both the front wheels234and rear wheels224.

It may be assumed that when the front wheels234of vehicle810are crossing or driving over road imperfection310, as depicted inFIG.8A, a change in accelerations and angular rates of vehicle810may occur. This change in accelerations and angular rates of vehicle810may be sensed, picked up or measured by sensor unit822. Thus, a change in accelerations and angular rates of vehicle810may be manifested in at least one signal measured by sensor unit822, e.g., as an irregularity. Similarly, it may be assumed that when the rear wheels224of vehicle810are crossing or driving over road imperfection310, as depicted inFIG.8B, a change in accelerations and angular rates of vehicle810may occur. This change in accelerations and angular rates of vehicle810may be sensed, picked up or measured by sensor unit822. Thus, a change in accelerations and angular rates of vehicle810may be manifested in at least one signal measured by sensor unit822, e.g., as an irregularity.

The time difference between the irregularities measured by sensor unit822may depend on the velocity of vehicle810and the longitudinal distance between the front wheels234and rear wheels224, e.g., wheelbase820. Since wheelbase820is known, the velocity of vehicle810may be calculated by finding the time difference, and solving the equations of motion, e.g., dividing the wheelbase820by the time difference. Thus, when driving forward, perturbations that originate at front wheels234reach the rear wheels224after a period of time (e.g., a time lag) that depends on the distance between the wheels234and224, e.g., wheelbase820, and the vehicle velocity. Thus, a single imperfection310may be manifested twice in a signal measured by sensor unit822successively, with a time lag between the two manifestations. When a certain imperfection is sufficiently large in magnitude and is not lost beneath the noise of the sensor (e.g., can be distinguished and detected), an autocorrelation of a signal measured by sensor unit822may reveal a distinct peak (other than the peak naturally located at time zero) at the time lag. For example, the autocorrelation A may be calculated by:

A(τ) = E[x_{k−τ} · x_k]

A_k(τ) ≅ (1/ws) · Σ_{i=k−ws+1}^{k} x_{i−τ} · x_i

τ*_k = argmax_{τ≠0} A_k(τ)

where x_k ≜ a_z is the acceleration in the direction of the z-axis measured by sensor unit822and the search for the peak excludes the trivial peak at time zero. The velocity of vehicle810may be estimated using the time lag and wheelbase, using for example:

τ*_k = wheelbase / (vehicle velocity)_k
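A minimal Python sketch of this single-sensor approach follows. The guard min_lag_s, which masks out the trivial zero-lag peak, is an assumed parameter introduced here for illustration, as are the function and variable names.

```python
import numpy as np

def velocity_from_single_sensor(az, fs, wheelbase, min_lag_s=0.05):
    """az: one window of z-axis accelerations from the single sensor unit,
    sampled at fs Hz. Returns the speed estimate wheelbase / tau*."""
    x = az - az.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0 .. N-1
    ac /= ac[0]  # normalize so the zero-lag peak equals 1
    start = max(1, int(min_lag_s * fs))  # skip the peak at time zero
    k = start + np.argmax(ac[start:])    # index of the distinct peak
    tau = k / fs
    return wheelbase / tau
```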
FIG.9shows a flowchart of a method for estimating a velocity of a vehicle using at least two inertial sensors, according to some embodiments of the present invention. The operations ofFIG.9may be performed by the systems described inFIGS.1A,2,3A,3B and11, but other systems may be used.

In operation910, a processor, e.g., processor240, may obtain readings of at least two inertial sensors, e.g., front sensor unit232and rear sensor unit222, that are attached to a vehicle, e.g., vehicle210. The reading obtained from a single sensor may include one or more of the accelerations and angular velocities in the x, y and z directions measured by the sensor over time. According to embodiments of the invention, the at least two inertial sensor units may be located collinearly on a line parallel to a longitudinal axis of the vehicle, with a distance between them. For example, a first inertial sensor unit may be located in a front end of the vehicle, and a second inertial sensor unit may be located in a rear end of the vehicle.

In operation920, the processor may perform at least one cross-correlation, where each cross-correlation is calculated between a signal measured by the first inertial sensor unit and a corresponding signal measured by the second inertial sensor unit. The cross-correlation may be calculated over time windows of the measured signals, and provided as a signal over time. In some embodiments, a cross-correlation between accelerations measured in a vertical direction (the z direction) by the first inertial sensor unit and accelerations measured in a vertical direction by the second inertial sensor unit is calculated. In some embodiments, more than one cross-correlation is calculated. For example, cross-correlations between one or more of the accelerations and angular velocities in the x, y and z directions measured by the first inertial sensor unit and the corresponding accelerations and angular velocities in the x, y and z directions measured by the second inertial sensor unit may be calculated.

In operation930, the processor may calculate or find a time difference between a time when an irregularity is sensed in a first location of the vehicle and a time when the irregularity is sensed in a second location of the vehicle. For example, the processor may find a peak in a cross-correlation signal, where the time difference may be the time associated with the peak. If more than one cross-correlation signal is calculated, the cross-correlation signals may be arranged in a matrix and the time difference may equal a time lag that optimizes a matrix property, e.g., the 2-norm.

In operation940, the processor may calculate the velocity of the vehicle based on the time difference and the wheelbase. For example, for each pair of sensors, the processor may divide the wheelbase by the time lag. In operation950the processor may provide the velocity to a localization and navigation system, e.g., to navigation server130, or to a user.

FIG.10shows a flowchart of a method for estimating a velocity of a vehicle using a single inertial sensor, according to some embodiments of the present invention. The operations ofFIG.10may be performed by the systems described inFIGS.1A,2,8A,8B and11, but other systems may be used. In operation1010, a processor, e.g., processor840, may obtain readings of a single inertial sensor, e.g., sensor unit822, that may be attached to a vehicle, e.g., vehicle810. The reading obtained from the single sensor may include one or more of the accelerations and angular velocities in the x, y and z directions measured by the sensor over time. In operation1020, the processor may perform at least one autocorrelation, where each autocorrelation is calculated for a signal measured by the inertial sensor unit.
The autocorrelation may be calculated over time windows of the measured signals, and provided as a signal over time. In some embodiments, an autocorrelation of accelerations measured in a vertical direction (the z direction) by the inertial sensor unit is calculated. In some embodiments, more than one autocorrelation is calculated. For example, autocorrelations of one or more of the accelerations and angular velocities in the x, y and z directions measured by the inertial sensor unit may be calculated.

In operation1030, the processor may calculate or find a time difference between a time when an irregularity is sensed in a first location of the vehicle and a time when the irregularity is sensed in a second location of the vehicle. For example, the processor may find a peak in an autocorrelation signal, where the time difference may be the time associated with the peak. If more than one autocorrelation signal is calculated, the autocorrelation signals may be arranged in a matrix and the time difference may equal a time lag that optimizes a matrix property, e.g., the 2-norm. In operation1040, the processor may calculate the velocity of the vehicle based on the time difference and wheelbase of the vehicle. For example, the processor may divide the wheelbase by the time difference. In operation1050the processor may provide the velocity to a localization and navigation system, e.g., to navigation server130, or to a user.

Reference is made toFIG.11, showing a high-level block diagram of an exemplary computing device according to some embodiments of the present invention. Computing device700may include a processor705that may be, for example, a central processing unit processor (CPU) or any other suitable multi-purpose or specific processors or controllers, a chip or any suitable computing or computational device, an operating system715, a memory720, executable code725, a storage system730, input devices735and output devices740. Processor705(or one or more controllers or processors, possibly across multiple units or devices) may be configured to carry out methods described herein, and/or to execute or act as the various modules, units, etc., for example when executing code725. More than one computing device700may be included in, and one or more computing devices700may be, or act as the components of, a system according to embodiments of the invention. Various components, computers, and modules ofFIGS.1,2,3A,3B,8A and8Bmay include devices such as computing device700, and one or more devices such as computing device700may carry out functions such as those described inFIGS.9and10. For example, navigation server130and processor114may be implemented on or executed by a computing device700.

Operating system715may be or may include any code segment (e.g., one similar to executable code725) designed and/or configured to perform tasks involving coordination, scheduling, arbitration, controlling or otherwise managing operation of computing device700, for example, scheduling execution of software programs or enabling software programs or other modules or units to communicate. Memory720may be or may include, for example, a Random Access Memory (RAM), a read only memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SD-RAM), a double data rate (DDR) memory chip, a Flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory or storage units. Memory720may be or may include a plurality of, possibly different, memory units.
Memory720may be a computer or processor non-transitory readable medium, or a computer non-transitory storage medium, e.g., a RAM. Executable code725may be any executable code, e.g., an application, a program, a process, task or script. Executable code725may be executed by processor705possibly under control of operating system715. For example, executable code725may configure processor705to estimate a velocity of a vehicle using readings of one or more inertial sensors, and perform other methods as described herein. Although, for the sake of clarity, a single item of executable code725is shown inFIG.11, a system according to some embodiments of the invention may include a plurality of executable code segments similar to executable code725that may be loaded into memory720and cause processor705to carry out methods described herein.

Storage system730may be or may include, for example, a hard disk drive, a CD-Recordable (CD-R) drive, a Blu-ray disk (BD), a universal serial bus (USB) device or other suitable removable and/or fixed storage unit. Data such as the measured velocities, as well as other data required for performing embodiments of the invention, may be stored in storage system730and may be loaded from storage system730into memory720where it may be processed by processor705. Some of the components shown inFIG.11may be omitted. For example, memory720may be a non-volatile memory having the storage capacity of storage system730. Accordingly, although shown as a separate component, storage system730may be embedded or included in memory720.

Input devices735may be or may include a mouse, a keyboard, a microphone, a touch screen or pad or any suitable input device. Any suitable number of input devices may be operatively connected to computing device700as shown by block735. Output devices740may include one or more displays or monitors, speakers and/or any other suitable output devices. Any suitable number of output devices may be operatively connected to computing device700as shown by block740. Any applicable input/output (I/O) devices may be connected to computing device700as shown by blocks735and740. For example, a wired or wireless network interface card (NIC), a printer, a universal serial bus (USB) device or external hard drive may be included in input devices735and/or output devices740. In some embodiments, device700may include or may be, for example, a personal computer, a desktop computer, a laptop computer, a workstation, a server computer, a network device, a smartphone or any other suitable computing device. A system as described herein may include one or more devices such as computing device700.

FIG.12presents a system for providing localization of a vehicle in further detail, according to some embodiments of the invention. According to some embodiments of the present invention, a system for training a deep learning neural network (DL NN) model for determining a location of a vehicle moving along a known route in terms of geographic location, based on inertial measurement unit (IMU) measurements, is implemented by navigation server130.
The system may include: an IMU20within a vehicle10configured to measure a series of angular velocities and accelerations sensed at a plurality of locations for each section of a plurality of sections2A to2C along route1; a computer processor30configured to calculate, for each of sections2A to2C along route1, and based on the series of angular velocities and accelerations sensed at the plurality of locations in one of the sections, a kinematic signature which is unique to the one of the sections, compared with the kinematic signatures of the rest of the sections; and a positioning source40other than IMU20configured to obtain a positioning measurement of vehicle10for each of the sections2A to2C, wherein the computer processor30is further configured to associate each one of the kinematic signatures with a respective positioning measurement obtained via the positioning source other than the IMU, and wherein the computer processor is further configured to train a deep learning neural network (DL NN) model using a dataset50comprising the kinematic signatures associated with the respective positioning measurements, to yield trained DL NN model60.

According to some embodiments of the invention, navigation server130may be further configured to, during a runtime phase, obtain runtime accelerations and angular velocities over time of a vehicle110moving in the defined area or route120and use the trained model to obtain the current location of vehicle110based on the runtime acceleration and angular velocities. According to some embodiments of the invention, navigation server130may be further configured to, during the training phase, extract features from the accelerations and angular velocities of the training dataset and add the features to the training dataset. For example, the features may include velocity, horizontal slope and/or other features. Navigation server130may be further configured to, during the runtime phase, extract the same type of features from the runtime accelerations and angular velocities, and use the trained model to obtain the current location of vehicle110based on the runtime acceleration, the runtime angular velocities and the runtime features. According to some embodiments of the invention, navigation server130may have a mapping of the defined area or route120. In some embodiments, navigation server130may divide the mapping of the defined area or route120into segments and may provide or express the location of vehicle110as a segment in which vehicle110is located.
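For illustration only, a toy training loop for such a DL NN model is sketched below in PyTorch, assuming each kinematic signature is a flattened fixed-length window of the six IMU channels and that the positioning label is expressed as a route-section index. All shapes, names, the placeholder data, and the network architecture are assumptions for the sketch, not the patent's model.

```python
import torch
from torch import nn

# Placeholder dataset: 512 signatures, each a 200-sample window of 6 IMU
# channels, labeled with one of 3 route sections (standing in for 2A-2C).
signatures = torch.randn(512, 6 * 200)
sections = torch.randint(0, 3, (512,))

model = nn.Sequential(
    nn.Linear(6 * 200, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 3),  # one logit per route section
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(signatures), sections)
    loss.backward()
    optimizer.step()
```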
When discussed herein, “a” computer processor performing functions may mean one computer processor performing the functions or multiple computer processors or modules performing the functions; for example, a process as described herein may be performed by one or more processors, possibly in different locations. In the description and claims of the present application, each of the verbs “comprise”, “include” and “have”, and conjugates thereof, are used to indicate that the object or objects of the verb are not necessarily a complete listing of components, elements or parts of the subject or subjects of the verb. Unless otherwise stated, adjectives such as “substantially” and “about” modifying a condition or relationship characteristic of a feature or features of an embodiment of the disclosure are understood to mean that the condition or characteristic is defined to within tolerances that are acceptable for operation of an embodiment as described. In addition, the word “or” is considered to be the inclusive “or” rather than the exclusive “or”, and indicates at least one of, or any combination of, the items it conjoins.

Descriptions of embodiments of the invention in the present application are provided by way of example and are not intended to limit the scope of the invention. The described embodiments comprise different features, not all of which are required in all embodiments. Embodiments comprising different combinations of the features noted in the described embodiments will occur to a person having ordinary skill in the art. Some elements described with respect to one embodiment may be combined with features or elements described with respect to other embodiments. The scope of the invention is limited only by the claims. While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents may occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention. | 45,582 |
11859979 | In accordance with common practice, the various described features are not drawn to scale but are drawn to emphasize specific features relevant to the example embodiments.

DETAILED DESCRIPTION

In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific illustrative embodiments. However, it is to be understood that other embodiments may be utilized and that logical, mechanical, and electrical changes may be made.

Systems and methods for providing external aiding of inertial navigation systems are provided herein. Some navigation systems may include processors that receive information from cameras, lidars, or other sensors (referred to hereafter as imaging sensors) that sense observable features within the environment of an object associated with the navigation system. The processors may use the observed environmental information to perform visual odometry, lidar odometry, point cloud registration, simultaneous localization and mapping (SLAM) or other algorithms that acquire navigation information from the information provided by the imaging sensors. The acquisition of navigation information from the observed environmental information may be a computationally intensive task that is beyond the capability of processors found within inertial navigation systems (INS). Accordingly, an image processing device typically acquires the observed environmental information from the imaging sensors and additional navigation information from an INS to calculate a reliable navigation solution.

In a typical operational scenario, the INS outputs raw inertial data, position, velocity and attitude, which a coupled device accepts as input. A map building function in an application executing on a coupled device may use the raw inertial data to perform odometry and register the new point clouds to a map. However, in certain scenarios, GNSS signals may become obstructed, leading to a decrease in the accuracy of the data provided by the INS to the coupled device. Usually, though, environments that obscure GNSS signals are feature rich. Accordingly, the performance of odometry and other imaging techniques will have much slower position error growth rates than those experienced by the INS. Thus, the INS may have an interface for receiving information from the coupled device, wherein the INS uses the received information to reduce the error growth rate in GNSS denied environments or other situations that can benefit from the information provided by the coupled device.

FIG.1is a block diagram illustrating a conventional navigation system100that uses imaging sensors to acquire navigation information from an environment. As used herein, an imaging sensor may be any sensor capable of acquiring information from the environment through which an object (such as a vehicle, a personal electronic device, and other movable electronics) travels. In particular, an imaging sensor may acquire information from the environment through which a vehicle travels by detecting information reflected by objects within an observed field of view in the environment. For example, an imaging sensor may detect light, sound, electromagnetic radiation, and the like. In some implementations, an imaging sensor may emit signals towards objects in the environment through which the navigation system100travels and may detect portions of the signals that are reflected by surfaces in the environment.
For example, an imaging sensor may be a camera103or a lidar105. Additionally, the imaging sensor may be one or a combination of an electro-optical/infrared camera (EO/IR), radar, sonar, or other similar image capture system. In some embodiments, an imaging sensor may include multiple imaging sensors. Further, the multiple imaging sensors may be the same type of sensors (i.e., multiple cameras) or may implement multiple image sensing technologies. Additionally, the fields of view associated with each of the imaging sensors (camera103and/or lidar105) may be non-overlapping, overlapping, or substantially identical. Depending on the sensor type, information acquired from multiple sensors having overlapping fields of view may be subsequently processed to acquire three-dimensional descriptions of objects in the observed environment.

When capturing information from the environment observed by the imaging sensors, the imaging sensors may capture multiple frames of image data describing the environment. Generally, a frame of image data contains information describing features within the observed environment. The information can be extracted and matched to similar information in other frames acquired at different times to determine the relative position and orientation of the imaging sensor and the attached navigation system100within the environment. An image frame captured by the imaging sensors may be characterized by a two-dimensional grid of pixels, a three-dimensional point cloud, statistical descriptors, or other types of information that could capably describe objects within an environment for subsequent comparisons. For example, a feature within an image frame may be a collection of pixels or points that are distinguishable from the surrounding pixels. The features may be points having particular relationships to neighbors, planes, textures, statistical distributions, and the like. Generally, identified features described in the image frame correlate to objects in the environment. As discussed above, features found in multiple image frames may be tracked either by identifying the collections of pixels or points as the same objects, or by estimating the position of features using measurements from systems other than the imaging sensors.

The image frames captured by the imaging sensors may be analyzed by a processing device101. As used herein, one or more computational devices, such as the processing device101or other processing unit, used in the systems and methods described in the present disclosure may be implemented using software, firmware, hardware, circuitry, or any appropriate combination thereof. The one or more computational devices may be supplemented by, or incorporated in, specially-designed application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some implementations, the one or more computational devices may communicate through an additional transceiver with other computing devices outside of the navigation system100. The one or more computational devices can also include or function with software programs, firmware, or other computer readable instructions for carrying out various process tasks, calculations, and control functions used in the present methods and systems. The present methods may be implemented by computer executable instructions, such as program modules or components, which are executed by the at least one computational device.
Generally, program modules include routines, programs, objects, data components, data structures, algorithms, and the like, which perform particular tasks or implement particular abstract data types. Instructions for carrying out the various process tasks, calculations, and generation of other data used in the operation of the methods described herein can be implemented in software, firmware, or other computer readable instructions. These instructions are typically stored on any appropriate computer program product that includes a computer readable medium used for storage of computer readable instructions or data structures. Such a computer readable medium can be any available media that can be accessed by a general purpose or special purpose computer or processing unit, or any programmable logic device. Suitable computer readable storage media may include, for example, non-volatile memory devices including semi-conductor memory devices such as Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory devices; magnetic disks such as internal hard disks or removable disks; or any other media that can be used to carry or store desired program code in the form of computer executable instructions or data structures.

The processing device101may be a computational device, such as the computational devices described above. Upon receiving image frames of data from the imaging sensors, such as camera103and/or lidar105, the processing device101may extract feature descriptors from the data provided by the imaging sensors. For example, extracted feature descriptors may include points that define planes, points, orientations of the various points and planes, statistical descriptions, histograms, image intensity, reflectivity of observed surfaces, and the like. Using the information acquired by the imaging sensors for multiple images acquired at different times, the processing device101may also execute feature matching algorithms for the features identified in the received image frames. The processing device101may calculate changes in relative orientation and relative translation with respect to the features based on differences between the feature descriptors in separate images for a matched feature. The processing device101may use information that describes matched features in the frames of image data to generate odometry information.

Additionally, the navigation system100may store information used and produced by the processing device101that describes the tracked features from multiple image frames acquired at different times by the imaging sensors. In some embodiments, a memory unit, in communication with or part of the functionality ascribed to the processing device101and/or imaging sensors, may store information describing extracted features from the image frames. As the imaging sensors capture sequential image frames, one or more features identified within an image frame may correspond to features identified in previously and/or subsequently acquired image frames.
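As a generic illustration of the extract-and-match step described above, the Python sketch below uses OpenCV's ORB descriptors with a brute-force matcher. The descriptor type, matcher, and the max_matches cap are assumptions for the sketch, not the specific feature pipeline of this system.

```python
import cv2
import numpy as np

def match_features(img1, img2, max_matches=500):
    """Extract feature descriptors from two grayscale frames and match them.

    Returns two (N, 2) arrays of matched pixel coordinates, one per frame.
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    matches = matches[:max_matches]  # keep the strongest matches
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    return pts1, pts2
```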
In certain embodiments, the navigation system100may also include a GNSS receiver107. The GNSS receiver107may include at least one antenna that receives satellite signals from GNSS satellites. A GNSS satellite, as used herein, may refer to a space satellite that is part of a global navigation satellite system that provides autonomous geo-spatial positioning with global coverage. Generally, a GNSS receiver107receives line-of-sight time signals from GNSS satellites and calculates a geo-spatial position based on the time signals received from multiple GNSS satellites. Examples of GNSS systems may include the Global Positioning System (GPS) maintained by the United States government, the Galileo system maintained by the European Union (EU) and European Space Agency (ESA), and the BeiDou navigation system maintained by China, among other navigation systems maintained by various national governments and political entities.

In certain embodiments, the processing device101or other computational device may be coupled to the GNSS receiver107and may receive pseudorange measurements associated with the separate GNSS satellites within the line of sight of the GNSS receiver107. When the processing device101receives measurements from four or more satellites, the processing device101may calculate location information for the navigation system100anywhere on or near the Earth. In particular, during operation, the GNSS receiver107may extract the position, velocity and time (PVT) from the signals received from visible GNSS satellites and provide the pseudorange measurements to the processing device101. The computational device may derive PVT information for the navigation system100.

In certain embodiments, the GNSS receiver107may provide pseudorange measurements, or position information calculated therefrom, to the processing device101, and the processing device101may fuse the relative position information based on measurements from the imaging sensors with the absolute position information acquired from the GNSS receiver107to provide a navigation output113. The navigation output113may include an updated map, a georeferenced point cloud, notifications on obstacles to avoid, a planned path, and other useful navigation information. Because information is fused, when the navigation system100passes through GNSS denied environments, the navigation system100may use previously received GNSS measurements fused with previously received odometry information from the imaging sensors to provide current position estimates based on recently received information from the imaging sensors. Additionally, the processing device101may provide pseudorange corrections to adjust the pseudorange measurements from the GNSS receiver107based on calculated estimates from the processing device101.
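The position calculation from pseudoranges can be illustrated with a textbook iterative least-squares solve. The sketch below assumes ECEF satellite positions and at least four pseudoranges, and jointly estimates the receiver position and clock bias; it is a standard approach shown for context, not drawn from this system's implementation.

```python
import numpy as np

def solve_position(sat_pos, pseudoranges, iters=10):
    """Gauss-Newton solve for receiver position and clock bias.

    sat_pos: (N, 3) ECEF satellite positions in meters, N >= 4.
    pseudoranges: (N,) measured pseudoranges in meters.
    Returns (position_ecef, clock_bias_meters).
    """
    x = np.zeros(4)  # [px, py, pz, clock bias expressed in meters]
    for _ in range(iters):
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)
        predicted = ranges + x[3]
        # Geometry matrix: unit vectors from satellites to receiver,
        # plus a column of ones for the clock-bias partial derivative.
        H = np.hstack([(x[:3] - sat_pos) / ranges[:, None],
                       np.ones((len(ranges), 1))])
        dx, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
        x += dx
    return x[:3], x[3]
```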
As illustrated inFIG.1, the imaging sensors may include a lidar105and a camera103. The lidar105and the camera103, in conjunction with the processing device101, may respectively perform lidar odometry and camera odometry. As used herein, the lidar105may be a sensor that uses a laser to measure distances to nearby objects. The measured distances provided by the lidar105may be used to perform odometry, high-fidelity 2D and 3D map making, vehicle state-estimation, real-time path planning and obstacle avoidance, among other navigation operations. When performing lidar related operations, the navigation system100may perform a process referred to herein as “point set registration” or “point cloud registration.” The lidar105may provide measurements as three dimensional points in relation to the navigation system100. The lidar105or processing device101may group points or sets of points acquired from the same lidar scan into point cloud sets.

Point cloud registration may be the process of determining how one point cloud set acquired from a lidar scan at a first time relates to another point cloud set subsequently acquired from another lidar scan at a second time. Specifically, the processing device101or lidar105may calculate a transformation that aligns the points in different point cloud sets. In some embodiments, the performance of point cloud registration may include two steps. A first step in performing point cloud registration may begin by determining whether there are matching points in different sets and, if so, determining which points are the same between the different sets. A second step in performing point cloud registration may include estimating the transformation (rotation and translation) required to align the matching points.

Point cloud registration may be a computationally intensive process, and the computational burden on the processing device101grows with the number of points measured by the lidar105. Accordingly, efforts are made to reduce the computational burden while maintaining accuracy. For example, some efforts attempt to reduce the number of points which require matching between point cloud sets. Some efforts randomly reduce the number of points in the sets of points. Other efforts attempt to extract features like lines, planes, and surfaces in an attempt to reduce the number of points in a set from millions to hundreds or less.

In some embodiments, the use of a lidar105may provide advantages that include accurate range measurements, accurate odometry in position and attitude, widely available high-fidelity maps, and insensitivity to light conditions. However, the use of a lidar105may be subject to several disadvantages, which may include computational intensity; large amounts of physical data storage; size, power consumption, and cost; the possibility of losing track during fast rotations and translations; a laser range that is too short for some applications; and the requirement that different point clouds have matching features.
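The second registration step, estimating the aligning rotation and translation once point correspondences are known, is commonly solved in closed form with a singular value decomposition. The Python sketch below shows that standard approach under the assumption that matched point pairs are already available; it is illustrative, not this system's specific registration algorithm.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t aligning matched 3-D points.

    src, dst: (N, 3) arrays of already-matched points from two scans.
    Returns (R, t) such that dst ~= src @ R.T + t in a least-squares sense.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t
```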
In similar embodiments, the navigation system100may perform visual odometry based on measurements acquired from the camera103. As used herein, visual odometry may refer to the process of estimating the motion of an object (e.g., a vehicle, human, robot, and the like) using only the input of one or more cameras103attached to the object. The processing device101may analyze multiple images, acquired by the camera103, to determine a transformation (translation, rotation and scale) between the images, in a similar manner to the processes performed with lidar odometry as described above. In other implementations, features may be extracted from the images and tracked over time. However, direct methods for acquiring navigation information also exist. For example, the measurements presented to the visual odometry algorithms may include pixel representations of identifiably unique features. A camera model stored in memory may be used to estimate a three dimensional location of the feature in the real world. The rotation and translation may then be solved for, up to an unknown scale factor for translation. If the camera103includes two or more cameras that are separated from one another by a known distance, then the scale of the translation may be determined. Alternatively, if only a single camera is used, then the scale of the translation may be determined using information acquired from other sources of navigational information.

In some embodiments, the advantages of using one or more cameras103to perform visual odometry may include the relatively inexpensive cost of cameras, small size, light weight, and the ability to provide data at fast rates. However, the use of cameras103to perform visual odometry is also subject to disadvantages, including that high quality cameras may be expensive, especially when multiple cameras are used to provide stereo images, which also requires time synchronization between the images. Further disadvantages include a camera calibration that may subsequently change with temperature and time, the need for additional information from other sensors to support monocular implementations, performance that is affected by the lighting of an environment, an inability to track fast rotations and translations, and the requirement that different images have matching features to determine motion based on changes in the two images.
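For the monocular case described above, a common way to recover the frame-to-frame rotation and unit-scale translation from matched pixel coordinates is via the essential matrix. The OpenCV-based sketch below assumes matched points (for example, from the matching sketch shown earlier) and a calibrated 3x3 intrinsic matrix K; it is one standard formulation, not necessarily the method used here.

```python
import cv2

def relative_pose(pts1, pts2, K):
    """pts1, pts2: (N, 2) matched pixel coordinates from two frames.

    Returns rotation R and a unit-norm translation t; the translation
    scale is unobservable with a single camera, as noted above, and must
    come from stereo separation or another sensor.
    """
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```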
As illustrated, the navigation system100includes both a camera103and a lidar105. The processing device101may use the measurements provided by the camera103and the lidar105to combine visual and lidar odometry, thereby improving the performance of both. For example, the processing device101may perform a lidar odometry and mapping (LOAM) algorithm to combine the measurements from the camera103with the measurements from the lidar105. Combinations of lidar odometry and visual odometry may provide benefits where there is aggressive motion and environments lacking visual features. For instance, when measurements from the camera103are unable to provide a state estimate, the lidar105may provide a state estimate, and vice versa. While the combination of measurements from the lidar105and the camera103may address problems related to featureless environments, the processing device101may still lose track when the navigation system100experiences excessive motion.

In contrast to the system100inFIG.1, an inertial measurement unit or inertial navigation system (INS) may be used to supplement the information acquired from the imaging sensors and the GNSS receiver107. For example,FIG.2is a block diagram illustrating a conventional navigation system200with an integrated inertial measurement unit (IMU)209. As illustrated, the navigation system200may include imaging sensors, such as a camera203, a lidar205, or other types of imaging sensors. Also, the navigation system200may include a processing device201. The processing device201functions with the camera203and the lidar205to perform visual and lidar odometry in a similar manner to that described above with respect to the camera103, the lidar105, and the processing device101inFIG.1. Additionally, the navigation system200may include a GNSS receiver207, wherein the GNSS receiver207may provide pseudorange measurements for combining with the odometry calculations in a similar manner to the GNSS receiver107inFIG.1.

As discussed above, the performance of visual odometry and lidar odometry is limited. In particular, performance may be limited when the navigation system200experiences aggressive motion, moves through a featureless environment, or both. To account for aggressive motion and featureless environments, the navigation system200may include additional sensors that can provide navigational measurements when the navigation system200experiences aggressive motion and/or travels through a featureless environment. For example, the navigation system200may include an IMU209.

As referred to herein, the IMU209may be a system that provides raw inertial measurements of the motion experienced by the navigation system200. For example, the IMU209may include a series of gyroscopes, accelerometers, and/or magnetometers that measure the acceleration and rotation of the navigation system200along one or more axes. Using dead-reckoning, processing devices (such as processing device201) may create a state estimate of the position, velocity, and attitude of the object tracked by the navigation system200and identify the location of the navigation system200on a map. In some implementations, the processing device201may use the state estimates calculated from the inertial measurements provided by the IMU209to address the limitations of the lidar and visual odometry calculated as described above.

Additionally, in some implementations, the imaging sensors and processing device201may form an odometry measuring system that is developed independently from the IMU209, where the IMU209may provide the inertial measurements to the odometry system. In such implementations, the applications executed by the processing device201may control how to use the provided inertial measurements when providing a navigation output213. For example, applications executing on the processing device201may make simplifying assumptions based on the quality of the measurements provided by the IMU or based on the inertial measurements that are used to aid the odometry calculations. For example, an application executing on the processing device201that is associated with odometry calculations may neglect a Coriolis acceleration term in a velocity update equation and also neglect earth rate in an attitude update equation. Also, the application may control a data rate and coordinate system in which the angular velocity and linear acceleration are numerically integrated.

Using inertial measurements from an IMU209to aid the odometry calculation may improve the initial conditions used to execute point cloud registration functions. Also, the inertial measurements may help account for motion that occurs between image captures by the camera203and scans by the lidar205. However, the use of the IMU209may also be subject to various disadvantages. For example, applications executing on the processing device201may consume computational resources to perform inertial navigation functions in combination with the performance of map-making, autonomous vehicle operations and/or other application purposes. Further, the processing device201produces a single navigation solution through the navigation output213. Thus, redundancy and quality checks may be difficult to perform.
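The simplified update just described (neglecting the Coriolis term and earth rate) might look like the following Python sketch. The local-level NED convention, the gravity constant, and the small-angle attitude update are assumptions made for illustration; a production mechanization would include the neglected terms.

```python
import numpy as np

def strapdown_update(C_bn, v_n, p_n, accel_b, gyro_b, dt, g=9.80665):
    """One simplified strapdown step integrating body-frame IMU samples.

    C_bn: 3x3 body-to-navigation rotation matrix; v_n, p_n: velocity and
    position in a local-level NED frame (z axis pointing down).
    """
    # Attitude update from the gyro increment (small-angle approximation,
    # earth rate neglected as in the text above).
    rot = gyro_b * dt
    skew = np.array([[0.0, -rot[2], rot[1]],
                     [rot[2], 0.0, -rot[0]],
                     [-rot[1], rot[0], 0.0]])
    C_bn = C_bn @ (np.eye(3) + skew)
    # Velocity update: rotate specific force into the nav frame and add
    # gravity (Coriolis term neglected as in the text above).
    a_n = C_bn @ accel_b + np.array([0.0, 0.0, g])
    v_n = v_n + a_n * dt
    p_n = p_n + v_n * dt
    return C_bn, v_n, p_n
```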
Also, the image sensor based component 310 may receive inertial measurements from the INS based component 320. Accordingly, the INS based component 320 may include an INS 311, and may further include one or more aiding sensors 304. As illustrated, the INS 311 may be a computational device that performs inertial measurements and calculates navigation parameters without the need for external references. For example, the INS 311 may include a computational device, motion sensors, and rotation sensors to calculate the position, orientation, and velocity of a moving object using dead reckoning. The INS 311 may provide the navigation parameters to the processing device 301 for fusing with the odometry measurements provided by the imaging sensors (the camera 303 and the lidar 305). To provide the navigation parameters, the INS 311 may track the position and orientation of an object relative to a known starting point, orientation, and velocity. The INS 311 may further include an IMU that provides inertial measurements in a manner similar to the IMU 209 described above with respect to FIG. 2. A computational device within the INS 311 may process the signals provided by the IMU to track the position and orientation of an object. As illustrated, the INS 311 may provide the navigation parameters that result from the computations to the processing device 301 through an interface 315 on the INS 311.

The navigation parameters provided through the interface 315 from the INS 311 to the processing device 301 may include an earth referenced velocity, an earth referenced position, an earth referenced acceleration, a body to earth angular position, a body to earth angular velocity, a body linear acceleration, a body angular velocity, and the like. The processing device 301 may use the provided navigation parameters to calculate the navigation output 313, which is similar to the navigation output 113 described above in FIG. 1.

A problem inherent in inertial navigation systems, such as the INS 311, is that the performed calculations are subject to integration drift. For example, small errors in the measurement of acceleration and angular velocity are integrated into progressively larger errors in velocity, which are compounded into still greater errors in position. Since new position estimates are calculated from previously calculated position estimates and the measured acceleration and angular velocity, the errors accumulate in proportion to the time since the initial position was used as an input. Accordingly, the INS 311 may use other aiding sensors 304 to bound the errors caused by integration drift.

In certain embodiments, the aiding sensors 304 may include a magnetometer. The magnetometer may provide a measure of magnetic heading. When combined with a magnetic declination map, the magnetic heading measurement may be used to estimate a true heading. The INS 311 may use the true heading estimate to initialize heading for the IMU and as an aiding source during operation. Additionally, the aiding sensors 304 may include a barometric altimeter that provides altitude and altitude change measurements. The INS 311 may use measurements produced by the barometric altimeter to stabilize the vertical channel measured by the INS 311. Also, the aiding sensors 304 may include a wheel encoder as an aiding source to acquire quadrature encoder measurements from wheels and/or steering wheels when the navigation system is associated with a vehicle.
In further embodiments, the aiding sensors 304 may include a radar that transmits a radio signal that is reflected off the environment to acquire range and/or range rate measurements. Moreover, the aiding sensors 304 may include a sonar that transmits an acoustic signal that is reflected off the environment to acquire range and/or range rate measurements. Also, the aiding sensors 304 may include a transceiver used for radio navigation, where a radio signal is exchanged between two transceivers to measure range and/or range rate to a radio transceiver located at a different location from the navigation system. Additionally, the aiding sensors 304 may include ultrawide band ranging radios and acoustic navigation to measure range and/or range rate. Further, the aiding sensors 304 may include an airspeed sensor to measure the airspeed, angle of attack, and sideslip of an aircraft. The airspeed sensor may be used as an aiding sensor when the velocity of the airmass can be estimated. In certain embodiments, the aiding sensors 304 may include a GNSS receiver, where the GNSS receiver provides pseudorange measurements from four or more satellites along with their positions, facilitating the calculation of the position of the navigation system. The INS 311 may use the GNSS position to bound the integration drift.

The use of aiding sensors for an INS 311 that provides navigational information to the processing device 301 may provide several advantages. Among them, the processing device in the INS 311 performs some processing of inertial measurements, GNSS pseudoranges, and other measurements from aiding sources, thus freeing the processing device 301 to dedicate more resources to the computationally intensive lidar and visual odometry computations. Also, the INS 311 may provide a navigation solution that can be used for redundancy and quality checks against the navigation solutions calculated using the lidar and visual odometry. However, the use of the INS 311 may present some disadvantages. For example, when the navigation system 300 passes through a GNSS denied environment or experiences a GNSS receiver outage, the error growth in the measurements presented by the INS 311 may be uncontrolled. Accordingly, the navigation parameters provided through the interface 315 by the INS 311 may be inaccurate and unusable by the processing device 301 when registering the odometry measurements to a map coordinate system.

FIG. 4 is a block diagram illustrating a navigation system 400 that includes an image sensor based component 410 and an INS based component 420. The image sensor based component 410 and the INS based component 420 may function in a manner similar to the image sensor based component 310 and the INS based component 320 in FIG. 3. However, the INS 402 in the INS based component 420 may have an input interface 417 for receiving feedback from a processing device 401 in the image sensor based component 410, where the processing device 401 may be associated with the calculation of visual and lidar odometry substantially as described above. As illustrated, the processing device 401, the camera 403, the lidar 405, and the INS 402 may function substantially as described above. Accordingly, the INS 402 may include a computational device that calculates navigation parameters from an IMU and other aiding sources 404 and provides the navigation parameters through an interface 415 to the processing unit 401.
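To make the integration-drift problem discussed above concrete, here is a toy numeric illustration. The constant 0.01 m/s² accelerometer bias is an assumed value, not one taken from the disclosure; the point is only the quadratic growth of position error that motivates the aiding sources.

```python
import numpy as np

# Toy illustration: a constant 0.01 m/s^2 accelerometer bias, double
# integrated, grows roughly as 0.5 * b * t^2 in position.
dt, bias = 0.01, 0.01
t = np.arange(0.0, 60.0, dt)
vel_err = np.cumsum(np.full(t.size, bias) * dt)   # integrate bias once: velocity error
pos_err = np.cumsum(vel_err * dt)                 # integrate again: position error
print(f"position error after {t[-1]:.0f} s: {pos_err[-1]:.2f} m")
# About 18 m after one minute, which is why an unaided INS needs aiding
# sources (magnetometer, altimeter, GNSS, or the odometry feedback here).
```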
Typically, the processing device 401 performs more computationally intensive tasks than the computational device on the INS 402, and the processing device 401 and the computational device of the INS 402 may have processing capabilities commensurate with the computational intensity of the tasks they perform. However, any processor that can capably perform the requested computations may function as the processing unit 401 or the computational device on the INS 402. In some embodiments, in addition to including an interface 415 for providing navigation parameters to the processing device 401, the INS 402 may also include an input interface 417 for receiving navigation parameters from the processing device 401. Accordingly, the INS 402 may provide navigation parameters through the interface 415 to the processing device 401 that include earth referenced position, earth referenced velocity, earth referenced acceleration, body to earth angular position, body to earth angular velocity, body linear acceleration, body angular velocity, and the like. Also, the processing device 401 may provide navigation parameters such as linear position in map coordinates, angular position in map coordinates, change in position in body coordinates, change in attitude in body coordinates, and the like.

In certain embodiments, the INS 402 may function as an aiding source for the processing device 401 through the interface 415. For example, the INS 402 may provide navigation parameters through the interface 415, where the processing unit 401 is coupled to the interface 415 to accept the provided navigation parameters as inputs. The processing device 401 may use the received navigation parameters when performing various functions. For example, the processing device 401 may execute a map building function that uses the received navigation parameters when performing odometry and registering new point clouds or image data acquired from the lidar 405 and/or the camera 403. Also, the map building function may use the received navigation parameters to register point clouds or image data to a georeferenced map. The processing unit 401 may perform other functions to provide the navigation output 413, where the navigation output 413 is substantially as described above with respect to the navigation outputs 113, 213, and 313.

In alternative embodiments, the processing unit 401 may function as an aiding source for the INS 402 through the input interface 417. For example, the INS 402 may receive the linear and angular position in map coordinates or the changes in position and attitude in body coordinates. The INS 402 may use the received data to bound the drift that may occur in the inertial measurements during operation. For example, when the object associated with the navigation system 400 passes through a GNSS obstructed environment, there is a high probability that the object is passing through a feature rich environment, which increases the accuracy of the odometry measurements produced by the processing unit 401. Conversely, without GNSS signals or additional aiding sources, position and attitude errors in the navigation parameters produced by the INS 402 may increase. Accordingly, the processing unit 401 may provide odometry measurements to the INS 402 through the input interface 417, which the INS 402 may then use to reduce the error growth rate in the navigation parameters it produces. In some embodiments, to implement the input interface 417, the processing unit 401 may compute a change in linear and angular position over a time period.
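The change-in-pose computation just mentioned might look like the following sketch, which mirrors the delta position and delta attitude measurement equations given further below. The function name and the use of NumPy rotation matrices are illustrative assumptions.

```python
import numpy as np

def delta_pose_message(r_p, C_p, r_c, C_c):
    """Form change-in-pose feedback from two map-frame pose samples.

    r_p, C_p : position (3-vector) and body-to-map rotation at time t_k - dt
    r_c, C_c : position and body-to-map rotation at the current time t_k
    Returns the delta position expressed in the sensor body frame and the
    rotation from the past body frame to the current body frame.
    """
    delta_r_body = C_c.T @ r_c - C_p.T @ r_p   # map-frame positions taken into the body frame
    delta_C = C_c.T @ C_p                      # past body frame -> current body frame
    return delta_r_body, delta_C
```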
In some implementations, the time period may be a fixed interval, but in other implementations the time period between computations may be variable. The processing unit 401 may then communicate the calculated measurements to the INS 402 across the input interface 417 through a message that is understandable to the INS 402. When the INS 402 receives the message from the processing unit 401, the INS 402 may use the information within a Kalman filter, in combination with other measurements, to slow the development of errors in the inertial measurements. In certain embodiments, during initialization of the system, a digital message may be defined for allowing a user or other system to pass sensor installation data through the input interface 417. Additionally, during operation, the input interface 417 may define messages for receiving information from other devices such as the processing unit 401. For example, the interface may provide for receiving the following messages: $r(t)_{M \to P}^{M}$ and $q(t)_{P}^{M}$. The term $r(t)_{M \to P}^{M}$ may refer to the position of the imaging sensor body frame with respect to a map frame at a particular time. The term $q(t)_{P}^{M}$ may refer to the attitude of the imaging sensor body frame with respect to the map frame at a particular time.

In some embodiments, when the INS 402 receives a message through the input interface 417, the computational device of the INS 402 may apply a time stamp indicating the time of reception to the message and place the message in a circular buffer. The INS 402 may retrieve two messages from the circular buffer which have not yet been processed and which fall within a look-back window. The contents of the messages may be referred to as follows. The past data, valid at time $t_p = t_k - \Delta t$: $r(t_p)^{M}$ and $q(t_p)_{P}^{M}$. The current data, valid at time $t_c = t_k$: $r(t_c)^{M}$ and $q(t_c)_{P}^{M}$. Additionally, the following variables may be defined: $C(t)_{P}^{M} = f(q(t)_{P}^{M})$, the rotation matrix which takes a vector from the body frame to the map frame; and $C(t)_{M}^{P} = (C(t)_{P}^{M})^{T}$, the rotation matrix which takes a vector from the map frame to the body frame.

In additional embodiments, the angular position of the point cloud body frame with respect to the map frame, $C_{P}^{M}$, may be assumed to be constant and not vary with time. Additionally, the measurement provided through the interface 417, $\hat{C}_{P}^{M} = C_{P}^{M} + \delta C_{P}^{M}$, may be an estimate of the angular position of the point cloud body frame with respect to the map frame. Algorithms executing within the INS 402 may assume that the estimate of the angular position is in error. The error may be modeled using the following equation: $\delta C_{P}^{M} = -\{\mu^{M} + \eta^{M}\} C_{P}^{M}$, i.e., the error in the measurement includes a time-correlated bias and noise. Also, the position of the point cloud frame origin with respect to the map frame, coordinatized in the map frame, may be represented as $r_{M \to P}^{M}$. Additionally, the measurement provided through the interface 417 may be an estimate of this position, represented as $\hat{r}_{M \to P}^{M}$. Further, the angular position of the body frame (P) of the imaging sensors with respect to the body frame (B) of the INS 402 may be represented as $C_{P}^{B}$. Moreover, the estimate of this angular position may be represented as $\hat{C}_{P}^{B} = C_{P}^{B} + \delta C_{P}^{B}$. Also, misalignment errors of the imaging sensor reference frame relative to the INS body frame may be represented as $\delta C_{P}^{B} = -\{\beta^{B}\} C_{P}^{B}$, and the error may be assumed constant in the INS body frame. Also, $C_{B}^{L}$ takes the body frame (B) to the local frame (L), $C_{L}^{E}$ takes the local frame (L) to the earth frame (E), and $C_{B}^{E} = C_{L}^{E} C_{B}^{L}$.
In certain embodiments, given the past and current position data defined above, namely $r(t_p)^{M}$, $C(t_p)_{P}^{M}$, $r(t_c)^{M}$, and $C(t_c)_{P}^{M}$, the delta position measurement in the customer body frame is formed as follows: $\Delta r^{P} = C(t_c)_{M}^{P}\, r(t_c)^{M} - C(t_p)_{M}^{P}\, r(t_p)^{M}$. To be used in the Kalman filter of the INS 402, it must be put into the ECEF frame as follows: $y_{\Delta pos} = C_{B}^{E} C_{P}^{B} [\Delta r^{P}]$. This measurement is linearized and presented to the INS Kalman filter. In further embodiments, the delta attitude measurement may be the change in angular position of the body frame of the imaging sensors. The delta attitude measurement may be defined as follows: $C_{P(t-\Delta t)}^{P(t)} = C_{M}^{P(t)} C_{P(t-\Delta t)}^{M} = (C(t_c)_{P}^{M})^{T} C(t_p)_{P}^{M}$. To be used in the Kalman filter of the INS 402, the delta attitude measurement may be put into the ECEF frame: $y_{\Delta Att} = C_{B}^{E} C_{P}^{B}\, C_{P(t-\Delta t)}^{P(t)}\, (C_{P}^{B})^{T} (C_{B}^{E})^{T}$. This measurement may be linearized and presented to the Kalman filter of the INS 402.

FIG. 5 is a flowchart diagram illustrating an exemplary method 500 for providing feedback to an inertial navigation system through an input interface. In certain embodiments, method 500 proceeds at 501, where one or more measurements are received from an external system across an input interface. Additionally, method 500 proceeds at 503, where delta position and delta attitude measurements are identified from the received one or more measurements. As used herein, the delta position and delta attitude may refer to changes in position and attitude over a period of time. Method 500 then proceeds at 505, where inertial estimates are calibrated based on the delta position and the delta attitude measurements.

EXAMPLE EMBODIMENTS

Example 1 includes a device comprising: an inertial navigation system, the inertial navigation system comprising: one or more inertial sensors; an input interface for receiving measurements, wherein the measurements comprise at least one of: delta attitude and/or delta position measurements from an external system; and position and attitude information in an arbitrary map frame; and a computation device that is configured to calibrate the errors from the one or more inertial sensors using the received measurements.

Example 2 includes the device of Example 1, wherein the inertial navigation system receives the delta attitude and/or delta position measurements and the position and attitude information in a defined message format.

Example 3 includes the device of any of Examples 1-2, wherein the inertial navigation system applies a time stamp to the received measurements.

Example 4 includes the device of any of Examples 1-3, wherein the inertial navigation system stores the received measurements in a circular buffer.

Example 5 includes the device of any of Examples 1-4, wherein the inertial navigation system further receives navigation information from a plurality of aiding sources, wherein the computation device uses a Kalman filter to combine the received navigation information with the received measurements.

Example 6 includes the device of any of Examples 1-5, wherein the inertial navigation system receives an initial configuration through the input interface.

Example 7 includes the device of any of Examples 1-6, wherein the computation device calibrates the errors using the received measurements when the computation device determines that the inertial navigation system is in a GNSS denied environment.
Example 8 includes a method comprising: receiving one or more measurements from an external system across an input interface, wherein the one or more measurements are related to the position and attitude of the external system within a local environment; identifying delta position and delta attitude measurements from the received one or more measurements; and calibrating inertial estimates based on the delta position and the delta attitude measurements. Example 9 includes the method of Example 8, wherein receiving the one or more measurements further comprises receiving the one or more measurements in a defined message format. Example 10 includes the method of any of Examples 8-9, further comprising applying a time stamp to the received one or more measurements. Example 11 includes the method of any of Examples 8-10, further comprising storing the received one or more measurements in a circular buffer. Example 12 includes the method of any of Examples 8-11, further comprising: receiving navigation information from a plurality of aiding sources; and using a Kalman filter to combine the received navigation information with the received one or more measurements. Example 13 includes the method of any of Examples 9-12, further comprising receiving an initial configuration through the input interface. Example 14 includes the method of any of Examples 8-13, further comprising: determining that reliable GNSS measurements are unavailable; and calibrating the inertial estimates based on the determination. Example 15 includes a system comprising: an inertial navigation system coupled to an external device, the inertial navigation system comprising: one or more inertial sensors configured to provide inertial measurements of motion experienced by the system; an input interface configured to receive one or more measurements through one or more messages defined for communications with the inertial navigation system from the external device; a computation device configured to acquire delta position measurements and delta attitude measurements for the system from the one or more measurements, wherein the processing unit calibrates the inertial measurements based on the delta position measurements and the delta attitude measurements. Example 16 includes the system of Example 15, wherein the inertial navigation system further receives navigation information from a plurality of aiding sources, wherein the computation device uses a Kalman filter to combine the received navigation information with the received one or more measurements. Example 17 includes the system of any of Examples 15-16, wherein the inertial navigation system applies a time stamp to the one or more measurements. Example 18 includes the system of any of Examples 15-17, wherein the inertial navigation system stores the one or more measurements in a circular buffer. Example 19 includes the system of any of Examples 15-18, wherein the inertial navigation system receives an initial configuration through the input interface. Example 20 includes the system of any of Examples 15-19, wherein the processing unit calibrates the errors using the received measurements when the computation device determines that the inertial navigation system is in a GNSS denied environment. Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement, which is calculated to achieve the same purpose, may be substituted for the specific embodiments shown. 
Therefore, it is manifestly intended that this invention be limited only by the claims and the equivalents thereof.
DETAILED DESCRIPTION

Referring to FIG. 1, an embodiment of a lawn mower 1 according to the disclosure is illustrated. The lawn mower 1 is adapted to cut grass. The lawn mower 1 includes a processing module 11, a driving module 12, a cutting module 13, a control module 14, and a main body (not shown) accommodating the processing module 11, the driving module 12, the cutting module 13 and the control module 14. The control module 14, such as a microcontroller, is configured to control operation of the driving module 12 and the cutting module 13. More specifically, the driving module 12 exemplarily includes a plurality of wheels (not shown) and a driver, such as a motor (not shown). The driver is coupled to the wheels, and is electrically connected to the control module 14. The control module 14 controls the driving module 12 to drive the main body of the lawn mower 1 to proceed and rotate. The cutting module 13 exemplarily includes at least one blade (not shown), and is controlled by the control module 14 to cut grass. Since implementation of the physical structure of the lawn mower 1 has been well known to one skilled in the relevant art, detailed explanation of the same is omitted herein for the sake of brevity.

The processing module 11 includes a first positioning device 111, a second positioning device 112, an image capturing device 113, and a processor 114 electrically connected to the first positioning device 111, the second positioning device 112 and the image capturing device 113. The processor 114 may be implemented by a central processing unit (CPU), a microprocessor, a micro control unit (MCU), a system on a chip (SoC), or any circuit configurable/programmable in a software manner and/or hardware manner to implement the functionalities discussed in this disclosure. The image capturing device 113 is implemented by a camera mounted on the main body of the lawn mower 1. The image capturing device 113 is configured to perform image capturing to capture images of the surroundings in front of the lawn mower 1 so as to generate image data. In this embodiment, the image data includes a video, but is not limited thereto. For example, the image data may include a photo in other embodiments.

The first positioning device 111 is configured to receive an external positioning signal, and to perform positioning based on the external positioning signal. In this embodiment, the first positioning device 111 is implemented by a global positioning system (GPS) receiver which adopts the real time kinematic (RTK) technique and provides up to millimetre-level accuracy; the external positioning signal is a GPS signal. However, implementations of the first positioning device 111 and the external positioning signal are not limited to the disclosure herein and may vary in other embodiments. For example, in other embodiments, the first positioning device 111 may be implemented by a GPS receiver which does not adopt the RTK technique, or by a wireless-signal positioning device which performs positioning based on wireless signals such as Bluetooth signals or Wi-Fi signals.

The second positioning device 112 is configured to perform inertial measurement (IM), i.e., to measure a variation in movement of the processing module 11. In this embodiment, the variation in movement of the processing module 11 includes angular velocity and linear acceleration. The second positioning device 112 is further configured to perform positioning based on a result of the inertial measurement.
In this embodiment, the second positioning device 112 is implemented by an inertial measurement unit (IMU), which exemplarily includes a six-axis sensor. Specifically speaking, the six-axis sensor includes a three-axis accelerometer and a three-axis gyroscope. The second positioning device 112 is capable of measuring angular velocity and linear acceleration that are related to movement of the second positioning device 112 in three-dimensional space, and is capable of performing positioning based on the measurements of angular velocity and linear acceleration without relying on any external signal, so as to locate the second positioning device 112 with respect to a reference position. However, implementation of the second positioning device 112 is not limited to the disclosure herein and may vary in other embodiments. For example, in other embodiments, the second positioning device 112 may be implemented by an IMU including a nine-axis sensor, which includes a three-axis accelerometer, a three-axis gyroscope and a three-axis magnetometer. It should be noted that since positioning by the second positioning device 112 does not rely on any external signal, an error in a positioning result obtained by the second positioning device 112 may be larger than that in a positioning result obtained by the first positioning device 111. Moreover, the error in the positioning result obtained by the second positioning device 112 may accumulate and become large over time during positioning.

In this embodiment, the processing module 11 is switchable between a first recording mode and a second recording mode based on the signal strength of the external positioning signal (i.e., the GPS signal). When it is determined by the processing module 11 that the signal strength of the external positioning signal is greater than a predefined strength threshold, the processing module 11 switches to the first recording mode. On the other hand, when it is determined by the processing module 11 that the signal strength of the external positioning signal is not greater than the predefined strength threshold, the processing module 11 switches to the second recording mode.

When the processing module 11 operates in the first recording mode, the first positioning device 111 is configured to periodically perform positioning based on the external positioning signal so as to locate the processing module 11 and generate a plurality of coordinate sets. The coordinate sets thus generated are in chronological order, and are GPS coordinates in this embodiment. The first positioning device 111 is further configured to continuously provide the coordinate sets (referred to as a first group of coordinate sets) to the processor 114. The processor 114 is configured to generate a first path record based on a result (i.e., the first group of coordinate sets) of the positioning performed by the first positioning device 111, and to periodically update the first path record based on the latest coordinate set among the first group of coordinate sets. It should be noted that in this embodiment, operation of the second positioning device 112 is suspended when the processing module 11 operates in the first recording mode. However, such operation is not limited to being suspended, and the second positioning device 112 may have different operation states in the first recording mode in other embodiments.
When the processing module 11 operates in the second recording mode, the second positioning device 112 is configured to periodically perform positioning by means of inertial measurement (IM, i.e., measurement of a variation in movement of the processing module 11) so as to locate the processing module 11 and generate a plurality of coordinate sets. The coordinate sets thus generated are in chronological order, and the second positioning device 112 is configured to continuously provide the coordinate sets (referred to as a second group of coordinate sets) to the processor 114. The processor 114 is configured to generate a second path record based on a result (i.e., the second group of coordinate sets) of the measurement of the variation in movement performed by the second positioning device 112, and to periodically update the second path record based on the latest coordinate set among the second group of coordinate sets. Likewise, operation of the first positioning device 111 is suspended when the processing module 11 operates in the second recording mode. However, such operation is not limited to being suspended, and the first positioning device 111 may have different operation states in the second recording mode in other embodiments.

It should be noted that the first positioning device 111 is configured to, when it is determined by the processing module 11 that the signal strength of the external positioning signal increases from being not greater than the predefined strength threshold to being greater than the predefined strength threshold (i.e., the processing module 11 switching from the second recording mode to the first recording mode), perform positioning based on the external positioning signal so as to obtain a coordinate set of an exact position (PN′) of the processing module 11 (see FIG. 3). Subsequently, the processor 114 is configured to compute, based on the second path record and the coordinate set of the exact position (PN′) of the processing module 11, corrected path data related to the second path record, and to generate a movement path record based on the first path record and the corrected path data. Referring to FIGS. 5 and 6, the movement path record indicates a surrounding boundary (b) which defines and surrounds a working area (A).

It is worth noting that the determination as to whether the signal strength of the external positioning signal is greater than the predefined strength threshold may be made by the processor 114 or by the first positioning device 111. In one embodiment, the processor 114 obtains the signal strength of the external positioning signal via the first positioning device 111, and makes the aforementioned determination by comparing the signal strength of the external positioning signal with the predefined strength threshold. In one embodiment, the first positioning device 111 receives the external positioning signal, makes the aforementioned determination by comparing the signal strength of the external positioning signal with the predefined strength threshold, and transmits a result of the aforementioned determination to the processor 114.

Referring to FIGS. 2 and 4, an embodiment of a method of movement tracking according to the disclosure is illustrated. The method according to the disclosure is implemented by the processing module 11 described previously.
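A sketch of the threshold comparison and mode switch described above follows. The threshold value, the signal-strength unit, and all names are assumptions for illustration; the return value flags the moment at which the exact fix (PN′) should be taken.

```python
GPS_MODE, IMU_MODE = "first_recording_mode", "second_recording_mode"
STRENGTH_THRESHOLD = -140.0   # hypothetical GPS signal strength floor, dBm

class RecordingModeTracker:
    """Switch between GPS-based and IMU-based path recording."""

    def __init__(self):
        self.mode = GPS_MODE

    def update(self, signal_strength: float) -> bool:
        """Returns True on the switch back to GPS mode, i.e. the moment an
        exact GPS fix should be taken to correct the IMU-derived path."""
        new_mode = GPS_MODE if signal_strength > STRENGTH_THRESHOLD else IMU_MODE
        regained_gps = self.mode == IMU_MODE and new_mode == GPS_MODE
        self.mode = new_mode
        return regained_gps
```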
The method includes a tracking procedure as shown in FIG. 2, and a planning and navigation procedure as shown in FIG. 4. In this embodiment, the planning and navigation procedure is executed subsequent to the tracking procedure. In a scenario where grass in the working area (A) as exemplarily shown in FIGS. 5 and 6 is to be cut, the tracking procedure has to be executed by the processing module 11 first, so as to define the working area (A) in which the lawn mower 1 is to move around and cut grass. After the working area (A) has been specified, the planning and navigation procedure is executed by the processing module 11 such that the lawn mower 1 automatically moves all around the working area (A) and cuts grass therein. It should be noted that the tracking procedure should be executed while the lawn mower 1 is being operated to move by a user. For example, in a scenario where the working area (A) is a yard, the user has to first operate the lawn mower 1 to make a loop around the yard so as to allow the processing module 11 to execute the tracking procedure. It should be noted that in one embodiment, the tracking procedure is executed to define at least one obstacle region (e.g., a building, a pool, a tree, a flowerbed or the like) in the working area (A). In other words, the movement path record further indicates at least one obstacle boundary which defines an obstacle therein, and the obstacle boundary may later be taken into consideration in the planning and navigation procedure. In this way, the lawn mower 1 moving in the working area (A) will automatically avoid the obstacle.

Referring to FIG. 2, the tracking procedure includes steps S11 to S15 delineated below. Initially, it is assumed that at the moment the lawn mower 1 starts being operated by the user to move, the signal strength of the external positioning signal (i.e., the GPS signal) is greater than the predefined strength threshold. When the lawn mower 1 is activated and operated by the user to start moving, the processing module 11 determines whether the signal strength is greater than the predefined strength threshold. When it is determined that the signal strength of the external positioning signal is greater than the predefined strength threshold, the processing module 11 operates in the first recording mode, and step S11 is performed.

In step S11, the processor 114 of the processing module 11 generates the first path record that indicates a first movement path (t1) as shown in FIG. 3 based on the result of the positioning performed by the first positioning device 111 (hereinafter also referred to as GPS positioning), and keeps updating the first path record. In particular, the first path record contains a first group of coordinate sets which are generated by the first positioning device 111 based on the GPS positioning, which respectively correspond to positions that are sequentially arranged, and which cooperatively indicate the first movement path (t1). Then, it is assumed that the lawn mower 1 is operated by the user to move into a dead zone where the signal strength of the external positioning signal is not greater than the predefined strength threshold. When it is determined that the signal strength is not greater than the predefined strength threshold, the processing module 11 switches to the second recording mode, and the flow of procedure proceeds to step S12.
In step S12, the processor 114 of the processing module 11 in the second recording mode generates the second path record that indicates a second movement path (t2) following the first movement path (t1) as shown in FIG. 3 based on the result of the positioning performed by the second positioning device 112 (hereinafter also referred to as IM positioning), and keeps updating the second path record. In particular, the second path record contains a second group of coordinate sets which are generated by the second positioning device 112 based on the IM positioning, which respectively correspond to positions that are sequentially arranged, and which cooperatively indicate the second movement path (t2). In the following description, it is assumed that there is a total of N coordinate sets in the second group, where N is a positive integer greater than one.

Subsequently, it is assumed that the lawn mower 1 is operated by the user to move out of the dead zone. When it is determined by the processing module 11 that the signal strength of the external positioning signal increases from being not greater than the predefined strength threshold to being greater than the predefined strength threshold, the processing module 11 switches from the second recording mode to the first recording mode, and the flow of procedure proceeds to step S13.

In step S13, the first positioning device 111 performs positioning based on the external positioning signal (i.e., performing the GPS positioning) so as to obtain the coordinate set of the exact position (PN′) of the processing module 11 based on the GPS positioning. The first positioning device 111 further periodically performs the GPS positioning so as to locate the processing module 11 to allow the processor 114 to generate a third path record that indicates a third movement path (t3) starting from the exact position (PN′). In particular, the third path record contains a third group of coordinate sets which are generated by the first positioning device 111 based on the GPS positioning, which respectively correspond to positions that are sequentially arranged, and which cooperatively indicate the third movement path (t3). The coordinate set of the exact position (PN′) is the leading coordinate set in the third group. The exact position (PN′) represents the position of the lawn mower 1 at which the processing module 11 determines that the signal strength of the external positioning signal increases from being not greater than the predefined strength threshold to being greater than the predefined strength threshold, and at which the processing module 11 switches from the second recording mode to the first recording mode.

It is noted that, as shown in FIG. 3, a deviation (Δ) exists between an end position (PN) of the second movement path (t2) and the exact position (PN′). Since the error in the positioning result obtained by the second positioning device 112 may accumulate over time during the IM positioning, the deviation (Δ) may increase as more coordinate sets in the second group are generated. That is to say, the longer the second movement path (t2), the greater the deviation (Δ). To correct the second movement path (t2), the flow of procedure proceeds to step S14.

In step S14, the processor 114 computes the corrected path data based on the deviation (Δ) between the end position (PN) of the second movement path (t2) and the exact position (PN′).
The corrected path data indicates a corrected movement path (t2′) following the first movement path (t1) and terminating at the coordinate set of the exact position (PN′). In particular, the corrected path data contains a corrected group of coordinate sets which respectively specify positions that are sequentially arranged and which cooperatively indicate the corrected movement path (t2′). The coordinate sets in the corrected group respectively correspond to the coordinate sets in the second group, and thus there is likewise a total of N coordinate sets in the corrected group. The processor 114 computes each coordinate set in the corrected group based on the corresponding coordinate set in the second group and the coordinate set of the exact position (PN′) of the processing module 11. By utilizing the corrected group of coordinate sets, accumulated error made in the IM positioning performed by the processing module 11 is corrected, and thus the movement of the lawn mower 1 may be faithfully and correctly recorded.

In this embodiment, positions of the lawn mower 1 are expressed in the two-dimensional Cartesian coordinate system, and each coordinate set has an X-element and a Y-element. For example, a difference set $(x_{diff}, y_{diff})$ representing the deviation (Δ) has a difference value in X-axis $x_{diff}$ that is obtained by subtracting the X-element $x'_N$ of the coordinate set $(x'_N, y'_N)$ that represents the exact position (PN′) from the X-element $x_N$ of the coordinate set $(x_N, y_N)$ that represents the end position (PN) of the second movement path (t2), and a difference value in Y-axis $y_{diff}$ that is obtained by subtracting the Y-element $y'_N$ of the coordinate set $(x'_N, y'_N)$ that represents the exact position (PN′) from the Y-element $y_N$ of the coordinate set $(x_N, y_N)$ that represents the end position (PN) of the second movement path (t2).

More specifically, to obtain each coordinate set in the corrected group, the processor 114 computes the coordinate set by shifting the corresponding coordinate set in the second group by a weighted deviation. The weighted deviation is equal to the deviation (Δ) multiplied by a weight, and the weight is the ordinal number of the corresponding coordinate set in the second group divided by the total number of coordinate sets in the second group (i.e., N). For example, referring to FIG. 3, consider the n-th coordinate set $(x_n, y_n)$ in the second group, which represents an n-th position (Pn) on the second movement path (t2), where n is a positive integer ranging from 2 to N and represents the ordinal number of the coordinate set $(x_n, y_n)$ in the second group. The processor 114 computes the X-element $x'_n$ of the corresponding n-th coordinate set $(x'_n, y'_n)$ in the corrected group, which represents a corresponding n-th position (Pn′) on the corrected movement path (t2′), by subtracting, from the X-element $x_n$, the difference value in X-axis $x_{diff}$ multiplied by a weight equal to the ordinal number n divided by the total number of coordinate sets in the second group, i.e., $n/N$. Similarly, the processor 114 computes the Y-element $y'_n$ of the corresponding n-th coordinate set $(x'_n, y'_n)$ in the corrected group by subtracting, from the Y-element $y_n$, the difference value in Y-axis $y_{diff}$ multiplied by the same weight.
In brief, the aforementioned computation can be expressed by the mathematical formula shown below:

$(x'_n, y'_n) = (x_n, y_n) - \left[ (x_{diff}, y_{diff}) \times \dfrac{n}{N} \right]$

It is worth noting that the weight is set to zero for the first coordinate set $(x_1, y_1)$ in the second group of coordinate sets, and increases as the ordinal number of the coordinate set increases. In other words, the later a coordinate set is in the second group of coordinate sets in chronological order, the larger the weight that corresponds to that coordinate set. It should be noted that since only the coordinate set of the exact position (PN′) is required for computing the corrected path data, computation of the corrected path data may be performed while the processing module 11 is generating the third path record based on the result of the GPS positioning. Thereafter, it is assumed that the signal strength of the external positioning signal remains greater than the predefined strength threshold until the lawn mower 1 is operated by the user to finish defining the working area (A), i.e., to complete the loop around the working area (A). Eventually, the procedure proceeds to step S15.

In step S15, the processor 114 generates the movement path record based on the first path record, the corrected path data and the third path record. As previously mentioned, the movement path record indicates the surrounding boundary (b) which outlines the working area (A).

It is worth noting that in the above scenario, the lawn mower 1 has passed through a dead zone (i.e., an area where the signal strength of the external positioning signal is not greater than the predefined strength threshold) only once during the tracking procedure. However, the number of times the lawn mower 1 passes through dead zone(s) during the tracking procedure is not limited to the disclosure herein and may vary in other embodiments. In a scenario where the lawn mower 1 passes through dead zone(s) multiple times during the tracking procedure, steps S12 to S14 will be executed by the processing module 11 multiple times. That is to say, whenever it is determined that the signal strength of the external positioning signal is not greater than the predefined strength threshold, the processing module 11 switches to the second recording mode and generates a second path record based on the result of the IM positioning performed by the second positioning device 112. Whenever it is determined that the signal strength of the external positioning signal increases from being not greater than the strength threshold to being greater than the strength threshold, the processing module 11 switches back to the first recording mode, and generates, based on the result of the GPS positioning performed by the first positioning device 111, a third path record that contains the coordinate set of the exact position (PN′) of the processing module 11. In addition, the processor 114 computes, based on the second path record and the coordinate set of the exact position (PN′), the corrected path data, thereby correcting the error caused by the IM positioning.

Referring to FIG. 4, the planning and navigation procedure includes steps S21 and S22 delineated below. In step S21, the processor 114 generates first navigation data and second navigation data based on the movement path record.
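A minimal sketch of the path correction formalized by the formula above (steps S13 and S14) follows. The function name is illustrative, and plain tuples stand in for the coordinate sets.

```python
def correct_path(second_group, exact_position):
    """Distribute the end-point deviation over the dead-reckoned path.

    second_group   : [(x_1, y_1), ..., (x_N, y_N)] IMU-derived coordinate sets
    exact_position : (x'_N, y'_N), the GPS fix taken when the signal recovers
    The n-th point is shifted by the deviation scaled by n/N, so early points
    (little accumulated drift) move little, the first point keeps its weight
    of zero, and the last point lands exactly on the GPS fix.
    """
    n_total = len(second_group)
    x_diff = second_group[-1][0] - exact_position[0]
    y_diff = second_group[-1][1] - exact_position[1]
    corrected = [second_group[0]]          # weight 0 for the first point
    for n, (x, y) in enumerate(second_group[1:], start=2):
        corrected.append((x - x_diff * n / n_total,
                          y - y_diff * n / n_total))
    return corrected
```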
More specifically, referring to FIG. 5, the first navigation data contains entries of first working path data that respectively indicate a plurality of first working paths (r1) which are arranged in sequence in the working area (A) and which are parallel to and spaced apart from each other. Moreover, two consecutive ones of the first working paths (r1) extend in opposite directions. Referring to FIG. 6, the second navigation data contains entries of second working path data that respectively indicate a plurality of second working paths (r2) which are arranged in sequence in the working area (A) and which are parallel to and spaced apart from each other. Likewise, two consecutive ones of the second working paths (r2) extend in opposite directions. It is worth noting that any adjacent two of the second working paths (r2) arranged in the working area (A) are closer to each other than any adjacent two of the first working paths (r1). In this embodiment, the distance between any adjacent two of the second working paths (r2) is half of the distance between any adjacent two of the first working paths (r1), but is not limited thereto. Consequently, the number of first working paths (r1) passing through a unit area in the working area (A) is less than the number of second working paths (r2) passing through the unit area.

In step S22, the processing module 11 receives a navigation-enabling signal, and is activated by the navigation-enabling signal to control the lawn mower 1 to automatically move based on one of the first navigation data and the second navigation data. The navigation-enabling signal may be generated based on a user operation at a nearby location or at a remote location. In one embodiment, the navigation-enabling signal may be automatically generated by the control module 14 at a designated time point, for example 10 A.M. every day.

In autonomous operation of the lawn mower 1, the processing module 11 determines whether the signal strength of the external positioning signal is greater than a predefined power threshold. The predefined power threshold is equal to the predefined strength threshold in this embodiment, but is not limited thereto in other embodiments. When it is determined by the processing module 11 that the signal strength of the external positioning signal is greater than the predefined power threshold, the processing module 11 operates in a first navigation mode. In the first navigation mode, the first positioning device 111 performs positioning based on the external positioning signal so as to locate the processing module 11, and, based on the first navigation data and the result of the positioning performed by the first positioning device 111, the processor 114 outputs to the control module 14 a first navigation signal that is related to the first working paths (r1). After receiving the first navigation signal, the control module 14 controls, based on the first navigation signal, the driving module 12 to drive the main body of the lawn mower 1 to move along the first working paths (r1) as exemplarily shown in FIG. 5, one after the other, and controls the cutting module 13 to cut grass at the same time. On the other hand, when it is determined by the processing module 11 that the signal strength of the external positioning signal is not greater than the predefined power threshold, the processing module 11 operates in a second navigation mode.
In the second navigation mode, the second positioning device 112 performs positioning by means of inertial measurement so as to locate the processing module 11, and, based on the second navigation data and the result of the positioning performed by the second positioning device 112, the processor 114 outputs to the control module 14 a second navigation signal that is related to the second working paths (r2). Similarly, after receiving the second navigation signal, the control module 14 controls, based on the second navigation signal, the driving module 12 to drive the main body of the lawn mower 1 to move along the second working paths (r2) as exemplarily shown in FIG. 6, one after the other, and controls the cutting module 13 to cut grass at the same time. In this way, automation of the lawn mower 1 is realized.

It is worth noting that when it is determined by the processing module 11 that the signal strength of the external positioning signal decreases from being greater than the predefined power threshold to being not greater than the predefined power threshold, the processing module 11 switches from the first navigation mode to the second navigation mode, and the processor 114 outputs the second navigation signal to the control module 14 based on the second navigation data and the result of the IM positioning performed by the second positioning device 112. Subsequently, the control module 14 controls, based on the second navigation signal, the driving module 12 to drive the main body of the lawn mower 1 to move from the current position on one of the first working paths (r1) to the closest one of the second working paths (r2), and then to follow that closest second working path (r2) to continue cutting grass. Similarly, when it is determined by the processing module 11 that the signal strength of the external positioning signal increases from being not greater than the predefined power threshold to being greater than the predefined power threshold, the processing module 11 switches from the second navigation mode to the first navigation mode, and the processor 114 outputs the first navigation signal to the control module 14 based on the first navigation data and the result of the GPS positioning performed by the first positioning device 111. Subsequently, the control module 14 controls, based on the first navigation signal, the driving module 12 to drive the main body of the lawn mower 1 to move from the current position on one of the second working paths (r2) to the closest one of the first working paths (r1), and then to follow that closest first working path (r1) to continue cutting grass.

Since any adjacent two of the second working paths (r2) arranged in the working area (A) are closer to each other than any adjacent two of the first working paths (r1), the tracks of the lawn mower 1 are denser when the processing module 11 operates in the second navigation mode than in the first navigation mode, meaning that more areas are worked repeatedly. In this way, adverse effects (e.g., incomplete coverage of grass cutting in the working area (A)) caused by the relatively inaccurate navigation in the second navigation mode, due to the error existing in the IM positioning, can be mitigated.
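The two families of parallel working paths might be generated as in the sketch below, where a rectangle stands in for the recorded working area (A); the function name and the spacing values are illustrative assumptions, and a real implementation would clip the paths against the recorded surrounding and obstacle boundaries.

```python
def working_paths(x_min, x_max, y_min, y_max, spacing):
    """Parallel back-and-forth working paths across a rectangular area.

    Returns a list of ((x, y_start), (x, y_end)) segments; consecutive
    paths run in opposite directions, as described for (r1) and (r2).
    """
    paths, x, forward = [], x_min, True
    while x <= x_max:
        start, end = (y_min, y_max) if forward else (y_max, y_min)
        paths.append(((x, start), (x, end)))
        x += spacing
        forward = not forward
    return paths

first_paths = working_paths(0, 10, 0, 20, spacing=1.0)    # GPS navigation mode
second_paths = working_paths(0, 10, 0, 20, spacing=0.5)   # denser IMU-mode paths
```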
In one embodiment where the working area (A) and a surrounding area outside the working area (A) near the surrounding boundary (b) have rather distinct colors, whenever the processing module 11 is in the first navigation mode or the second navigation mode, the processor 114 determines, based on the result of the positioning (relying on the external positioning signal or by means of inertial measurement), whether a distance between the processing module 11 and the surrounding boundary (b) is smaller than a predetermined distance threshold, i.e., whether the processing module 11 is near the surrounding boundary (b). When it is determined that the distance between the processing module 11 and the surrounding boundary (b) is smaller than the predetermined distance threshold, the processor 114 controls the image capturing device 113 to perform image capturing so as to generate the image data. Next, the processor 114 determines, by means of color recognition based on the image data, whether the processing module 11 has reached the surrounding boundary (b). When it is determined that the processing module 11 has reached the surrounding boundary (b), the processor 114 outputs the first navigation signal or the second navigation signal to the control module 14 such that the lawn mower 1 turns and moves to a subsequent one of the first working paths (r1) or a subsequent one of the second working paths (r2). In one implementation, the color recognition performed on the image data is implemented by using an artificial neural network that has been trained in advance by an algorithm of machine learning, but implementation of the color recognition in other embodiments is not limited thereto. More specifically, the processor 114 distinguishes a boundary between a lawn zone and a non-lawn zone based on color patterns and color variations. Hence, the determination as to whether the processing module 11 has reached the surrounding boundary (b) may be correctly made based on machine vision realized by the processor 114 and the image capturing device 113.

In one embodiment, the processing module 11 is separable from the main body of the lawn mower 1, and is capable of independently executing the tracking procedure of the method according to the disclosure when separated from the main body of the lawn mower 1. That is to say, during the tracking procedure of the method, the processing module 11 is not required to move along with the rest of the lawn mower 1. For example, the processing module 11 may be separated from the main body of the lawn mower 1 and then carried by the user to make a loop around the working area (A) so as to define the working area (A) with the loop serving as the surrounding boundary (b). Therefore, the user does not have to operate the whole lawn mower 1 to make a loop for defining the working area (A), which saves labor and enhances convenience of use.

In one embodiment, the processing module 11 does not include the image capturing device 113, and the method includes the tracking procedure but not the planning and navigation procedure. Moreover, the processing module 11 may be mounted on different kinds of devices such as an electronic device (e.g., a robot vacuum cleaner, a mobile sprinkler or a smart watch), a wearable accessory (e.g., a wristband, a watch, a necklace, a collar), a transportation device (e.g., a vehicle) and so on. Additionally, in such embodiments the tracking procedure simply involves tracking the movement of the aforementioned devices rather than defining the working area (A).
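A crude color test for "still on the lawn?" is sketched below. It is only a green-ratio heuristic standing in for the trained neural network described in the embodiment above; the threshold values, channel margins, and function name are all assumptions.

```python
import numpy as np

def reached_boundary(rgb_image: np.ndarray, grass_fraction_min: float = 0.5) -> bool:
    """Heuristic color check: counts pixels in the lower half of the frame
    whose green channel clearly dominates; if too few look like grass,
    treat the mower as having reached the surrounding boundary (b)."""
    lower = rgb_image[rgb_image.shape[0] // 2:]
    r = lower[..., 0].astype(int)
    g = lower[..., 1].astype(int)
    b = lower[..., 2].astype(int)
    grassy = (g > r + 20) & (g > b + 20)
    return grassy.mean() < grass_fraction_min
```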
To sum up, the method of movement tracking according to the disclosure utilizes the processing module 11 to periodically perform positioning based on the external positioning signal when the signal strength of the external positioning signal is greater than the predefined strength threshold, to periodically perform positioning by means of inertial measurement when the signal strength of the external positioning signal is not greater than the predefined strength threshold, and to selectively generate the first or second path record based on the result of the positioning. Moreover, the processing module 11 is utilized to obtain the coordinate set of the exact position of the processing module 11 when it is determined that the signal strength of the external positioning signal increases from being not greater than the predefined strength threshold to being greater than the predefined strength threshold, to compute the corrected path data based on the second path record and the coordinate set of the exact position, and to generate the movement path record based on the first path record and the corrected path data. In this way, the movement of the processing module 11 can be exactly tracked, and the processing module 11 is capable of planning reliable working paths for navigating the lawn mower 1 based on the movement path record thus generated. In addition, it is not necessary to use physical objects to define a boundary of the working area for a lawn mower, thereby enhancing convenience and reliability of use.

In the description above, for the purposes of explanation, numerous specific details have been set forth in order to provide a thorough understanding of the embodiment. It will be apparent, however, to one skilled in the art, that one or more other embodiments may be practiced without some of these specific details. It should also be appreciated that reference throughout this specification to "one embodiment," "an embodiment," an embodiment with an indication of an ordinal number and so forth means that a particular feature, structure, or characteristic may be included in the practice of the disclosure. It should be further appreciated that in the description, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of various inventive aspects, and that one or more features or specific details from one embodiment may be practiced together with one or more features or specific details from another embodiment, where appropriate, in the practice of the disclosure.

While the disclosure has been described in connection with what is considered the exemplary embodiment, it is understood that this disclosure is not limited to the disclosed embodiment but is intended to cover various arrangements included within the spirit and scope of the broadest interpretation so as to encompass all such modifications and equivalent arrangements.
11859981 | DESCRIPTION OF THE SPECIFIC EMBODIMENTS Although the following detailed description contains many specific details for the purposes of illustration, anyone of ordinary skill in the art will appreciate that many variations and alterations to the following details are within the scope of the invention. Accordingly, examples of embodiments of the invention described below are set forth without any loss of generality to, and without imposing limitations upon, the claimed invention. While numerous specific details are set forth in order to provide a thorough understanding of embodiments of the invention, those skilled in the art will understand that other embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure aspects of the present disclosure. Some portions of the description herein are presented in terms of algorithms and symbolic representations of operations on data bits or binary digital signals within a computer memory. These algorithmic descriptions and representations may be the techniques used by those skilled in the data processing arts to convey the substance of their work to others skilled in the art. An algorithm, as used herein, is a self-consistent sequence of actions or operations leading to a desired result. These include physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. Unless specifically stated or otherwise as apparent from the following discussion, it is to be appreciated that throughout the description, discussions utilizing terms such as “processing”, “computing”, “converting”, “reconciling”, “determining” or “identifying,” refer to the actions and processes of a computer platform which is an electronic computing device that includes a processor which manipulates and transforms data represented as physical (e.g., electronic) quantities within the processor's registers and accessible platform memories into other data similarly represented as physical quantities within the computer platform memories, processor registers, or display screen. A computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks (e.g., compact disc read only memory (CD-ROMs), digital video discs (DVDs), Blu-Ray Discs™, etc.), and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, flash memories, or any other type of non-transitory media suitable for storing electronic instructions. The terms “coupled” and “connected,” along with their derivatives, may be used herein to describe structural relationships between components of the apparatus for performing the operations herein. It should be understood that these terms are not intended as synonyms for each other. Rather, in some particular instances, “connected” may indicate that two or more elements are in direct physical or electrical contact with each other. 
In some other instances, “connected”, “connection”, and their derivatives are used to indicate a logical relationship, e.g., between node layers in a neural network. “Coupled” may be used to indicate that two or more elements are in either direct or indirect (with other intervening elements between them) physical or electrical contact with each other, and/or that the two or more elements co-operate or communicate with each other (e.g., as in a cause-and-effect relationship). According to aspects of the present disclosure, a user's involuntary movements may be translated into virtual events to create an enhanced experience for the user by detecting an impending change in center of mass motion of the user and responding to the impending change with a virtual event. Motion in the virtual event is associated with the impending change in center of mass motion of the user. As used herein, the term “center of mass motion” refers to translation of the user's center of mass, rotation of the user about some axis, or some combination of these. In the context of rotation, the user's center of mass may lie on the axis about which the user rotates but need not do so. According to aspects of the present disclosure, motion in virtual events may be tied to physical events. A computer system may detect an impending change in center of mass motion of a user by analyzing motion information from a detecting device or by analyzing contextual information. The computer system may then respond to the detected impending change in center of mass motion by initiating a virtual event with a display device and associating motion in the virtual event with subsequent motion information from the detecting device. FIG.1is a block diagram showing the method for tying impending physical events to virtual events according to aspects of the present disclosure. In the example shown, an impending physical event may be detected at101by analyzing motion information from a detecting device or by analyzing contextual information. For example, information from an inertial sensor may be compared to a threshold to determine whether an inertial event is sufficiently perceptible to a user. An impending physical event may be an event that has not yet been felt by the user but will occur and will cause a perceptible change in center of mass motion of the user. The impending physical event may be detected by analyzing motion information from a detecting device, such as an inertial measurement unit or other motion detector, or by analyzing contextual information. The inertial measurement unit (IMU) may include any number of sensors that respond to changes in motion, e.g., accelerometers, gyroscopes, or tilt sensors. To detect an “impending” change in center of mass motion, the system that implements the method is presumably able to detect or predict the change in motion before the user is able to experience it. After an impending physical event has been identified, a corresponding virtual event may be initiated as indicated at102. Initiation of the virtual event may include, for example and without limitation, scheduling the virtual event for a time when the physical event is perceptible to the user or starting the virtual event when the physical event is perceptible to the user. The virtual event may be associated with the physical event by creating events displayed on a display that are similar in some respect to the physical event. The virtual event may be configured to mask or enhance the physical event.
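A minimal sketch of the threshold comparison mentioned above, assuming a buffer of recent three-axis accelerometer samples; the threshold value and function names are hypothetical:

import numpy as np

ACCEL_THRESHOLD = 1.5  # m/s^2; hypothetical perceptibility threshold

def impending_physical_event(accel_samples, threshold=ACCEL_THRESHOLD):
    # flag an impending change in center of mass motion when any recent
    # acceleration magnitude crosses the perceptibility threshold
    magnitudes = np.linalg.norm(np.asarray(accel_samples, dtype=float), axis=1)
    return bool((magnitudes > threshold).any())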
By way of example, and not by way of limitation, the virtual event may be an in-application event that involves some form of movement that is coordinated somehow with the physical event. The virtual event may include a change in musical tone or tempo, a change in the tone or hue of colors displayed on a screen, or a change in application events to include additional movement or special minigames. The virtual event may be displayed from any suitable point of view. By way of example, the point of view may be “first person”, i.e., one where the display shows a portion of a virtual world from a playable character's point of view. In such a first person point of view, the display shows what the playable character would see. Alternatively, the point of view may be configured to show the playable character in the virtual world from a suitable vantage point, e.g., from behind the playable character. In some implementations, the point of view may change as the orientation of the display changes, e.g., if the display is part of a hand-held device. After detecting the impending physical event, the computer system may dynamically associate motion in the virtual event with motion information obtained from the detecting device, as indicated at103. As used herein, “dynamically associating” refers to contemporaneously obtaining motion information from the detecting device, analyzing the motion information, and displaying corresponding motion in the virtual event according to the obtained motion information. Within this context, “contemporaneously” means within a sufficiently short timeframe that the user does not notice a latency between a physical motion and display of a corresponding motion in the virtual event. According to some aspects of the present disclosure, a frequency or pattern that characterizes the virtual event may be matched to a measured frequency or pattern that characterizes the impending physical event, or to a harmonic of such measured frequency. Matching a frequency or pattern of the virtual event to a frequency or pattern of a physical event may include, for example and without limitation, matching a beat, tempo, or rhythm of music to a repeated pattern of motion that characterizes the physical event. Examples of repeated patterns of motion in a physical event include, but are not limited to, the gait of a horse, the rocking of a boat in response to waves on water, and the motion of an airplane in response to air turbulence. As another example, the shape of terrain or the pattern of turbulence or similar features in a virtual environment, such as a game environment, may change in accordance with the frequency or pattern of the physical event. In this way, the user may feel like the physical event was part of the event in the application. Thus, a physical event that may be scary or unenjoyable becomes more fun. Detection of a frequency or pattern of the physical event may be performed similarly to detection of the impending physical event itself, e.g., through the use of the detection device. The frequency of the physical event may be detected using, for example and without limitation, threshold and peak detection, where an amplitude of a signal from an inertial sensor is calculated based on frequency analysis of the data from the detection device and averaged over a set interval or a dynamic moving interval window. The frequency of the physical event may be determined from the number and timing of peaks after the inertial data from the IMU meets or exceeds a threshold.
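The threshold-and-peak detection described above might be sketched as follows; the sampling rate, threshold and names are assumptions made for illustration:

import numpy as np

def event_frequency_hz(signal, threshold, sample_rate_hz):
    # count rising edges (peak onsets) above the physical event threshold
    above = np.asarray(signal, dtype=float) > threshold
    rising_edges = np.flatnonzero(above[1:] & ~above[:-1])
    if len(rising_edges) < 2:
        return None  # not enough peaks to estimate a frequency
    mean_period_s = np.diff(rising_edges).mean() / sample_rate_hz
    return 1.0 / mean_period_s

For instance, a 0.5 Hz rocking of a boat estimated this way could be mapped to a 30 beat-per-minute pulse, or to its 60 BPM harmonic, in the music accompanying the virtual event.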
An end to the physical event may signal to the device to cease the virtual event or enter a standard mode of operation, as indicated at104. The end to the physical event may be determined by comparison of inertial data to the threshold. A comparison of the inertial data to the threshold may determine that the physical event has ceased when the inertial data is below the threshold. In some implementations, the physical event may be determined to have ceased if the inertial data drops below the threshold and stays below the threshold for some predetermined time. The next detected impending physical event would then be a new instance instead of a continuation of the previous instance of a physical event. In some alternative implementations, the previous physical event may cease and a new instance of an impending physical event may begin when the inertial data exceeds a threshold for some predetermined period of time. In this way, the device may account for constant motion of the user or errors in inertial measurement. After an end to the physical event has been detected, the virtual event may be ended105. Alternatively, each virtual event may last only a set duration and end after that period105regardless of whether the physical event has been determined to end. According to some alternative aspects of the present disclosure, virtual events having a set duration may be initiated in succession while a physical event is detected to provide the effect of a continuous virtual event during the duration of the physical event. Once the end of the physical event is detected from inertial data, or no physical event is detected during a set duration, virtual events may no longer be initiated. FIG.2is a flow diagram depicting an example of contextual determination of impending physical events and tying the impending physical event to virtual events according to aspects of the present disclosure. Contextual determination of impending physical events at201may involve using contextual information to determine whether there is an impending physical event. As discussed above, the impending physical event is one that would result in a change in center of mass motion of the user. The contextual information may include information from other applications, such as information indicating whether a device is in a particular mode of operation, an image taken by an image capture unit, or geographic information relating to a past, present, or future location of the user and/or the user's device. Additionally, contextual detection may use information provided by other systems such as computing devices in aircraft or motor vehicles. By way of example and not by way of limitation, a contextual event may occur when a user changes a device setting, such as the airplane mode on a cellphone or game device. The device may detect that the user has changed a setting to place the device in airplane mode. From the change to the airplane mode setting, the device may determine that there is an impending physical event, e.g., imminent takeoff of an aircraft. Another example of contextual detection of impending physical events may be through the use of navigation data as shown inFIG.3. A user may start a navigation app on a device301in communication via a suitable link302with a virtual event application that associates an impending physical event with a virtual event303. The virtual event application, e.g., a video game, may run on the same device301as the navigation app or on a different device.
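A hedged sketch of the dwell-time behavior described above follows; the two-second dwell and the class name are illustrative assumptions, not values taken from the disclosure:

import time

class PhysicalEventTracker:
    # the event ends only after inertial data stay below threshold for dwell_s
    def __init__(self, threshold, dwell_s=2.0):
        self.threshold = threshold
        self.dwell_s = dwell_s
        self.active = False
        self._below_since = None

    def update(self, magnitude, now=None):
        now = time.monotonic() if now is None else now
        if magnitude >= self.threshold:
            self.active = True
            self._below_since = None
        elif self.active:
            if self._below_since is None:
                self._below_since = now
            elif now - self._below_since >= self.dwell_s:
                self.active = False  # ended; the next detection is a new event
        return self.active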
As shown, when information from the navigation app indicates an impending navigation event, such as that the user is going to make a turn304, the virtual event application may determine an impending physical event and tie the determined impending physical event to the virtual event303. As shown, motion in the virtual event303is associated with the motion in the impending physical event304because both the gameplay vehicle306and the user301are making right turns305,304. It should be noted that the association of motion in virtual events to physical events is not limited to turns in the same direction, and motion in the virtual event may be associated in any way that takes advantage of the feeling of movement a user has during a change in center of mass motion. In other alternative aspects of the present disclosure, the virtual event application may use images from an image capture unit to predict an impending physical event. For example and without limitation, the device running the virtual event application may include an image capture unit such as a camera or video camera. The image capture unit may be pointed at a screen in an airplane or car showing navigation data such as takeoff time or turn-by-turn directions. Images obtained from the image capture unit may be analyzed by, e.g., an object detection algorithm, character recognition algorithm or image recognition algorithm (e.g., machine vision), to determine impending navigation events. Navigation events may include such events as airplane take-off, airplane landing, and airplane or other vehicle turns. After the impending physical event is determined, a virtual event305may be initiated at202. The virtual event305shown is a right turn and matches the right turn detected by the navigation app304. According to some aspects of the present disclosure, the navigation system and the application for tying physical events to virtual events may coordinate to provide virtual events similar to the navigation events, which may be estimated ahead of time using navigation map data and also determined in real time by the navigation system. Alternatively, a random virtual event or a single type of virtual event may be initiated regardless of the type of navigation event, where the association between the motion of the user and the virtual event is simply that there is a change in motion of the user and a change in motion in the virtual event. The application may also use navigation data such as the speed and heading of the user to determine when to initiate the virtual event at202. Referring again toFIG.2, motion information is obtained from a device, e.g., an inertial sensor, navigation system, etc., as indicated at203. By way of example, and not by way of limitation, the application that generates the virtual events may optionally use contextual events to determine a frequency of a physical event. The contextual events may be navigation events from a navigation system. For example, and without limitation, the navigation system may show that the vehicle is on a winding road that switches back at a fixed interval; from this information, the fixed interval may be used as the frequency of the physical events. Alternatively, inertial information from one or more IMUs may be used to determine a frequency of the physical event, as discussed above. After obtaining motion information, the application dynamically associates motion in the virtual event with motion information from the device, as indicated at204.
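One possible, purely illustrative mapping from navigation events to virtual events is sketched below; the event names and the mapping itself are hypothetical, and the disclosure equally permits a random or single type of virtual event:

NAV_TO_VIRTUAL = {
    "turn_left": "gameplay_vehicle_turn_left",
    "turn_right": "gameplay_vehicle_turn_right",
    "takeoff": "gameplay_launch_sequence",
}

def pick_virtual_event(nav_event, eta_s):
    # choose a matching virtual event and schedule it for when the physical
    # event becomes perceptible (eta estimated from speed/heading data)
    virtual_event = NAV_TO_VIRTUAL.get(nav_event, "generic_motion_event")
    return virtual_event, max(0.0, eta_s)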
In some implementations, such association may involve matching a frequency of physical events to the frequency of corresponding virtual events.FIG.5further depicts an example of associating virtual events to impending physical events and matching the frequency of the physical event to the virtual events. FIG.4is a diagram showing the association of devices detecting an impending physical event and a virtual event displayed on a display device according to aspects of the present disclosure. These devices may provide contextual data for detecting contextual events and determining an impending physical event. Here, contextual data such as navigation information includes topographic map data that may be used to determine the frequency at which the user is going to experience an elevation change401based on a position, speed and heading of the user or vehicle402. Alternatively, the IMU may detect changes in elevation401. These detected impending changes in elevation of the user401may be translated into virtual events406that match or approximate the physical events401. In particular, information relating to motion and/or changes in motion that is estimated for the impending physical event may be used to generate simulated motion occurring in a corresponding virtual event. In the example depicted inFIG.4, the user is in a vehicle traveling402on a road with many elevation changes401. The virtual events are initiated to match the frequency of elevation changes408of the physical events401. As shown, the virtual events are in the form of elevation changes of a skier405,406,407in a skiing videogame. As the user changes in elevation from the top of a mountain402to the middle of the mountain403to the bottom of the mountain404, so does the skier change position from the top of a ski-slope405to the middle of the ski-slope406to the bottom of the ski-slope407. The position and incline of the slope that the user will be traveling upon may be determined through the use of topographic information from the navigation system or using data from one or more IMUs. The timing of the frequency of elevation changes in the skiing game is matched to the timing of physical elevation changes of the user so that the user feels a change in his physical elevation at the same time his skier is experiencing a steeper slope, indicative of a faster change in elevation. This would provide the user with a virtual event that matches the physical sensation of a quick change in elevation, such as weightlessness or increased gravity. Referring again toFIG.2, an end to the physical event may be detected205contextually and may signal an end to the virtual event206. A contextual end to the physical event may be determined, for example and without limitation, by a change in device mode, a navigation event, or another event determined, e.g., with information obtained from an image capture unit or microphone. Detection of an end to the physical event205with a device mode may be, for example, exiting an airplane mode on a device. Contextual events such as navigation events may be used to determine an end to physical events by detecting when a turn will end, a slope will level off or a road will straighten out. Detecting an end to a physical event with an image capture unit or a microphone may involve detecting certain visual or audio cues. By way of example and without limitation, the system may detect that a fasten seatbelt sign has been turned off either with an image capture unit or a microphone.
The virtual event may be completed and the application may enter a normal event mode where events controlled by the application are not associated with physical events206. FIG.5is a diagram showing the association of devices detecting an impending physical event and a virtual event displayed on a display device according to aspects of the present disclosure. According to some aspects of the present disclosure, multiple different devices may be used in detection of an impending physical event. As shown inFIG.5, a virtual event displayed on a display device501may be associated with an impending physical event detected by multiple different devices each having an IMU. The devices shown include a hand-held device with IMU502, a vehicle equipped with an IMU503, and a pair of headphones equipped with an IMU504. As used herein, the term “handheld device” generally refers to any device configured to be held in a user's hand or hands when in normal operation. By way of example and not by way of limitation, handheld devices include cellphones, portable game devices, tablet computers, laptop computers and the like. The vehicle equipped with the IMU503may be a plane, train, car, helicopter or other conveyance. The devices may communicate with each other via a wired or wireless connection. The wired or wireless connection may be a wired or wireless internet connection, a wired or wireless network connection, a Bluetooth connection, a near-field communication (NFC) connection, an optical connection, an acoustic connection, or an Ethernet or other wired communication standard connection. The display device501may receive virtual event display information from the application for associating impending physical events to virtual events. The application may be running on a single device, such as the handheld device, or on any number of devices. According to some alternative aspects of the present disclosure, use of different devices may enable better detection of impending physical events. For example, a user wearing IMU-equipped headphones504may be an occupant in a car also having an IMU503. The car may encounter a bumpy road causing the IMU to detect momentary vertical acceleration that is over the threshold. This momentary vertical acceleration may not yet be detected by the IMU in the headset, indicating that the shock from the bumpy road has not yet been transmitted from the car to the user. Thus, the window of time in which to initiate a virtual event to coincide with the physical event is still open. Timing of initiation of the virtual event is configured to occur within a time period for which the difference in time between the start of the user's experience of the physical event and the start of the corresponding virtual event is imperceptible. For example and without limitation, the time between detection of a physical event and initiation of the virtual event may be less than 100 milliseconds (ms). FIG.6illustrates filtering regular, e.g., periodically recurring, physical events from the virtual events according to aspects of the present disclosure. Examples of periodically recurring events include, but are not limited to, up and down motion of a rider on a horse, rocking of a boat in response to waves on water, and vertical motion of an automobile travelling at a relatively constant speed over regularly spaced bumps in a road. As shown, an IMU may provide inertial information601that shows peaks over the physical event threshold602at regular intervals.
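A minimal sketch of this timing window, assuming the 100 ms budget quoted above and hypothetical callback names:

MAX_LATENCY_S = 0.100  # the virtual event should start within ~100 ms

def try_initiate_virtual_event(car_event_time, headset_has_felt_it, now,
                               start_virtual_event):
    # the car IMU registers the bump before the headset IMU does, so the
    # window stays open while the headset has not yet felt the shock
    if (not headset_has_felt_it) and (now - car_event_time) <= MAX_LATENCY_S:
        start_virtual_event()
        return True
    return False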
After a certain number of peaks occurring at regular intervals, past a regular event threshold603, the physical events may be filtered so that virtual events are not shown for regular physical events after the regular event threshold603. Thus, when physical events are associated with virtual events606, the virtual event604as depicted may have an intensity or an occurrence that mirrors the physical events601. After the regular event threshold603, virtual events may not be initiated605. In this way, the virtual events may simulate the way a user becomes accustomed to regular movements felt during travel, such as the rocking of a ship or train. In some implementations, historical information may be used to determine whether a change in motion similar to the impending change in motion has occurred in recent history. In the context of such implementations, “recent” can be regarded as within a time that is of the order of the period of the regular event. System FIG.7depicts the system700configured to tie physical events to virtual events according to aspects of the present disclosure. The system700may include one or more processor units703, which may be configured according to well-known architectures, such as, e.g., single-core, dual-core, quad-core, multi-core, processor-coprocessor, cell processor, and the like. The system700may also include one or more memory units704(e.g., random access memory (RAM), dynamic random-access memory (DRAM), read-only memory (ROM), and the like). The processor unit703may execute one or more programs717, portions of which may be stored in the memory704, and the processor703may be operatively coupled to the memory, e.g., by accessing the memory via a data bus705. The programs717may be configured to tie impending physical events to virtual events708according to the method described above with respect toFIG.1and/orFIG.2. In other words, execution of the programs causes the system to analyze motion information from a detecting device or analyze contextual information to detect an impending change in center of mass motion of a user and initiate a virtual event with a display device in response to the impending change in center of mass motion of the user. Motion in the virtual event is dynamically associated with the motion information. Additionally, the memory704may contain information about physical event thresholds710that are applied to inertial data708. Such physical event thresholds may include different thresholds for different types of inertial data, such as a different threshold for vertical acceleration than for horizontal acceleration and/or a jerk (rate of change of acceleration) threshold. The physical event threshold information710may also include regular event cutoff thresholds. In addition, the memory704may contain contextual data721used for determination of impending physical events, such as navigation events, airplane modes, image recognition information, audio recognition information, etc. The memory704may also contain data corresponding to virtual events709. Virtual event data may include audio, video, and gameplay data displayed on a display device during a virtual event. The virtual events, contextual data and physical event thresholds may also be stored as data718in the mass store718. The system700may also include well-known support circuits, such as input/output (I/O) circuits707, power supplies (P/S)711, a clock (CLK)712, and cache713, which may communicate with other components of the system, e.g., via the bus705.
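A hedged sketch of such a regular-event filter follows; judging regularity by interval jitter and the particular count threshold are assumptions made for illustration:

class RegularEventFilter:
    # suppress virtual events once a physical event has recurred at a
    # near-constant interval more than regular_threshold times, mimicking
    # how a user becomes accustomed to the rocking of a ship or train
    def __init__(self, regular_threshold=5, tolerance=0.15):
        self.regular_threshold = regular_threshold
        self.tolerance = tolerance  # allowed relative interval jitter
        self.intervals = []
        self.last_peak_t = None

    def allow_virtual_event(self, peak_t):
        if self.last_peak_t is not None:
            self.intervals.append(peak_t - self.last_peak_t)
        self.last_peak_t = peak_t
        if len(self.intervals) <= self.regular_threshold:
            return True
        recent = self.intervals[-self.regular_threshold:]
        mean = sum(recent) / len(recent)
        regular = all(abs(i - mean) <= self.tolerance * mean for i in recent)
        return not regular  # False filters out the now-regular events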
The computing device may include a network interface714. The processor unit703and network interface714may be configured to implement a local area network (LAN) or personal area network (PAN), via a suitable network protocol, e.g., Bluetooth, for a PAN. The computing device may optionally include a mass storage device715such as a disk drive, CD-ROM drive, tape drive, flash memory, or the like, and the mass storage device may store programs and/or data. The system may also include a user interface716to facilitate interaction between the system and a user. The user interface may include a display device such as a monitor, television screen, speakers, headphones or other devices that communicate information to the user. The display device may include a visual, audio, or haptic display or some combination thereof. A user input device702such as a mouse, keyboard, game controller, joystick, etc. may communicate with an I/O interface and provide control of the system to a user. While the above is a complete description of the preferred embodiment of the present invention, it is possible to use various alternatives, modifications and equivalents. It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, while the flow diagrams in the figures show a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is not required (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.). Furthermore, many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. Although the present invention has been described with reference to specific exemplary embodiments, it will be recognized that the invention is not limited to the embodiments described but can be practiced with modification and alteration within the spirit and scope of the appended claims. The scope of the invention should therefore be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. Any feature described herein, whether preferred or not, may be combined with any other feature described herein, whether preferred or not. In the claims that follow, the indefinite article “A” or “An” refers to a quantity of one or more of the items following the article, except where expressly stated otherwise. The appended claims are not to be interpreted as including means-plus-function limitations, unless such a limitation is explicitly recited in a given claim using the phrase “means for.” | 30,694 |
11859982 | DETAILED DESCRIPTION Environment Properties and Signals Some embodiments of the invention discuss the use of an environment property or environment properties. An environment property represents a characteristic of an environment, so an environment property of a real environment represents a characteristic of the real environment. The characteristic may be naturally occurring, such as the Earth's magnetic field, the local atmospheric conditions (e.g. weather, sun or moon state, etc.), or the time or date. The characteristic may also be human-derived, such as a characteristic caused or altered by a real object or a living thing in the real environment. Thus, an environment property may also represent information or a signal sent or received by the real environment or by a real object located in the real environment. In some embodiments of this disclosure, an environment property may include an environment signal property encoded by an environment signal (a “signal” in short). The environment signal may be sent from one or more signal transmitters located in the real environment. In addition, the environment signal may also be sent from at least part of the real environment, in which case, that part of the real environment may also be called or considered a signal transmitter. In some embodiments, the environment signal may be a wireless signal including audio information, WiFi, Bluetooth, light, or any radio, audio, light spectrum or electromagnetic signal of any kind. Moreover, many embodiments contemplate that the environment signal may be artificially generated and/or modified by human-made objects. In some embodiments, an environment signal may be received by one or more signal receivers located within the real environment. Depending upon the embodiment, a signal receiver may be an electronic or electro-mechanical device that receives and/or reacts to signals. In some embodiments, received signals may be measured to obtain signal property measurements. In one example, an environment signal comprises at least one of radio signals, environment light (e.g. visible light or infrared light generated by the sun or a lamp), sound (e.g. ultrasound, infrasound, or acoustics), and magnetic fields. Some example radio signals include at least one of WLAN, Bluetooth, Zigbee, Ultrawideband (UWB) or Radio-frequency identification (RFID). In addition, exemplary magnetic fields include artificial magnetic fields (e.g. that are generated by one or more coils deployed in the real environment) or a geomagnetic field (e.g. the Earth's magnetic field). Inside the real environment, significant variations of the geomagnetic field may be experienced. The cause of these variations could be local phenomena, such as the steel shells of modern buildings, pipes, wires, electric equipment or anything else in the real environment that blocks, absorbs, reflects or refracts a signal. Techniques for measuring these fields are known in the art. An environment signal may carry information that is useful to embodiments of this disclosure. For example, an environment signal may encode one or more environment signal properties (a “signal property” in short), including Cell of Origin (CoO), Signal Strength (SS), Angle of Arrival (AOA), Time of Arrival (TOA), Time Difference of Arrival (TDOA), or any other indications known now or at the time these teachings are employed in future embodiments.
In some embodiments, an environment property may include information about a received signal, for example: the signal strength of a Bluetooth signal; the signal strength of a WLAN signal; the time of arrival of a Bluetooth signal; the time of arrival of a WLAN signal; the signal strength of an ultrasound signal; the time of arrival of an ultrasound signal; or, the signal strength of an environment light. While these examples are illustrative, the embodiments of this disclosure contemplate the use of any useful environment property from various environment signals and environment signal properties. In some embodiments of this disclosure, when an environment signal sent from a signal transmitter is received by a signal receiver, the value of an environment signal property may be measured. For example, the measurement may take place at the receiver and may be associated with the position of the receiver. This association may be recorded to be employed for matching against an environment map or for producing or updating an environment map. Many embodiments herein are agnostic to the form of storage, although some embodiments employ specialized data arrangements that may be maintained in relational databases or other physical memory managed by software. Many embodiments of the disclosure contemplate that environment signal properties may be position dependent. For example, a signal sent by a signal transmitter may be received by a signal receiver at a position A relative to the signal transmitter, and a property of the signal may be measured as a value X. The signal may also be received by another signal receiver at a position B relative to the signal transmitter, and the signal property may be measured as a value Y. In this one example, the value X and the value Y may be different, as the position A and position B may be different. As a similar example, a signal property value may depend on a distance between the signal transmitter and the signal receiver and/or on signal propagation properties. The embodiments discussed herein may use semiconductors, sensors or other equipment to measure signal values and to assess accuracy and propagation properties. Current versions of such systems are known in the art and the embodiments herein contemplate the use of improved and different equipment for this purpose in the future. As suggested above, many embodiments of the disclosure contemplate that an environment property may include a natural environment property, such as temperature, humidity, brightness, time, date, gravity, altitude above sea level, weather data and the Earth's magnetic field. Varying embodiments further contemplate that local conditions may affect natural environment properties in both predictable and unpredictable manners. For example, brightness may be influenced by artificial light (e.g. from lamps) or natural light (e.g. from the sun), the presence and applicability of which may be known to the system. In greater particularity, a natural environment property may be influenced by artificial objects. For example, temperature could be influenced by heaters, humidity could be influenced by humidifiers, and brightness may be influenced by electrical lighting and window treatments. Furthermore, an environment property may be position dependent. For example, a position close to a heater may have a higher temperature than a position further from the heater.
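The disclosure does not prescribe a propagation model, but a textbook log-distance path-loss model (an assumption here, not part of the disclosure) illustrates why the same transmitted signal yields different measured values X and Y at different positions A and B:

import math

def received_strength_dbm(tx_power_dbm, distance_m, path_loss_exponent=2.0,
                          ref_distance_m=1.0, ref_loss_db=40.0):
    # received strength falls off with the log of distance; the exponent
    # captures the signal propagation properties of the environment
    d = max(distance_m, ref_distance_m)
    loss_db = ref_loss_db + 10.0 * path_loss_exponent * math.log10(d / ref_distance_m)
    return tx_power_dbm - loss_db

With these illustrative defaults, a receiver at 2 m from the transmitter measures roughly 6 dB more loss than a receiver at 1 m, so two receivers at positions A and B report different values for the same signal.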
In addition, a position in a room with air conditioning may have a lower temperature than a position in another room without air conditioning. Finally, an environment property may also be time dependent and position dependent. For example, in the morning, it is possible to measure a higher light strength in a room at a position close to a window facing East than at a position close to a window facing West. In the afternoon, it is also possible to measure a lower light strength in a room at a position close to a window facing East than at a position close to a window facing West. Environment Property Map As mentioned above, some embodiments of the disclosure employ, construct or update an environment property map. In some embodiments, an environment property map includes at least one pair of datums, including an environment property value and a position for an environment property. For example, this pairing might be expressed as “(EPV, Position).” In one or more embodiments, the position indicates a region or a position in the real environment where the environment property value applies. In some embodiments, an environment property map may include multiple different types of environment properties. Thus, the pair of the value and the position contained in the environment property map may further optionally have an associated time parameter. For example, this grouping might be expressed as “(Position, EPV, Time).” Of course, the grouping of properties may be extended to include any number of environment property values and other factors. The following is an example: “(Position, EPV, EPV2-EPVn, Time).” Moreover, the grouping of information may include other datums that make the environment property values more useful. For example, other datums might include: information indicating the reliability of one or more EPVs, for example based upon history; information about the position, such as indoor/outdoor, an address, primary use of the room, etc.; or, information (such as accuracy, tolerance or specifications) about the sensors or other source of the position, time or EPVs. Overview Process Embodiments With reference toFIG.1A, there is shown a process associated with many embodiments of the disclosure. At process portion1001an environment property map is created. As discussed below, there are many ways to create an environment property map, including, for example, by using a combined camera and sensor device to collect reference images and reference environment property values in a real environment. In some embodiments, environment property maps may be constructed by the intentional deployment of cameras/sensors in a real world environment to collect data. In other embodiments, the data may be collected in a crowd-sourced fashion, where device end-users (e.g. smartphone, tablet, vehicles) move through their routine life, while their device collects data for environment property maps. The device may use a background application to collect the data in a way that does not interfere with the end-user's enjoyment of their device. In another embodiment, crowd-source or other end-users may employ a foreground application to collect data for environment property maps due to community incentive or an express incentive to do so. Referring again toFIG.1A, at process portion1002the environment property map is used to locate an item. As discussed below, the precision of the item location may be at any level but is, in many embodiments, the determination of a camera or sensor pose.
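A hedged Python sketch of the “(Position, EPV, ..., Time)” grouping described above is given below; the field names and example values are illustrative only, not prescribed by the disclosure:

from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class MapEntry:
    # one grouping in an environment property map, optionally carrying
    # reliability and sensor metadata alongside the property values
    position: Tuple[float, float, float]           # where the values apply
    values: Dict[str, float]                       # e.g. {"wifi_rssi_ap1": -47.0}
    time: Optional[str] = None                     # e.g. "morning" or a timestamp
    reliability: Dict[str, float] = field(default_factory=dict)
    sensor_info: Dict[str, str] = field(default_factory=dict)

environment_property_map = [
    MapEntry(position=(2.0, 3.5, 0.0),
             values={"wifi_rssi_ap1": -47.0, "temperature_c": 21.5},
             time="10:03"),
]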
Thus, in one embodiment, an end-user's device may use the environment property map to facilitate navigation (e.g. finding items or directions inside a building), augmented reality (e.g. learning about items in the real environment by augmentation of real environmental information), or virtual reality (e.g. activity in an artificial world that is affected by activity in the real world). Furthermore, in some embodiments, the environment property map may be stored locally to the device using the map. Local storage may provide a performance advantage. In other embodiments, the environment property map may be stored at a server or other computer accessed over a network. Network-based storage provides the opportunity for more robust processing of the map data and for more rapid updating. In yet another embodiment, part of the environment property map may be stored locally to the user device and part may be stored remotely. For example, if the overall map reflects data for a state, the device might carry only data for the city or area within the state where the device resides. Referring now toFIG.1B, there is shown another process diagram associated with various embodiments of the disclosure, including with particularity certain embodiments relating to creating an environment property map. At process portion1010environment properties are measured in a real environment. Sensors of any known type may be used to make the measurements, and the measured values may be processed or stored either on the device making the measurements (e.g. a smartphone) or at a server or other device that receives the measurement data from the measuring device, for example, wirelessly over the Internet. At process portion1011, images of the real environment are captured or obtained. In some embodiments, the resulting image files include or are accompanied by any available image metadata, such as time and location data. At process portion1012, the environment property measurements are associated with the image data (and metadata if present). For example, each image may be associated in a database with environment property measurements made in the same place, or at the same time as the image capture. The association of data may take place on the capturing device or on a connected server or other computer. At process portion1013, the associated data is employed to construct or update (e.g. augment) an environment property map. The map may be constructed on a device local to the measurements and image capture (e.g. one of the capture devices) or on a connected server. In one embodiment, and regarding the workload sharing between devices, there are multiple local devices for gathering data (images and environment measurements). In some instances multiple local devices will communicate with another specially equipped local device and the specially equipped local device will communicate with a server. The specially equipped local device may be one of the units collecting data or may be a different local computer, possibly a desktop or server computer. The specially equipped local device may have more processing power, memory or a better (e.g. wired) Internet/network connection, or, in alternative embodiments, may be identical to one or more units used for data collection. By using a system including a specially equipped local device, all of the local devices can be more specialized and workloads can be divided accordingly.
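Continuing the illustrative MapEntry sketch above, process portions1010through1013might be approximated as follows; the two-second pairing window and the record layout are assumptions made for illustration:

def build_map(property_readings, images):
    # property_readings: iterable of (timestamp, key, value) measurements
    # images: list of dicts with "t" (capture time) and "position" fields
    entries = []
    for img in images:
        # associate each image with measurements taken at roughly the
        # same time (process portion 1012), then emit a map entry (1013)
        paired = {k: v for t, k, v in property_readings
                  if abs(t - img["t"]) < 2.0}
        if paired:
            entries.append(MapEntry(position=img["position"],
                                    values=paired, time=str(img["t"])))
    return entries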
Referring now toFIG.1C, there is shown another process diagram associated with various embodiments of the disclosure, including, with particularity, certain embodiments relating to using an environment property map. At process portion1020an image or environment information is captured or measured with a camera or sensor, which for purposes of this illustration is called a user camera/sensor. The user cameras/sensors employed with respect toFIG.1Cmay be the same or different as the cameras/sensors employed with respect toFIG.1B. At process portion1021, the information captured or measured by the user camera/sensor is used as an index or entry point into an environment property map in order to determine an initial position of the user camera/sensor. For example, the captured/sensed information may include WiFi strength from a specific WiFi router. By indexing into the environment property map with this information, a relatively small location or area will likely be associated with the WiFi strength from the particular WiFi router. By using WiFi (or many other sensed signals as discussed below) instead of computer vision, the system is able to eliminate many locations or areas that appear identical or similar to the area in which the user camera/sensor is making readings. As discussed below, the captured or sensed information may vary according to embodiment, and one or more types of sensed/captured information may be employed simultaneously. Furthermore, the position information may be any localization in the real environment. In some embodiments the position information is a camera/sensor pose or a portion thereof. Moving to process portion1022, the initial position information obtained in process portion1021may be used to select a data set for refining the position information. For example, the system may wish to find a camera pose for the user camera/sensor using computer vision type techniques. By using the initial position information from process portion1021, the amount of computer vision information that must be searched or analyzed is greatly reduced (e.g. there are fewer images to analyze). Recalling the WiFi example above, if the real environment is a city block, there may be many millions of images for consideration in a computer vision technique. Furthermore, many of the images may be similar or even identical (e.g. images of different apartments that are built in clone fashion). By using the WiFi (or similar) information, a relatively large number of the images may be eliminated from consideration. Finally, at process portion1023a refined position is determined for the user camera/sensor. In many embodiments the refined position is determined by a machine vision technique such as SLAM and the position is expressed as a camera/sensor pose.
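Reusing the illustrative map entries sketched earlier, the coarse indexing of process portion1021might look like this; the tolerance and key names are assumptions:

def coarse_candidates(measurement, property_map, tol=3.0):
    # keep only map entries whose recorded value is close to the live
    # measurement (e.g. WiFi strength from a known router), shrinking the
    # set of reference images a vision method must later search (1022-1023)
    key, value = measurement
    return [e for e in property_map
            if key in e.values and abs(e.values[key] - value) <= tol]

candidates = coarse_candidates(("wifi_rssi_ap1", -49.0),
                               environment_property_map)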
The real environment referred to at101may be any real environment (as opposed to a computer-generated environment) and an example real environment is shown inFIG.2. With reference toFIG.2, the illustrated real environment is shown in a top view having an indoor portion201and an outdoor portion, which is shown outside of the boundaries of201. The real environment201includes the following: rooms221,222,223,224,225and226; signal-sensitive objects211,212,213,214and215, which all reside inside the indoor portion201; signal-sensitive objects216and217, which both reside outside of the indoor portion201; windows231and232, which face West; and, windows233and234, which face East. According to one or more embodiments, each of the signal-sensitive objects211through217may send or receive (including measure) an environment signal. Thus, for example, each of the signal-sensitive objects211through217might be a transmitter, a receiver, or a transceiver (such as a WLAN access point, cell base station or other radio-communication access point). Furthermore, a signal-sensitive object that receives a signal may also modify the received signal and send the modified signal (or otherwise develop a signal based upon the received signal and send out the developed signal). Varying embodiments of the disclosure contemplate different potential types of signals including, but not limited to, a radio signal, a sound signal, a light signal, or a magnetic signal. In some embodiments and for purposes of illustration: the signal-sensitive object211may be a radio-frequency identification (RFID) device, which could either be an RFID tag or an RFID reader; the signal-sensitive object212may be a WLAN access point; the signal-sensitive object213may be a Bluetooth sensor (e.g. Bluetooth tag); the signal-sensitive object214may be an ultrasound sensor; the signal-sensitive object215may be an infrared radiation sensor; the signal-sensitive object216may be a satellite that is able to send or receive any radio signals; and the signal-sensitive object217may be a mobile cellular base station. With reference toFIG.3, there is shown the real environment fromFIG.2, having additional features. For example,FIG.3shows the sensors311,312,313,314,315,316,317,318, and319. According to varying embodiments of the disclosure, sensors311-319may be any type of sensor that measures or captures information regarding the real environment. In some embodiments, sensors311-319may be cameras to capture reference images at corresponding positions in the real environment201. In at least one embodiment, there may only be a single camera that is moved to the corresponding positions (and potentially other positions) to capture reference images. In yet other embodiments, there may be multiple cameras (e.g. 2 or more) that are moved or otherwise manipulated (e.g. through motion, mirrors or communications) to capture reference images at the indicated positions311through319, and perhaps other positions. Finally, in yet other embodiments, there may be more than one camera per position311through319, for example, to employ different capabilities of the different cameras or to access multiple features or sensing operations simultaneously. Referring again toFIG.1D, at102a determination is made of a reference camera (or other sensor) pose for each of the plurality of reference images (indicated in101).
For example, in one or more embodiments, the reference camera pose indicates a pose relative to the real environment at the position at which the respective reference image is captured (or other sensor data is measured/taken). Furthermore, depending upon the embodiment, the determined pose may include a three-dimensional position and orientation or any part of such data. According to one embodiment, various computer vision methods could be employed to determine the reference camera poses of a camera or sensor based upon one or more images/data captured/measured by the camera/sensor. For example, vision-based Simultaneous Localization and Mapping (SLAM) is well-known in the art and can estimate or determine camera poses in a real environment and reconstruct a geometrical model of the real environment at the same time without prior knowledge of the real environment. The created (or “reconstructed”) geometrical model may be represented by a plurality of image features, each corresponding to a reconstructed 3D position. The 3D positions may correspond with places in the image such as point features or edge features. Having at least two images captured by one or more cameras, a typical SLAM pipeline includes one or more of the following: feature detection, feature description, matching, triangulation, and map refinement (e.g. global map refinement). Feature detection refers to a process of detecting image regions, each of which represents an image feature. Feature description is the transformation of the detected image region into a typically denser representation (e.g. Scale-Invariant Feature Transform (SIFT) or Speeded Up Robust Feature (SURF)) that is robust or invariant to certain types of variations (e.g. (non-uniform) lighting, rotation and occlusion). Matching refers to determining feature correspondences between image features extracted from different images. In many SLAM applications, the matched features may be considered to represent the same real object feature in the real environment. Triangulation refers to determining a 3D position for one real feature based on at least two matched image features. A typical SLAM application is based on a frame-to-frame process and may be employed to determine camera poses online (i.e. as compared to offline). In one embodiment, SLAM may be used to determine the camera poses to be used as references (i.e. reference camera poses). Each reference camera pose is typically based upon an analysis of multiple reference images. Further, in some embodiments, reference image features (e.g. in addition to pose) are extracted from the reference images and 3D positions are determined for each of the reference image features. In some embodiments, another vision-based technique called “structure from motion” (“SFM”) may be employed for camera or sensor pose estimation or 3D geometrical reconstruction from reference images (e.g. sequences of images). Unlike SLAM, a typical SFM is based on a global scheme. Thus, for example, in performing pose estimation or 3D reconstruction, all images may be analyzed within the same computation framework instead of frame to frame. The typical SFM process includes: feature detection, description, matching, triangulation and map refinement (e.g. global map refinement). In one embodiment, SFM may be used to determine the reference camera or sensor poses based on the reference images.
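A compressed sketch of the detect/describe/match/triangulate steps using OpenCV is given below. ORB features stand in for SIFT/SURF, and the known 3x4 projection matrices P1 and P2 are an assumption; a real SLAM or SFM system estimates them rather than receiving them:

import cv2
import numpy as np

def match_and_triangulate(img1, img2, P1, P2):
    orb = cv2.ORB_create(nfeatures=1000)
    k1, d1 = orb.detectAndCompute(img1, None)  # feature detection + description
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)            # feature matching
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).T  # 2xN
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).T
    pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)          # triangulation
    return (pts4d[:3] / pts4d[3]).T            # Nx3 reconstructed 3D positions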
Furthermore, SFM may also be employed to extract reference image features in the reference images and determine 3D positions of the reference image features. In yet another set of embodiments, a known 3D model of the real environment may be provided for determining the reference camera or sensor poses. The known 3D model may include features and associated 3D positions to serve as references. For example, by comparing image features extracted in one respective reference image with features contained in the known 3D model, it is possible to build image feature correspondences. A reference camera pose for the respective reference image could be estimated from one or more image feature correspondences. In a further set of embodiments, one or more real visual markers may be placed in the real environment. The markers may facilitate determining the reference camera or sensor poses because each marker may be captured in multiple reference images. The presence of the markers provides for easy feature alignment (at least with respect to the markers themselves) and facilitates the determination of pose information. In other embodiments, depth information may be used to estimate camera or sensor pose information. For example, some embodiments may employ one or more depth maps or reference images with depth information. In at least one instance, the one or more sensors may be an RGB-D camera that is able to capture an image including depth information. It is also possible to use only depth information of the reference images (e.g. depth maps) for determining reference camera poses. This result may be achieved by matching or aligning depth data of the reference images based on an iterative closest point (ICP) algorithm, which is known in the art. In yet a further set of embodiments, one or more sensors may incorporate inertial measurement units (IMUs). IMUs are known in the art and often provide measurements and report an item's specific force, angular rate and potentially the magnetic field surrounding the item. IMUs may be composed of a combination of sensors, such as accelerometers, gyroscopes or magnetometers. In practice, IMUs provide the velocity, orientation or gravitational forces of an item associated with the sensor. In one example, one or more IMUs could be attached to a camera or other sensor and provide inertial measurements as discussed. The inertial measurements may be combined with (e.g. associated with) image features extracted in the reference images, and a camera or sensor pose may be estimated thereby. Reconstructing a 3D Environment Model of at Least Part of a Real Environment According to one embodiment, a 3D environment model includes two or more reference image features extracted from reference images. The 3D environment model includes 3D positions for each of the reference image features. In one example, the reference image features and their 3D positions may be determined from reference images based on a computer vision method (e.g. the SLAM approach or the SFM approach). In another example, when depth information is available for the reference images, the 3D positions for the plurality of reference image features may be determined from the depth information. In some embodiments for assembling a 3D environment model, a reference image feature may be related to one or more reference images if the one or more reference images contain either the reference image feature or any image feature matched to the reference image feature.
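The known-3D-model alternative described above can be sketched with OpenCV's solvePnP; the zero-distortion camera and function names are assumptions made for illustration:

import cv2
import numpy as np

def pose_from_known_model(model_pts_3d, image_pts_2d, camera_matrix):
    # estimate a reference camera pose from 2D-3D correspondences built by
    # matching image features against a known 3D model of the environment;
    # solvePnP needs at least four correspondences
    dist_coeffs = np.zeros(5)  # assume an undistorted camera
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(model_pts_3d, dtype=np.float64),
        np.asarray(image_pts_2d, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix + translation = pose
    return R, tvec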
Reference Environment Properties

Referring again to FIG. 1D, at 103 at least one reference environment property measurement is made or provided for at least one environment property. For example, the real environment may include one or more different environment properties. Each environment property may be measured to obtain one or more environment property measurements. In one example, one environment property could be measured by one or more sensors multiple times at different positions or at different points in time. In one sample embodiment, one environment property represents a property of an environment signal that is sent by a signal sensor. For example, the environment property may be data related to timing or signal strength of any signal. Some specific possibilities are as follows: the signal strength of a Bluetooth signal; the signal strength of a WLAN signal; the time of arrival of a Bluetooth signal; the time of travel of a Bluetooth signal; and/or the time of arrival of an ultrasound signal. Referring now to FIG. 4, there is shown an illustration depicting environment property measurements indicated by 411, 412, 413, 414, 415, 416, 417, 418, 419, 421, 422, and 423. Thus each of 411-419 and 421-423 represents a position of an environment property measurement, and each such measurement may include one or more environment properties. In some embodiments, environment signals sent from one or more signal-sensitive objects 211 through 217 may be measured by sensors as properties of the environment. Thus, environment properties may be measured by one or more sensors associated with (e.g. attached to) cameras/sensors 311 through 319 at positions where the cameras/sensors capture reference information such as reference images. These measurements may be reference environment property measurements as indicated by 411 through 419, shown in FIG. 4. In one embodiment, the environment property measurement 411 may include one measurement of one signal sent from one of the signal-sensitive objects 211 through 217. In other embodiments, the environment property measurement 411 may include or be derived from multiple measurements of multiple signals sent from multiple signal-sensitive objects of the signal-sensitive objects 211 through 217. Similarly, each of the other environment property measurements 412 through 419 may include or be derived from one or more measurements. Thus, in one or more embodiments, the reference environment property measurements 411 through 419 may be associated with reference camera/sensor poses for the cameras 311 through 319 respectively (e.g. in an environment model). The environment properties may also be measured by the sensor(s) attached to cameras at positions indicated by 421 through 423, even when no reference image is captured at that location (e.g. by a camera). However, the positions indicated by 421 through 423 may be determined from vision-based tracking (e.g. derived from at least one of the reference images and an image captured by a camera at the positions 421 through 423). In another set of embodiments, environment signals may be sent from one or more sensors attached to one or more cameras. Properties of the environment signals (i.e. environment properties) may be measured in turn by one or more of the signal-sensitive objects 211 through 217. In one example, the environment signals are sent from the sensor(s) attached to one or more cameras at positions where at least part of the reference images are captured.
Then, the environment signals are measured as reference environment property measurements by one or more of the signal-sensitive objects 211 through 217. For example, the reference environment property measurement indicated by 411 may represent a measurement for an environment signal sent from the sensor attached to the camera 311 and measured by one of the signal-sensitive objects 211 through 217. The reference environment property measurement indicated by 411 may, in other embodiments, represent multiple measurements for the signal sent from the sensor attached to the camera 311 and measured by multiple signal-sensitive objects of the signal-sensitive objects 211 through 217. In this case, the reference camera pose of the camera 311 may be assigned as a measurement pose related to the reference environment property measurement 411. Varying embodiments of this disclosure contemplate any and all possible or practical combinations of the examples, techniques and sub-techniques mentioned above for reconstructing a model. Some practical possibilities with respect to varying embodiments are as follows: The environment properties, for which at least one reference environment property measurement is provided, may be encoded by environment signals sent from at least part of the signal-sensitive objects 211 through 217 and sent from the sensor(s) attached to the at least one reference camera. The signal strength of the environment signals, such as received signal strength, may be measured as a reference environment property measurement. The environment signals may have the same signal type or different signal types. Temperature and/or humidity of the real environment may be measured by sensor(s) attached to a camera. The temperature and/or humidity could be measured as multiple measurements at different positions or times in the real environment. Gravity or altitude of a camera or sensor may also be measured as a reference environment property measurement. For example, at the positions (indicated by 311 through 319 as shown in FIG. 3) where the at least one camera captures reference images, the gravity or altitude of the camera may also be measured. Light strength may be measured by a sensor, such as a sensor attached to a camera. The (e.g. current) time may be recorded and related to one or more reference environment properties (including the measurement of each reference environment property). For example, the current time could be recorded coarsely as “morning” or more specifically as “10:03 AM, Central Time, May 25, 2016,” or at any other level of specificity. In practical application, a measurement of the light strength at the position indicated by 419 in the room 223 may be lower than a measurement of the light strength at the position indicated by 417 in the room 222 at the current time. In this example, the reference environment property measurements 417 and 419 may each include a light strength measurement.

Associating a Pose with an Environment Property

With reference again to FIG. 1D, at 104 the process associates one or more reference environment property measurements with at least one measurement sensor/camera pose. The association may be recorded in a memory or database, including in a special data structure for accessing information quickly or efficiently. In many embodiments, the measurement pose is obtained at process portion 102 as part of the reference camera pose. Furthermore, as discussed above, in many embodiments, the camera pose may be derived from at least one of the reference camera poses.
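The signal-strength properties listed above are commonly related to distance through a log-distance path-loss model. The following sketch shows that textbook model, which is not prescribed by this description; rssi0 (strength at reference distance d0) and the path-loss exponent n are assumed calibration constants.

```python
import math

def expected_rssi(distance_m, rssi0=-40.0, d0=1.0, n=2.0):
    """Expected received signal strength (dBm) at a given distance."""
    return rssi0 - 10.0 * n * math.log10(distance_m / d0)

def distance_from_rssi(rssi_dbm, rssi0=-40.0, d0=1.0, n=2.0):
    """Invert the model to estimate distance from a strength measurement."""
    return d0 * 10.0 ** ((rssi0 - rssi_dbm) / (10.0 * n))
```

A relation of this kind is one way to realize the “propagation property” used later in this description to modify positions according to measurement differences.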
Furthermore, the measurement sensor/camera pose may include position and orientation information, or only position (translation) information, or any other subset of a full set of pose information. In one example, at least one environment property is measured at one or more positions (indicated by 311 through 319 as shown in FIG. 3) where a camera captures one or more reference images. The reference camera poses or a part of the reference camera poses for the cameras 311 through 319 may be directly assigned as measurement poses to the reference environment property measurements 411 through 419. In one example, only the translational part of a reference camera pose is related to a respective reference environment property measurement. In another example, it is also possible to relate both the translational part and the rotational part of the reference camera pose to the respective reference environment property measurement. In one or more embodiments, an environment signal that encodes at least one environment property may be sent from a camera and measured by sensors at other places. For instance, environment signals may be sent from the cameras/sensors 311 through 319 (e.g. from a sensor attached to each camera) where the cameras capture the reference images; then, the environment signals may be measured as the reference environment property measurements 411 through 419 by one or more of the signal-sensitive objects 211 through 217. In some embodiments, the reference camera poses or a part of the reference camera poses for the cameras 311 through 319 may be directly assigned as measurement poses to the reference environment property measurements 411 through 419. According to another embodiment, it is also possible to send the environment signals or measure the environment signals by a camera/sensor at a position where the camera/sensor does not capture any reference image. For example, the environment property measurements 421 through 423 may not correspond to any camera positions indicated by 311 through 319, where reference images may actually be captured. However, measurement poses to be related to the environment property measurements indicated by 421 through 423 may be derived from vision-based tracking and at least one of the reference camera poses of the cameras 311 through 319.

Determining an Environment Property Map

With reference again to FIG. 1D, at process position 105, an environment property map (or portion thereof) is determined according to the at least one reference environment property measurement and at least one measurement pose. The environment property map may include at least one pair of an environment property value and a position for an environment property. In some embodiments, the value could be one reference environment property measurement and the corresponding position could be one measurement pose (or aspect thereof) associated with the reference environment property measurement. For example, the environment property map may include at least part of the environment property measurements 411 through 419 and 421 through 423 and their associated measurement poses (or portions thereof). Further, in some embodiments, an environment property map includes associated time/date data. For example, a time parameter may be associated with one or more environment property measurements 411 through 419 and 421 through 423. The time parameter may indicate an exact time up to a certain desired precision, such as morning, 10 AM, 10:32:27:6788 AM or any combination including any information indicating time or date.
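A minimal sketch of one way to record such map entries follows: each reference environment property measurement is stored with the measurement pose (or its translational part) and an optional time parameter, as described above. The field names are illustrative, not taken from the description.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class PropertyMeasurement:
    prop_type: str                         # e.g. "wlan_rssi", "temperature"
    value: float                           # the measured property value
    position: Tuple[float, float, float]   # translational part of the pose
    orientation: Optional[Tuple[float, float, float, float]] = None
    timestamp: Optional[str] = None        # coarse ("morning") or exact time

@dataclass
class EnvironmentPropertyMap:
    entries: Dict[str, List[PropertyMeasurement]] = field(default_factory=dict)

    def add(self, m: PropertyMeasurement) -> None:
        # Group entries by property type so that later matching only
        # compares like with like.
        self.entries.setdefault(m.prop_type, []).append(m)
```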
For example, the time parameter may indicate a specific time or time period, such as a time between 10:00 and 15:00, morning, winter, or Monday. In some embodiments, an environment property value may be obtained for any arbitrary target position in the real environment, and the environment property value and the target position may be added to a data collection to build an environment property map. An environment property value at the target position may be derived from the at least one reference environment property measurement discussed with respect to process portion 103 above. With respect to environment properties that are encoded by an environment signal, determination of an environment property value from a reference environment property measurement may be performed according to: a signal propagation property; or distances between the target position of the environment property value and the position (i.e. the measurement pose) of the reference environment property measurement. In some embodiments, environment property values may be obtained by interpolation. For example, with reference to FIG. 5, there is shown an illustration of the real environment. As part of the example, we assume that there are no reference environment property measurements at positions indicated by 511, 512, 513, or 514. However, some embodiments of the disclosure contemplate calculating environment property values 511 through 514 for the positions indicated by 511 through 514 by interpolating or extrapolating at least part of the environment property measurements 411 through 419 and 421 through 423. Therefore, in one or more embodiments, a target position in the real environment may be first provided; then, an environment property value may be determined from the provided target position and existing environment property measurements and their associated positions. The determined environment property value may be associated with the provided target position, and the pairing may be used to build or augment an environment property map. Of course, in many embodiments, an environment property map may include at least part of the environment property measurements 411 through 419 and 421 through 423 as well as at least part of the environment property values 511 through 514. For clarity, both terms “environment property measurement” and “environment property value” represent a value of an environment property. The environment property measurement indicates that the value is determined from a sensor, while the environment property value indicates that the value is either determined from a sensor or from estimation (e.g. interpolation or extrapolation) based on other environment property values.

Using the Environment Map

With reference to FIG. 1D, process position 106 provides a current image of the real environment captured by a camera or sensor at a first position. The camera may be one of the same cameras discussed above in the construction of the environment map. Alternatively, the camera may be a different camera, for example a camera of an end-user of a system that employs the environment map for navigation, augmented reality or virtual reality. This type of camera may be referred to hereafter as a user camera, where the earlier-discussed cameras (311 through 319, involved in map creation) may be referred to as reference cameras. Importantly, in many embodiments a user camera and a reference camera may be the same camera device or different camera devices; thus, the distinction derives from the instant function of the camera.
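Returning to the interpolation example above (deriving values at the positions 511 through 514 from existing measurements), inverse-distance weighting is one simple scheme; the description does not mandate a particular interpolation method, so the following sketch is only one reasonable choice.

```python
import numpy as np

def idw_value(target, positions, values, power=2.0, eps=1e-9):
    """Inverse-distance-weighted property value at an arbitrary target.

    target: (3,) position; positions: (N,3) measurement poses;
    values: (N,) measured property values.
    """
    positions = np.asarray(positions, dtype=float)
    values = np.asarray(values, dtype=float)
    d = np.linalg.norm(positions - np.asarray(target, dtype=float), axis=1)
    if np.any(d < eps):            # target coincides with a measurement
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power           # nearer measurements dominate
    return float(np.sum(w * values) / np.sum(w))
```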
With reference to FIG. 6, there is shown user camera 602, and a current image of the real environment 201 may be captured by camera 602 at the position indicated by 602. In one or more embodiments of the disclosure, a current camera pose of user camera 602 is determined relative to the real environment 201. According to one embodiment, the current pose may be determined as follows: current image features may be extracted from the current image and matched to the reference image features extracted in the plurality of reference images; the current image features have 2D image positions in the current image, and the reference image features have 3D positions; the 2D-3D feature correspondences may be built from the feature matching process; and the 2D-3D feature correspondences may then be used to estimate the current camera pose. However, matching the current image features with a huge number of reference image features (in particular for a large environment) would be computationally expensive and error-prone. For example, there may exist many similar reference image features, which would cause incorrect matching (e.g. identical refrigerators or whole kitchens in near-identical rooms of an apartment building or sister apartment buildings that are geographically dispersed). For the example of the environment 201, reference image features extracted from a reference image of the room 222 could be similar to reference image features extracted from another reference image of the room 221, as the room 222 and the room 221 may have similar decoration. Thus, determining a smaller and proper subset of the reference image features and matching the current image features with that subset of the reference image features will improve the computational performance and matching accuracy. In one embodiment, the proper subset of the reference image features could be determined from an initial pose of the user camera 602. For example, it is possible to define a field of view from the initial pose, and reference image features whose 3D positions are within the field of view or within a certain range beyond the field of view would be selected into the proper subset. In another example, it is possible to determine a neighbor region around the position defined by the initial pose. Reference image features having 3D positions within the neighbor region may be selected into the proper subset. In a further example, one or more reference images among the plurality of reference images could be chosen, if the reference camera poses of the one or more reference images are close (e.g. defined by thresholding or another approximation technique) to the initial pose. Reference image features extracted from the chosen reference images could be determined as the proper subset of the reference image features. According to another embodiment, having an initial pose of the user camera 602, one or more reference images among the plurality of reference images could be chosen if the reference camera poses of the one or more reference images are close (e.g. defined by thresholding) to the initial pose. Photometric-based image matching may be performed to match the current image with at least one of the chosen reference images in order to estimate a transformation. The transformation could be a homography or a rigid 3D transformation. The final current camera pose could then be determined from the initial pose and the estimated transformation.
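The neighbor-region strategy described above can be sketched as a simple pre-filter: only reference features whose 3D positions lie within a radius of the initial position are kept before matching, which shrinks the search space and avoids look-alike features from distant, similarly decorated rooms. The radius is an assumed tuning parameter.

```python
import numpy as np

def select_reference_subset(initial_position, feat_positions,
                            feat_descriptors, radius_m=10.0):
    """feat_positions: (N,3) 3D feature positions; feat_descriptors: (N,D).

    Returns the proper subset of features near the initial position.
    """
    feat_positions = np.asarray(feat_positions, dtype=float)
    d = np.linalg.norm(feat_positions - np.asarray(initial_position), axis=1)
    keep = d <= radius_m
    return feat_positions[keep], feat_descriptors[keep]
```

The surviving subset can then be matched against the current image features and fed to a 2D-3D pose solver such as the PnP sketch shown earlier.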
Determining an Initial Pose for the User Camera

Regarding semantics, in the following explanation, the environment property values contained in the environment property map may be called reference environment property values. Referring again to FIG. 1D, at process portion 107, at least one current environment property measurement is produced. The at least one current environment property measurement includes a measurement of an environment property. The at least one current environment property measurement could be obtained similarly to the at least one reference environment property measurement described at 103. In one embodiment illustrated by FIG. 6, the user camera 602 may be associated with one or more current environment property measurements, such as environment property measurement 613. The environment property measurement 613 may be, for example, a measured WLAN signal strength, a measured time of travel of a Bluetooth signal, a measured altitude, a measured temperature, or any other property discussed or implied herein. In some embodiments, the measured property may be obtained from one or more sensors attached to the user camera 602 (e.g. integral with the camera). In addition, in some embodiments, the one or more sensors may measure environment signals sent from at least one of the signal-sensitive objects 211 through 215 to obtain the at least one current environment property measurement. In another embodiment, the one or more sensors may send one or more environment signals, and the one or more environment signals may be measured by at least one of the signal-sensitive objects 211 through 215 to obtain the at least one current environment property measurement. As discussed above, the one or more sensors may also measure any environment data that is possible to measure at the time of the particular implementation. For example, the sensor may measure at least one of temperature, humidity, time, date, gravity, altitude above sea level, and earth magnetic field to obtain the at least one current environment property measurement. Furthermore, the time of measuring environment properties to obtain the at least one current environment property measurement may be recorded as time data or a time parameter. The recorded time parameter may be associated with the at least one current environment property measurement. As discussed above, the time parameter may indicate any possible level of exactness or coarseness. With reference again to FIG. 1D, at process portion 108, the current environment property measurement is matched against an environment property map. In particular, the at least one current environment property measurement may be matched against at least one reference environment property value contained in an environment property map. In some embodiments, multiple environment properties may be matched. In some embodiments, properties from the current environment property measurement may only be compared against properties of a like type in the environment property map. For example, Bluetooth signal strength might only be compared against other data indicating Bluetooth signal strength. As another more particular example, the environment property measurement 613 (i.e. a current environment property measurement) may include a measured WLAN signal strength. The WLAN signal strength might, in some embodiments, only be compared with values of reference WLAN signal strengths of the environment property map.
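A minimal sketch of this matching step follows, combining the like-type comparison just described with the measurement-difference threshold and optional time-parameter pre-filter detailed in the next passage. It reuses the PropertyMeasurement structure sketched earlier; the threshold is an assumed per-property tuning value.

```python
from typing import List, Optional

def match_measurement(current, map_entries: List, threshold: float,
                      time_tag: Optional[str] = None) -> List:
    """Return map entries that match the current measurement.

    current: a PropertyMeasurement made by the user device;
    map_entries: reference entries of the environment property map.
    """
    # Like-type comparison first, optionally narrowed by time parameter.
    candidates = [e for e in map_entries
                  if e.prop_type == current.prop_type
                  and (time_tag is None or e.timestamp == time_tag)]
    # Threshold test on the measurement difference.
    return [e for e in candidates
            if abs(e.value - current.value) < threshold]
```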
Furthermore, according to one embodiment, comparing two environment property values (e.g. matching one current environment property measurement and one reference environment property value) involves determining a measurement difference between the two values. If the measurement difference between the two values is below a threshold, the two environment property values are matched; otherwise, they are not matched. According to some embodiments, a time parameter may also be considered in matching properties. In particular, in order to more accurately match a signal (or other) property, the times of the measurement and the reference information may also be compared. Thus, the time parameter associated with one current environment property measurement may be compared to a time parameter associated with one reference environment property value. Thus, in one or more embodiments, the time parameters are matched first (e.g. based on thresholding, or another known matching technique) so as to narrow the number of comparisons required regarding associated environment properties. For example, the universe of environment properties for comparison (i.e., reference environment property values for comparison with environment property measurements) may be greatly reduced by retaining only those property values that have “matching” time values. After the universe of data is reduced by use of the time parameter matching, the actual determined reference environment property values may more efficiently be compared to the current environment property measurements. In one example, temperature distribution patterns of the real environment 201 may be different in winter and summer. When the current environment property measurement of the environment temperature is made in winter, then reference environment property values related to winter contained in the environment property map should be selected to be matched against the current environment property measurement. Thus, in some embodiments, reference environment property values related to summer contained in the environment property map should not be selected to be compared and matched against the current environment property measurement. In another example, light strength in the room 224 may be different in the morning and in the afternoon. Thus, in some embodiments, the current environment property measurement of light strength obtained in the morning should be matched to reference environment property values of light strength related to morning. With reference again to FIG. 1D, at process position 109 an initial pose is determined based upon the matching performed with respect to process position 108. For example, one pair of a reference environment property value and a position contained in the environment property map may be matched to a current environment property measurement. The matching implies that the current position is associated with the reference position from the environment map. Depending upon the embodiment, the initial pose to be determined may indicate a position in 2D or 3D. For example, the 2D position would be a position on a plane, e.g. the earth plane, while the 3D position would be a position in 3D space. In addition, the initial pose to be determined may be very specific or any level of indication that is useful. For example, the initial position may indicate a region, and a region may define a space around or adjacent to a specific position. In some embodiments the region may be defined by distance (e.g.
the space of 10 meters in diameter around the specific position), while in other embodiments a region may represent a specific real environment space (e.g. a hallway, a building, a floor in a building or a room, such as the room 225). As discussed above, the initial pose to be determined may include any level of detail, such as position and orientation or any portion thereof. Once a position is determined (e.g. as related to the matched reference environment property value contained in the environment property map), it may be used to determine the initial pose. According to one embodiment, the initial pose is determined to be the position of the matched reference environment property value. According to another embodiment, the initial pose is derived from the position of the matched reference environment property value. The derivation may be performed in any known manner and, in one embodiment, is performed as a function of the measurement difference between the matched reference environment property value and the at least one current environment property measurement. For example, the position of the matched reference environment property value may be modified according to the measurement difference and optionally according to the propagation property of the environment signal. The initial pose may then be determined to be the modified position. The propagation property may describe a relation between signal traveling time and distance. For instance, the relation could be linear or quadratic or otherwise functional. Thus, the relation may also be described by any mathematical equation. In one example, multiple pairs of reference environment property values and positions contained in the environment property map are matched to the at least one current environment property measurement. According to one embodiment, the initial pose is determined to be an average, maximum, or minimum of the positions of the matched reference environment property values. For the determination of the average, the maximum, or the minimum, the positions contained in the environment property map may be weighted according to measurement differences between each of the matched reference environment property values and the at least one current environment property measurement, and optionally according to the propagation property of the environment signals. With reference again to FIG. 1D, at process portion 110 a determination is made regarding a current pose for the user camera according to the initial pose, the current image, and at least part of the plurality of reference images. As discussed above, various techniques may be employed to determine the current pose for the user camera.

Online Tracking

Some embodiments of the disclosure contemplate online (as compared to offline) tracking of the user camera in the real environment (e.g. to determine subsequent camera poses of the user camera located at subsequent positions in the real environment). In contemplating this analysis, the user camera may be the same camera having different poses or it may be different camera devices. For example, as shown in FIG. 6, at least one user camera may include the cameras 602 through 605, which may be the same camera device or different camera devices. For purposes of the following discussion, the following semantics may be useful. The current image captured by the user camera at the first position at process portion 106 may be called the first current image.
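Returning to the multi-match case described above, the following sketch derives an initial position as a weighted average of the matched map positions, with weights falling off as the measurement difference grows so that closer-matching entries dominate. The inverse-difference weighting is one reasonable choice, not a form fixed by the description.

```python
import numpy as np

def initial_position(matches, current_value, eps=1e-6):
    """matches: PropertyMeasurement entries matched to the current
    measurement; returns a weighted-average (x, y, z) position."""
    positions = np.array([m.position for m in matches], dtype=float)
    diffs = np.array([abs(m.value - current_value) for m in matches])
    w = 1.0 / (diffs + eps)     # small difference -> large weight
    return tuple(np.average(positions, axis=0, weights=w))
```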
The current pose determined at process portion 110 may be called the first current camera pose. Contemplating a scenario in view of FIG. 6, the first current image may be captured by user camera 602 at the first current camera pose. A second current image may be captured by the user camera 603 at a second current camera pose. A third current image may be captured by the user camera 604 at a third current camera pose. According to one embodiment, the second current camera pose relative to the real environment is determined from the first current camera pose and a spatial relationship between the at least one user camera at the first position and at the second position. In one implementation that accords with the foregoing embodiment, the spatial relationship between the camera 602 and the camera 603 may be determined from the image features extracted from the first current image and the second current image, e.g. based on SLAM. In another implementation that also accords with the foregoing embodiment, depth information may be captured by the cameras 602 and 603; then, the spatial relationship between the camera 602 and the camera 603 could be determined by aligning the depth information based on an iterative closest point (ICP) algorithm (or by any other known system for alignment). Moreover, many embodiments herein contemplate use of any technique disclosed herein or otherwise known for determining camera motion between different positions. One technique contemplated by various embodiments of this disclosure might begin by acquiring an environment property measurement similar to the techniques discussed with respect to process portion 107. For example, one or more sensors may be attached to the user camera 603, and the one or more sensors may send environment signals or may measure environment properties. The environment property measurement may be used to obtain a subsequent initial pose for the user camera at the subsequent position indicated by 603. The subsequent initial pose may be used to determine, constrain, and/or improve the current camera pose. According to one embodiment, current image features extracted from the first current image or the second current image may be added to the collection of reference image features. 3D positions of the current image features could be determined from the first or the second current camera poses. In this way, the completeness of the plurality of reference image features for the real environment could be improved.

Texture-Less Spots

Some embodiments contemplate that, in some cases, there do not exist significant features in one part of the real environment, and thus it may not be practical or even possible to extract image features from an image of that part of the real environment. For example, a white wall (or any uniform color) in the real environment may not provide any feature. Such parts of the real environment that do not provide extractable image features are called texture-less environments, and image regions containing texture-less environments are called texture-less image regions. Thus, it is technically difficult or impossible to perform image feature-based tracking or environment reconstruction in these regions. As an exemplary scenario based upon FIG. 6, a camera pose of the camera 602 (i.e. a user camera) relative to the real environment 201 may be determined according to any solution disclosed above. The camera pose of the camera 603 may then be determined based upon the camera pose of the camera 602 and the motion or spatial difference between the camera 602 and the camera 603.
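The chaining just described (a new pose from a known pose plus the inter-camera motion) reduces to composing rigid transforms. A minimal sketch, assuming both the first pose and the relative motion are expressed as 4x4 homogeneous world-to-camera matrices:

```python
import numpy as np

def compose_pose(first_pose_w_c1: np.ndarray,
                 relative_c1_c2: np.ndarray) -> np.ndarray:
    """first_pose_w_c1: world -> camera-1 transform (4x4);
    relative_c1_c2: camera-1 -> camera-2 transform (4x4), e.g. from
    feature tracking, ICP-aligned depth, or inertial data.
    Returns the world -> camera-2 transform."""
    return relative_c1_c2 @ first_pose_w_c1
```

The same composition applies repeatedly for the third and subsequent poses, with drift controlled by the environment-property-based initial poses described above.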
The camera motion could be determined from image features extracted from the images of the camera 602 and the camera 603. The camera motion could also be determined from depth information (e.g. by aligning the depth information) and/or from inertial sensors. In one embodiment, assume the camera 605 captures a texture-less image of the wall. There may or may not be enough image features detected in the texture-less image of the wall. In one embodiment, whether there are “not enough” features may be determined according to a threshold on the number of image features. For example, if the number of image features is less than a certain number (i.e. threshold), the image of the wall is determined to be texture-less. In this case, a camera motion between the camera 602 and the camera 605 could be determined from depth information and/or from inertial sensors. Thus, the camera pose of the camera 605 could still be determined from the camera motion and the camera pose of the camera 602. Having the camera pose of the camera 605, the position of the texture-less part (i.e. the wall or a part of the wall) in the real environment could be determined. According to another embodiment, the environment property map may be created only once during an offline stage, and the determination of the current camera pose of a user camera could be performed online. Thus, the procedure of creating the environment property map and the procedure of determining the current camera pose could be separate. For example, with reference to FIG. 1D, the process portions 101 through 105 may be performed once to create the environment property map offline. The process portions 106 through 110 may be performed to determine the current camera pose of a user camera, and that determination may be made online. The process portions 106 through 110 may be performed multiple times to determine several current camera poses for several different user cameras. The process portions 106 through 110 may alternatively be performed just once to determine one current camera pose for one user camera. FIG. 7 shows a workflow diagram of one or more embodiments for determining an environment property map. At process portion 701, a plurality of reference images of a real environment are provided. The reference images may be captured by a reference camera or by multiple reference cameras. Process portion 701 bears similarity to process portion 101 and thus, varying embodiments of the disclosure contemplate that this process portion benefits from the teachings with respect to process portion 101. Moving to process portion 702, a determination is made regarding a reference camera pose of the reference camera for each of the plurality of reference images. Process portion 702 bears similarity to process portion 102 and thus, varying embodiments of the disclosure contemplate that this process portion benefits from the teachings with respect to process portion 102. Next, at process portion 703, at least one reference environment property measurement is provided for at least one environment property. Process portion 703 bears similarity to process portion 103 and thus, varying embodiments of the disclosure contemplate that this process portion benefits from the teachings with respect to process portion 103. Still referring to FIG. 7, at process portion 704 at least one reference environment property measurement is associated with at least one measurement pose derived from at least one of the reference camera poses. The association may be recorded in a memory or database.
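Returning to the feature-count threshold described above for deciding when a view is texture-less, a minimal sketch follows; the detector choice and the threshold value are assumptions, not values fixed by the description.

```python
import cv2

def is_texture_less(gray_image, min_features=50):
    """Return True when too few keypoints are detected, signalling that
    pose should come from depth alignment or inertial sensors instead of
    image features."""
    orb = cv2.ORB_create(500)
    keypoints = orb.detect(gray_image, None)
    return len(keypoints) < min_features
```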
Process portion 704 bears similarity to process portion 104 and thus, varying embodiments of the disclosure contemplate that this process portion benefits from the teachings with respect to process portion 104. Finally, at process portion 705, an environment property map is determined, constructed or augmented. For example, a determination regarding an environment property map may be made according to the at least one reference environment property measurement and the at least one measurement pose. The environment property map may be embodied in hardware or software. Some particular embodiments place the environment property map in a database or specialized data structure suited for referencing. All embodiments, implementations and examples related to process portion 105 could also be applied to process portion 705. Referring now to FIG. 8, there is shown a process diagram for determining a current camera pose based on a provided environment property map. At process portion 805, an environment property map and reference image information related to a real environment are provided. The environment property map may be created based on embodiments disclosed above, for example with respect to those in FIG. 7. In one embodiment, the reference image information may include both: a plurality of reference images of the real environment; and the reference camera poses at which the reference images were captured by cameras. In another embodiment, the reference image information may include a set of reference image features with 3D positions. These reference image features may be extracted from images of the real environment. Referring back to FIG. 8, at process portion 806 a current image of the real environment is provided, and in some embodiments the image is provided by a user camera that captured the image at a first position. Process portion 806 bears similarity to process portion 106 and thus, varying embodiments of the disclosure contemplate that this process portion benefits from the teachings with respect to process portion 106. Next, at process portion 807, at least one current environment property measurement is provided, and in some embodiments it is provided by a sensor attached to the user camera. Process portion 807 bears similarity to process portion 107 and thus, varying embodiments of the disclosure contemplate that this process portion benefits from the teachings with respect to process portion 107. Moving further, at process portion 808, the at least one current environment property measurement is compared for matching with the environment property map. Process portion 808 bears similarity to process portion 108 and thus, varying embodiments of the disclosure contemplate that this process portion benefits from the teachings with respect to process portion 108. Next, at process portion 809, an initial pose is determined according to a result of the comparing and matching. Process portion 809 bears similarity to process portion 109 and thus, varying embodiments of the disclosure contemplate that this process portion benefits from the teachings with respect to process portion 109. Finally, at process portion 810, a current pose is determined for the user camera at the first position based upon one or more of the initial pose, the current image, and the reference image information. Process portion 810 bears similarity to process portion 110 and thus, varying embodiments of the disclosure contemplate that this process portion benefits from the teachings with respect to process portion 110.
In one example of a system implementation, all the processes and techniques of determining a current camera pose disclosed herein could be performed by one computing device. For example, all of the process portions 101 through 110 shown in FIG. 1D may be performed by one computing device. In one embodiment, the one computing device may be a client device equipped with a client camera and potentially a series of sensors. A sample device of this type is described below. As discussed above, one or more reference cameras and one or more user cameras may be the same camera, which is the client camera. For example, the environment property map may be constructed with an iPad or iPhone, and then later used for navigation, augmented reality or virtual reality with the same iPad or iPhone. In a sample implementation, a user may hold the client device and capture a plurality of reference images of a real environment by the client camera, while the user walks in the real environment. Further, a sensor may be attached to the client device to measure environment properties as reference environment property measurements. The creation of an environment property map may be performed by the client device according to any method disclosed herein. When the user comes to the real environment the next time, the user could use the device for navigation in the real environment. The user could capture a current image of the real environment by the client camera. The sensor attached to the client device would provide at least one current environment property measurement. The determination of a current pose for the client camera may be performed by the client device according to any method disclosed herein or known hereafter. Similarly, the client device could capture a second current image and determine a second current camera pose. Extending the sample implementation, the one computing device may include a server separate from the camera/sensor-bearing devices. The server may communicate with one or more client devices via cables or wirelessly. The one or more client devices may each include one or more client cameras, which may be employed as a reference camera or a user camera. The one or more client devices may also be equipped with sensors to measure environment properties. An embodiment of this type may be illustrated with respect to FIG. 3, which shows server 350 that may be the server of the sample implementation. In one server-based embodiment, one or more users may use one or more client devices equipped both to capture a plurality of reference images of a real environment by the client cameras and to produce reference environment property measurements. A plurality of captured reference images and reference environment property measurements may be sent from the one or more client devices to the server. The data may be sent one at a time or may be batched by the client device prior to sending. The creation of an environment property map may be performed by the server according to any method disclosed herein. Alternatively, the environment property map may be created or augmented (e.g. updated) in cooperation between the client devices and the servers. For example, the client devices may associate data (e.g. image with environment measurements), and the server may analyze the data as taught herein and construct or augment the map.
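The client-side batching mentioned above can be sketched as a small queue that associates each image with its measurements and hands them to a transport callback in batches rather than one at a time. The batch size and the `send` callback are hypothetical; the description fixes neither a transport nor a message format.

```python
from typing import Any, Callable, List, Tuple

class MeasurementBatcher:
    def __init__(self, send: Callable[[List[Tuple[Any, Any]]], None],
                 batch_size: int = 32):
        self._send = send            # hypothetical transport callback
        self._batch_size = batch_size
        self._pending: List[Tuple[Any, Any]] = []

    def add(self, image: Any, measurements: Any) -> None:
        # Associate the image with its measurements on the client side.
        self._pending.append((image, measurements))
        if len(self._pending) >= self._batch_size:
            self.flush()

    def flush(self) -> None:
        # Hand the accumulated batch to the transport and reset.
        if self._pending:
            self._send(self._pending)
            self._pending = []
```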
In another embodiment, a client device may be used to capture a current image and at least one current environment property measurement, which would then be sent from the client device to the server (either batched or one at a time). The determination of a current pose for the client camera may be performed by the server or the client device. If the pose is determined by the server, it may be sent from the server to the client device according to any known technique. In yet another implementation variation, all the process portions for determining a current camera pose discussed herein may be performed by several different computing devices. The computing devices may communicate with each other via cables or wirelessly. For example, the computing devices may include servers and client devices. A client device may be equipped with one or more cameras to capture images and sensors to provide environment property measurements. The process portions 101 through 105 shown in FIG. 1D or the process portions 701 through 705 shown in FIG. 7 may be performed by the server. In this case, the server may collect reference images and reference environment property measurements from one or more client devices as described above. Finally, the process portions 106 through 110 shown in FIG. 1D or process portions 806 through 810 shown in FIG. 8 may be performed by a client device. In this case, the environment property map may be provided to the client device. When the environment property map is stored in the server, the environment property map may be sent from the server to the client device. Further, reference image information related to the real environment stored in the server may also have to be sent from the server to the client device. In one implementation, the reference image information may include a plurality of reference images of the real environment and the reference camera poses at which the reference images were captured by cameras. In another implementation, the reference image information may include a set of reference image features with 3D positions. The reference image features may be extracted from images of the real environment. The inventive embodiments described herein may have implications and uses with respect to all types of devices, including single and multi-processor computing systems and vertical devices (e.g. cameras, phones or appliances) that incorporate single or multi-processing computing systems. The discussion herein references a common computing configuration having a CPU resource including one or more microprocessors. The discussion is only for illustration and is not intended to confine the application of the invention to the disclosed hardware. Other systems having other known or common hardware configurations (now or in the future) are fully contemplated and expected. With that caveat, a typical hardware and software operating environment is discussed below. The hardware configuration may be found, for example, in a server, a laptop, a tablet, a desktop computer, a phone, or any computing device, whether mobile or stationary. Referring to FIG. 9, a simplified functional block diagram of illustrative electronic device 900 is shown according to one embodiment. Electronic device 900 could be, for example, a mobile telephone, personal media device, portable camera, or a tablet, notebook or desktop computer system, a specialized mobile sensor device or even a server. As shown, electronic device 900 may include processor 905, display 910, user interface 915, graphics hardware 920, device sensors 925 (e.g.
GPS, proximity sensor, ambient light sensor, accelerometer, magnetometer and/or gyroscope), microphone 930, audio codec(s) 935, speaker(s) 940, communications circuitry 945, image capture circuitry 950 (e.g. camera), video codec(s) 955, memory 960, storage 965 (e.g. hard drive(s), flash memory, optical memory, etc.) and communications bus 970. Communications circuitry 945 may include one or more chips or chip sets for enabling cell-based communications (e.g. LTE, CDMA, GSM, HSDPA, etc.) or other communications (WiFi, Bluetooth, USB, Thunderbolt, Firewire, etc.). Electronic device 900 may be, for example, a personal digital assistant (PDA), personal music player, a mobile telephone, or a notebook, laptop or tablet computer system, a dedicated sensor and image capture device, or any desirable combination of the foregoing. Processor 905 may execute instructions necessary to carry out or control the operation of many functions performed by device 900 (e.g. to run applications like games and agent or operating system software to observe and record the environment (e.g. electromagnetically or otherwise), user behaviors (local or remote), and the context of those behaviors). In general, many of the functions described herein are based upon a microprocessor acting upon software (instructions) embodying the function. Processor 905 may, for instance, drive display 910 and receive user input from user interface 915. User interface 915 can take a variety of forms, such as a button, keypad, dial, a click wheel, keyboard, display screen and/or a touch screen, or even a microphone or camera (video and/or still) to capture and interpret input sound/voice or images including video. The user interface 915 may capture user input for any purpose including for use as an entertainment device, a communications device, a sensing device, an image capture device or any combination thereof. Processor 905 may be a system-on-chip such as those found in mobile devices and may include a dedicated graphics processing unit (GPU). Processor 905 may be based on reduced instruction-set computer (RISC) or complex instruction-set computer (CISC) architectures or any other suitable architecture and may include one or more processing cores. Graphics hardware 920 may be special purpose computational hardware for processing graphics and/or assisting processor 905 to process graphics information. In one embodiment, graphics hardware 920 may include one or more programmable graphics processing units (GPUs). Sensors 925 and camera circuitry 950 may capture contextual and/or environmental phenomena such as the electromagnetic environment, location information, the status of the device with respect to light, gravity and the magnetic north, and even still and video images. All captured contextual and environmental phenomena may be used to contribute to determining device positioning as described above and throughout this disclosure. Output from the sensors 925 or camera circuitry 950 may be processed, at least in part, by video codec(s) 955 and/or processor 905 and/or graphics hardware 920, and/or a dedicated image processing unit incorporated within circuitry 950. Information so captured may be stored in memory 960 and/or storage 965 and/or in any storage accessible on an attached network. Memory 960 may include one or more different types of media used by processor 905, graphics hardware 920, and image capture circuitry 950 to perform device functions.
For example, memory 960 may include memory cache, electrically erasable memory (e.g., flash), read-only memory (ROM), and/or random access memory (RAM). Storage 965 may store data such as media (e.g., audio, image and video files), computer program instructions, or other software including database applications, preference information, device profile information, and any other suitable data. Storage 965 may include one or more non-transitory storage media including, for example, magnetic disks (fixed, floppy, and removable) and tape, optical media such as CD-ROMs and digital video disks (DVDs), and semiconductor memory devices such as Electrically Programmable Read-Only Memory (EPROM) and Electrically Erasable Programmable Read-Only Memory (EEPROM). Memory 960 and storage 965 may be used to retain computer program instructions or code organized into one or more modules in either compiled form or written in any desired computer programming language. When executed by, for example, processor 905, such computer program code may implement one or more of the acts or functions described herein, including all or part of the described processes. Referring now to FIG. 10, illustrative network architecture 10000, within which the disclosed techniques may be implemented, includes a plurality of networks 10005 (i.e., 10005A, 10005B and 10005C), each of which may take any form including, but not limited to, a local area network (LAN) or a wide area network (WAN) such as the Internet. Further, networks 10005 may use any desired technology (wired, wireless, or a combination thereof) and protocol (e.g., transmission control protocol, TCP). Coupled to networks 10005 are data server computers 10010 (i.e., 10010A and 10010B) that are capable of operating server applications such as databases and are also capable of communicating over networks 10005. One embodiment using server computers may involve the operation of one or more central systems to collect, process, and distribute device environment and behavior, contextual information, images, or other information to and from other servers as well as mobile computing devices, such as smart phones or network connected tablets. Also coupled to networks 10005, and/or data server computers 10010, are client computers 10015 (i.e., 10015A, 10015B and 10015C), which may take the form of any computer, set top box, entertainment device, communications device, or intelligent machine, including embedded systems. In some embodiments, users will employ client computers in the form of smart phones or tablets. Also, in some embodiments, network architecture 10000 may also include network printers such as printer 10020 and storage systems such as 10025, which may be used to store multi-media items (e.g., images) and environment property information that is referenced herein. To facilitate communication between different network devices (e.g., data servers 10010, end-user computers 10015, network printer 10020, and storage system 10025), at least one gateway or router 10030 may be optionally coupled therebetween. Furthermore, in order to facilitate such communication, each device employing the network may comprise a network adapter. For example, if an Ethernet network is desired for communication, each participating device must have an Ethernet adapter or embedded Ethernet-capable ICs. Further, the devices may carry network adapters for any network in which they will participate, including wired and wireless networks. As noted above, embodiments of the inventions disclosed herein include software.
As such, a general description of common computing software architecture is provided as expressed in the layer diagrams of FIG. 11. Like the hardware examples, the software architecture discussed here is not intended to be exclusive in any way but rather illustrative. This is especially true for layer-type diagrams, which software developers tend to express in somewhat differing ways. In this case, the description begins with layers starting with the O/S kernel, so lower level software and firmware have been omitted from the illustration but not from the intended embodiments. The notation employed here is generally intended to imply that software elements shown in a layer use resources from the layers below and provide services to layers above. However, in practice, all components of a particular software element may not behave entirely in that manner. With those caveats regarding software, referring to FIG. 11, layer 1101 is the O/S kernel, which provides core O/S functions in a protected environment. Above the O/S kernel is layer 1102, O/S core services, which extends functional services to the layers above, such as disk and communications access. Layer 1103 is inserted to show the general relative positioning of the OpenGL library and similar application and framework resources. Layer 1104 is an amalgamation of functions typically expressed as multiple layers: application frameworks and application services. For purposes of this discussion, these layers provide high-level and often functional support for application programs which reside in the highest layer, shown here as item 1105. Item C100 is intended to show the general relative positioning of any client-side agent software described for some of the embodiments of the current invention. In particular, in some embodiments, client-side software (or other software) that observes device environment and behaviors (including context) and the behavior of sensors may reside in the application layer and in frameworks below the application layer. In addition, some device behaviors may be expressed directly by a device user through a user interface (e.g. the response to a question or interface). Further, some device behaviors and environment may be monitored by the operating system, and embodiments of the invention herein contemplate enhancements to an operating system to observe and track more device behaviors and environment; such embodiments may use the operating system layers to observe and track these items. While the ingenuity of any particular software developer might place the functions of the software described at any place in the software stack, the software hereinafter described is generally envisioned as all of: (i) user facing, for example, to receive user input for set up, during creation of an environment property map and potentially during use of an environment property map; (ii) as a utility, or set of functions or utilities, beneath the application layer, for tracking and recording device behaviors and environment and for determining the position of the device or pose of a camera; and (iii) as one or more server applications for organizing, analyzing, and distributing position information and the underlying data. Furthermore, on the server side, certain embodiments described herein may be implemented using a combination of server application level software and database software, with either possibly including frameworks and a variety of resource modules.
No limitation is intended by these hardware and software descriptions, and the varying embodiments of the inventions herein may include any manner of computing device such as Macs, PCs, PDAs, phones, servers, or even embedded systems, such as a dedicated device. It is to be understood that the above description is intended to be illustrative, and not restrictive. The material has been presented to enable any person skilled in the art to make and use the invention as claimed and is provided in the context of particular embodiments, variations of which will be readily apparent to those skilled in the art (e.g., many of the disclosed embodiments may be used in combination with each other). In addition, it will be understood that some of the operations identified herein may be performed in different orders. The scope of the invention therefore should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. As used in this disclosure, (i) the words “include” and “including” and variations thereof will not be deemed to be terms of limitation, but rather will be deemed to be followed by the words “without limitation,” and (ii) unless the context otherwise requires, the word “or” is intended as an inclusive “or” and shall have the meaning equivalent to “and/or.” Furthermore, in the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” | 84,897 |
11859983 | DESCRIPTION OF EMBODIMENTS An embodiment of the present invention will be described below. A lane information generating method according to the embodiment of the present invention is a lane information generating method in which lane information of an increasing or decreasing lane which increases or decreases from a traveling lane is generated based on traveling trajectory information of a mobile object and includes a first acquiring step of acquiring the traveling trajectory information, a second acquiring step of acquiring increasing or decreasing line information of an increasing or decreasing line of the increasing or decreasing lane detected by a sensor arranged in the mobile object, and a generating step of generating the lane information based on the traveling trajectory information and a shape of the increasing or decreasing line. According to such a lane information generating method of the present invention, by generating the lane information based not only on the traveling trajectory information but also on the shape of the increasing or decreasing line, it is possible to complementarily use information of the shape of the increasing or decreasing line at a position where variation is easily generated in a traveling trajectory and thus to generate lane information with high accuracy. In an increasing lane, the increasing or decreasing line is a line branching from a line which defines an existing traveling lane (a line which increases from a line defining the existing traveling lane), and in a decreasing lane, the increasing or decreasing line is a line that converges on a line defining a traveling lane adjacent thereto (a line located on an outer side out of two lines getting closer to each other). Further, the lane information of the increasing or decreasing lane may be, for example, information regarding a characteristic point (a node) showing a start position or an end position of a lane change. In the generating step, it is preferable to generate the lane information by converting a portion of a shape of the traveling trajectory of the mobile object into the shape of the increasing or decreasing line. Thereby, it is possible to generate lane information of the increasing or decreasing line with high accuracy. It is preferable that the traveling trajectory information of a plurality of mobile objects is acquired in the first acquiring step, and that a replaced range is determined based on the traveling trajectory information of the plurality of mobile objects in the generating step. Thereby, shapes of a plurality of traveling trajectories are replaced with the shape of the increasing or decreasing line with respect to a range in which variation is large, and the shapes of the plurality of traveling trajectories can be used as they are with respect to other ranges. The lane information generating method may further include a third acquiring step which acquires displacement of the mobile object and a correction step to correct the traveling trajectory information based on the displacement. Thereby, it is possible to generate lane information with high accuracy by using the corrected traveling trajectory information.
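As one way to visualize the three steps just described, the following is a minimal Python sketch; the geometry is simplified to lists of 2-D points, and all function and field names are illustrative, not taken from this disclosure.

def first_acquiring_step(gps_trajectories):
    # Acquire traveling trajectory information: one list of (x, y) points per
    # mobile object (e.g., from GPS fixes plus map-matching).
    return gps_trajectories

def second_acquiring_step(detected_line):
    # Acquire the increasing/decreasing line shape detected by the on-board
    # sensor, again as a list of (x, y) points.
    return detected_line

def generating_step(trajectories, line_shape):
    # Generate lane information from both sources. As a trivial placeholder,
    # take the two characteristic points (start/end of the lane change) from
    # the ends of the detected line; the example later in this description
    # refines this with a variance test over the trajectories.
    return {"start_of_change": line_shape[0], "end_of_change": line_shape[-1]}

lane_info = generating_step(
    first_acquiring_step([[(0.0, 0.0), (1.0, 0.1), (2.0, 0.5)]]),
    second_acquiring_step([(0.5, 0.0), (1.5, 0.4), (2.5, 0.8)]),
)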
On the other hand, a lane information generating system according to the present embodiment is a lane information generating system which generates the lane information of the increasing or decreasing lane which increases or decreases from the traveling lane based on the traveling trajectory information of the mobile object and includes a first acquiring unit acquiring the traveling trajectory information, a second acquiring unit acquiring the increasing or decreasing line information of the increasing or decreasing line of the increasing or decreasing lane detected by the sensor arranged in the mobile object, and a generating unit generating the lane information based on the traveling trajectory information and the shape of the increasing or decreasing line. Further, a lane information generating device according to the present embodiment is a lane information generating device which generates the lane information of the increasing or decreasing lane which increases or decreases from the traveling lane based on the traveling trajectory information of the mobile object and includes the first acquiring unit acquiring the traveling trajectory information, the second acquiring unit acquiring the increasing or decreasing line information of the increasing or decreasing line of the increasing or decreasing lane detected by the sensor arranged in the mobile object, and a generating unit generating the lane information based on the traveling trajectory information and the shape of the increasing or decreasing line. According to the lane information generating system and the lane information generating device of the present embodiment, as is the case with the above-mentioned lane information generating method, the lane information of the increasing or decreasing lane which increases or decreases from the traveling lane can be generated with high accuracy. Meanwhile, each configuration of the lane information generating system and the lane information generating device may be provided in one device or may be provided in each of a plurality of devices which are physically separated. EXAMPLES An example of the present invention will be described specifically. A lane information generating system (lane information generating device) 1 of the present example includes, as shown in FIG. 1, a current position estimating unit 2, a sensor 3, a displacement sensor 4, and a control unit 5. The current position estimating unit 2, the sensor 3, and the displacement sensor 4 are provided in a vehicle 10 as a mobile object, and the control unit 5 is provided in an external server 20 which communicates with the vehicle 10. Meanwhile, the vehicle 10 may be a general vehicle or a measuring vehicle for the purpose of generating map data. The current position estimating unit 2 estimates a current position (absolute position) of the vehicle 10 and may be, for example, a GPS receiving unit which receives a radio wave emitted from a plurality of GPS (Global Positioning System) satellites. The sensor 3 includes a projecting unit which projects an electromagnetic wave onto a road surface and a receiving unit which receives a reflected wave of the electromagnetic wave reflected by the road surface. The sensor 3 may be, for example, any light sensor (so-called LIDAR: Laser Imaging Detection and Ranging) which projects light and then receives reflected light reflected by an irradiation object. Further, the sensor 3 may be a camera which takes an image of the road surface.
Information regarding a shape of a line formed on the road surface (a white or yellow broken line or a white or yellow solid line) can be acquired by the sensor 3. Further, a dedicated sensor 3 may be provided in order to acquire the shape of the line, or a drive recorder or a light sensor for driving assistance may be used as the sensor 3. The displacement sensor 4 acquires displacement of the vehicle 10, and may be constituted by, for example, a vehicle speed pulse acquiring unit which acquires a vehicle speed pulse of the vehicle 10, and a gyro sensor which acquires angular velocity and angular acceleration of the vehicle 10. The displacement sensor 4 may be provided with an angular velocity sensor. The vehicle 10 is provided with a communicating unit 11. The communicating unit 11 is constituted with a circuit or an antenna or the like to communicate with a network such as the Internet or a public line or the like, and transmits and receives information by communicating with the external server 20. Meanwhile, the communicating unit 11 may perform only a transmission of information to the external server 20. Other than the control unit 5, the external server 20 includes a storage unit body 21 and a communication unit 22 and is provided being physically separated from the vehicle 10, and is capable of communicating with the vehicle 10 via, for example, the network such as the Internet or the like and is configured to collect and store information from the vehicle 10. Meanwhile, a state where the external server 20 is communicating with one vehicle 10 is shown in FIG. 1; however, the external server 20 is capable of communicating with a plurality of vehicles. The control unit 5 is constituted with a CPU (Central Processing Unit) provided with a memory, for example, a RAM (Random Access Memory) or ROM (Read Only Memory), and the control unit 5 manages entire control of the external server 20, and as described later, performs a process regarding information acquired from the vehicle 10 and stores the processed information in the storage unit body 21. The storage unit body 21 is constituted with, for example, a hard disc or a nonvolatile memory or the like, and stores the map data, and reading and writing are performed by control from the control unit 5. The map data includes route information of a road. Information on a plurality of characteristic points (nodes) is included in the route information, and the route information becomes information with respect to a route on which a vehicle can travel by connecting these characteristic points appropriately. Meanwhile, due to a configuration of the stored data, the storage unit body 21 may separately store the map data and the route information. The communication unit 22 is constituted with a circuit or an antenna or the like to communicate with the network such as the Internet or the public line or the like, and transmits and receives the information by communicating with the vehicle 10. In the lane information generating system 1 as mentioned above, the traveling trajectory information is generated by performing map-matching based on current position information (latitude and longitude information) acquired by the current position estimating unit 2 and the route information stored in the storage unit body 21. At this time, the map-matching may be performed on the vehicle 10 side and a result thereof may be acquired by the control unit 5 of the external server 20, or the control unit 5 may acquire the current position information and perform the map-matching and acquire the traveling trajectory information.
Thus, the control unit 5 acquires the traveling trajectory information (first acquiring step) and functions as a first acquiring unit. Meanwhile, the control unit 5 may generate the traveling trajectory information based only on the latitude and longitude information without performing the map-matching. The lane information generating system 1 determines a plurality of characteristic points with respect to each lane of a road on which the vehicle 10 travels, based on the acquired traveling trajectory information. Meanwhile, in the present example, it is assumed that the vehicle travels on the left side, but the lane information generating system 1 is also applicable to countries or regions where vehicles travel on the right side. Below, a method to generate lane information of an increasing lane LN3 by determining a characteristic point on the increasing lane LN3 in a case where the increasing lane (lane only for a right turn) LN3 which increases from two traveling lanes LN1 and LN2 before an intersection exists as shown in FIG. 2 is described. In a case where the vehicle 10 turns right at the intersection, the control unit 5 acquires the current position information of a section from a lane change from the traveling lane LN2 to the increasing lane LN3 until entering a lane after turning right. Thereby, the control unit 5 generates a traveling trajectory. Meanwhile, whether the increasing lane LN3 exists or not may be judged using information previously stored in the storage unit body 21 or the like or based on an acquired result of the sensor 3. When the traveling trajectory is generated based on the current position information as described above, a smooth curve may not be obtained at a position corresponding to the turning right process. Therefore, the traveling trajectory may be generated by integrating displacements obtained by the displacement sensor 4 with a predetermined position set as reference, and in such a case, a smooth curve can be obtained. Since the displacement includes an error and errors are accumulated due to the integration, as shown by a dot-dashed line in FIG. 3, the traveling trajectory generated based on the displacement may diverge from the lane after turning right at a position where turning right is completed. In such a case, the control unit 5 may obtain a displacement of the vehicle 10 in this section from the displacement sensor 4 (third acquiring step), and may generate the traveling trajectory by accumulating the displacements from a position on the lane after turning right as a starting point (shown in the drawing by a two-dot chain line). That is, the control unit 5 corrects the traveling trajectory information based on the displacement of the vehicle 10 (correction step). Meanwhile, the traveling trajectory information may be corrected by integrating the displacements in a direction opposite to the traveling direction from the position on the lane after turning right to, for example, an end position of the increasing lane LN3 (that is, a start position of turning right). Further, the traveling trajectory may be corrected by integrating the displacements in the traveling direction from the end position of the increasing lane LN3 set as a starting point. Next, the control unit 5 performs a lane information generating process shown in FIG. 4. First, the control unit 5 acquires the current position information from a plurality of vehicles and generates the traveling trajectory information and corrects the traveling trajectory information based on the displacement (step S1) as described above.
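To make the correction step concrete, here is a brief Python sketch. The disclosure gives no formulas, so the 2-D displacement model, the choice of anchoring the trajectory at the trusted end position, and all names below are assumptions for illustration only.

import numpy as np

def correct_trajectory(displacements, anchor):
    # displacements: (N, 2) per-step displacement vectors over the section.
    # anchor: (2,) trusted position at the end of the section (on the lane
    # reached after turning right).
    rel = np.vstack([np.zeros(2), np.cumsum(displacements, axis=0)])
    # Shift so the final point coincides with the trusted anchor, pushing the
    # accumulated integration error toward the start of the section instead.
    return rel + (anchor - rel[-1])

steps = np.array([[1.0, 0.0], [1.0, 0.1], [0.9, 0.3]])   # ~1 m per step
print(correct_trajectory(steps, np.array([10.0, 5.0])))  # ends exactly at anchor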
Then, the control unit 5 acquires information (increasing line information) regarding a position and a shape of an increasing line LI of the increasing lane LN3 based on a detected result of the sensor 3 (step S2, second acquiring step). Meanwhile, the increasing line LI is a line to define the increasing lane LN3 and is branched from a line on a right side out of two lines defining the existing traveling lane LN2. Then, with n=0 (step S3), in a traveling direction position (x=n), the control unit 5 calculates variance S of a distance between each of a plurality of traveling trajectories and the increasing line LI (step S4) and judges whether the calculated variance S is equal to or greater than a threshold Y or not (step S5). If the variance S is less than the threshold Y (judged N in step S5), the control unit 5 increments n by 1 (step S6), and then returns to step S4. If the variance S is equal to or greater than the threshold Y (judged Y in step S5), the control unit 5 determines and stores the traveling direction position as a starting point P1 (step S7). Then, the control unit 5 increments n by 1 (step S8) and calculates the variance S for x=n (step S9), and judges whether the calculated variance S is equal to or greater than the threshold Y or not (step S10). If the variance S is equal to or greater than the threshold Y (judged Y in step S10), the control unit 5 returns to step S8. If the variance S is less than the threshold Y (judged N in step S10), the control unit 5 determines and stores the traveling direction position as an end point P2 (step S11). The control unit 5 replaces the traveling trajectory with the shape of the increasing line LI (step S12) between the starting point P1 and the end point P2 and completes the process. Meanwhile, here, as the traveling trajectory, one representative traveling trajectory may be used, or an average of the plurality of traveling trajectories may also be used. In step S12, a section between the starting point P1 and the end point P2 is deleted from the traveling trajectory, and a line which is the increasing line LI translated in a direction orthogonal to the traveling direction is connected to the traveling trajectory at the starting point P1 and the end point P2. This line after replacement is a composite line. The above-mentioned lane information generating process will be described specifically based on FIGS. 5 to 7. First, in step S1, a traveling trajectory LT as shown in FIG. 5 is obtained by generating and correcting the traveling trajectory. Meanwhile, in FIG. 5, for ease of explanation, only one traveling trajectory is illustrated. Further, the starting point P1 and the end point P2 are determined by calculating the variance S and comparing the variance S with the threshold Y in steps S3 to S11. Further, as shown in FIGS. 6 and 7, a composite line LM is generated by replacing the traveling trajectory LT with the shape of the increasing line LI in step S12. By generating the composite line LM as mentioned above, the control unit 5 can determine representative points on the composite line LM as characteristic points P3 and P4. Each of the characteristic points P3 and P4 shows a start position or an end position of a lane change, and is lane information of the increasing lane LN3. The control unit 5 stores information regarding the characteristic points P3 and P4 in the storage unit body 21 of the external server 20. At this time, at the intersection, characteristic points may be determined also on an opposite lane.
Therefore, in order not to connect the characteristic points P3 and P4 to the characteristic points of the opposite lane, it is preferable to store not only the position information of the characteristic points P3 and P4 but also information on which characteristic points the characteristic points P3 and P4 are connected to. According to the above-mentioned configuration, by determining the characteristic points P3 and P4 based not only on the traveling trajectory information but also on the shape of the increasing line LI, information of the shape of the increasing line LI at a position at which variation is easily generated in the traveling trajectory can be complementarily used to generate the characteristic points P3 and P4 as lane information of the increasing lane LN3 with high accuracy. Further, it is possible to generate the lane information of the increasing lane LN3 with high accuracy by converting the shape of the traveling trajectory into the shape of the increasing line LI between the starting point P1 and the end point P2. In addition, by determining the starting point P1 and the end point P2 which show a replaced range based on the traveling trajectory information on the plurality of vehicles, shapes of the plurality of traveling trajectories are replaced with the shape of the increasing or decreasing line with respect to a range in which the variation is large, and the shapes of the plurality of traveling trajectories can be used as they are with respect to other ranges. By correcting the traveling trajectory information based on the displacement acquired by the displacement sensor 4, the lane information can be generated with high accuracy using the corrected traveling trajectory information. The present invention is not limited to the example explained above, but the invention includes other configurations or the like which can achieve the object of the invention, and the following modifications are also included in the invention. For example, in the above-mentioned example, the lane information regarding the increasing lane LN3 only for a right turn is generated, but lane information about an increasing lane only for a left turn may also be generated. Further, as shown in FIG. 8, in a case where the number of lanes decreases from three to two, a disappearing lane may be a decreasing lane LN6, and remaining lanes may be traveling lanes LN4 and LN5, and lane information of the decreasing lane LN6 may be generated based on the traveling trajectory information and a shape of a decreasing line LD of the decreasing lane LN6. Meanwhile, the decreasing line LD is a line to define the decreasing lane LN6 and converges on a line located on a left side out of two lines defining an adjacent traveling lane LN5. Further, in the above-mentioned example, the replaced range is determined based on the traveling trajectory information on the plurality of vehicles, but the replaced range of the traveling trajectory may be determined by another method. For example, a branching start position of the increasing line LI may be detected by the sensor 3 and then the traveling direction position corresponding to this branching start position may also be a start point of the replaced range. Further, an end position of the increasing line LI and a stop line position may be detected by the sensor 3 and then the traveling direction position corresponding to this end position or the stop line position may also be an end point of the replaced range.
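As an illustration of steps S3 to S12 above, the following Python sketch finds the replaced range from the variance of the trajectory-to-line distances and then substitutes the shape of the increasing line. It assumes, for simplicity, that every trajectory and the line LI are sampled at the same traveling-direction stations; the threshold Y and all names are illustrative.

import numpy as np

def find_replaced_range(trajs, line, Y):
    # trajs: (M, N) lateral positions of M trajectories at stations x=0..N-1.
    # line: (N,) lateral position of the increasing line LI at the same stations.
    S = np.var(trajs - line, axis=0)  # steps S4/S9: variance of distances per station
    p1 = int(np.flatnonzero(S >= Y)[0])           # step S7: starting point P1
    # Step S11: first later station where S drops below Y again (assumed to exist).
    p2 = p1 + int(np.flatnonzero(S[p1:] < Y)[0])
    return p1, p2

def replace_with_line(traj, line, p1, p2):
    # Step S12: between P1 and P2, substitute the shape of LI, translated
    # orthogonally to the traveling direction so the composite line LM stays
    # continuous at P1.
    out = traj.copy()
    out[p1:p2 + 1] = line[p1:p2 + 1] + (traj[p1] - line[p1])
    return out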
Further, in the above-mentioned example, the composite line LM is generated by replacing a portion of the shape of the traveling trajectory with the shape of the increasing line LI, but it is sufficient to use the shape of the increasing line LI when the lane information is generated, and the portion of the shape of the traveling trajectory need not be converted. For example, the composite line may be generated by an average of the shape of the traveling trajectory and the shape of the increasing line LI. Further, in the above-mentioned example, the traveling trajectory information is corrected based on the displacement acquired by the displacement sensor 4, but, for example, in a case where the traveling trajectory information generated based on the current position information becomes a smooth curve, correction based on the displacement need not be performed. Although the best configuration and method and the like for carrying out the present invention have been described above, the invention is not limited to them. That is, the invention is particularly illustrated and described mainly with reference to the specific example, but a person skilled in the art can variously modify the above-described example in terms of shapes, materials, amounts and other detailed configurations without departing from the scope of the technical idea and purpose of the present invention. Therefore, the descriptions limited to the above-disclosed shapes and materials etc. are illustratively described to make it easy to understand the present invention, and they do not limit the invention. Thus, descriptions with names of members from which a portion or all of the limitations such as the shapes and the materials are removed are included in the invention. REFERENCE SIGNS LIST
1 lane information generating system (lane information generating device)
2 current position estimating unit
3 sensor
5 control unit (first acquiring unit, second acquiring unit, generating unit)
LN3 increasing lane
LI increasing line | 22,771
11859985 | DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS Example methods and systems are described herein. Any example embodiment or feature described herein is not necessarily to be construed as preferred or advantageous over other embodiments or features. The example embodiments described herein are not meant to be limiting. It will be readily understood that certain aspects of the disclosed systems and methods can be arranged and combined in a wide variety of different configurations, all of which are contemplated herein. Furthermore, the particular arrangements shown in the Figures should not be viewed as limiting. It should be understood that other embodiments may include more or fewer of each element shown in a given Figure. Further, some of the illustrated elements may be combined or omitted. Yet further, an example embodiment may include elements that are not illustrated in the Figures. Aspects of the present disclosure involve systems and methods that implement knowledge graphs to verify or otherwise analyze the accuracy of mapping data. Mapping data may generally provide environmental properties or details of the relationships between elements of some object or space, such as a particular geographic region (e.g., a city, state, county, country, or the like). In the specific context of a road map or a route map, mapping data may provide navigational information that displays the relationship of roads and/or transport links of a given geographic region or recognizes the presence of objects in the environment of the geographic region. For example, road map and/or route map mapping data may include information corresponding to road locations, road names, numbers of lanes, travel directions, information on intersections formed by one or more of the identified roads, names of intersections, identification of right or left turn lanes, speed limits, indications of the presence or absence of a traffic signal in each of the roads, points of interest, construction locations, etc. In other examples, the road map and/or route map mapping data may include non-automotive transit routes, such as passenger train routes, metro routes, subway routes, bike routes, walking routes, scooter routes, and associated navigational information. In some instances, mapping data may be used by systems of location-based applications, such as transportation matching systems, to execute and display various map-based and/or navigation-based calculations and decisions for a given geographic region. For example, a transportation matching system may process a dataset of mapping data corresponding to a geographic location to generate various navigational decisions and calculations, such as calculating an ETA to a location, calculating an ETD from a location, and/or generating routing decisions between locations (e.g., between picking an individual up at a particular location and dropping an individual off at a particular destination location). The map-based and/or navigation-based decisions may be provided for display at a client device associated with users of the transportation matching system or service, such as drivers, passengers, and/or the like. As long as the mapping data the transportation matching system is processing is valid and accurate, the executable functions and processes of the transportation matching system will generate accurate navigational calculations and routing/transport decisions.
In one example, any dataset of mapping data that a transportation matching system is currently processing to execute and display (e.g., at a client device) map-based calculations and decisions (e.g., to users of a transportation matching system) is referred to as a production dataset of mapping data, and such data is generally considered valid and accurate because the data was previously verified as accurate data. Over time, however, the conditions or layout of roads and roadways of a geographic region may change, requiring updates to be made. In the example of a transportation matching system, updates may be made to the production mapping dataset that the transportation matching system may be currently processing to function (e.g., evaluate and perform calculations associated with regional environments). For example, new roads may be built after a map has already been included into the production dataset, or existing routes represented within a mapping dataset may be temporarily or permanently changed (e.g., a road may have a temporary diversion). In other examples, new feature types of map data may be identified that correspond to the geographic region, and such new data may be useful to a transportation matching system (or other type of location-based application) when making calculations and decisions, such as the routing decisions described above. For example, new map features (i.e., distinctive attributes or aspects of the map), such as speed limits, building locations, parking locations, turn restrictions, road locations, roads, traffic signals, traffic signs, vehicle locations, etc., may be identified for inclusion into a production dataset of mapping data. As yet another example, the new map features may define “points of interest” (e.g., theme parks, museums, banks or filling stations, park boundaries, river boundaries, and the like) that were previously unknown. Any such map features may be identified for inclusion into a production dataset of mapping data. Other updates and types of data may be incorporated into the production dataset of mapping data as well. Accordingly, to the extent a transportation matching system is executing processes based on an existing baseline/production dataset of mapping data that does not include any of such changes or new features, calculations generated by the transportation matching system may not include sufficient detail about the present or future conditions of the geographic region, or may inaccurately reflect the present or future conditions of the geographic region. In such a scenario, the production dataset of mapping data may need to be updated. To facilitate such updates, a location-based application provider (e.g., a transportation mapping system, a mapping system provider) may obtain one or more new datasets of mapping data (e.g., from different sources) as a supplement to the existing production dataset of mapping data. For example, mapping data may be available from various sources, including open source mapping data sources (e.g., OpenStreetMap) and/or mapping data vendors (e.g., Esri® ArcGIS®). The new datasets of mapping data may be separate mapping datasets from the production dataset or new, updated versions of the production dataset. In some instances, different sources of mapping data may offer different types of data.
For example, one mapping data source may include data identifying building locations, but may not include speed limit data, while another mapping data source may include speed limit data but may not include building location data. Therefore, it may be advantageous to combine one or more datasets of mapping data from multiple sources into a production dataset of mapping data. However, including any dataset of mapping data into a production dataset of mapping data may present technical problems due to the accuracy of a given dataset of mapping data. For example, the accuracy of certain types of mapping data features may differ depending on the source of the mapping data and/or by location. In one specific example, certain features or types of data within a dataset of mapping data may be less accurate because the features were generated by a computer-implemented process or system based on simulation (e.g., based on surrounding features of the mapping dataset) and without any real-world verification. Such computer-generated map features may be of minimal value to the location-based application provider because the location-based application provider could create its own computer-generated process or system to generate such map features. In such a scenario, it may be unnecessary to obtain (e.g., purchase) the new dataset of mapping data for integration into the production dataset of mapping data. In another example, the map features of a new dataset of mapping data may include features or otherwise represent features in a manner that differs from how such features have typically appeared in the production dataset of mapping data. For example, if the new dataset of mapping data includes map feature types (e.g., speed limits) for a new geographic location (e.g., Bozeman, Mont.) that the production dataset of mapping data already includes for other, seemingly similar locations (e.g., Billings, Mont.), occurrences of the map feature types in the new location (i.e., from the new dataset of mapping data) may be compared against occurrences in the seemingly similar locations of the production dataset of mapping data. If the occurrences between the new dataset of mapping data and the production dataset of mapping data differ too much, the map feature types of the new dataset of mapping data may be determined to be inaccurate and therefore not integrated into the production dataset of mapping data. In any such scenario, the accuracy of the obtained data may be questioned. To solve these specific technical problems, among others, the present disclosure generally discloses systems and methods that may be used for analyzing a dataset of mapping data to verify or otherwise analyze the accuracy of the mapping data before integrating or otherwise including the dataset of mapping data into a production dataset of mapping data. As will be explained in further detail below, the system may generate a knowledge graph corresponding to the dataset of mapping data and a knowledge graph corresponding to a production dataset of mapping data. The system may calculate a randomness measure for the dataset of mapping data and/or the production dataset of mapping data.
Based on the calculated randomness measure(s), the system may, for example, determine that: 1) the dataset of mapping data is accurate and therefore may be acceptable for inclusion into the production dataset of mapping data; or 2) the dataset of mapping data is not accurate and therefore may be unacceptable for inclusion into the production dataset of mapping data. Upon determining that the dataset of mapping data is accurate, the dataset of mapping data may be incorporated into the production dataset of mapping data. Accordingly, within the context of a transportation matching system, the dataset of mapping data may be incorporated into the production dataset of mapping data the transportation matching system is currently processing to generate various map-based calculations and decisions. To provide a more specific example, FIGS. 1A-1B depict an illustration associated with mapping data 110 that may be used in a rideshare service scenario 100 to generate various navigation-based calculations and decisions for display at a client device associated with users of the rideshare service, such as drivers, passengers, and/or the like. Referring initially to FIG. 1B, the mapping data 110 may represent a complete dataset of mapping data or a subset of a dataset of mapping data used by a service or location-based application, such as a rideshare application, provided by a transportation network company or other navigation services provider. Referring to FIG. 1A, a transportation matching system may process the mapping data 110 to generate various navigation-based and/or map-based calculations and decisions for display to users, such as at a client device associated with a user in proximity of the depicted vehicle. For example, the transportation matching system (e.g., one or more systems or processes) may use the mapping data 110 to calculate various route and/or transport decisions and/or other transportation matching calculations and decisions, such as determining a pickup location for a passenger, generating a transport route from the pickup location to a specific destination, estimating an ETA, estimating an ETD, and the like. Referring again to FIG. 1B, the mapping data 110 includes road segments 32, 34, 36, 38, 40, 42 of a road. The road segments 34, 40 represent the eastern lane of a north/south road and the road segments 32, 38 represent the western lane of the north/south road. The road segments 40, 42 intersect to form the intersection 43 and the road segments 38, 42 intersect to form the intersection 46. The road segments 34, 36 intersect to form the intersection 44 and the road segments 32, 36 intersect to form the intersection 45. As further illustrated, the mapping data 110 includes separate intersections 43, 44, 45, 46 for each lane of the north/south road, although other implementations may use a single intersection for both lanes (e.g., the road segments 38, 40, 42 may form a single intersection that includes both lanes of the north/south road). Each road segment 32, 34, 36, 38, 40, 42 has a corresponding traffic direction 10, 12, 16, 18, 20, 26. The traffic directions 12, 20 for the road segments 34, 40 indicate a north traffic direction. The traffic directions 10, 18 for the road segments 32, 38 indicate a south traffic direction. The traffic direction 16 for the road segment 36 indicates a western traffic direction, representing a one-way road. The traffic direction 26 for the road segment 42 indicates an eastern traffic direction, representing a one-way road.
The mapping data 110 also includes stop signs 14, 22, 24, 28, 30 for the road segments 32, 34, 38, 42, 40 at the intersections 45, 44, 46, 43. Such data may be used to update routing, ETA, or ETD information, as, for example, stop signs can slow the average speed of traffic along the road segments 32, 34, 38, 42, 40 and thereby increase an ETA for a ride in transit or an ETD for a waiting rider. FIG. 2 depicts mapping data 200 which may be received or otherwise obtained from a map data provider to supplement the mapping data 110 (e.g., a production dataset of mapping data). Referring to the transportation matching example illustrated in FIG. 1B, the rideshare service provider may obtain the mapping data 200 to add new features to the mapping data 110 to cause a transportation matching system, routing system, navigation system, and/or the like to generate improved navigation calculations and decisions and/or map-based calculations and decisions. The mapping data 200 may include certain types of mapping features that are more accurate than the mapping data 110 or are not included within the mapping data 110. Additionally, in some embodiments, the mapping data 200 may include certain types of mapping features that are less accurate than those in the mapping data 110. For example, the mapping data 200 includes road segments 70, 72, 74, 76, 78 and traffic directions 50, 52, 54, 56, 58, 62 similar to those included in the mapping data 110. Also, although the mapping data 200 includes stop signs 60, 64 corresponding to the stop signs 24, 30 of the mapping data 110, the mapping data 200 does not include corresponding data for the stop signs 14, 22, 28. The mapping data 200 may therefore include less accurate stop sign data, which should not be incorporated into the dataset of mapping data 110. Although the dataset of mapping data 110 includes more accurate stop sign data, the dataset of mapping data 110 does not include the turn restrictions 66, 68 included in the dataset of mapping data 200, which indicate no right turn from the road segment 76 at the intersection 83 and no left turn from the road segment 78 at the intersection 82 (e.g., because the road segment 80 is a one-way road with a traffic direction toward the intersections 82, 83). The mapping data 200 may similarly include additional turn restrictions outside of the depicted subset, such as turn restrictions not associated with one-way roads that may be more difficult to systematically determine using a computer process or system. The disclosed system may validate these mapping features and, if determined accurate or unique, incorporate the mapping features (i.e., the additional turn restrictions) into the mapping data 110 for production use. FIG. 3 depicts a system 300 for performing map data validation according to an exemplary embodiment of the present disclosure. The system 300 includes a map data provider 302, a map database 310, and a server 316. The map database 310 includes historical map data 312 and a production map dataset 314. The production map dataset 314 may represent mapping data used in a production setting, such as in a routing, mapping, ETA, or ETD process. The production map dataset 314 may represent a combination of datasets of mapping data, obtained from multiple mapping sources, and combined into a single dataset of mapping data 314 that includes the most accurate map features from each of the combined datasets. The production map dataset 314 may be updated based on received mapping data at regular intervals (e.g., every quarter, month, or week).
In certain implementations, mapping data within the production map dataset 314 may be validated on a regular basis (e.g., a nightly basis). The historical map dataset 312 may include other, previously-received mapping data (e.g., previous versions of the production map dataset 314, previous versions of the historical map dataset 312, and/or other map datasets received from other mapping data sources). As explained further below, the historical map dataset 312 may be used to validate new map datasets 304 received from map data providers 302. In one specific example, the historical map dataset 312 may represent multiple datasets, which may be stored in separate databases accessible to the server 316. The map database 310 is communicatively coupled to the server 316, which may access the historical map data 312 and the production map dataset 314 via a communications network connection (e.g., the Internet, a local area connection, wide area network, private network, cloud, etc.). The map data provider 302 includes a new map dataset 304 that includes map features 306, 308. In one example, the new map dataset 304 may include mapping data, such as the mapping data 200, that includes certain map features 306, 308 for inclusion within the production map dataset 314 (e.g., because the map features 306, 308 are not present in the production map dataset 314 or are more accurate than the corresponding features of the production map dataset 314). The map data provider 302 may correspond to an open source map data provider, a map data vendor, and/or any other source of map datasets. The map data provider 302 is communicatively coupled to the server 316 and may provide the new map dataset 304 to the server 316 over a network connection (e.g., the Internet, a local area connection, wide area network, private network, cloud, etc.). In certain implementations, the new map dataset 304 may be provided according to an application programming interface (API) (e.g., at regular intervals, such as monthly, weekly, daily, or hourly, or upon receiving an API request from the server 316). The map features 306, 308 may include one or more types of map features, such as road segments, traffic directions, intersections, stop sign locations, and turn restrictions as discussed above in connection with the mapping data 110, 200. Additionally or alternatively, the map features 306, 308 may include one or more of the following features: annotated shapes (e.g., buildings, parking lots), building type information, U-turn locations, toll road locations, road construction locations, road termini, traffic control equipment locations (e.g., traffic lights, signage, barriers), points of interest locations, transit stop locations, speed limits, road class information for road segments, road lengths for road segments, lane counts for road segments, bike lane counts for road segments, right turn lane locations, and left turn lane locations. Other types of map features may be apparent to one skilled in the art in light of the present disclosure, and such map features are expressly contemplated. The server 316 may be configured to receive the new map dataset 304 and to validate whether one or more of the map features 306, 308 are accurate enough for inclusion within the production map dataset 314. In particular, and as will be explained further below, the server 316 may generate a knowledge graph 318 of the new map dataset 304. The nodes of the knowledge graph 318 may correspond to the map features 306, 308 of the new map dataset 304.
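As a rough illustration of that graph-generation step (the full construction is described in connection with FIG. 9 and the method 900, not shown here), the following Python sketch emits one (node A, relationship type, node B) edge per feature attribute; the input layout and all names are assumptions for illustration, not the disclosure's own format.

def build_knowledge_graph(features):
    # features: list of dicts such as
    # {"id": "ID68", "type": "turn restriction",
    #  "restriction-type": "no left turn", "turn-restriction-for": "ID78"}.
    # Returns the edges of the graph as (node A, relationship type, node B) triples.
    edges = []
    for f in features:
        node = f["id"]
        edges.append((node, "is-a", f["type"]))      # terminal value node
        for key, value in f.items():
            if key not in ("id", "type"):
                edges.append((node, key, value))     # e.g., restriction-type
    return edges

graph = build_knowledge_graph([
    {"id": "ID68", "type": "turn restriction",
     "restriction-type": "no left turn", "turn-restriction-for": "ID78"},
    {"id": "ID78", "type": "road segment"},
])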
The server 316 may traverse the knowledge graph 318 and identify paths within the knowledge graph 318, which may be stored in the paths table 320. For example, the server 316 may traverse the knowledge graph 318 from nodes corresponding to map features 306, 308 of a particular feature type 324, 328, 334 (e.g., stop signs, turn restrictions, speed limits, building locations, parking lot locations). The server 316 may traverse the knowledge graph 318 and may generate a paths table 320 separately for each feature type 324, 328, 334. Based on the generated paths table 320, the server 316 may calculate a randomness measure 322, 326, 332 for the corresponding feature type 324, 328, 334. The randomness measure 322, 326, 332 may measure the randomness with which the corresponding feature type 324, 328, 334 occurs within the new map dataset 304. The randomness measure 322, 326, 332 may then be compared to a previous randomness measure 325, 330, 336 that also corresponds to the feature type 324, 328, 334. The previous randomness measure 325, 330, 336 may be calculated based on one or both of the historical map dataset 312 and the production map dataset 314. For example, the server 316 may similarly generate a knowledge graph 318 for the historical map dataset 312 and/or the production map dataset 314 and may generate a corresponding paths table 320 for each feature type 324, 328, 334. The corresponding paths table 320 may then be used to calculate the previous randomness measure 325, 330, 336. In certain implementations, or for certain feature types, the previous randomness measure 325, 330, 336 may be previously-calculated (e.g., during a previous evaluation of a different map dataset). In such instances, the previous randomness measure 325, 330, 336 may not be recalculated to evaluate the new map dataset 304 and may instead be accessed and compared to the calculated randomness measure 322, 326, 332. The server 316 also includes a processor 340 and a memory 338. The processor 340 and the memory 338 may implement one or more aspects of the server 316. For example, the memory 338 may store instructions which, when executed by the processor 340, may cause the processor 340 to perform one or more operational features of the server 316. Similarly, although not depicted, one or both of the map data provider 302 and the map database 310 may include a processor and a memory storing instructions which, when executed by the processor, cause the processor to implement one or more operational features of the map data provider 302 and the map database 310. FIG. 4 depicts a knowledge graph 400 according to an exemplary embodiment of the present disclosure. The knowledge graph 400 may be an implementation of the knowledge graph 318 generated by the server 316. For example, the knowledge graph 400 may represent a portion of the knowledge graph 318 generated by the server 316 in response to receiving a new map dataset 304 representing the mapping data 200. In particular, the knowledge graph 400 may represent a portion of the knowledge graph 318 corresponding to the road segments 76, 78, 80 and the intersections 82, 83 of the mapping data 200. Each of the map features 306, 308 of the mapping data 200 is represented as a node on the knowledge graph 400, represented by an ID number (e.g., ID68 of the node 468 corresponds to the turn restriction 68). In particular, the nodes 462, 466, 468, 476, 478, 480, 482, 483 represent, respectively, the traffic direction 62, the turn restriction 66, the turn restriction 68, the road segment 76, the road segment 78, the road segment 80, the intersection 82, and the intersection 83.
The knowledge graph 400 also includes other nodes 402, 404, 406, 408, 410, 412, 414, 416, 418, 420, 422, 424 that represent values corresponding to the nodes 462, 466, 468, 476, 478, 480, 482, 483. Such nodes 402, 404, 406, 408, 410, 412, 414, 416, 418, 420, 422, 424 may be considered as terminal nodes. In certain implementations, terminal nodes of the same value may be implemented by a single node. For example, nodes 406, 414, 422 may be represented by a single node including the value “Road Segment.” The nodes 402, 404, 406, 408, 410, 412, 414, 416, 418, 420, 422, 424, 462, 466, 468, 476, 478, 480, 482, 483 are connected by edges. The edges connect two nodes and may designate a relationship type between the nodes, as illustrated. The depicted relationship types and corresponding definitions are provided below in Table 1, although additional relationship types are possible depending on a desired implementation and the map features 306, 308 included in the new map dataset 304.

TABLE 1
Relationship Type: Definition
is-a: Identifies the type of map feature for the corresponding node
road-segment-type: Identifies a type of road segment (e.g., one-way road, private road, driveway, parking lot entrance). As depicted, two-way roads are not identified by a road-segment-type, but such implementations are possible according to the present disclosure.
intersection-for: Identifies a road segment for which an intersection map feature is an intersection
turn-restriction-for: Identifies a road segment for which a turn restriction map feature applies
restriction-type: Identifies the type of a corresponding turn restriction map feature (e.g., no left turn or no right turn)
traffic-direction-for: Identifies a road segment for which a traffic direction map feature applies
direction-type: Identifies the direction of traffic for a corresponding traffic direction map feature

By traversing the knowledge graph 400, the server 316 can ascertain information about the map features 306, 308 of the mapping data 200. For example, by traversing the edge connecting nodes 468, 404, it can be determined that ID68 is a turn restriction and by traversing the edge connecting nodes 468, 402, it can be determined that ID68 has a restriction type of “no left turn.” Combining each of the independently indicated pieces of information, and based on the relationship definitions of Table 1, it can be determined that ID68 corresponds to a turn restriction map feature with no left turns. In some examples, the system may extend beyond individual features to incorporate additional information. For example, by traversing the nodes 468, 482, 408, the server 316 can also determine that ID68 is on ID82, which is an intersection. By further traversing nodes 478, 406, the server 316 can also determine that the ID68 turn restriction is a turn restriction for ID78, which is a road segment. The insights determined by the system can extend across the knowledge graph 400 to other map features 306, 308 by traversing the knowledge graph 400 in such a manner. The knowledge graph 400 is depicted at a conceptual level as a two-dimensional arrangement of nodes and edges. In practice, the knowledge graph 400 may be stored in a text-based format that describes each of the edge connections between two nodes. For example, the edge connections may be stored according to a framework, such as the Resource Description Framework (RDF).
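To preview that text-based storage in runnable form, the following Python sketch holds each edge as a (node A, relationship type, node B) triple, using the ID68 facts discussed above, together with a small pattern-matching query helper; the helper and its names are illustrative, not an API from the disclosure.

TRIPLES = [
    ("ID68", "is-a", "turn restriction"),
    ("ID68", "restriction-type", "no left turn"),
    ("ID68", "is-on", "ID82"),
    ("ID68", "turn-restriction-for", "ID78"),
    ("ID82", "is-a", "intersection"),
    ("ID78", "is-a", "road segment"),
]

def query(subject=None, relation=None, obj=None):
    # Return every triple matching the given (possibly partial) pattern.
    return [t for t in TRIPLES
            if subject in (None, t[0])
            and relation in (None, t[1])
            and obj in (None, t[2])]

print(query(subject="ID68"))                       # everything known about ID68
print(query(relation="is-a", obj="intersection"))  # [('ID82', 'is-a', 'intersection')]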
In particular, the edges may be stored as triples identifying (i) a first node, (ii) a second node, and (iii) a relationship type between (e.g., connecting) the first node and the second node. Building on this example, Table 2 depicts the knowledge graph 400 stored as triples.

TABLE 2
Node A / Relationship Type / Node B
ID78 / is-a / road segment
ID82 / is-a / intersection
ID82 / intersection-for / ID78
ID68 / is-a / turn restriction
ID68 / restriction-type / no left turn
ID68 / is-on / ID82
ID68 / turn-restriction-for / ID78
ID76 / is-a / road segment
ID83 / is-a / intersection
ID83 / intersection-for / ID76
ID66 / is-a / turn restriction
ID66 / restriction-type / no right turn
ID66 / is-on / ID83
ID66 / turn-restriction-for / ID76
ID80 / is-a / road segment
ID80 / road-segment-type / one-way
ID62 / is-a / traffic direction
ID62 / direction-type / East
ID62 / traffic-direction-for / ID80
ID82 / intersection-for / ID80
ID83 / intersection-for / ID80

FIG. 5 depicts a method 500 according to an exemplary embodiment of the present disclosure. The method 500 may be implemented on a computer system, such as the system 300. For example and more specifically, the method 500 may be performed by the server 316 to evaluate the accuracy of a new map dataset 304, and its corresponding features, received from a map data provider 302. The method 500 may also be implemented by a set of instructions stored on a computer readable medium that, when executed by a processor, cause the computer system to perform the method. For example, all or part of the method 500 may be implemented by the processor 340 and the memory 338. Although the examples below are described with reference to the flowchart illustrated in FIG. 5, many other methods of performing the acts associated with FIG. 5 may be used. For example, the order of some of the blocks may be changed, certain blocks may be combined with other blocks, one or more of the blocks may be repeated, and some of the blocks described may be optional. As illustrated, the method 500 begins with the server 316 receiving a map dataset including map features (block 502). For example, the server 316 may receive a new map dataset 304 from the map data provider 302 containing map features 306, 308. As explained above, the new map dataset 304 may include map features 306, 308 for evaluation and inclusion within the production map dataset 314 (e.g., new or more accurate map features 306, 308). The server 316 may generate a knowledge graph 318 based on the map dataset (block 504). For example, the server 316 may generate a knowledge graph 318 based on the new map dataset 304, where each map feature 306, 308 of the new map dataset 304 represents a node of the knowledge graph 318, such as the knowledge graph 400. One exemplary implementation for generating the knowledge graph 318, 400 based on the new map dataset 304 is discussed in greater detail below in connection with FIG. 9 and the method 900. The server 316 may determine or otherwise identify specific nodes (referred to as “starting nodes”) of the knowledge graph 318, 400 corresponding to map features 306, 308 of a particular feature type and from which the system may initiate traversal of the knowledge graph (block 506). The particular feature type may represent a type of map feature 306, 308 for which analysis is desired. The particular feature type may be specified by a request, such as a request received from a user (e.g., received at the server 316). For example, while evaluating the new map dataset 304, a mapping data technician may notice that the new map dataset 304 includes more turn restrictions than the production datasets (e.g., while comparing the mapping data 200 to the mapping data 110).
The technician may then submit a request specifying the turn restriction feature type for further analysis. In additional or alternative implementations, the particular feature type may be specified as part of an analysis process (e.g., a standard new map dataset analysis process) that specifies one or more standard feature types for analysis. In such implementations, the particular feature type may be selected as one of the standard feature types. The starting nodes may be selected within the knowledge graph 318, 400 as the nodes corresponding to map features 306, 308 of the particular feature type. To identify the starting nodes, the server 316 may identify the nodes within the knowledge graph with an “is-a” relationship type. As explained above, the “is-a” relationship type identifies a map feature type for the corresponding node. The server 316 may review the knowledge graph 318, 400 nodes containing the particular feature type and may identify these nodes as the starting nodes. For example, if the particular feature type is turn restrictions, the server 316 may identify the nodes 404, 412 as containing a value indicating the desired feature type (e.g., “turn restriction”) and may identify these nodes 404, 412 as the starting nodes. FIG. 6A depicts a traversal state 600 of the knowledge graph 400, indicating the nodes 404, 412 as the starting nodes. Alternatively, the server 316 may identify the nodes with a corresponding “is-a” relationship type to the nodes containing the particular feature type as the starting nodes. In such implementations, where the particular feature type is turn restrictions, the nodes 466, 468 would be identified as the starting nodes, as these nodes have an “is-a” relationship with the nodes 404, 412 containing the particular feature type (e.g., “turn restriction”). Subsequent example discussions will assume that the nodes 404, 412 were identified as the starting nodes for consistency. The server 316 may traverse edges of the knowledge graph 318, 400 for each identified starting node to identify paths leading from the starting nodes (block 508). To traverse the knowledge graph 318, 400, the server 316 may identify each edge leading from the starting nodes to determine the nodes that share an edge with the starting nodes. This process may repeat across multiple levels, with the nodes identified at a previous level acting as the nodes under consideration for the following level. For example, FIGS. 6A-6E depict a traversal process of the knowledge graph 400, where bolded nodes and edges indicate traversed edges and nodes and non-bolded nodes and edges indicate non-traversed edges and nodes, as shown in the key 605. As previously discussed, where the particular feature type is turn restrictions, the nodes 404, 412 may be identified as the starting nodes, indicated by the bold outline in the traversal state 600 of FIG. 6A. At the first level of traversal, the server 316 may identify the edges leaving from the starting nodes 404, 412. As depicted in FIG. 6B and traversal state 610, the identified edges connect node 404 to node 468 and connect node 412 to node 466. At this stage, the connection between these nodes may be represented as “turn restriction is-a ID68” and may be considered a path within the knowledge graph 400. Similarly, the path “turn restriction is-a ID66” may also be identified as a path within the knowledge graph 400. The traversal state 610 corresponds to a single level of traversal of the knowledge graph 400 (e.g., a single level of edge connections originating from the starting nodes 404, 412, identifying nodes 466, 468 of depth one).
The server316may continue to traverse the knowledge graph400across multiple levels. At each level, the server316may add to consideration nodes that share an edge with nodes at the same depth as the previous level of traversal. For example,FIG.6Cdepicts a traversal state620corresponding to the second level of traversal of the knowledge graph400. To perform the second level of traversal, the server316may identify all edges connected to the nodes466,468that were added to consideration during the previous level of traversal (e.g., the nodes466,468of depth one). After the first level of traversal, the nodes466,468were added to consideration by virtue of their edge connections to the starting nodes404,412. At the second level of traversal, the server316may identify those nodes that share an edge with the nodes466,468. The server316may therefore identify the nodes402,478,482as sharing an edge with the node468and the nodes410,476,483as sharing an edge with the node466. The nodes402,410,476,478,482,483may then be considered of depth two. The nodes402,410,476,478,482,483of depth two may then be considered during the third level of traversal, depicted in the traversal state630ofFIG.6D. In the traversal state630, the server316has identified the nodes406,408,414,416,480as sharing an edge with at least one of the nodes402,410,476,478,482,483of depth two. When traversing the knowledge graph400, the server316may only identify the nodes that were not considered at a previous level of traversal (e.g., traversal states600,610,620). Therefore, although there are new edges connecting nodes478,482and nodes466,476at the third level of traversal, these nodes466,476,478,482may not be added to consideration, although the edges connecting them are considered traversed in the traversal state630to indicate that they have been considered. The server316may then repeat the traversal process for the fourth level of traversal, depicted in traversal state640ofFIG.6E. In the traversal state640, the nodes422,424,462have been added to consideration by virtue of their shared edges with the node480. Traversal may continue until a traversal completion threshold is met. For example, the traversal completion threshold may specify a number of levels to be traversed (e.g., three or four levels). The number of levels for traversal may be user-specified or may be a system parameter (e.g., a parameter of the server316). In other implementations, the traversal completion threshold may specify a maximum number of nodes or edges that can be in consideration. It should be appreciated that the number of nodes and edges, and therefore paths, in consideration may rise rapidly as the knowledge graph318,400is traversed for additional levels. Accordingly, the traversal completion threshold may be selected to balance system performance and analysis time against analysis robustness and complexity. In still further implementations, the traversal completion threshold may be designated, at least in part, as a certain type of ending node. For example, traversal may continue until the identified paths end at a certain type of feature (e.g., road segments), or in a certain location (e.g., Alameda County). Additionally or alternatively, during traversal the server316may only consider edges of a certain relationship type (e.g., is-a or turn-restriction-for), which may correspond to paths that tend to eventually terminate within the knowledge graph400.
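The multi-level traversal and the traversal completion thresholds discussed above might be sketched as follows; the level cap, node cap, and relationship-type filter are illustrative parameters rather than values taken from the disclosure:

```python
def traverse(triples, starting_nodes, max_levels=4, max_nodes=None,
             allowed_relations=None):
    """Level-by-level traversal in the style of FIGS. 6A-6E (a sketch only)."""
    seen = set(starting_nodes)
    frontier = set(starting_nodes)
    for _level in range(max_levels):
        next_frontier = set()
        for t in triples:
            # Optionally consider only edges of certain relationship types
            # (e.g., "is-a" or "turn-restriction-for").
            if allowed_relations is not None and t.relation not in allowed_relations:
                continue
            # Nodes already considered at a previous level are not re-added.
            if t.subject in frontier and t.object not in seen:
                next_frontier.add(t.object)
            elif t.object in frontier and t.subject not in seen:
                next_frontier.add(t.subject)
        seen |= next_frontier
        frontier = next_frontier
        # Completion thresholds: no new nodes to add, or too many nodes
        # already in consideration.
        if not frontier or (max_nodes is not None and len(seen) >= max_nodes):
            break
    return seen
```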
The specific traversal completion threshold and/or technique may be selected to include one or more of the above techniques in order to achieve desired analysis robustness and system performance. The server316may accordingly identify paths within the knowledge graph318,400based on the traversal process. These paths may be stored in the paths table320as the server316traverses the knowledge graph400. For example, each time a node is added to consideration during the traversal process, the server316may add a corresponding path or paths from the starting node404,412to the newly-added node. Continuing the previous example regarding the path connecting nodes404,468, when the node478is added to consideration in traversal state620by virtue of its shared edge with the node468, the path “turn restriction is-a ID68 turn-restriction-for ID78” may be added to the paths table320. In traversal state630, the node406is added to consideration by virtue of its shared edge with node478, which may be represented by the path “turn restriction is-a ID68 turn-restriction-for ID78 is-a road segment” added to the paths table320. In this way, the server316may build out a table of paths originating from the starting nodes404,412for each node added to consideration. The paths table800depicts paths802-840that may be generated during traversal of the knowledge graph400as depicted inFIGS.6A-6E. In certain implementations, rather than recording the entire path802-840with each intermediate node ID, the paths may record only the starting and ending nodes. For example, path840may be added to the table as “Turn Restriction→One-Way,” with the “→” representing the internal path. Such implementations are discussed further below in connection with the filtered paths table870. In certain implementations, while traversing the knowledge graph400, the server316may be configured to exclude edges of a certain relationship type and/or nodes of a certain type. For example,FIG.7depicts a knowledge graph traversal700that excludes relationship types of “restriction-type,” “turn-restriction-for,” and “traffic-direction-for.” Such implementations may reduce the number of nodes added to consideration at each level, which would correspondingly reduce the overall number of nodes and edges in consideration. For example, excluding these relationship types has resulted in the nodes402,410,462being excluded from consideration, which correspondingly reduces the number of paths as compared to the traversal state640. Similarly, excluding certain relationship types may keep irrelevant information from consideration. For example, a mapping technician providing the particular feature type of turn restrictions may be principally interested in determining whether the additional turn restrictions in the new map dataset304were generated by computer processes, which may suggest that they are less accurate or at least have not been verified in person (e.g., by visiting the relevant location). Such a technician may therefore be less interested in the specific type of restriction generated (e.g., no left turn or no right turn) and may instead be primarily interested in the related map features. Therefore, excluding the relationship type “restriction-type” may not exclude relevant information for the purposes of the desired analysis. Similarly, turn restrictions may almost always be located on road segments because turn restrictions relate to operation of a vehicle, unlike, e.g., building locations.
Therefore, the relationship type “turn-restriction-for” may, in certain instances, be excluded with minimal loss of analytical accuracy. Lastly, if it is desired to focus analysis on certain types of roads or intersections, the relationship type “traffic-direction-for” may be excluded for the purposes of speeding up the analysis without excluding analytically relevant map features306,308because traffic directions are not under consideration and are therefore not analytically relevant. Similar heuristics may be used in certain implementations to determine nodes of certain types that may be omitted. These exclusions may also differ for each level of traversal of the knowledge graph400. Additionally or alternatively, the server316may filter paths in the paths table320,800after identification of paths during traversal of the knowledge graph318,400. For example, after creating the paths table800, the server316may filter paths from the paths table800that do not end at a terminal node (e.g., a node for which no further edges remain, or a node containing a value). As can be seen in the traversal state640, such terminal nodes include nodes402,406,408,410,416,422,424. Paths ending at terminal nodes may improve analysis because the terminal nodes are more likely to define or provide further information on a node connected within the path (e.g., the node424provides additional information on the type of road segment that corresponds to node480).FIG.8Bdepicts a filtered paths table850corresponding to the paths table800after the paths802-810,814,822,826,828,830,836that do not end in a terminal node were filtered out. Other filtration techniques are also possible, including removing paths that are too short (e.g., include fewer than three nodes), paths that are too long (e.g., include more than six nodes), and paths that do not end in a value. Alternatively, the paths table320,800may be filtered to only include certain types of paths. For example,FIG.8Cdepicts a filtered paths table870generated based on a new map dataset304including road segment type information on One-Way, Two-Way, and Parking Lot Entrance road segments. The filtered paths table870has been filtered to only include paths850-864that end at a node identifying a road segment type. For simplicity, the internal portions of these paths are represented by a “→” character. In particular, while generating the filtered paths table870, the server316may remove internal node IDs. For example, the path840in the paths table800is initially represented as “Turn Restriction is-a ID66 is-on ID83 intersection-for ID80 road-segment-type One-Way.” The server316may remove the internal node identifiers and substitute them with a predicate character, such as “.” or “#,” or any other character to indicate removed node IDs. The path may then become “Turn Restriction is-a.is-on.intersection-for.road-segment-type One-Way.” Accordingly, the “→” character may represent the intermediate portions of the path (e.g., “is-a.is-on.intersection-for.road-segment-type”). Referring again toFIG.5, the server316may calculate a randomness measure322,326,332for the particular feature type based on the paths identified while traversing the knowledge graph318,400(block510).
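Before turning to the randomness measure itself, the terminal-node and length filtering, together with the internal-node compression described above, might be sketched as follows (paths are assumed to alternate node values/IDs and relationship types, which is an assumed encoding rather than a requirement of the disclosure):

```python
def filter_paths(paths, terminal_nodes, min_nodes=3, max_nodes=6):
    # Keep only paths that end at a terminal node and contain an acceptable
    # number of nodes (the bounds mirror the three-to-six-node example).
    kept = []
    for path in paths:
        nodes = path[0::2]  # entries alternate node, relation, node, ...
        if nodes[-1] in terminal_nodes and min_nodes <= len(nodes) <= max_nodes:
            kept.append(path)
    return kept

def compress_path(path):
    # Drop internal node IDs, keeping the starting value, the chain of
    # relationship types joined by ".", and the ending value.
    relations = path[1:-1:2]
    return f"{path[0]} {'.'.join(relations)} {path[-1]}"

path_840 = ["Turn Restriction", "is-a", "ID66", "is-on", "ID83",
            "intersection-for", "ID80", "road-segment-type", "One-Way"]
print(compress_path(path_840))
# Turn Restriction is-a.is-on.intersection-for.road-segment-type One-Way
```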
Continuing the rideshare example discussed above, the randomness measure322,326,332may be calculated to see how the occurrences of map features306,308of the particular feature type in the new map dataset304compare to occurrences in, e.g., the historical map dataset312and the production map dataset314, thereby testing the accuracy of the particular feature type in the new map dataset304. The server316may calculate a randomness measure322,326,332between each of the starting nodes404,412and the nodes at the end of each of the paths within the paths table320,800and/or the filtered paths table850,870. The randomness measure322,326,332may indicate a level of randomness between the particular feature type and one or more additional map features306,308and/or feature types within the new map dataset304. As an example, if the server316completed the traversal of the knowledge graph318,400for the particular feature type of turn restrictions with the filtered paths table870, the randomness measure322,326,332may measure the randomness between turn restrictions within the new map dataset304and various road segment types within the new map dataset304. Stated differently, the randomness measure322,326,332may quantify how predictable or unpredictable occurrences of a given particular feature type will be within the new map dataset. Accordingly, in the turn restriction example, a high randomness measure322,326,332may indicate that occurrences of turn restrictions within the new map dataset304cannot be easily predicted, and therefore, that the presence of a turn restriction in the new map dataset304is a random occurrence. In such a scenario, it may be determined that the new map dataset304was not generated by a computer process (which would likely be based on certain nearby map features306,308and therefore be less random and more predictable), but rather was generated based on random, real-world verification of the turn restrictions. In certain implementations, the randomness measure may be calculated as the information entropy between the starting node404,412and the ending node of each path802-840within the paths table320,800and/or filtered paths table850,870. In particular, the randomness measure322,326,332(RM) may be calculated as: RM = −Σi Pi ln Pi, where i represents each type of ending node (e.g., One-Way, Two-Way, Parking Lot Entrance in the filtered paths table870) and where Pi represents the proportion of paths that end in each corresponding type of ending node. For example, applying the above formulation of the randomness measure322,326,332to the paths850-864of the filtered paths table870would result in a randomness measure of: RM = −[P(One-Way)*ln P(One-Way) + P(Two-Way)*ln P(Two-Way) + P(Parking Lot)*ln P(Parking Lot)] = −[5/8*ln(5/8) + 2/8*ln(2/8) + 1/8*ln(1/8)] ≅ 0.9. For simplicity, the above example calculation was demonstrated using the comparatively smaller filtered paths table870. However, it should be noted that the calculation can be similarly performed using more complicated paths tables, including, e.g., the paths table800, so long as a separate probability can be determined for each ending node in the included paths. In practice, for example, a graph traversal may have, e.g., hundreds or thousands of starting nodes404,412and hundreds, thousands, or millions of paths in the paths table320,800. The randomness measure322,326,332may be calculated across all of these paths, or across a filtered subset of these paths. In further implementations, the randomness measure322,326,332may be calculated separately for each type of end node.
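A short sketch of this entropy formulation, reproducing the example figures above as well as the per-end-node-type variant discussed next, might read (the list of ending types is a hypothetical input derived from the filtered paths table870):

```python
import math
from collections import Counter

def randomness_measure(ending_types):
    # RM = -sum_i Pi ln Pi over the types of ending nodes.
    counts = Counter(ending_types)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def per_type_randomness(ending_types):
    # Per-type variant: -Pi ln Pi computed separately for each ending type.
    counts = Counter(ending_types)
    total = sum(counts.values())
    return {t: -(c / total) * math.log(c / total) for t, c in counts.items()}

endings = ["One-Way"] * 5 + ["Two-Way"] * 2 + ["Parking Lot Entrance"]
print(round(randomness_measure(endings), 1))  # 0.9
print({t: round(v, 2) for t, v in per_type_randomness(endings).items()})
# {'One-Way': 0.29, 'Two-Way': 0.35, 'Parking Lot Entrance': 0.26}
```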
For example, for the filtered paths table870, the randomness measure322,326,332may be calculated separately for one-way roads, two-way roads, and parking lot entrances. In such implementations, the randomness measures322,326,332would be: RM(One-Way) = −P(One-Way)*ln P(One-Way) = −(5/8)*ln(5/8) ≈ 0.3; RM(Two-Way) = −P(Two-Way)*ln P(Two-Way) = −(2/8)*ln(2/8) ≈ 0.35; RM(Parking Lot) = −P(Parking Lot)*ln P(Parking Lot) = −(1/8)*ln(1/8) ≈ 0.26. In such implementations, the randomness measure322,326,332may be calculated separately for each type of path connecting the end nodes. For example, as explained above, the “→” character in the filtered paths table870may represent different intermediate portions of the depicted paths. A separate randomness measure322,326,332may therefore be calculated for each type of intermediate path. Additionally or alternatively, the types of intermediate paths may be grouped (e.g., according to the types of relationships contained in the intermediate paths), and a separate randomness measure322,326,332may be calculated for each group of intermediate paths. In still further implementations, the randomness measure322,326,332may be normalized. For example, where a single randomness measure322,326,332is calculated for all types of ending node, the randomness measure322,326,332may be normalized by the number of types of ending nodes for which the randomness measure322,326,332is calculated. The randomness measure322,326,332may be additionally or alternatively implemented according to formulations other than the above information entropy formulation, such as formulations utilizing one or more of statistical dispersion measures of the end nodes of the paths, a diversity index of the end nodes of the paths, and mutual information measures between the starting nodes and the end nodes of the paths. In light of the above disclosure, one skilled in the art may recognize additional tests that may be used to measure the randomness of paths802-840identified during traversal of the knowledge graph318,400, and such tests are expressly contemplated by the present disclosure. The server316may then compare the randomness measure322,326,332to a previous randomness measure325,330,336corresponding to the particular feature type (block512). For example, the map database310may store previous randomness measures325,330,336corresponding to the randomness measure322,326,332. For example, the server316may have previously calculated the previous randomness measure325,330,336for the particular feature type based on one or both of the historical map dataset312and the production map dataset314. The previous randomness measure325,330,336may be stored within the map database310when the particular feature type is part of a standard set of analyses for new map datasets304. For example, where a predefined list of feature types324,328,334is analyzed for incoming new map datasets, the previous randomness measures325,330,336may be calculated ahead of time and stored along with the historical map dataset312and the production map dataset314to expedite analysis of the new map dataset304. In other implementations, the previous randomness measure325,330,336may need to be calculated by the server316while evaluating the new map dataset304. For example, the server316may calculate the previous randomness measure325,330,336before or after calculating the randomness measure322,326,332of the feature type.
For accurate comparisons between the randomness measure322,326,332and the previous randomness measure325,330,336, the server316may use the same formulation when calculating the previous randomness measure325,330,336as the randomness measure322,326,332(e.g., the information entropy formulation and/or other formulations discussed above). This may be true both when the previous randomness measure325,330,336is calculated ahead of time and when the previous randomness measure325,330,336is calculated while evaluating the new map dataset304. Once the previous randomness measure325,330,336is obtained, the randomness measure322,326,332may be compared to the previous randomness measure325,330,336. This comparison may determine whether the randomness measure322,326,332is larger or smaller than the previous randomness measure325,330,336. Further examples of this comparison are discussed below in connection withFIGS.10A and10B. In implementations where the randomness measure322,326,332is calculated separately for each type of ending node, the previous randomness measure325,330,336may also be calculated and compared separately for each type of ending node. For example, the one-way randomness measure322,326,332in the example above may be compared with a one-way previous randomness measure325,330,336that was similarly calculated based on the proportion of paths from turn restrictions to one-way roads in the historical map dataset312and/or the production map dataset314. The method500was discussed above in connection with receiving and traversing a knowledge graph400for a single particular feature type. In practice, however, it may be desirable to analyze more than one feature type324,328,334of a new map dataset304. In such implementations, all or part of the method500may be repeated for each feature type324,328,334requiring analysis. For example, the server316may receive and convert the new map dataset304into a knowledge graph (blocks502,504) for analysis of the first feature type324and may proceed with analyzing the feature type324(blocks506-512) as discussed above. For subsequent feature types328,334, the server316may proceed directly to analyzing the feature types328,334(blocks506-512) based on the previously-generated knowledge graph318,400. In other implementations (e.g., where nodes of a certain feature type or where edges of a certain relation type are excluded from the knowledge graph318,400), the server316may generate a new knowledge graph318,400for each analyzed feature type324,328,334. In still further implementations, the server316may analyze multiple feature types at the same time. For example, the method500may be performed to analyze both turn restrictions and one-way roads and to determine relationships between the feature types. FIG.9depicts an exemplary method900for converting the new map dataset304into a knowledge graph318,400. The method900may be implemented on a computer system, such as the system300. For example, the method900may be implemented by the server316to generate a knowledge graph318,400based on the new map dataset304. The method900may also be implemented by a set of instructions stored on a computer readable medium that, when executed by a processor, cause the computer system to perform the method. For example, all or part of the method900may be implemented by the processor340and the memory338. The server316may identify the map features306,308within the received new map dataset304(block902).
For example, the new map dataset304may be received in a file format (e.g., a geographic information system (GIS) file format) that includes data regarding each map feature306,308and a corresponding location or region. The server316may process the new map dataset304based on the received format to identify each map feature306,308contained within the new map dataset304. The server316may identify relationships between one or more of the map features (block904). For example, the server316may, for each map feature306,308, locate a map feature identifier within the new map dataset304that identifies a map feature type of the map feature306,308and may thereby determine an “is-a” relationship for the map feature306,308that includes the map feature identifier. In certain implementations, the server316may identify the relationships based on one or more common attributes (e.g., similar locations) between map features306,308. For example, the server316may also determine which map features306,308are located near one another (e.g., based on location information for the map features306,308) to determine “is-on,” “intersection-for,” and “turn-restriction-for” relationships. In such an example, for the mapping data200, the server316may determine that the turn restriction68overlaps locations with the road segment78and may therefore determine that the turn restriction68has a “turn-restriction-for” relationship with the road segment78. In further implementations, the server316may identify relationships based on indicators other than common shared attributes. The new map dataset304may additionally or alternatively indicate one or more relationships between map features306,308. For example, the new map dataset304may indicate that the intersection82is an intersection between road segments78,80. The server316may generate a node in the knowledge graph318,400for each map feature (block906). For example, after identifying the map features of the mapping data200, the server316may generate the nodes462,466,468,476,478,480,482,483corresponding to these map features. In certain implementations, to generate the nodes, the server316may assign an identifier to each node that identifies the corresponding map feature. For example, as depicted inFIG.4, each of the nodes462,466,468,476,478,480,482,483corresponding to map features may include an identification (ID) number for the corresponding map feature (e.g., node468with identifier ID68 corresponds to the turn restriction68). In implementations where the new map dataset304includes feature ID numbers, these feature ID numbers may be used as the node numbers within the knowledge graph318,400. In other implementations, the server316may generate a new ID number for the nodes462,466,468,476,478,480,482,483corresponding to map features. The server316may generate an edge between nodes for each relationship (block908). For example, the server316may, for each relationship identified at block904, identify the two nodes462,466,468,476,478,480,482,483of the knowledge graph318,400corresponding to the relationship and may generate an edge between the two nodes462,466,468,476,478,480,482,483. In certain implementations, the generated edge may identify the type of relationship between the two nodes462,466,468,476,478,480,482,483.
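As one illustrative sketch of blocks902-908, assuming the identified features arrive as a mapping of feature IDs to feature types and the identified relationships as (source, relation, target) tuples (both assumed input shapes, not formats specified by the disclosure):

```python
def build_knowledge_graph(feature_types, relationships):
    # One node (with an "is-a" edge recording its feature type) per map
    # feature, and one triple per identified relationship.
    graph = []
    for feature_id, feature_type in feature_types.items():
        graph.append((f"ID{feature_id}", "is-a", feature_type))
    for source, relation, target in relationships:
        graph.append((f"ID{source}", relation, f"ID{target}"))
    return graph

graph = build_knowledge_graph(
    {68: "turn restriction", 78: "road segment"},
    [(68, "turn-restriction-for", 78)],
)
# [('ID68', 'is-a', 'turn restriction'),
#  ('ID78', 'is-a', 'road segment'),
#  ('ID68', 'turn-restriction-for', 'ID78')]
```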
For example, after determining that the turn restriction68is a “turn-restriction-for” the road segment78, the server316may identify the nodes468,478corresponding to the turn restriction68and the road segment78and may generate an edge connecting the nodes468,478with a relationship type of “turn-restriction-for.” In generating the edge, the server316may generate a triple identifying the connected nodes and the relationship type between the connected nodes. The server316may then add the triple to the knowledge graph318,400. The method900is discussed above in connection with examples of generating the knowledge graph400based on the mapping data200. It should be understood, however, that the method900may be generalized to other received mapping data and new map datasets. Additionally, the discussions of the method900above refer to generating nodes for “each” map feature306,308and edges for “each” relationship. In practice, similar results may be achieved by creating nodes for a subset of the map features and/or a subset of the identified relationships. For example, certain map features306,308and/or relationship types may be excluded from the knowledge graph318,400and therefore from subsequent analysis in the method500to improve processing times, as discussed above. Also, although the examples above are described with reference to the flowchart illustrated inFIG.9, many other methods of performing the acts associated withFIG.9may be used. In particular, the order of some of the blocks may be changed, certain blocks may be combined with other blocks, one or more of the blocks may be repeated, and some of the blocks described may be optional. For example, in certain implementations where the knowledge graph318is stored as triples, the server316may combine blocks906and908. In such implementations, after identifying a relationship between two nodes, the server316may append the identified relationship as a triple to the knowledge graph318that identifies both nodes and the relationship (i.e., an edge between the nodes). FIGS.10A and10Bdepict exemplary methods1000,1010for processing the randomness measure322,326,332. The methods1000,1010may be implemented on a computer system, such as the system300. For example, the methods1000,1010may be implemented by the server316to analyze the randomness measure322,326,332and process the new map dataset304in light of the analysis. The methods1000,1010may also be implemented by a set of instructions stored on a computer readable medium that, when executed by a processor, cause the computer system to perform the method. For example, all or part of the methods1000,1010may be implemented by the processor340and the memory338. The method1000begins with the server316determining whether the randomness measure322,326,332exceeds the previous randomness measure325,330,336by a predetermined threshold (block1002). The predetermined threshold may be a fixed amount (e.g., 0.05, 0.1) or a percentage of the previous randomness measure325,330,336(e.g., 10%, 20%). In certain implementations, the server316may calculate the absolute value of the difference between the randomness measure322,326,332and the previous randomness measure325,330,336. The server316may then compare the absolute value of the difference to the predetermined threshold to determine whether the absolute value of the difference exceeds the predetermined threshold. Additionally, the predetermined threshold may vary on a per-feature basis.
For example, the threshold for map features306,308deemed more important (e.g., turn restrictions) may be smaller than the predetermined threshold for map features306,308deemed less important (e.g., parking lot locations). In such implementations, the more important map features may be held to a higher accuracy standard than less important map features. If the randomness measure322,326,332exceeds the previous randomness measure325,330,336by the predetermined threshold, the server316may determine that the corresponding feature type324,328,334occurs more randomly within the new map dataset304than within the historical map dataset312and/or the production map dataset314. Conversely, if the randomness measure322,326,332falls below the previous randomness measure325,330,336by the predetermined threshold, the server316may determine that the corresponding feature type324,328,334occurs less randomly (e.g., more predictably) within the new map dataset304. For example, in the filtered paths table870, a majority of the turn restrictions result in paths ending at a one-way street, resulting in a randomness measure322,326,332of 0.9 in the formulation discussed above. If the production map dataset314includes turn restrictions that result in a previous randomness measure325,330,336of 1.5, then the difference between the randomness measure322,326,332and the previous randomness measure325,330,336is 0.9−1.5=−0.6, indicating that the new map dataset304includes turn restrictions that are less randomly distributed than those in the production map dataset314. In certain implementations, such a calculation may suggest that the corresponding feature type324,328,334is less accurate within the new map dataset304than within the map database310. The server316may therefore exclude map features of the feature type from the production map dataset314(block1006). If the randomness measure322,326,332does not exceed the previous randomness measure325,330,336by the predetermined threshold, such a calculation may suggest that the corresponding feature type324,328,334does not occur more randomly within the new map dataset304than within the map database310, which may suggest that the feature type324,328,334is similarly accurate or more accurate within the new map dataset304. The server316may therefore include map features of the feature type within the production map dataset314(block1004). Although not depicted, in certain implementations, the server316may also determine whether the difference between the randomness measure322,326,332and the previous randomness measure325,330,336is not less than a second threshold to ensure that the randomness measure322,326,332is not too small as compared to the previous randomness measure325,330,336. If the randomness measure322,326,332is too small, the system may determine that the map features of the feature type are computer-generated based on other map features, and are thereby highly correlated with the other map features. For example, if the turn restrictions66,68in the mapping data200were generated based on surrounding map features such as one-way streets, then the corresponding randomness measure322,326,332may be low in comparison to a previous randomness measure325,330,336of the production map dataset314(e.g., if the production map dataset314includes only verified turn restrictions that may also occur near two-way streets and parking lots). In certain implementations, because the turn restrictions66,68in the mapping data200have not been verified, it may be desirable to exclude the turn restriction feature type from the production map dataset314. In certain implementations, the server316may maintain ratings of different map data providers302.
For example, different map data providers302may provide more accurate data for specific feature types324,328,334(e.g., one map data provider302may provide accurate speed limit data while another map data provider302may provide accurate stop sign location information). The server316may therefore maintain ratings of the determined accuracy of map data providers302across multiple feature types324,328,334. These ratings may be updated or initialized based on the comparison of the randomness measure322,326,332with the previous randomness measure. For example, in addition to incorporating the map features306,308of the particular feature type into the production map dataset314at block1004, the server316may update the rating of the map data provider302to indicate accuracy with the feature type (e.g., may increase the associated accuracy score). In another example, in addition to excluding the map features306,308of the feature type from the production map dataset314at block1006, the server316may update the rating of the map data provider302to indicate inaccuracy with the feature type (e.g., may decrease the associated accuracy score). In certain implementations, these ratings may be aggregated into an overall map data provider accuracy score. For example, if an overall map data provider accuracy score for a map data provider302is too low (e.g., below a certain threshold), new map datasets304from the map data provider302may not be incorporated into the production map dataset314. In still further implementations (e.g., for open source map data providers302), the server316may distinguish between different editors of the map data. Open source map data may typically be edited by multiple editors, and data edited by a given editor may include an identifier of the editor. The server316may therefore be configured to analyze feature types324,328,334on a per-editor basis and may maintain accuracy scores (e.g., accuracy profiles) for editors across different feature types324,328,334. Such implementations may allow for filtering on a per-editor and/or per-feature basis, thereby reducing the necessity to analyze certain map features306,308within open source map data. For example, if the new map dataset304includes new stop sign map features from an editor with a low accuracy score for stop sign accuracy, the server316may exclude these map features from the production map dataset without having to calculate and analyze the randomness measure322,326,332of the new map dataset304. In certain implementations, map features306,308associated with editors with low accuracy scores may also be excluded from generation of the knowledge graph318,400. Unreliable editors may be identified based on a randomness measure322,326,332of feature types324,328,334of map features306,308that editors add to map data. For example, if one editor adds many map features306,308of the same feature type, the map features306,308may be analyzed using a randomness measure322,326,332. A low randomness measure322,326,332may indicate that the editor has generated the map features306,308using a computer process, which may be less valuable for purchase by a location-based application provider. Accordingly, an accuracy score of the editor may be reduced to indicate that features from that editor are less valuable and less reliable (e.g., less likely to have been verified in person). The method1010may begin with the server316determining whether the randomness measure322,326,332exceeds a minimum randomness threshold (block1012).
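The decision logic of methods1000and1010might be sketched together as follows; the threshold values are placeholders, since the disclosure leaves them implementation- and feature-type-specific:

```python
def accept_feature_type(rm, previous_rm, diff_threshold,
                        min_randomness=None, lower_gap=None):
    # Method 1010 (block 1012): too low an absolute randomness measure may
    # indicate sparse or computer-generated features.
    if min_randomness is not None and rm < min_randomness:
        return False
    # Method 1000 (block 1002): occurring much more randomly than in the
    # historical/production data suggests inaccuracy.
    if rm - previous_rm > diff_threshold:
        return False
    # Optional second threshold: a randomness measure far below the previous
    # measure may indicate highly correlated, unverified features.
    if lower_gap is not None and previous_rm - rm > lower_gap:
        return False
    return True

# Turn-restriction example above: RM of 0.9 against a previous RM of 1.5.
print(accept_feature_type(0.9, 1.5, diff_threshold=0.1, lower_gap=0.3))  # False
```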
The minimum randomness threshold may represent a minimum level of randomness for a given feature type324,328,334to be considered likely accurate. In the information entropy formulation discussed above in connection with block510, the minimum randomness threshold may be, e.g., 1, 0.5, 0.1, 0.01. If the randomness measure322,326,332exceeds the minimum randomness threshold, the server316may incorporate the features of the particular feature type within the new map dataset304into the production map dataset314(block1014). In such cases, the features of the particular feature type may be considered sufficiently randomly distributed throughout the new map dataset304. If the server316determines that the randomness measure322,326,332does not exceed the minimum randomness threshold, the server316may exclude map features of the particular feature type within the new map dataset304from the production map dataset314(block1016). If the randomness measure322,326,332is too low, the system may determine that the feature type324,328,334is not sufficiently randomly distributed throughout the new map dataset304. For example, if the new feature type324,328,334(e.g., turn restrictions) is sparsely distributed within the new map dataset304, it may frequently correlate with certain other map features (e.g., one-way streets). Such a situation would result in a low randomness measure322,326,332, indicating the sparse distribution and potential inaccuracy of the feature type324,328,334within the new map dataset304. In fact, the example randomness measure calculation discussed with block510resulted in a randomness measure322,326,332of approximately 0.9. If the minimum randomness threshold is set at 1 as discussed above, the server316may then detect, based on the randomness measure322,326,332, the sparsity (e.g., 8 occurrences) of turn restrictions identified in the filtered paths table870. In other implementations (e.g., other values for the minimum randomness threshold), a randomness measure of 0.9 may be deemed sufficient for inclusion within the production map dataset. Accordingly, if the server316determines that the randomness measure322,326,332exceeds the minimum randomness threshold, the server316may incorporate map features306,308of the particular feature type into the production map dataset314(block1014). In certain implementations, the method1010may be performed on its own (e.g., in lieu of the comparison at block512). For example, the method1010may be performed after calculating the randomness measure322,326,332at block510as depicted. The method1010may also be performed in combination with the comparison with the previous randomness measure325,330,336. For example, instead of incorporating the map features306,308of the particular feature type into the production map dataset314at block1014, the method1010may proceed to comparison with the previous randomness measure325,330,336(e.g., at blocks512,1002). Each of the thresholds discussed above in connection with methods1000,1010may differ by feature type324,328,334. For example, each feature type324,328,334may have its own minimum randomness threshold and minimum threshold for comparison of the randomness measure322,326,332with the previous randomness measure325,330,336. For example, stop signs may be more randomly distributed in mapping data than turn restrictions.
Therefore, the minimum randomness threshold for stop signs may be higher (e.g., 4.0) than the minimum randomness threshold for turn restrictions (e.g., 0.5). Similarly, the minimum threshold at block1002may be higher or lower for stop signs than for turn restrictions. Depending on the implementation, the minimum randomness thresholds may range from, e.g., 1/1,000,000 to 0.01, 0.1, 10, 100, or more. The thresholds for comparison with the previous randomness measure325,330,336may cover a similar range of values. As discussed above, the randomness measure322,326,332and previous randomness measure325,330,336may be calculated separately for each type of end node, or may be calculated based on all or more than one type of end node. Depending on how the calculation is performed, the above thresholds may also differ in value. For example, when the randomness measure322,326,332and previous randomness measure325,330,336are calculated separately for each type of end node, the overall values may be smaller, so the thresholds may also be smaller in value. However, when the randomness measure322,326,332and the previous randomness measure325,330,336are calculated for all types of end nodes, the overall values may be larger, so the thresholds may be larger in value. FIG.11illustrates an example computer system1100that may be utilized to implement one or more of the devices and/or components ofFIG.3, such as the server316and/or the map data provider302. In particular embodiments, one or more computer systems1100perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer systems1100provide the functionalities described or illustrated herein. In particular embodiments, software running on one or more computer systems1100performs one or more steps of one or more methods described or illustrated herein or provides the functionalities described or illustrated herein. Particular embodiments include one or more portions of one or more computer systems1100. Herein, a reference to a computer system may encompass a computing device, and vice versa, where appropriate. Moreover, a reference to a computer system may encompass one or more computer systems, where appropriate. This disclosure contemplates any suitable number of computer systems1100. This disclosure contemplates the computer system1100taking any suitable physical form. As an example and not by way of limitation, the computer system1100may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these. Where appropriate, the computer system1100may include one or more computer systems1100; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems1100may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein.
As an example and not by way of limitation, one or more computer systems1100may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems1100may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate. In particular embodiments, computer system1100includes a processor1106, memory1104, storage1108, an input/output (I/O) interface1110, and a communication interface1112. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement. In particular embodiments, the processor1106includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, the processor1106may retrieve (or fetch) the instructions from an internal register, an internal cache, memory1104, or storage1108; decode and execute the instructions; and then write one or more results to an internal register, internal cache, memory1104, or storage1108. In particular embodiments, the processor1106may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates the processor1106including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, the processor1106may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory1104or storage1108, and the instruction caches may speed up retrieval of those instructions by the processor1106. Data in the data caches may be copies of data in memory1104or storage1108that are to be operated on by computer instructions; the results of previous instructions executed by the processor1106that are accessible to subsequent instructions or for writing to memory1104or storage1108; or any other suitable data. The data caches may speed up read or write operations by the processor1106. The TLBs may speed up virtual-address translation for the processor1106. In particular embodiments, processor1106may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates the processor1106including any suitable number of any suitable internal registers, where appropriate. Where appropriate, the processor1106may include one or more arithmetic logic units (ALUs), be a multi-core processor, or include one or more processors1106. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor. In particular embodiments, the memory1104includes main memory for storing instructions for the processor1106to execute or data for the processor1106to operate on. As an example, and not by way of limitation, computer system1100may load instructions from storage1108or another source (such as another computer system1100) to the memory1104. The processor1106may then load the instructions from the memory1104to an internal register or internal cache. To execute the instructions, the processor1106may retrieve the instructions from the internal register or internal cache and decode them.
During or after execution of the instructions, the processor1106may write one or more results (which may be intermediate or final results) to the internal register or internal cache. The processor1106may then write one or more of those results to the memory1104. In particular embodiments, the processor1106executes only instructions in one or more internal registers or internal caches or in memory1104(as opposed to storage1108or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory1104(as opposed to storage1108or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple the processor1106to the memory1104. The bus may include one or more memory buses, as described in further detail below. In particular embodiments, one or more memory management units (MMUs) reside between the processor1106and memory1104and facilitate accesses to the memory1104requested by the processor1106. In particular embodiments, the memory1104includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. Memory1104may include one or more memories1104, where appropriate. Although this disclosure describes and illustrates particular memory implementations, this disclosure contemplates any suitable memory implementation. In particular embodiments, the storage1108includes mass storage for data or instructions. As an example and not by way of limitation, the storage1108may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. The storage1108may include removable or non-removable (or fixed) media, where appropriate. The storage1108may be internal or external to computer system1100, where appropriate. In particular embodiments, the storage1108is non-volatile, solid-state memory. In particular embodiments, the storage1108includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage1108taking any suitable physical form. The storage1108may include one or more storage control units facilitating communication between processor1106and storage1108, where appropriate. Where appropriate, the storage1108may include one or more storages1108. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage. In particular embodiments, the I/O Interface1110includes hardware, software, or both, providing one or more interfaces for communication between computer system1100and one or more I/O devices. The computer system1100may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system1100.
As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, screen, display panel, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. Where appropriate, the I/O Interface1110may include one or more device or software drivers enabling processor1106to drive one or more of these I/O devices. The I/O interface1110may include one or more I/O interfaces1110, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface or combination of I/O interfaces. In particular embodiments, communication interface1112includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system1100and one or more other computer systems1100or one or more networks1114. As an example and not by way of limitation, communication interface1112may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or any other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a Wi-Fi network. This disclosure contemplates any suitable network1114and any suitable communication interface1112for it. As an example and not by way of limitation, the network1114may include one or more of an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system1100may communicate with a wireless PAN (WPAN) (such as, for example, a Bluetooth® WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or any other suitable wireless network or a combination of two or more of these. Computer system1100may include any suitable communication interface1112for any of these networks, where appropriate. Communication interface1112may include one or more communication interfaces1112, where appropriate. Although this disclosure describes and illustrates a particular communication interface implementation, this disclosure contemplates any suitable communication interface implementation. The computer system1100may also include a bus. The bus may include hardware, software, or both and may communicatively couple the components of the computer system1100to each other. As an example and not by way of limitation, the bus may include an Accelerated Graphics Port (AGP) or any other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. The bus may include one or more buses, where appropriate.
Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect. Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other types of integrated circuits (ICs) (e.g., field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate. Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context. The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages. | 88,628 |
11859986 | The FIGURES of the drawings are not necessarily drawn to scale, as their dimensions can be varied considerably without departing from the scope of the present disclosure. DESCRIPTION OF EXAMPLE EMBODIMENTS OF THE DISCLOSURE Overview The demand for contactless delivery robots has been rising. However, many contactless delivery robots cannot meet the rising demand due to high cost and technical challenges. For example, many contactless delivery robots are designed for delivering a particular type of item and cannot be used to deliver different items. Therefore, improved technology for autonomous delivery is needed. An autonomous delivery system including a delivery assembly secured in an autonomous vehicle (AV) overcomes these problems. The system uses localization and navigation capabilities of the AV as well as certain features of the delivery assembly to provide a more advantageous autonomous delivery method. The AV can navigate to delivery destinations and control users' access to the delivery assembly by using its onboard sensors and onboard controller. For example, the onboard controller detects whether the AV has arrived at the destination and opens a door of the AV after the AV has arrived to allow access to the delivery assembly. The delivery assembly can include a user interface (UI) module that authenticates the user, allows the user to access one or more cubbies in the delivery assembly, and can generally help facilitate the delivery of one or more items to the user. After the user has collected one or more items from the one or more cubbies in the delivery assembly, the AV can close the door and continue to a next destination. The delivery assembly is removably secured in the AV and facilitates delivering items to users or picking up items from users by using the AV. In some embodiments, the delivery assembly includes the one or more cubbies and the UI module. The one or more cubbies contain the items within a secured space (e.g., during the AV's motion). Each of the one or more cubbies can have various configurations to fit different types of items. In addition, the one or more cubbies in the delivery assembly can include one or more features to help secure and protect the items. The UI module provides information about the delivery to the user and allows the user to provide input for authenticating the user to allow the user to access one or more cubbies in the delivery assembly. The autonomous delivery system leverages the autonomous features of the AV such as autonomous localization, navigation, and door control. Also, it can provide advantageous delivery service by using the delivery assembly. Further, the delivery assembly can be taken out of the AV so that the AV can still be used for other purposes (e.g., rideshare). By combining the AV and the delivery assembly, the high cost and technical challenges for autonomous delivery can be reduced or even avoided. Also, the users are better protected. Embodiments of the present disclosure provide a method for facilitating autonomous delivery using a delivery assembly transported by an AV. The method includes determining a location of the delivery assembly and, in response to determining the location of the delivery assembly, configuring a user interface to help facilitate the autonomous delivery. The user interface has at least a point of origination configuration and a customer user configuration (or more simply user configuration).
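As a minimal sketch of the location-dependent configuration just described, assuming the AV reports its current location and that known points of origination are available (names and data shapes here are hypothetical):

```python
def configure_user_interface(current_location, origination_locations):
    # At a point of origination, the UI guides a retail user loading items;
    # elsewhere, it guides a customer user retrieving items.
    if current_location in origination_locations:
        return "point_of_origination"
    return "customer_user"

print(configure_user_interface("warehouse-7", {"warehouse-7"}))   # point_of_origination
print(configure_user_interface("123 Main St", {"warehouse-7"}))  # customer_user
```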
The method can also include authenticating identification information of a user through the user interface. Based on the identification information of the user, the user can be allowed to access one or more cubbies of the delivery assembly. In some examples, one or more indicators on the user interface can inform the user that the user can access a specific cubby and the one or more indicators include light, text, sound, or some combination thereof. The method can also include determining that the user should access a first cubby before a second cubby and informing the user through the user interface that the first cubby is available to be accessed before the user has access to the second cubby. In some examples, the user interface is configured in the point of origination configuration to allow a retail user (e.g., a retailer or supplier of goods) to load items into one or more cubbies of the delivery assembly and an indicator on the user interface can inform the user to place one or more items into a specific cubby of the delivery assembly. In other examples, the user interface is configured in the customer user configuration to allow a customer user to unload items from one or more cubbies of the delivery assembly and an indicator on the user interface can inform the customer user to retrieve one or more items from a specific cubby of the delivery assembly. The method can also include determining that a user has approached the delivery assembly, requesting user authentication from the user, determining if the user is an authorized user, and unlocking at least one door of a cubby from the plurality of cubbies if the user is determined to be an authorized user. In some examples, the user interface includes a keypad and the user authentication is a keycode entered into the user interface using the keypad. In other examples, the user interface includes a scanner and the user authentication can occur when the user scans a barcode or quick response (QR) code on their mobile device (e.g., a smartphone, wearable, etc.). In some examples, the user authentication can occur before one or more doors of the AV are opened to allow the user to access the delivery assembly. As will be appreciated by one skilled in the art, aspects of the present disclosure, in particular aspects of autonomous delivery using a delivery assembly transported by an AV, described herein, may be embodied in various manners (e.g., as a method, a system, a computer program product, or a computer-readable storage medium). Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as an “engine,” a “circuit,” a “module,” or a “system.” Functions described in this disclosure may be implemented as an algorithm executed by one or more hardware processing units (e.g., one or more microprocessors) of one or more computers. In various embodiments, different steps and portions of the steps of each of the methods described herein may be performed by different processing units. Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable medium(s), preferably non-transitory, having computer-readable program code embodied (e.g., stored) thereon.
In various embodiments, such a computer program may, for example, be downloaded (updated) to the existing devices and systems (e.g., to the existing perception system devices or their controllers, etc.) or be stored upon manufacturing of these devices and systems. The following detailed description presents various descriptions of specific embodiments. However, the innovations described herein can be embodied in a multitude of different ways, for example, as defined and covered by the claims or select examples. In the following description, reference is made to the drawings where like reference numerals can indicate identical or functionally similar elements. It will be understood that elements illustrated in the drawings are not necessarily drawn to scale. Moreover, it will be understood that certain embodiments can include more elements than illustrated in a drawing or a subset of the elements illustrated in a drawing. Further, some embodiments can incorporate any suitable combination of features from two or more drawings. Other features and advantages of the disclosure will be apparent from the following description and the claims. As described herein, one aspect of the present technology may be the gathering and use of data available from various sources to improve quality and experience. The present disclosure contemplates that in some examples, this gathered data may include personal information. The present disclosure contemplates that the entities involved with such personal information respect and value privacy policies and practices. The following disclosure describes various illustrative embodiments and examples for implementing the features and functionality of the present disclosure. While particular components, arrangements, or features are described below in connection with various example embodiments, these are merely examples used to simplify the present disclosure and are not intended to be limiting. In the Specification, reference may be made to the spatial relationships between various components and to the spatial orientation of various aspects of components as depicted in the attached drawings. However, as will be recognized by those skilled in the art after a complete reading of the present disclosure, the devices, components, members, apparatuses, etc. described herein may be positioned in any desired orientation. Thus, the use of terms such as “above”, “below”, “upper”, “lower”, “top”, “bottom”, or other similar terms to describe a spatial relationship between various components or to describe the spatial orientation of aspects of such components, should be understood to describe a relative relationship between the components or a spatial orientation of aspects of such components, respectively, as the components described herein may be oriented in any desired direction. When used to describe a range of dimensions or other characteristics (e.g., time, pressure, temperature, length, width, etc.) of an element, operations, or conditions, the phrase “between X and Y” represents a range that includes X and Y. In addition, the terms “comprise,” “comprising,” “include,” “including,” “have,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a method, process, device, or system that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such method, process, device, or system.
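Before turning to the detailed description, the keypad and barcode/QR authentication step recited in the overview above can be pictured with a minimal, assumed sketch; the digest-based credential storage and every function name below are inventions for the example, not the claimed implementation:

import hashlib
import hmac


def _digest(value: str) -> str:
    # Store and compare digests rather than raw codes; illustrative only.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()


def authenticate(entered: str, expected_digest: str) -> bool:
    # True if a keypad keycode or scanned QR payload matches the credential
    # on file; hmac.compare_digest avoids timing side channels.
    return hmac.compare_digest(_digest(entered), expected_digest)


# Hypothetical usage: unlock the user's cubby doors only after authentication.
expected = _digest("1234")  # credential provisioned with the delivery
if authenticate("1234", expected):
    print("unlock assigned cubby doors")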
In the following detailed description, reference is made to the accompanying drawings that form a part hereof wherein like numerals designate like parts throughout, and in which is shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense. For the purposes of the present disclosure, the phrase “A and/or B” means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). Reference to “one embodiment” or “an embodiment” in the present disclosure means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” or “in an embodiment” are not necessarily all referring to the same embodiment. The appearances of the phrase “for example,” “in an example,” or “in some examples” are not necessarily all referring to the same example. The term “about” includes a plus or minus fifteen percent (±15%) variation. The systems, methods and devices of this disclosure each have several innovative aspects, no single one of which is solely responsible for all of the desirable attributes disclosed herein. Details of one or more implementations of the subject matter described in this Specification are set forth in the description below and the accompanying drawings. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present disclosure. Substantial flexibility is provided by an electronic device in that any suitable arrangements and configurations may be provided without departing from the teachings of the present disclosure. As used herein, the term “when” may be used to indicate the temporal nature of an event. For example, the phrase “event ‘A’ occurs when event ‘B’ occurs” is to be interpreted to mean that event A may occur before, during, or after the occurrence of event B, but is nonetheless associated with the occurrence of event B. For example, event A occurs when event B occurs if event A occurs in response to the occurrence of event B or in response to a signal indicating that event B has occurred, is occurring, or will occur. Example Autonomous Delivery System FIG.1shows an autonomous delivery environment100according to some embodiments of the present disclosure. The autonomous delivery environment100can include AVs102, a delivery assembly104, an online system106, a client device108, and a third-party device110. Each of the AVs102, the delivery assembly104, the online system106, the client device108, and/or the third-party device110can be in communication using network112. In addition, each of the AVs102, the delivery assembly104, the online system106, the client device108, and/or the third-party device110can be in communication with one or more network elements114, one or more servers116, and cloud services118using the network112. In other embodiments, the autonomous delivery environment100may include fewer, more, or different components.
For example, the autonomous delivery environment100may include a different number of AVs102with some AVs102including a delivery assembly104and some AVs102not including a delivery assembly104(not shown). A single AV is referred to herein as AV102, and multiple AVs are referred to collectively as AVs102. For purposes of simplicity and illustration,FIG.1shows one client device108and one third-party device110. In other embodiments, the autonomous delivery environment100includes multiple third-party devices or multiple client devices. In some embodiments, the autonomous delivery environment100includes one or more communication networks (e.g., network112) that support communications between some or all of the components in the autonomous delivery environment100. The network112may comprise any combination of local area and/or wide area networks, using both wired and/or wireless communication systems. In one embodiment, the network uses standard communications technologies and/or protocols. For example, the network112can include communication links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 3G, 4G, 5G, code division multiple access (CDMA), digital subscriber line (DSL), etc. Examples of networking protocols used for communicating via the network include multiprotocol label switching (MPLS), transmission control protocol/Internet protocol (TCP/IP), hypertext transport protocol (HTTP), simple mail transfer protocol (SMTP), and file transfer protocol (FTP). Data exchanged over the network112may be represented using any suitable format, such as hypertext markup language (HTML) or extensible markup language (XML). In some embodiments, all or some of the communication links of the network112may be encrypted using any suitable technique or techniques. The AV102is a vehicle that is capable of sensing and navigating its environment with little or no user input. The AV102may be a semi-autonomous or fully autonomous vehicle (e.g., a boat, an unmanned aerial vehicle, a driverless car, etc.). Additionally, or alternatively, the AV102may be a vehicle that switches between a semi-autonomous state and a fully autonomous state and thus, the AV may have attributes of both a semi-autonomous vehicle and a fully autonomous vehicle depending on the state of the vehicle. The AV102may include a throttle interface that controls an engine throttle, motor speed (e.g., rotational speed of electric motor), or any other movement-enabling mechanism, a brake interface that controls brakes of the AV (or any other movement-retarding mechanism), and a steering interface that controls steering of the AV (e.g., by changing the angle of wheels of the AV). The AV102may additionally or alternatively include interfaces for control of any other vehicle functions (e.g., windshield wipers, headlights, turn indicators, air conditioning, etc.). In some embodiments, an AV102includes an onboard sensor suite. The onboard sensor suite detects the surrounding environment of the AV102and generates sensor data describing the surrounding environment. The onboard sensor suite may include various types of sensors. In some embodiments, the onboard sensor suite includes a computer vision (“CV”) system, localization sensors, and driving sensors.
For example, the onboard sensor suite may include photodetectors, cameras, RADAR, sound navigation and ranging (SONAR), LIDAR, GPS, wheel speed sensors, inertial measurement units (IMUs), accelerometers, microphones, strain gauges, pressure monitors, barometers, thermometers, altimeters, ambient light sensors, etc. The sensors may be located in various positions in and around the AV102. In some embodiments, the onboard sensor suite may include one or more sensors for a delivery assembly104that is secured in the AV102. The delivery assembly104can help facilitate the delivery of items (e.g., prepared foods, groceries, packages, etc.) by the AV102. The delivery assembly104defines a space where the items can be stored in the AV102. The space may be a controlled environment. For example, access to space inside the delivery assembly104where items are stored may require authentication of the identity of a user. As another example, a physical condition (e.g., temperature, lighting, etc.) of the space is maintained at a desired level. The delivery assembly104may include features that help users (e.g., customers or personnel of a retail entity) load or unload items from the AV102. The delivery assembly104may support a UI that provides the users information regarding the loading or unloading process. The UI may also allow the users to interact with the delivery assembly104or the AV102during the loading or unloading process. The delivery assembly104may include features to protect the users during the loading or unloading process. The delivery assembly104may also include privacy features to protect the privacy of the user. The AV102also includes an onboard controller. The onboard controller controls operations and functionality of the AV102. In some embodiments where the AV102includes the delivery assembly104, the onboard controller may control some operations and functionality of the delivery assembly104. In other embodiments where the AV102includes the delivery assembly104, the operations and functionality of the delivery assembly104are separate from the onboard controller. In some embodiments, the onboard controller is a general-purpose computer, but may additionally or alternatively be any suitable computing device. The onboard controller is adapted for input/output (I/O) communication with other components of the AV102(e.g., the onboard sensor suite, a UI module of the delivery assembly, etc.) and external systems (e.g., the online system106). The onboard controller may be connected to the Internet via a wireless connection (e.g., via a cellular data connection). Additionally or alternatively, the onboard controller may be coupled to any number of wireless or wired communication systems. The onboard controller processes sensor data generated by the onboard sensor suite and/or other data (e.g., data received from the online system106) to determine the state of the AV102. Based upon the vehicle state and programmed instructions, the onboard controller modifies or controls behavior of the AV102. In some embodiments, the onboard controller implements an autonomous driving system (ADS) for controlling the AV102and processing sensor data from the onboard sensor suite and/or other sensors in order to determine the state of the AV102. Based upon the vehicle state and programmed instructions, the onboard controller modifies or controls driving behavior of the AV102. An AV102may also include a rechargeable battery that powers the AV102.
The battery may be a lithium-ion battery, a lithium polymer battery, a lead-acid battery, a nickel-metal hydride battery, a sodium nickel chloride (“zebra”) battery, a lithium-titanate battery, or another type of rechargeable battery. In some embodiments, the AV102is a hybrid electric vehicle that also includes an internal combustion engine for powering the AV102(e.g., when the battery has low charge). In some embodiments, the AV102includes multiple batteries. For example, the AV102can include a first battery used to power vehicle propulsion, and a second battery used to power the delivery assembly104and/or AV hardware (e.g., the onboard sensor suite and the onboard controller130). The AV102may further include components for charging the battery (e.g., a charge port configured to make an electrical connection between the battery and a charging station). The online system106manages delivery services using the AVs102. A delivery service is a delivery of one or more items from one location to another location. In some embodiments, a delivery service is a service for picking up an item from a location of a business (e.g., a grocery store, a distribution center, a warehouse, etc.) and delivering the item to a location of a customer of the business. In other embodiments, a delivery service is a service for picking up an item from a customer of the business and delivering the item to a location of the business (e.g., for purpose of returning the item). The online system106may select an AV102from a fleet of AVs102to perform a particular delivery service and instruct the selected AV102to autonomously drive to a particular location. The online system106sends a delivery request to the AV102. The delivery request includes information associated with the delivery service, information of a user requesting the delivery (e.g., location, identifying information, etc.), information of an item to be delivered (e.g., size, weight, or other attributes), etc. In some embodiments, the online system106may instruct a single AV102to perform multiple delivery services. For example, the online system106instructs the AV102to pick up items from one location and deliver the items to multiple locations, or vice versa. The online system106also manages maintenance tasks, such as charging and servicing of the AVs102. As shown inFIG.1, each of the AVs102communicates with the online system106. The AVs102and online system106may connect over a public network, such as the Internet. In some embodiments, the online system106may also provide the AV102(and particularly, the onboard controller130) with system backend functions. The online system106may include one or more switches, servers, databases, live advisors, or an automated voice response system (VRS). The online system106may include any or all of the aforementioned components, which may be coupled to one another via a wired or wireless local area network (LAN). The online system106may receive and transmit data via one or more appropriate devices and networks from and to the AV102, such as by wireless systems (e.g., 802.11x, general packet radio service (GPRS), and the like). A database at the online system106can store account information such as subscriber authentication information, vehicle identifiers, profile records, behavioral patterns, and other pertinent subscriber information. The online system106may also include a database of roads, routes, locations, etc. permitted for use by AV102.
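The delivery request is characterized above only by its contents (service information, user information, and item attributes such as size and weight). Purely as an assumed rendering of such a record, with every field name invented for illustration, it might be represented as:

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ItemInfo:
    description: str
    size_cm: Tuple[float, float, float]  # width, depth, height
    weight_kg: float


@dataclass
class DeliveryRequest:
    pickup_location: Tuple[float, float]   # latitude, longitude
    dropoff_location: Tuple[float, float]
    user_id: str                           # identifying information of the user
    items: List[ItemInfo] = field(default_factory=list)


request = DeliveryRequest(
    pickup_location=(37.78, -122.41),
    dropoff_location=(37.76, -122.43),
    user_id="user-123",
    items=[ItemInfo("groceries", (40.0, 30.0, 25.0), 6.5)],
)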
The online system106may communicate with the AV102to provide route guidance in response to a request received from the vehicle. For example, based upon information stored in a mapping system of the online system106, the online system106may determine the conditions of various roads or portions thereof. Autonomous vehicles, such as the AV102, may, in the course of determining a navigation route, receive instructions from the online system106regarding which roads or portions thereof, if any, are appropriate for use under certain circumstances, as described herein. Such instructions may be based in part on information received from the AV102or other autonomous vehicles regarding road conditions. Accordingly, the online system106may receive information regarding the roads/routes generally in real-time from one or more vehicles. The online system106communicates with the client device108. For example, the online system106receives delivery requests from the client device108. A delivery request is a request to deliver one or more items from a location to another location. The delivery request may include information of the items, information of the locations (e.g., store location, distribution center location, warehouse location, location of a customer, etc.), and so on. The online system106can provide information associated with the delivery request (e.g., information of the status of the delivery process) to the client device108. The client device108may be a device (e.g., a computer system) of a user of the online system106. The user may be an entity or an individual. In some embodiments, a user may be a customer of another user. In an embodiment, the client device108is an online system maintained by a business (e.g., a retail business, a package service business, etc.). The client device108may be an application provider communicating information describing applications for execution by the third-party device110or communicating data to the third-party device110for use by an application executing on the third-party device110. The third-party device110is one or more computing devices capable of receiving user input as well as transmitting and/or receiving data via the network. The third-party device110may be a device of an individual. The third-party device110communicates with the client device108to request delivery or return of items. For example, the third-party device110may send a delivery request to the client device108through an application executed on the third-party device110. The third-party device110may receive from the client device108information associated with the request, such as status of the delivery process. In one embodiment, the third-party device110is a conventional computer system, such as a desktop or a laptop computer. Alternatively, a third-party device110may be a device having computer functionality, such as a personal digital assistant (PDA), a mobile telephone, a smartphone, or another suitable device. A third-party device110is configured to communicate via the network. In one embodiment, a third-party device110executes an application allowing a user of the third-party device110to interact with the online system106. For example, a third-party device110executes a browser application to enable interaction between the third-party device110and the online system106via the network. 
In another embodiment, a third-party device110interacts with the online system106through an application programming interface (API) running on a native operating system of the third-party device110, such as IOS® or ANDROID™. Example Online System FIG.2is a block diagram illustrating the online system106according to some embodiments of the present disclosure. The online system106can include a UI server120, a vehicle manager122, a delivery manager124, and a database126. In alternative configurations, different or additional components may be included in the online system106. Further, functionality attributed to one component of the online system106may be accomplished by a different component included in the online system106or a different system (e.g., the onboard controller of an AV102). The UI server120is configured to communicate with third-party devices (e.g., the third-party device110) that provide a UI to users. For example, the UI server120may be a web server that provides a browser-based application to third-party devices, or the UI server120may be a mobile app server that interfaces with a mobile app installed on third-party devices. The UI server120enables the user to request a delivery by using an AV102. The vehicle manager122manages and communicates with a fleet of AVs (e.g., the AVs102). The vehicle manager122may assign AVs102to various tasks and direct the movements of the AVs102in the fleet. For example, the vehicle manager122assigns an AV102to perform a delivery service requested by a user through the UI server120. The user may be associated with the client device108. The vehicle manager122may instruct AVs102to drive to other locations while not servicing a user (e.g., to improve geographic distribution of the fleet, to anticipate demand at particular locations, to drive to a charging station for charging, etc.). The vehicle manager122also instructs AVs102to return to AV facilities for recharging, maintenance, or storage. The delivery manager124manages delivery services requested by users of the online system106(e.g., a user associated with the client device108). The delivery manager124processes a delivery request from a user and sends information in the delivery request to the vehicle manager122for the vehicle manager122to select an AV102meeting the needs of the user. The delivery manager124may also monitor the process of a delivery service (e.g., based on the state of the AV102and the state of the delivery assembly104in the AV102). In some embodiments, the delivery manager124sends information of the delivery process to the client device108so that the user can be informed of the status of the delivery service. The delivery manager124may also analyze errors detected during the performance of the delivery service. The delivery manager124may assist in resolving the error. For example, the delivery manager124may determine a solution to help fix the error. The solution may include an instruction to the onboard controller of the AV102or a person loading/unloading the item. As yet another example, the delivery manager124communicates the error to the client device108and requests the client device108to help fix the error. The database126stores data used, generated, received, or otherwise associated with the online system106. For example, the database126stores data associated with the AVs102, data received from the client device108, data associated with users of the online system106, and so on.
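A speculative sketch of the hand-off between the delivery manager124and the vehicle manager122described above might look like the following; the class and method names, and the first-available assignment policy, are invented for illustration and are not taken from the disclosure:

class VehicleManager:
    def __init__(self, fleet):
        self.fleet = fleet  # list of available AV identifiers

    def assign(self, request):
        # Pick any available AV; a real implementation would weigh
        # distance, battery charge, and cubby capacity.
        if not self.fleet:
            raise RuntimeError("no AV available for delivery")
        return self.fleet.pop(0)


class DeliveryManager:
    def __init__(self, vehicle_manager):
        self.vehicle_manager = vehicle_manager

    def handle(self, request):
        # Forward the delivery request so the vehicle manager can select an AV.
        av = self.vehicle_manager.assign(request)
        print(f"dispatching {av} for delivery to {request['dropoff']}")
        return av


manager = DeliveryManager(VehicleManager(["AV-1", "AV-2"]))
manager.handle({"dropoff": (37.77, -122.42), "items": ["groceries"]})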
Example Onboard Controller FIG.3is a block diagram illustrating an onboard controller130of the AV102according to some embodiments of the present disclosure. The onboard controller130includes an interface module132, a localization module134, a navigation module136, and an AV delivery module138. In alternative configurations, different or additional components may be included in the onboard controller130. Further, functionality attributed to one component of the onboard controller130may be accomplished by a different component included in the AV102or a different system (e.g., the online system106). The interface module132facilitates communications of the onboard controller130with other systems. For example, the interface module132supports communications of the onboard controller130with other systems (e.g., the online system106). The interface module132supports communications of the onboard controller130with other components of the AV102(e.g., the onboard sensor suite, delivery assembly104, and/or actuators in the AV102). For example, the interface module132may retrieve sensor data generated by the onboard sensor suite, communicate with a UI module of the delivery assembly104, and/or send commands to the actuators. The localization module134localizes the AV102. The localization module134may use sensor data generated by the onboard sensor suite to determine the current location of the AV102. The sensor data includes information describing an absolute or relative position of the AV102(e.g., data generated by GPS, global navigation satellite system (GNSS), IMU, etc.), information describing features surrounding the AV102(e.g., data generated by a camera, RADAR, SONAR, LIDAR, etc.), information describing motion of the AV102(e.g., data generated by the motion sensor), or some combination thereof. In some embodiments, the localization module134uses the sensor data to determine whether the AV102has entered a local area, such as a parking garage or parking lot where the AV102can be charged. In some other embodiments, the localization module134may send the sensor data to the online system106and receive from the online system106a determination whether the AV102has entered the local area. In some embodiments, the localization module134determines whether the AV102is at a predetermined location (e.g., a destination of a delivery service). For example, the localization module134uses sensor data generated by the onboard sensor suite (or a sensor in the onboard sensor suite) to determine the location of the AV102. The localization module134may further compare the location of the AV102with the predetermined location to determine whether the AV102has arrived. The localization module134may provide locations of the AV102to the AV delivery module138. The localization module134can further localize the AV102within the local area. For example, the localization module134determines a pose (position or orientation) of the AV102in the local area. In some embodiments, the localization module134localizes the AV102within the local area by using a model of the local area. The model may be a 2D or 3D representation of the surrounding area, such as a map or a 3D virtual scene simulating the surrounding area. In various embodiments, the localization module134receives the model of the local area from the online system106. The localization module134may send a request for the model to the online system106and in response, receive the model of the local area.
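The comparison of the AV's determined location against the predetermined location is not spelled out above; one common realization, offered here only as an assumption, is to test whether the great-circle distance between the two falls under a threshold:

import math


def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two latitude/longitude points.
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def has_arrived(current, destination, threshold_m=10.0):
    # True when the AV is within threshold_m meters of the destination.
    return haversine_m(*current, *destination) <= threshold_m


print(has_arrived((37.7749, -122.4194), (37.7750, -122.4194)))  # ~11 m apart -> False

The model-based localization described in this section would then refine such a coarse arrival test once the AV is within the local area.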
In some embodiments, the localization module134generates the request based on sensor data indicating a position or motion of the AV102. For example, the localization module134detects that the AV102is in the local area or is navigated to enter the local area based on the sensor data and sends out the request in response to such detection. This process can be dynamic. For example, the localization module134may send a new request to the online system106as the AV102changes its position. The localization module134may further localize the AV102with respect to an object in the local area. An example of the object is a building in the local area. The localization module134may determine a pose of the AV102relative to the building based on features in the local area. For example, the localization module134retrieves sensor data from one or more sensors (e.g., camera, LIDAR, etc.) in the onboard sensor suite that detect the features. The localization module134uses the sensor data to determine the pose of the AV102. The features may be lane markers, street curbs, driveways, and so on. A feature may be two-dimensional or three-dimensional. The navigation module136controls motion of the AV102. The navigation module136may control the motor of the AV102to start, pause, resume, or stop motion of the AV102. The navigation module136may further control the wheels of the AV102to control the direction the AV102will move. In various embodiments, the navigation module136generates a navigation route for the AV102based on a location of the AV102, a destination, and a map. The navigation module136may receive the location of the AV102from the localization module134. The navigation module136receives a request to go to a location and generates a route to navigate the AV102from its current location, which is determined by the localization module134, to the location. The navigation module136may receive the destination from the AV delivery module138or an external source, such as the online system106, through the interface module132. The AV delivery module138manages autonomous delivery by the AV102. Functionality attributed to the AV delivery module138may be accomplished by a different component of the autonomous delivery environment100, such as the delivery assembly104. In some embodiments, the AV delivery module138processes delivery requests received from the online system106. The AV delivery module138may communicate with the localization module134and the navigation module136to navigate the AV102based on the delivery requests (e.g., to navigate the AV102to locations specified in the delivery request). The AV delivery module138may monitor or control the delivery assembly104in the AV102. The AV delivery module138may determine a size limit of the delivery assembly104(e.g., based on the size of the container in the delivery assembly104). The AV delivery module138may further determine whether the item that the online system106requests the AV102to deliver (“requested item”) can fit in the delivery assembly104based on the size limit. In embodiments in which the AV delivery module138determines that the requested item has a size larger than the size limit of the delivery assembly104, the AV delivery module138may communicate with the online system106to cancel or change the delivery request. Example Delivery Assembly FIG.4illustrates a delivery assembly104according to some embodiments of the present disclosure. The delivery assembly104includes a plurality of cubbies140a-140eand a UI module142.
In some embodiments, the delivery assembly104may include different components. For example, the delivery assembly104may include a securing mechanism to secure the delivery assembly104to the AV102. The delivery assembly104can communicate with the network112(and the online system106, the third-party device110, the one or more network elements114, the one or more servers116, and/or cloud services118) on a separate network path from the network path used by AV102. The delivery assembly104can also communicate with a user's mobile device148to authenticate the user and to allow the user to interact with the UI module142. The user's mobile device148can be a smart phone, wearable, or some other portable communication device associated with the user. Each of the cubbies140a-140eprovides space and securement of items delivered by the AV102and each of the cubbies140a-140emay have various shapes or sizes. Each cubby is locked to protect user privacy in embodiments where the AV102is used to deliver items to multiple users. For example, the item for the first user can be placed in the cubby140a, and the item for the second user can be placed in cubby140b. When the first user unloads the first item from the cubby140a, the second item is invisible to the first user as the second item is in the cubby140b. After the first user finishes unloading the first item (e.g., after the AV102closes the door and leaves the location of the first user) or when the second item can be picked up by the second user (e.g., after the AV102arrives at the location of the second user), the cubby140bcan be unlocked and the second item can be collected by the second user. Each of the cubbies140a-140einFIG.4is shown for illustration purposes and in other embodiments, the cubbies140a-140emay have other configurations. For example, the cubby140amay be a smaller cubby or cubbies140aand140bmay be combined into one large cubby. Each of the cubbies140a-140emay also include a shelf, a drawer, a cabinet, or other types of storage components. The delivery assembly104may be made of a plastic material, metal, other types of materials, or some combination thereof. In some embodiments, the delivery assembly104and each of the cubbies140a-140ehas a size limit and the size of items delivered using the delivery assembly104does not exceed the size limit. The delivery assembly104may have a frame that can be secured to the AV102. The UI module142can include a display144and a UI input146. In an example, the display144is a touchscreen display. In some examples, the UI input146may be a keypad (e.g., a physical keypad or a digital keypad). The UI module142provides a user with information associated with loading or unloading items. For example, the display144can provide graphical information to the user related to loading or unloading items and the UI input146can allow the user to input information related to loading or unloading items. The UI module142has a generally rectangular shape and can be located in a middle right-side portion of the delivery assembly104. In other embodiments, the UI module142may have a different shape and/or location.
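The per-user locking behavior described above, in which items for one user stay behind locked doors while another user unloads, can be pictured with a small state sketch; the user-to-cubby mapping and the unlock rule below are assumptions for illustration only:

class DeliveryAssembly:
    def __init__(self, cubby_ids):
        self.assignments = {}                      # cubby id -> user id
        self.locked = {c: True for c in cubby_ids}

    def load(self, cubby_id, user_id):
        # Place a user's item in a cubby and keep the door locked.
        self.assignments[cubby_id] = user_id
        self.locked[cubby_id] = True

    def unlock_for(self, user_id):
        # Unlock only the cubbies assigned to the authenticated user, so
        # items for other users stay out of sight behind locked doors.
        opened = [c for c, u in self.assignments.items() if u == user_id]
        for c in opened:
            self.locked[c] = False
        return opened


assembly = DeliveryAssembly(["140a", "140b"])
assembly.load("140a", "user-1")
assembly.load("140b", "user-2")
print(assembly.unlock_for("user-1"))  # ['140a']; 140b stays locked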
The UI module142informs the user of the state of the item in the delivery assembly104or, more specifically, in a specific cubby (e.g., the item is ready for being picked up, the item has been picked up, etc.), the state of the AV102(e.g., a door is open, a door is to be closed, etc.), actions to be taken by the user (e.g., moving the sliding bin420, unloading an item, loading an item, closing a door of the AV102, etc.), and so on. The UI module142can also be used to authenticate a user (e.g., the user enters a code using the UI input146, the user scans a code on their phone into the UI input146, etc.). For example, the UI module142may include a camera or scanner to capture identification information from the user. The UI module142may provide information to the user through one or more indicators generated by the UI module142. An indicator may be light, text, sound, or some combination thereof. Example UI Module FIG.5illustrates the UI module142according to some embodiments of the present disclosure. The UI module142can include a communication module150, a biometric module152, an authentication module154, a scanner156, an infrared (IR) sensor158, a microphone160, a speaker162, a display engine164, memory168, and a delivery assembly delivery module170. The communication module150can help facilitate communications between the delivery assembly104and the network112(and the online system106, the third-party device110, the one or more network elements114, the one or more servers116, and/or cloud services118). The communication module150can also help facilitate communications between the delivery assembly104and the AV102and between the delivery assembly104and a user's mobile device (e.g., the user's mobile device148, illustrated inFIG.4). The biometric module152can be a biometric sensor or some other device that can collect biometric data of the user. The authentication module154can be configured to authenticate a user. For example, the authentication module154can receive biometric data from the biometric module and use the received biometric data to authenticate a user. The scanner156may be a bar code scanner, QR code scanner, or some other type of scanner that can be used to help input data into the UI module142. For example, the scanner156may be a QR code scanner that can be used to help authenticate a user. Also, the scanner156can be a bar code scanner where items are scanned into the UI module142as they are placed in a cubby. The IR sensor158can be an active IR sensor or a passive IR sensor. The IR sensor158can be used to sense characteristics in the environment around the UI module142by emitting and/or detecting infrared radiation. More specifically, the IR sensor158can detect the heat being emitted by an object and detect motion of a user (e.g., when a user approaches the delivery assembly104). The microphone160can be used to detect sound, especially voice commands from the user. The speaker162can be used to provide audio for the user, especially audio prompts about the location of an item in a specific cubby. The display engine164can help provide the visual data that is displayed on the display of the UI module142. Memory168can include data related to the operation of the delivery assembly104such as the specific cubby that includes one or more items for a specific user, user authentication data, etc. The delivery assembly delivery module170can use sensor data generated by sensors in the delivery assembly to determine the state of an item in the delivery assembly.
For example, the delivery assembly delivery module170detects whether the item has been removed from a cubby or placed into the cubby by using sensor data generated by a sensor associated with the cubby. In some embodiments, the delivery assembly delivery module170uses the sensor data to determine whether the item matches a description in the delivery request to ensure that the item being removed or placed is the right item. The delivery assembly delivery module170may also determine a physical condition of the item. The delivery assembly delivery module170may also manage the UI module142. For example, the delivery assembly delivery module170generates indicators based on the state of the item or the delivery process and instructs the UI module142to provide the indicators to the user. An indicator may be light, text, sound, or some combination thereof. An indicator may inform the user of the state of the item or the delivery process or provide an instruction to the user. In an embodiment, the delivery assembly delivery module170generates textual or audio messages and instructs the UI module142to display the textual or audio messages. In another embodiment, the delivery assembly delivery module170toggles (e.g., turns on/activates or turns off/deactivates) a light on the UI module142. The delivery assembly delivery module170may also control the delivery assembly based on user input received through the UI module142. For example, the delivery assembly delivery module170can cause cubby doors in the delivery assembly to lock and unlock based on the user's interaction with the UI module142. In some embodiments, the delivery assembly delivery module170detects and processes errors that occur during the delivery. For example, the delivery assembly delivery module170may detect that the item removed or placed by the user does not match the description of the requested item in the delivery request. After such a detection, the delivery assembly delivery module170may send an error message to the UI module142to inform the user of the error. The delivery assembly delivery module170may also analyze an error, determine a solution to the error, and provide the user an instruction to help fix the error through the UI module142. Additionally or alternatively, the delivery assembly delivery module170may report the error to the online system106and request the online system106to help provide a solution to the error. FIG.6illustrates the UI module142according to some embodiments of the present disclosure. The UI module142can include the display144, the UI input146, the scanner156, the microphone160, and the speaker162. In some examples, the UI input146is a physical keypad. In other examples, the UI input146is a virtual keypad. The UI input146can include a keypad display172to allow the user to see input from the UI input146. In some examples, the display144can be a touchscreen display. Depending on the use case of the delivery assembly104and the UI module142, the display144can present different visual information to the user. More specifically, if a retail user (e.g., a retailer or supplier of goods to a customer user) is using the delivery assembly104, the UI module142can be in a point of origination mode to allow the retail user to load one or more items into the delivery assembly104for delivery to the customer user by the AV102. For example, as illustrated inFIG.6, the display144can display specific loading information related to a customer user.
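As an assumed illustration of how the delivery assembly delivery module170might translate a cubby sensor reading into a user-facing indicator (the occupancy-sensor model and the Indicator record are invented for the example and are not the disclosed implementation):

from dataclasses import dataclass
from typing import Optional


@dataclass
class Indicator:
    light_on: bool
    text: str
    sound: Optional[str] = None


def indicator_for(cubby_id: str, occupied_before: bool, occupied_now: bool) -> Indicator:
    # Translate a change in a cubby's occupancy sensor into a user-facing
    # indicator (light, text, and an optional audio prompt).
    if occupied_before and not occupied_now:
        return Indicator(False, f"Item removed from cubby {cubby_id}.")
    if not occupied_before and occupied_now:
        return Indicator(True, f"Item placed in cubby {cubby_id}.", "chime")
    return Indicator(occupied_now, f"No change in cubby {cubby_id}.")


print(indicator_for("140a", occupied_before=True, occupied_now=False).text)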
FIG.7illustrates the UI module142according to some embodiments of the present disclosure. The UI module142can include the display144, the UI input146, the scanner156, the microphone160, and the speaker162. In some examples, the UI input146is a physical keypad. In other examples, the UI input146is a virtual keypad. The UI input146can include a keypad display172to allow the user to see input from the UI input146. In some examples, the display144can be a touchscreen display. Depending on the use case of the delivery assembly104and the UI module142, the display144can present different visual information to the user. More specifically, if a customer user is using the delivery assembly104, the UI module142can be in a customer user mode to allow the customer user to access one or more items in the delivery assembly104. For example, as illustrated inFIG.7, the display144can display specific information related to a customer user. In some examples, an indicator174can be used to help the user identify a specific cubby that has been unlocked and can be accessed. More specifically, as illustrated inFIG.7, an arrow on the display144can point to a specific cubby to help the user identify that the specific cubby has been unlocked and can be accessed. Example Process FIG.8is an example flowchart illustrating possible operations of a flow800that may be associated with enabling a delivery by the AV, in accordance with an embodiment. In an embodiment, one or more operations of flow800may be performed by the AV102, the delivery assembly104, the UI module142, the communication module150, the biometric module152, the authentication module154, the scanner156, the IR sensor158, the microphone160, the speaker162, the display engine164, and/or the memory168. At802, a use case for a UI module in a delivery assembly is determined. For example, the use case for the UI module142may be determined by a location of the AV102and/or a location of the delivery assembly104, by a user interacting with the UI module142(e.g., if a customer user entered authentication data into the UI module), or by some other means. At804, the system determines if the use case is for a point of origination. If the use case is for a point of origination, then a point of origination configuration for the UI module is used, as in806. For example, the point of origination configuration for the UI module may include a configuration similar to the one illustrated inFIG.6where a retail user adds one or more items for a customer user to the delivery assembly104. If the use case is not for a point of origination, then the system determines if the use case is for a mid-point destination, as in808. For example, a mid-point destination may be a retailer, other than a retailer that was the point of origination, that adds additional items to the delivery assembly104. If the use case is for a mid-point destination, then a mid-point configuration for the UI module is used, as in810. For example, the mid-point configuration for the UI module may include a configuration similar to the one illustrated inFIG.6where a retailer adds one or more items for a customer user to the delivery assembly104. If the use case is not for a mid-point destination, then the system determines if the use case is for a customer user. If the use case is for a customer user, then a customer user configuration is used, as in814. 
For example, the customer user configuration for the UI module may include a configuration similar to the one illustrated inFIG.7where a customer user collects one or more items from the delivery assembly104. If the use case is not for a customer user, then a diagnostic configuration is used, as in816. For example, if the system cannot determine the use case of the UI module142or if the system is being repaired, then a diagnostic configuration can be used. In some examples, the diagnostic configuration may be a blank display with the words “SERVICE NEEDED” or some other similar words or phrase and the UI module142may be locked or otherwise secured to help prevent tampering with the UI module142and the delivery assembly104. In some examples, if the delivery assembly104is in a diagnostic configuration, only an authorized service user can unlock the delivery assembly104. FIG.9is an example flowchart illustrating possible operations of a flow900that may be associated with enabling a delivery by the AV, in accordance with an embodiment. In an embodiment, one or more operations of flow900may be performed by AV102, the delivery assembly104, the UI module142, the communication module150, the biometric module152, the authentication module154, the scanner156, the IR sensor158, the microphone160, the speaker162, the display engine164, and/or the memory168. At902, a vehicle with a delivery assembly that includes one or more items for a user arrives at a destination. For example, the AV102with the delivery assembly104that includes one or more items for a user can arrive at a designated destination where the user can collect the one or more items. At904, a user approaches the delivery assembly. At906, user authentication is requested. For example, the user may be requested to enter a code on the UI input146of the UI module142, to scan a QR code into the UI module142, or some other means for user authentication may be requested. At908, the system determines if the user is an authorized user. If the user is not an authorized user, then the system returns to906and again, user authentication is requested. If the user is an authorized user, then a user profile is acquired, as in910. For example, the user profile can indicate where the one or more items for the user are located in the delivery assembly. At912, based on the user profile, one or more doors of the delivery assembly are unlocked to allow the user to collect their items from the delivery assembly. FIG.10is an example flowchart illustrating possible operations of a flow1000that may be associated with enabling a delivery by the AV, in accordance with an embodiment. In an embodiment, one or more operations of flow1000may be performed by AV102, the delivery assembly104, the UI module142, the communication module150, the biometric module152, the authentication module154, the scanner156, the IR sensor158, the microphone160, the speaker162, the display engine164, and/or the memory168. At1002, a user approaches a delivery assembly that includes a plurality of cubbies. At1004, the user is identified as an authorized user. At1006, the system determines if the user needs to gain access to more than one cubby. If the user does not need to gain access to more than one cubby, then the cubby that the user needs to access is unlocked, as in1008. If the user needs to gain access to more than one cubby, then the system determines if one cubby should be accessed before the other cubby, as in1010.
For example, one cubby may contain a perishable item such as ice cream or some other frozen item that needs to be collected before other, non-perishable items are collected. If one cubby does not need to be accessed before the other cubby, then a first cubby is unlocked, as in1012and, at1014, a second cubby is unlocked. In an example, the first cubby and second cubby are unlocked at about the same time and the first cubby and the second cubby can be opened at the same time. In another example, the second cubby is not unlocked until the items from the first cubby are removed. At1016, the system determines if one or more cubbies still need to be accessed by the user. Going back to1010, if one cubby should be accessed before the other cubby, then the cubby that should be accessed before the other cubby is unlocked first, as in1018. At1020, after the user has finished with the cubby that was unlocked first, the other cubby is unlocked. At1016, the system determines if one or more cubbies still need to be accessed. If the system determines that one or more cubbies still need to be accessed by the user, the system returns to1006and again determines if the user needs to gain access to more than one cubby. If the system determines that one or more cubbies do not need to be accessed by the user, then the process ends. In an example, the delivery assembly is located in an AV and after the user does not need access to any further cubbies, the AV and delivery assembly proceed to a next destination. It should be noted that in some examples, the above flow allows two or more cubbies to be unlocked and open at the same time. In other examples, only one cubby is unlocked and opened at a time. FIG.11is an example flowchart illustrating possible operations of a flow1100that may be associated with enabling a delivery by the AV, in accordance with an embodiment. In an embodiment, one or more operations of flow1100may be performed by AV102, the delivery assembly104, the UI module142, the communication module150, the biometric module152, the authentication module154, the scanner156, the IR sensor158, the microphone160, the speaker162, the display engine164, and/or the memory168. At1102, a user approaches a delivery assembly. At1104, user authentication is requested. At1106, the system determines if the user is authorized. If the user is not authorized, the system returns to1104and again, user authentication is requested. If the user is authorized, then a specific cubby to be opened is identified, as in1108. For example, an indicator174can be used to help the user identify a specific cubby that has been unlocked and can be accessed. The indicator may be a symbol or icon on the display144of the UI module142, light, text, sound, or some combination thereof. More specifically, as illustrated inFIG.7, an arrow on the display144can point to a specific cubby to help the user identify that the specific cubby has been unlocked and can be accessed. At1110, the system determines if another cubby needs to be accessed. If another cubby needs to be accessed, then the system returns to1106and again determines if the user is authorized. The system returns to1106and again determines if the user is authorized to help prevent theft or tampering. If another cubby does not need to be accessed, then the process ends. FIG.12is an example flowchart illustrating possible operations of a flow1200that may be associated with enabling a delivery by the AV, in accordance with an embodiment.
In an embodiment, one or more operations of flow1200may be performed by AV102, the delivery assembly104, the UI module142, the communication module150, the biometric module152, the authentication module154, the scanner156, the IR sensor158, the microphone160, the speaker162, the display engine164, and/or the memory168. At1202, a vehicle with a delivery assembly that includes one or more items for a user arrives at a destination. At1204, user authentication is requested. At1206, the system determines if the user is authorized. If the user is not an authorized user, then the system returns to1204and again, user authentication is requested. If the user is an authorized user, then a door to a cubby that includes the one or more items for the user in the delivery assembly is unlocked, as in1208. At1210, the system determines if the user has removed all of the contents of the cubby. If the user has removed all of the contents of the cubby, then the system determines if another cubby needs to be accessed, as in1212. If another cubby does not need to be accessed, then the process ends. If another cubby does need to be accessed, then the system returns to1208and a door to a cubby that includes the one or more items for the user in the delivery assembly is unlocked. Going back to1210, if the system determines that the user has not removed all of the items of the cubby, then the system determines if the user needs more time to remove the items from the cubby, as in1214. For example, the UI module142can prompt the user to press a button or icon on the UI module142to indicate the user needs more time to remove all of the items from the cubby. If the user does need more time to remove all of the items from the cubby, then the system returns to1210and determines if the user has removed all of the contents of the cubby. If the user does not need more time to remove the items from the cubby, then the system determines if the user needs to return one or more items, as in1216. If the user does not need to return one or more items, then the user is reminded to remove all of the items from the cubby, as in1218and the system returns to1210and determines if the user has removed all of the contents of the cubby. If the user does need to return one or more items, then a return process is started for the one or more items, as in1220. For example, the UI module142can be used to prompt the user to initiate a return process for one or more items. In another example, the user's mobile device148can be used to prompt the user to initiate a return process for one or more items. At1212, the system determines if another cubby needs to be accessed. Other Implementation Notes, Variations, and Applications It is to be understood that not necessarily all objects or advantages may be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that certain embodiments may be configured to operate in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein. In one example embodiment, any number of electrical circuits of the figures may be implemented on a board of an associated electronic device. The board can be a general circuit board that can hold various components of the internal electronic system of the electronic device and, further, provide connectors for other peripherals.
Other Implementation Notes, Variations, and Applications It is to be understood that not necessarily all objects or advantages may be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that certain embodiments may be configured to operate in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein. In one example embodiment, any number of electrical circuits of the figures may be implemented on a board of an associated electronic device. The board can be a general circuit board that can hold various components of the internal electronic system of the electronic device and, further, provide connectors for other peripherals. More specifically, the board can provide the electrical connections by which the other components of the system can communicate electrically. Any suitable processors (inclusive of digital signal processors, microprocessors, supporting chipsets, etc.), computer-readable non-transitory memory elements, etc. can be suitably coupled to the board based on particular configuration needs, processing demands, computer designs, etc. Other components such as external storage, additional sensors, controllers for audio/video display, and peripheral devices may be attached to the board as plug-in cards, via cables, or integrated into the board itself. In various embodiments, the functionalities described herein may be implemented in emulation form as software or firmware running within one or more configurable (e.g., programmable) elements arranged in a structure that supports these functions. The software or firmware providing the emulation may be provided on a non-transitory computer-readable storage medium comprising instructions to allow a processor to carry out those functionalities. Additionally, one or more of the AV102, the delivery assembly104, and the UI module142may include one or more processors that can execute software, logic, or an algorithm to perform activities as discussed herein. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein. In one example, the processors could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM)) or an application specific integrated circuit (ASIC) that includes digital logic, software, code, electronic instructions, or any suitable combination thereof. Any of the potential processing elements, modules, and machines described herein should be construed as being encompassed within the broad term ‘processor.’ Implementations of the embodiments disclosed herein may be formed or carried out on a substrate, such as a non-semiconductor substrate or a semiconductor substrate. In one implementation, the non-semiconductor substrate may be silicon dioxide, an inter-layer dielectric composed of silicon dioxide, silicon nitride, titanium oxide and other transition metal oxides. Although a few examples of materials from which the non-semiconducting substrate may be formed are described here, any material that may serve as a foundation upon which a non-semiconductor device may be built falls within the spirit and scope of the embodiments disclosed herein. In another implementation, the semiconductor substrate may be a crystalline substrate formed using a bulk silicon or a silicon-on-insulator substructure. In other implementations, the semiconductor substrate may be formed using alternate materials, which may or may not be combined with silicon, that include but are not limited to germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, indium gallium arsenide, gallium antimonide, or other combinations of group III-V or group IV materials.
In other examples, the substrate may be a flexible substrate including 2D materials such as graphene and molybdenum disulphide, organic materials such as pentacene, transparent oxides such as indium gallium zinc oxide, poly/amorphous (low-temperature deposition) III-V semiconductors and germanium/silicon, and other non-silicon flexible substrates. Although a few examples of materials from which the substrate may be formed are described here, any material that may serve as a foundation upon which a semiconductor device may be built falls within the spirit and scope of the embodiments disclosed herein. Each of the AV102, the delivery assembly104, and the UI module142may include any suitable hardware, software, components, modules, or objects that facilitate the operations thereof, as well as suitable interfaces for receiving, transmitting, and/or otherwise communicating data or information in a network environment. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange of data or information. Each of the AV102, the delivery assembly104, and the UI module142can include memory elements for storing information to be used in the operations outlined herein. The AV102, the delivery assembly104, and the UI module142may keep information in any suitable memory element (e.g., random access memory (RAM), read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), ASIC, etc.), software, hardware, firmware, or in any other suitable component, device, element, or object where appropriate and based on particular needs. Any of the memory items discussed herein should be construed as being encompassed within the broad term ‘memory element.’ Moreover, the information being used, tracked, sent, or received in the AV102, the delivery assembly104, and the UI module142could be provided in any database, register, queue, table, cache, control list, or other storage structure, all of which can be referenced at any suitable timeframe. Any such storage options may also be included within the broad term ‘memory element’ as used herein. In certain example implementations, the functions outlined herein may be implemented by logic encoded in one or more tangible media (e.g., embedded logic provided in an ASIC, digital signal processor (DSP) instructions, software (potentially inclusive of object code and source code) to be executed by a processor, or other similar machine, etc.), which may be inclusive of non-transitory computer-readable media. In some of these examples, memory elements can store data used for the operations described herein. This includes the memory elements being able to store software, logic, code, or processor instructions that are executed to carry out the activities described herein. It is also imperative to note that all of the specifications, dimensions, and relationships outlined herein (e.g., the number of processors, logic operations, etc.) have been offered for purposes of example and teaching only. Such information may be varied considerably without departing from the spirit of the present disclosure or the scope of the appended claims. The specifications apply only to one non-limiting example and, accordingly, they should be construed as such. In the foregoing description, example embodiments have been described with reference to particular arrangements of components.
Various modifications and changes may be made to such embodiments without departing from the scope of the appended claims. The description and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense. Note that with the numerous examples provided herein, interaction may be described in terms of two, three, four, or more components. However, this has been done for purposes of clarity and example only. It should be appreciated that the system can be consolidated in any suitable manner. Along similar design alternatives, any of the illustrated components, modules, and elements of the figures may be combined in various possible configurations, all of which are clearly within the broad scope of this Specification. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. Note that all optional features of the systems and methods described above may also be implemented with respect to the methods or systems described herein and specifics in the examples may be used anywhere in one or more embodiments. OTHER NOTES AND EXAMPLES Example M1 is a method including determining a location of the delivery assembly and, in response to determining the location of the delivery assembly, configuring a user interface to facilitate the autonomous delivery, where the user interface has at least a point of origination configuration and a user configuration. In Example M2, the subject matter of Example M1 can optionally include authenticating identification information of a user through the user interface. In Example M3, the subject matter of Example M2 can optionally include, based on the identification information of the user, allowing the user to access one or more cubbies of the delivery assembly. In Example M4, the subject matter of Example M3 can optionally include informing the user through one or more indicators on the user interface that the user can access a specific cubby. In Example M5, the subject matter of Example M1 can optionally include determining that the user should access at least two cubbies of the delivery assembly, wherein a first cubby should be accessed before a second cubby, and informing the user through the user interface that the first cubby is available to be accessed before the user has access to the second cubby. In Example M6, the subject matter of Example M1 can optionally include where the user interface is configured in the point of origination configuration to allow a retail user to load items into one or more cubbies of the delivery assembly.
In Example M7, the subject matter of Example M1 can optionally include where the user interface is configured in the user configuration to allow a user to unload items from one or more cubbies of the delivery assembly. In Example M8, the subject matter of Example M1 can optionally include determining that a user has approached the delivery assembly, requesting user authentication from the user, determining if the user is an authorized user, and unlocking at least one door of a cubby of the delivery assembly if the user is determined to be an authorized user. In Example M9, the subject matter of any of the Examples M1-M2 can optionally include, based on the identification information of the user, allowing the user to access one or more cubbies of the delivery assembly. In Example M10, the subject matter of any of the Examples M1-M3 can optionally include informing the user through one or more indicators on the user interface that the user can access a specific cubby. In Example M11, the subject matter of any of the Examples M1-M4 can optionally include determining that the user should access at least two cubbies of the delivery assembly, wherein a first cubby should be accessed before a second cubby, and informing the user through the user interface that the first cubby is available to be accessed before the user has access to the second cubby. In Example M12, the subject matter of any of the Examples M1-M5 can optionally include where the user interface is configured in the point of origination configuration to allow a retail user to load items into one or more cubbies of the delivery assembly. In Example M13, the subject matter of any of the Examples M1-M6 can optionally include where the user interface is configured in the user configuration to allow a user to unload items from one or more cubbies of the delivery assembly. In Example M14, the subject matter of any of the Examples M1-M7 can optionally include determining that a user has approached the delivery assembly, requesting user authentication from the user, determining if the user is an authorized user, and unlocking at least one door of a cubby of the delivery assembly if the user is determined to be an authorized user. In Example M15, the subject matter of any of the Examples M1-M8 can optionally include where the user interface includes a keypad and the user authentication is a keycode entered into the user interface using the keypad. In Example M16, the subject matter of any of the Examples M1-M5 can optionally include where the one or more indicators include light, text, sound, or some combination thereof. In Example M17, the subject matter of any of the Examples M1-M9 can optionally include providing an indicator through the user interface that informs the user to place one or more items into a specific cubby of the delivery assembly. In Example M18, the subject matter of any of the Examples M1-M9 can optionally include where the user interface includes a keypad and the user authentication is a keycode entered into the user interface using the keypad. Example MM1 is a method including identifying a delivery assembly located in an autonomous vehicle, wherein the delivery assembly includes a plurality of cubbies for storing items and a user interface, determining a location of the autonomous vehicle, and in response to determining the location of the autonomous vehicle, configuring a user interface, wherein the user interface has at least a point of origination configuration and a user configuration.
In Example MM2, the subject matter of Example MM1 can optionally include determining that a user has approached the delivery assembly, requesting user authentication from the user, determining if the user is an authorized user, and unlocking at least one door of a cubby from the plurality of cubbies if the user is determined to be an authorized user. In Example MM3, the subject matter of Example MM2 can optionally include where the user interface includes a keypad and the user authentication is a keycode entered into the user interface using the keypad. In Example MM4, the subject matter of Example MM1 can optionally include where the user interface is configured in the point of origination configuration to allow a retailer to load items into one or more cubbies of the delivery assembly. In Example MM5, the subject matter of Example MM4 can optionally include providing an indicator to the user through the user interface that informs the user to place one or more items into a specific cubby of the delivery assembly. In Example MM6, the subject matter of Example MM1 can optionally include where the user interface is configured in the user configuration to allow a user to retrieve one or more items from one or more cubbies of the delivery assembly. In Example MM7, the subject matter of Example MM6 can optionally include providing an indicator to the user through the user interface that informs the user to retrieve one or more items from a specific cubby of the delivery assembly. In Example MM8, the subject matter of Example MM1 can optionally include authenticating identification information of the user through the user interface located on the delivery assembly. In Example MM9, the subject matter of Example MM8 can optionally include, based on the identification information of the user, allowing the user to access one or more cubbies of the delivery assembly. In Example MM10, the subject matter of Example MM9 can optionally include informing the user through one or more indicators on the user interface that the user can access a specific cubby. In Example MM11, the subject matter of Example MM10 can optionally include where the one or more indicators include light, text, sound, or some combination thereof. In Example MM12, the subject matter of Example MM1 can optionally include determining that the user should access at least two cubbies, wherein a first cubby should be accessed before a second cubby, and informing the user through the user interface that the first cubby is available to be accessed before the user has access to the second cubby. In Example MM13, the subject matter of Example MM1 can optionally include where the user interface is configured in the point of origination configuration and all the cubbies are unlocked to allow a retailer to load items into the cubbies of the delivery assembly. In Example MM14, the subject matter of Example MM1 can optionally include where the user interface is configured in the user configuration to allow a user to unload items from one or more cubbies of the delivery assembly. In Example MM15, the subject matter of any of the Examples MM1-MM2 can optionally include where the user interface includes a keypad and the user authentication is a keycode entered into the user interface using the keypad.
In Example MM16, the subject matter of any of the Examples MM1-MM3 can optionally include where the user interface is configured in the point of origination configuration to allow a retailer to load items into one or more cubbies of the delivery assembly. In Example MM17, the subject matter of any of the Examples MM1-MM4 can optionally include providing an indicator to the user through the user interface that informs the user to place one or more items into a specific cubby of the delivery assembly. In Example MM18, the subject matter of any of the Examples MM1-MM5 can optionally include where the user interface is configured in the user configuration to allow a user to retrieve one or more items from one or more cubbies of the delivery assembly. In Example MM19, the subject matter of any of the Examples MM1-MM6 can optionally include providing an indicator to the user through the user interface that informs the user to retrieve one or more items from a specific cubby of the delivery assembly. In Example MM20, the subject matter of any of the Examples MM1-MM7 can optionally include authenticating identification information of the user through the user interface located on the delivery assembly. In Example MM21, the subject matter of any of the Examples MM1-MM8 can optionally include, based on the identification information of the user, allowing the user to access one or more cubbies of the delivery assembly. In Example MM22, the subject matter of any of the Examples MM1-MM9 can optionally include informing the user through one or more indicators on the user interface that the user can access a specific cubby. In Example MM23, the subject matter of any of the Examples MM1-MM10 can optionally include where the one or more indicators include light, text, sound, or some combination thereof. In Example MM24, the subject matter of any of the Examples MM1-MM11 can optionally include determining that the user should access at least two cubbies, wherein a first cubby should be accessed before a second cubby, and informing the user through the user interface that the first cubby is available to be accessed before the user has access to the second cubby. In Example MM25, the subject matter of any of the Examples MM1-MM12 can optionally include where the user interface is configured in the point of origination configuration and all the cubbies are unlocked to allow a retailer to load items into the cubbies of the delivery assembly. In Example MM26, the subject matter of any of the Examples MM1-MM13 can optionally include where the user interface is configured in the user configuration to allow a user to unload items from one or more cubbies of the delivery assembly. Example A1 is an autonomous delivery system to deliver items to a user using an autonomous vehicle, the autonomous delivery system comprising a delivery assembly, wherein the delivery assembly can be removably secured in the autonomous vehicle, a plurality of cubbies located in the delivery assembly, wherein each of the plurality of cubbies can store one or more items to be delivered to the user, and a user interface, wherein each of the plurality of cubbies is accessed through the user interface. In Example A2, the subject matter of Example A1 can optionally include where, in response to a determined location of the delivery assembly, the user interface is configured in a point of origination configuration or a user configuration.
In Example A3, the subject matter of Example A1 can optionally include where the user interface includes an authentication module to authenticate a user and allow the user to access the delivery assembly. In Example A4, the subject matter of Example A1 can optionally include where the delivery assembly is in communication with a network on a network path that is separate from a network path that the autonomous vehicle uses to connect to the network. In Example A5, the subject matter of Example A1 can optionally include where the user interface informs the user through one or more indicators that the user can access a specific cubby. In Example A6, the subject matter of any of Examples A1-A2 can optionally include where the user interface includes an authentication module to authenticate a user and allow the user to access the delivery assembly. In Example A7, the subject matter of any of Examples A1-A3 can optionally include where the delivery assembly is in communication with a network on a network path that is separate from a network path that the autonomous vehicle uses to connect to the network. In Example A8, the subject matter of any of Examples A1-A4 can optionally include where the user interface informs the user through one or more indicators that the user can access a specific cubby. In Example A9, the subject matter of any of Examples A1-A5 can optionally include where the user interface includes a keypad and the user is authenticated using a keycode entered into the user interface using the keypad. In Example A10, the subject matter of any of Examples A1-A5 can optionally include where the user interface includes a keypad and a display. In Example A11, the subject matter of any of Examples A1-A5 can optionally include where, in response to a determined location of the autonomous vehicle, the user interface is configured in a point of origination configuration or a user configuration. In Example A12, the subject matter of any of Examples A1-A5 can optionally include where the one or more indicators include light, text, sound, or some combination thereof. In Example A13, the subject matter of any of Examples A1-A5 can optionally include where a first cubby should be accessed before a second cubby and the user interface informs the user through one or more indicators that the first cubby is available to be accessed before the user has access to the second cubby. Example AA1 is a device including at least one machine-readable medium comprising one or more instructions that, when executed by at least one processor, cause the at least one processor to determine a location of a delivery assembly transported by an autonomous vehicle, and in response to determining the location of the delivery assembly, configure a user interface to help facilitate autonomous delivery of at least one item, wherein the user interface has at least a point of origination configuration and a user configuration. In Example AA2, the subject matter of Example AA1 can optionally include one or more instructions that, when executed by at least one processor, cause the at least one processor to authenticate identification information of a user through the user interface. In Example AA3, the subject matter of Example AA2 can optionally include one or more instructions that, when executed by at least one processor, cause the at least one processor to, based on the identification information of the user, allow the user to access one or more cubbies of the delivery assembly.
In Example AA4, the subject matter of Example AA1 can optionally include where the user interface is configured in the point of origination configuration to allow a retail user to load items into one or more cubbies of the delivery assembly. In Example AA5, the subject matter of Example AA1 can optionally include where the user interface is configured in the user configuration to allow a user to unload items from one or more cubbies of the delivery assembly. In Example AA6, the subject matter of Example AA1 can optionally include one or more instructions that, when executed by at least one processor, cause the at least one processor to determine that a user has approached the delivery assembly, request user authentication from the user, determine if the user is an authorized user, and unlock at least one door of a cubby of the delivery assembly if the user is determined to be an authorized user. In Example AA7, the subject matter of any of Examples AA1-AA2 can optionally include one or more instructions that, when executed by at least one processor, cause the at least one processor to, based on the identification information of the user, allow the user to access one or more cubbies of the delivery assembly. In Example AA8, the subject matter of any of Examples AA1-AA3 can optionally include where the user interface is configured in the point of origination configuration to allow a retail user to load items into one or more cubbies of the delivery assembly. In Example AA9, the subject matter of any of Examples AA1-AA4 can optionally include where the user interface is configured in the user configuration to allow a user to unload items from one or more cubbies of the delivery assembly. In Example AA10, the subject matter of any of Examples AA1-AA5 can optionally include one or more instructions that, when executed by at least one processor, cause the at least one processor to determine that a user has approached the delivery assembly, request user authentication from the user, determine if the user is an authorized user, and unlock at least one door of a cubby of the delivery assembly if the user is determined to be an authorized user. In Example AA11, the subject matter of any of Examples AA1-AA6 can optionally include one or more instructions that, when executed by at least one processor, cause the at least one processor to inform the user through one or more indicators on the user interface that the user can access a specific cubby. In Example AA12, the subject matter of Example AA7 can optionally include where the one or more indicators include light, text, sound, or some combination thereof. In Example AA13, the subject matter of any of Examples AA1-AA6 can optionally include one or more instructions that, when executed by at least one processor, cause the at least one processor to determine that the user should access at least two cubbies of the delivery assembly, wherein a first cubby should be accessed before a second cubby, and inform the user through the user interface that the first cubby is available to be accessed before the user has access to the second cubby. In Example AA14, the subject matter of any of Examples AA1-AA6 can optionally include one or more instructions that, when executed by at least one processor, cause the at least one processor to provide an indicator to the user through the user interface that informs the user to place one or more items into a specific cubby of the delivery assembly.
In Example AA15, the subject matter of any of Examples AA1-AA6 can optionally include one or more instructions that, when executed by at least one processor, cause the at least one processor to provide an indicator to the user through the user interface that informs the user to retrieve one or more items from a specific cubby of the delivery assembly. In Example AA16, the subject matter of any of Examples AA1-AA6 can optionally include where the user interface includes a keypad and the user authentication is a keycode entered into the user interface using the keypad.
DETAILED DESCRIPTION The following description explains, by way of illustration only and not of limitation, various embodiments for receiving inputs indicative of a destination and a potential intermediate destination and determining a time potentially available at the intermediate destination. It will be noted that the first digit of three-digit reference numbers and the first two digits of four-digit reference numbers correspond to the figure number in which the element first appears. By way of a non-limiting introduction and overview, in various embodiments, inputs are received identifying a destination, a desired arrival time at the destination, and an intermediate destination to be visited prior to traveling to the destination. The user may also specify a type of intermediate destination, such as “restaurant,” and the system may provide a list of restaurants from which the user may choose. The system may determine routes and travel times to the intermediate destination and the destination, then determine a time that is available to spend at the intermediate destination. Given the desired arrival time at the destination and the travel time to reach the destination, when providing a list of possible intermediate destinations of a specified type, the system may eliminate intermediate destinations the user cannot visit while still reaching the destination by the desired arrival time. In this way, the user need not try to mentally calculate or track the travel times to determine whether the user has time to visit the intermediate destination or how much time the user might have to spend there. In various embodiments, the system also may determine whether a vehicle in which the user is travelling has sufficient fuel or energy (i.e., gasoline or electrical charge) to reach the destination, and the system may consider the time to reach and use a station to replenish the vehicle's energy in determining what time may be available at the intermediate destination. Now that a general overview has been given, details of various embodiments will be explained by way of non-limiting examples given by way of illustration only and not of limitation. Referring toFIG.1, in various embodiments a navigation system with time constraint management100(the “system”) includes various subsystems for identifying a potential intermediate destination and determining a time potentially available at the intermediate destination. The system100, as further described below, may be implemented on a computing device having computer-readable media storing instructions configured to cause the computing system to perform the functions herein described. An illustrative computing device is described below. The system100includes an input interface110that enables a traveler or other user to provide input identifying a destination, a desired arrival time at the destination, and a potential intermediate destination. The input interface110may enable the user to identify a destination from a list of previously entered destinations or by entering an address and/or coordinates of the destination. The input interface110also enables the user to enter an arrival time by which the user wishes to reach the destination. The input interface110also enables the user to identify a desired intermediate destination or a type of intermediate destination (e.g., “food,” “restroom,” etc.). Operation of the input interface110is further described below.
Using the user input received via the input interface110, a route data interface112enables the system100to access a store of route data for an area that may encompass the trip. The route data may include map or other roadway data in the area that may be used to identify one or more available routes of travel between an origin and a destination in the area. The route data may include a local store of route data114that may be maintained within the system100. In various embodiments, the local store of route data114may include, for example, map data for a nation in which the user resides and/or in which the system100is initially deployed, as well as map data for one or more neighboring nations. The route data interface112also may engage a remote store of route data116. The remote store of route data116may be used to update the local store of route data114to present map information that represents new or changed roads or road conditions, and/or data regarding current road or traffic conditions. The remote store of route data116also may be used to augment the local store of route data114to include map data for one or more additional areas that may not be presently included in the local store of route data114. The remote store of route data116may be maintained on a remote computing system that is accessible by the system100, as further described below. A routing module118uses the data indicative of the trip received by the input interface110and the route data accessible by the route data interface112to identify one or more routes the user may travel from the origin to the destination, the intermediate destination, and other locations (e.g., a station where a user can refuel or recharge a vehicle). In various embodiments, the routing module118may employ a vector map of road segments with defined end points of each segment. A process such as Dijkstra's algorithm, an A* algorithm, or another suitable method may be used to determine one or more shortest path trees between the origin and destination. Based on user preferences, such as preferences for or against freeways, the desire to avoid toll roads, and the like, various possible segments may be eliminated from consideration before determining the shortest path tree. The user may be presented with an option to choose from more than one available route. Each of the one or more routes presented by the routing module118is associated with a timeline. The timeline incorporates an expected time of travel between the origin and the destination, optionally including a trip to the intermediate destination, based on a total time to travel each of the segments included in the route at an anticipated travel speed. The anticipated travel speed may be based on a combination of the legal speed limit for each of the segments, anticipated or actual traffic delays, and one or more additional factors. A route may be selected from the one or more routes provided by the routing module118in order to reach the destination by the desired arrival time. Also, using the timeline, it may be determined whether there is time and/or how much time is potentially available to spend at the intermediate destination.
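To make the routing step concrete, the sketch below runs Dijkstra's algorithm over a graph of road-segment end points, with each edge weighted by an anticipated travel time (segment length, speed limit, and traffic delays could all feed that weight). It is only a minimal illustration of the kind of shortest-path computation the routing module118might perform; the graph encoding and the function name are assumptions, not the patented implementation.

```python
import heapq
from typing import Dict, List, Tuple

# Road network as an adjacency list: segment end point -> [(neighbor, seconds), ...].
Graph = Dict[str, List[Tuple[str, float]]]

def shortest_travel_time(graph: Graph, origin: str, destination: str) -> float:
    """Dijkstra's algorithm over segment end points; returns travel time in seconds.

    Edge weights are anticipated per-segment travel times; segments excluded by
    user preferences (e.g., toll roads) would simply be omitted from the graph.
    """
    best = {origin: 0.0}
    heap = [(0.0, origin)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == destination:
            return cost
        if cost > best.get(node, float("inf")):
            continue  # stale heap entry; a cheaper path was already found
        for neighbor, seconds in graph.get(node, []):
            new_cost = cost + seconds
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return float("inf")  # destination unreachable
```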
The input interface110and the routing module118interoperate with a display module126to receive input from and/or provide information to the user. In various embodiments, the display module126may include a touchscreen display that enables a user to provide input and receive output from a single device, as is commonly used in GPS navigation devices incorporated into vehicles, standalone GPS navigation devices, smartphones, and smartwatches. To provide a further example, a navigation display150shows various features that may be engaged by a user to provide input and receive output from the system100via the display module126. An input section160enables the user to control or enter information into the system100. For example, the input section160may include a location input162that enables the user to enter an address, coordinates, or a name of a desired destination. The name may be the name of a business, such as a name of a hotel, restaurant, etc., or the name of a previously-saved location, such as “home,” “office,” etc. When the user engages the location input162, such as by touching the location input162, an on-screen keyboard (not shown) may be displayed to enable the user to key in the address, coordinates, or name. Alternatively, by tapping a voice input control164, a user may provide verbal input for the address, coordinates, etc. A user also may speak a “wake word” to initiate verbal input to the system100. In various embodiments, the input section160also may include destination types170that enable a user to specify a type of destination or intermediate destination. The destination types170may include “food,” “restrooms,” “parking,” etc. Some of the destination types170, when selected, may invoke further option selections. For example, upon selecting the “food” option, the user may be presented with options for types of food (e.g., “pizza,” “fast food,” etc.). The input section160also may present a list of recently entered or visited destinations172. The list of recent destinations172may include a number of destinations174-176including private addresses or business establishments. For business establishments, a rating178, such as a star rating, may be provided from the user's own prior ratings or from a ratings service. The list of recent destinations172may allow a user to tap on or otherwise select one of the recent destinations174-176. Being able to select from the list of recent destinations172may be convenient, for example, when the user is on a business trip and is traveling back and forth from a hotel to a business to be visited. An output section180shows a map of streets182on which the user is traveling or in the user's vicinity. A route to be travelled and/or turn-by-turn directions (not shown inFIG.1) may be provided relative to the map182. The output section180also may show nearby locations184, which may include businesses, landmarks, and other places. It will be appreciated that the output section180, although potentially primarily providing information to the user, may also receive input. For example, the user may tap on a particular route, street, or location to select a route or destination. The navigation display150is used as a basis for describing operation of the system100, as further described below. In various embodiments, the navigation display150of the system100presents a schedule input166. As further explained below, the schedule input166may be used to initiate the functions of selecting a destination, a desired time of arrival at the destination, and, optionally, the scheduling of one or more intermediate destinations or stops.
In various embodiments, the scheduling option may be initiated by voice commands as well as by engaging the schedule input166. Operation of the scheduling option is explained in detail below with reference toFIGS.6-14. Referring toFIG.2, the navigation system with time constraint management100ofFIG.1may be integrated with a vehicle200or transportable aboard a vehicle200. In various embodiments, the vehicle200includes a body202that may support a cabin204capable of accommodating an operator, one or more passengers, and/or cargo. In various embodiments, the vehicle200may be controlled by an operator or the vehicle200may be a self-driving vehicle. The vehicle200may be an autonomous vehicle that travels without an operator to transport passengers and/or cargo. The body202also may include a cargo area206separate from the cabin204, such as a trunk or a truck bed, capable of transporting cargo. The vehicle200includes a drive system201selectively engageable with one or more front wheels203and/or one or more rear wheels205to motivate, accelerate, decelerate, stop, and steer the vehicle200. The drive system201may include an electrically-powered system, a fossil-fuel-powered system, a hybrid system using both electric power and fossil fuels, or another type of power source. In various embodiments, the system100ofFIG.1may be an integral part of the vehicle200, including a computing system that is part of the vehicle200, powered by a power system aboard the vehicle200and integrated with one or more instrument panels220disposed in the cabin204of the vehicle200. The instrument panels220might include various operational gauges, such as a speedometer, tachometer, and odometer, climate controls, entertainment controls, and other instruments along with presenting the navigation display150(FIG.1). In various embodiments, the system100may include a separate computing device transportable aboard the vehicle200, such as a smartphone, smartwatch, tablet computer, or other portable computing device. In various embodiments, the navigation system with time constraint management100may include a computing device that is usable separately from the vehicle200, such as a portable or non-portable personal computer usable for trip planning. Consequently, the system100may be used by pedestrians or other users who are not traveling by vehicle. Referring toFIG.3, a dashboard300within the cabin204shows one of the instrument panels220(FIG.2) that includes the navigation display150. In various embodiments, as previously described, the navigation display150may include a touchscreen display that enables an individual to directly engage the navigation display150to interact with the navigation display150, as further described below. In various embodiments where the navigation display150does not include a touchscreen display, controls301-304adjacent to the navigation display150may enable user engagement with the navigation display150to move a cursor, enter characters, or perform other control functions. In various embodiments, the system100is configured to receive voice inputs to interact with the navigation display. Voice input may be received in response to activation of a voice input control164(FIG.1) or by using a wake word. In various embodiments, instead of or in addition to using the navigation display150on the dashboard300, a portable computing device310, such as a smartphone, smartwatch, tablet computer, or other portable computing device, may execute an application that operates to provide functions of the system100.
The portable computing device310may operate alone or in some combination with a remote computing system, as further explained below. The portable computing device310may engage with other systems aboard the vehicle200, such as a speedometer or other devices, via an interface320. The interface320may include a wireless interface, such as a Bluetooth or Wi-Fi interface, or a wired interface using a USB or other wired connection. In various embodiments, the interface320may enable the portable computing device310to provide input to a self-driving system aboard the vehicle200to guide the vehicle200to one or more destinations. Referring additionally toFIG.4and given by way of example only and not of limitation, an illustrative computing device400may be used aboard the vehicle200(FIG.2) to perform the functions of the navigation system with time constraint management100(FIG.1). In various embodiments, the computing device400typically includes at least one processing unit420and a system memory430. Depending on the exact configuration and type of computing device, the system memory430may be volatile memory, such as random-access memory (“RAM”), non-volatile memory, such as read-only memory (“ROM”), flash memory, and the like, or some combination of volatile memory and non-volatile memory. The system memory430typically maintains an operating system432, one or more applications434—such as computer-executable instructions to support operation of the system100—and program data436. The operating system432may include any number of operating systems executable on desktop or portable devices including, but not limited to, Linux, Microsoft Windows®, Apple OS®, or Android®, or a proprietary operating system. The computing device400may also have additional features or functionality. For example, the computing device400may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, tape, or flash memory. Such additional storage is illustrated inFIG.4by removable storage440and non-removable storage450. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules or other data. The system memory430, the removable storage440, and the non-removable storage450are all examples of computer storage media. Available types of computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory (in both removable and non-removable forms) or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computing device400. Any such computer storage media may be part of the computing device400. The computing device400may also have input device(s)460such as a keyboard, stylus, voice input device, touchscreen input device, etc. Output device(s)470such as a display, speakers, short-range transceivers such as a Bluetooth transceiver, etc., may also be included. The computing device400also may include one or more communication systems480that allow the computing device400to communicate with other computing systems490, as further described below.
As previously mentioned, the communication system480may include systems for wired or wireless communications. Available forms of communication media typically carry computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media. The term “modulated data signal” may include a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of illustrative example only and not of limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared and other wireless media. The term computer-readable media as used herein includes both storage media and communication media. In further reference toFIG.4, the computing device400may include global positioning system (“GPS”)/geolocation circuitry485that can automatically discern its location based on relative positions to multiple GPS satellites or other signal sources, such as cellphone towers. As described further below, GPS/geolocation circuitry485may be used to determine a location of the vehicle200. In various embodiments, the GPS/geolocation circuitry485may be used to determine a position of the vehicle200for generation and analysis of the navigation display150. In addition to one or more onboard computing systems, various embodiments may communicate with remote computing systems to perform the functions herein described. Referring toFIG.5, an operating environment500may include one or more sets of remote computing systems520. The remote computing system520may support a map or route data service. The remote computing system520may provide an additional source of mapping and/or navigational data, as well as a directory of destinations by name, address, and/or coordinates that the user may wish to travel to as a destination or intermediate destination. Although shown as a single computing system inFIG.5, it will be appreciated that the remote computing system520may include one or more computing systems residing at one or more locations. The remote computing system520may include a server or server farm and may communicate with the network510over wired and/or wireless communications links521. The remote computing system520may access programming and data used to perform its functions over high-speed buses525to interact with data storage530. In various embodiments, the remote computing system520may service requests for map data562, destination data564that may be stored via location or coordinates, and/or destination type data566that may retrieve potential destinations and/or intermediate destinations based on a type specified by the user. The data storage530also may include ratings data568that maintains a quality assessment of various locations. The ratings data568may be created by the user or be drawn from an online ratings service that collects ratings from visitors or patrons of the locations. It will be appreciated that some or all of the data maintained in the data storage530may be accessible from or stored in a user's computing system without accessing the data storage530over the network510. The system100may be disposed aboard the vehicle200.
As previously described, the system100may be supported by a computing device integrated with the vehicle200or supported by a portable computing device carried aboard the vehicle200. The system100may communicate over the network510via a communications link512to access the remote computing system520to access the map data562, the destination data564, and/or the destination type data566. The communications link512may include a wireless communications link to enable mobile communications with the system100aboard the vehicle200or may include a wired link when the vehicle200is stopped. The system100also may be supported by a computing system570that is not integrated with or transported aboard the vehicle200. The computing system570may include a portable computing system, such as a portable computer, tablet computer, smartphone, or smartwatch and may be used to generate the navigation display150(FIG.1). The computing system570may communicate over the network510via a communications link571to access the remote computing system520to access the map data562, the destination data564, and/or the destination type data566. The communications link571may include a wireless or a wired communications link. Operation of the system100is further described with reference to the following figures. Referring toFIG.6, the navigation display150enables a user to identify one or more destinations, including a final destination and a potential intermediate destination. As previously described, the user may physically engage (such as with a finger601) the location input162on the navigation display to enter an address, coordinates, or a name of a desired destination. The user may use an on-screen keyboard (not shown) to enter the desired destination or use a wake word or physically engage the voice input control164to provide verbal input for the address, coordinates, etc. The user also may select one of the locations174listed in the recent destinations list172. The user also may begin by selecting one of the destination types170, such as “food,” “restrooms,” “parking,” etc. To provide an operational example, it is assumed that the user is visiting a city with which the user is not highly familiar. For further sake of example, it is further assumed that the user is staying with a friend or at a “bed & breakfast” type of lodging located at 123 Main St. in Estacada, Oregon. It is further assumed that the user may need to return for courtesy or by mandate by 11:00 p.m., or merely that the user is determined to return to the lodging by 11:00 p.m. However, on the way to the destination, the user wishes to stop to have dinner. In order to plan his trip, the user selects the schedule input166. In various embodiments, the user engages the schedule input166by tapping the schedule input166with a finger601. Referring toFIG.7, in various embodiments, a scheduling window710is presented instead of or overlaying the navigation display150. The scheduling window710includes a destination menu720including a range of destination options721-724. For example, the destination menu720may include a “Go Home” option721, which would set the destination to a user's preset home address. The destination menu720includes a “Return to Current Location” option722in case the user wants to specify an intermediate destination (e.g., a restaurant or other location) and return to the user's starting location. The destination menu720includes a “Go Somewhere else” option723to choose another destination.
The destination menu720also includes a “Reach Location of My Next Event” option724, which would enable a user to link the system100to the user's calendar to procure directions to the next event. In various embodiments, to choose an option from the destination menu720, the user may tap on a selected option with a finger601, such as the “Go Somewhere Else” option723as shown inFIG.7. In various embodiments, the user may choose an option from the destination menu720by sliding a cursor730along the destination menu using the finger601or an external control (e.g., controls301-304ofFIG.3) to move the cursor along a dimension735to choose an option from the destination menu720. In various embodiments, a user also may use voice commands to make a selection from the destination menu720. Referring toFIG.8, in addition to choosing a destination from the destination menu720of the scheduling window710, the user also chooses a desired arrival time from an arrival time menu850. In various embodiments, the arrival time menu850enables a user to use the finger601to scroll the arrival time menu850by sliding the finger601along a dimension852until a desired time appears in a selection frame854. In various embodiments, the user may manipulate the arrival time menu850to present the desired arrival time to the minute or within a range of a few minutes within each hour. In other embodiments, the user may engage an external control (e.g.,301-304ofFIG.3) to manipulate the arrival time menu850until the desired arrival time is presented in the selection frame854. In various embodiments, a user also may use voice commands to make a selection from the arrival time menu850. Referring toFIG.9, after the user finishes making destination and timing selections, the user may engage the system to proceed to a next step by engaging a confirm option960using the finger601or an external control (e.g., controls301-304ofFIG.3), or by providing a voice command. Continuing with the previously-stated example of the user wanting to return to his lodging by 11:00 p.m., the user chooses a desired arrival time of 11:00 p.m. with the selection frame854of the arrival time menu850. Referring toFIG.10, in response to selecting the confirm option960, the user is presented with a destination menu1010. In various embodiments, the destination menu1010may include a “Select a Previous Location” option1020. The “Select a Previous Location” option1020may include a list of previously-visited or previously-entered locations1030, such as locations1031-1033. Although the list of previously-visited or previously-entered locations1030includes only a few entries, it will be appreciated that the list1030may be scrollable using the finger601or an external control (e.g., controls301-304ofFIG.3) or by providing a voice command to access additional previously-visited or previously-entered locations. In various embodiments, when applicable, each of the listed locations1031-1033may be associated with a rating1041-1043, respectively, provided by a ratings service or previously assigned by the user. The user may select from the list1030with the finger601, an external control (e.g., controls301-304ofFIG.3), or by providing a voice command. The destination menu1010also may include a “Search for New Location” option1050. The search for new location option1050includes a search input1062in which the user can enter a name, address, or coordinates for a location.
The search may be commenced by engaging the search input1062with the finger601to open an on-screen keyboard (not shown), by using the finger601to activate a voice input control1064to receive voice input, by using a wake word to signal the system to receive voice input, or by other processes. Referring toFIG.11, the user specifies the destination to which the user desires to return by the desired arrival time previously specified as described with reference toFIGS.7-9. Continuing with the previously-described example, the user wishes to return to the user's lodging by 11:00 p.m., as specified with the selection frame854of the arrival time menu850. From either the “Select a Previous Location” option1020or the “Search for New Location” option1050, the user chooses the location to which the user desires to return by 11:00 p.m. In this example, the user chooses the listed location1031which is the address of the user's lodging at 123 Main St. in Estacada, Oregon. Referring toFIG.12, in response to the user having chosen a desired arrival time and the destination to be reached by that time as described with reference toFIGS.7-11, the system100returns the user to the navigation display150where the user may have an option to choose to stop at an intermediate destination. Continuing with the previous example, the user wishes to stop for dinner before returning to the user's lodging at 123 Main St. in Estacada, Oregon by 11:00 p.m. The navigation display150ofFIG.12is similar to that of the navigation display ofFIG.6in which the user first engaged the system100to select the desired arrival time and the destination. A difference is that, because a desired arrival time at a destination has been selected, the schedule input166is overlaid with the desired arrival time indicator1266. The desired arrival time indicator1266indicates that the system100is operating to guide the user to a specified destination by the desired arrival time and serves to remind the user of the desired arrival time. To select a potential intermediate destination, the user may enter the name or address of a location in the location input162(or activate the voice input control164or use a wake word to do so), select from one of the list of recent destinations172, or select one of the destination types170. The destination types170available may include restaurants, energy replenishment stations (e.g., gas or electric charging stations), parks, grocery stores, and other merchants, business, medical, or recreational destinations, or any other type of destination. In keeping with the current example, the user selects a food option1270with the finger601to choose a place to stop for dinner on the way back to the user's lodging by 11:00 p.m.
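At this point in the flow, the scheduling steps ofFIGS.7-12have collected three pieces of state: the destination, the desired arrival time, and, optionally, a type of intermediate destination. A minimal sketch of that state as a Python data structure follows; the class and field names are hypothetical, chosen only to make the collected inputs explicit, and do not appear in the disclosure.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class TripPlan:
    """Inputs gathered by the scheduling flow (names are illustrative only)."""
    destination: str                 # e.g., the lodging at 123 Main St., Estacada
    desired_arrival: datetime        # the time selected in the arrival time menu
    intermediate_type: Optional[str] = None    # e.g., "food", "restrooms", "parking"
    intermediate_choice: Optional[str] = None  # filled in once the user picks a stop

plan = TripPlan(
    destination="123 Main St., Estacada, Oregon",
    desired_arrival=datetime(2024, 1, 1, 23, 0),  # any date; 23:00 = 11:00 p.m.
    intermediate_type="food",
)
```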
In addition to being tailored to the user's selection of food locations, in various embodiments, the list is tailored according to the desired time (i.e., 11:00 p.m.) by which the user wants to reach the destination. Thus, when a user does not have sufficient time to both travel to a particular food location and travel from the food location to the destination, that food location may not be listed. Moreover, in various embodiments, the system100may include a default minimum visit time that a user may spend at a restaurant. Thus, food locations that the user would not be able to travel to, spend the default minimum visit time, and then travel to the destination by the desired arrival time also may not be listed. The default minimum visit time, as further described below, may be set to 30 minutes. However, a longer or shorter default minimum visit time may be set, and the user may be able to alter the default minimum visit time for the present case or as a general default for present and future cases. Similar calculations may be made for types of intermediate destinations other than food establishments. For example, a shorter default minimum of 15 minutes may be set for a refueling or recharging stop. Continuing to refer toFIG.13, in addition to the listing of food locations1330, the intermediate destination selection screen1300also includes a listing indicator1310to remind the user that this is a list of food locations. A scheduling mode reminder1312may be presented to indicate that the user is selecting an intermediate destination prior to the user's selected destination arrival time1266of 11:00 p.m. A back or escape icon1320enables the user to exit from the intermediate destination selection screen1300, such as by using the finger601(not shown inFIG.13) or a voice command. In various embodiments, from the intermediate destination screen1300, the user also may engage the listing indicator1310to change the type of intermediate destination, engage the scheduling mode reminder1312to cancel the scheduling mode or change the selected destination arrival time, or perform other functions. As shown inFIG.13, the listing of food locations1330includes locations1331-1333of three restaurants. Each of the locations1331-1333includes an identifier, such as a number, letter, or other symbol, for correlation purposes, as further described below. The listing of food locations1330, respectively for each of the locations1331-1333, includes a name and address1351-1353and a food type1341-1343which may be presented in iconic form as shown or in a textual form. The listing of food locations also includes a rating1361-1363that may be drawn from past user entries or a ratings service. In addition, considering an expected travel time to the destination by way of any of the locations1331-1333, an available time1371-1373(i.e., “Time to Spend Here”) is associated with each of the locations1331-1333, respectively. The available time1371-1373is determined based on travel time to each of the locations1331-1333and the time to travel from each of the locations1331-1333to the destination to be reached by the selected destination arrival time. For example, with the selected destination arrival time1266of 11:00 p.m., if the current time is 9:45 p.m., there is a period of 75 minutes before the user should arrive at the destination.
Thus, if the location1331is 5 minutes from the user's point of origin (e.g., the user's present location), and the travel time from the location1331to the destination is 15 minutes, deducting travel time to the location1331and then from the location1331to the destination leaves 55 minutes. Thus, the available time1371at the location1331is 55 minutes. The available time1372and1373for the other locations1332and1333, respectively, may be calculated in the same way. As a result, without mental calculation, the user can make a plan in consideration of one or more of the food type1341-1343, the rating1361-1363, the available time1371-1373, or other presented information. Thus, if the user wants pizza and 45 minutes is determined by the user as sufficient time to order and eat the pizza, the user may choose location1332or the user may make another choice. Additional available locations may be accessed by using a scroll input1320with the finger (not shown) or with voice commands. In various embodiments, the listing of food locations1330is sortable according to user preferences. For example, the listing of food locations1330may be sortable by selection of a sort type including a food type1391, an alphabetical listing1392, a rating1393, or an available time1394. Thus, the user may be able to sort the list by what the user feels like eating, whether the user wants a quick bite or a leisurely meal, etc. As a default or by user selection, the listing of food locations1330is sorted in descending order by available time1371-1373. In various embodiments, the intermediate destination selection screen1300also may include a map1302of an area including the locations1331-1333and the destination1310. The locations1331-1333are signified on the map1302by symbols1381-1383, respectively, that correlate with the identifiers used in the listing1330of the locations1331-1333, respectively. The map1302may include streets1303and/or highways1304in the area. For the user's information, the map1302also may include routes1385-1387to each of the locations1331-1333. The map1302also may include a subsequent route1388to the destination1310. InFIG.13, only the subsequent route1388from the location1331to the destination1310is shown. However, in various embodiments, the map1302may also show routes from the other locations1332and1333to the destination1310. The information provided by the map1302also may be useful to the user in making a dining plan in the event the user wants to drive a certain route to see a landmark, avoid potential traffic, or for other considerations.
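The “Time to Spend Here” arithmetic just described reduces to a simple subtraction per candidate location. The following minimal Python sketch is illustrative only; the Candidate type, the function names, and the travel times for locations1332and1333are assumptions rather than disclosed implementation details, and only the figures for location1331(5 and 15 minutes against a 75-minute budget) come from the worked example above.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    travel_to_min: int      # travel time from the point of origin to the location
    travel_onward_min: int  # travel time from the location to the destination

BUDGET_MIN = 75              # minutes remaining before the 11:00 p.m. arrival time
DEFAULT_MIN_VISIT_MIN = 30   # default minimum visit time for a food stop

# Hypothetical candidates; only location1331's times are taken from the text.
candidates = [
    Candidate("location1331", travel_to_min=5, travel_onward_min=15),
    Candidate("location1332", travel_to_min=10, travel_onward_min=20),
    Candidate("location1333", travel_to_min=20, travel_onward_min=25),
]

def available_time(c: Candidate, budget: int = BUDGET_MIN) -> int:
    # Deduct travel to the location and onward travel to the destination.
    return budget - c.travel_to_min - c.travel_onward_min

# List only locations offering at least the default minimum visit time,
# sorted in descending order of available time (the default sort order).
listing = sorted(
    (c for c in candidates if available_time(c) >= DEFAULT_MIN_VISIT_MIN),
    key=available_time,
    reverse=True,
)

for c in listing:
    print(f"{c.name}: {available_time(c)} minutes to spend here")
```

Run as-is, this prints 55 minutes for location1331, matching the worked example.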
In various embodiments, the system100also may consider the travel capacity of a vehicle in which the user is travelling. For example, if the user's vehicle is low on fuel or battery charge, the user may have to stop on the way to the destination to replenish the vehicle's available energy. The determination may be based on an actual amount of energy needed to reach the destination, on an amount of energy needed to reach the destination while leaving a threshold amount of energy for travel upon leaving the destination, or other considerations. Referring toFIG.14, for example, when the user selects the food option1270in selecting an intermediate destination (FIG.12), the system100determines that any deviation from traveling directly to the destination will require additional energy. An intermediate destination selection screen1400shows the consequences of the deviation for fuel. Accordingly, the system100determines what locations may be available for the food option by considering, in addition to travel time to each of the locations1331and1332and the time to travel from each of locations1331and1332to the destination, the time to travel to a station to replenish the vehicle's energy and a time to replenish the vehicle's energy supply. For a gas-powered vehicle, for example, a refueling time may be estimated at 10 minutes; for an electric vehicle, a longer time may be required to reach a particular level of charge. For purposes of the example, the system100determines that it will take 10 minutes to replenish the vehicle's energy. Also, by identifying a station1401that is not out of the way on the routes to the locations1331and1332, only 5 additional minutes are attributed for traveling to the station. Thus, in addition to travel times to the locations and subsequent travel times from the locations to the destination, an additional 15 minutes is deducted from the available times at each of the locations. Thus, the available time1471for location1331is 40 minutes (instead of 55 minutes for the available time1371ofFIG.13). Similarly, the available time1472for location1332is 30 minutes (instead of 45 minutes for the available time1372ofFIG.13). In various embodiments, deducting the additional time for replenishing the energy of the vehicle may leave an available time at some of the intermediate destinations below a default threshold time. Accordingly, some locations, such as location1333(included inFIG.13, but not inFIG.14) may be eliminated from the list of potential intermediate destinations. It will be appreciated that a map1402of the intermediate destination selection screen1400includes a station1401where the user may replenish energy of the vehicle. Information1411for the station1401is also included on the intermediate destination selection screen1400. It will be appreciated that, just as a list of alternative food locations is offered, a list of alternative stations to replenish energy of the vehicle may be provided from which the user can choose. The routes, travel times, available times, etc., may be calculated as a result of the user's choice of station. Referring toFIG.15, an illustrative method1500is provided for receiving inputs indicative of a destination and a potential intermediate destination and determining a time potentially available at the intermediate destination. The method1500starts at a block1505. At a block1510, a first input indicative of a destination is received. At a block1520, a second input indicative of an arrival time at the destination is received. It will be appreciated that the inputs of blocks1510and1520are receivable in the order shown, or the arrival time may be received as the first input and the destination as the second input. At a block1530, a third input indicative of an intermediate destination to be visited before traveling to the destination is received. At a block1540, a first travel time to the intermediate destination, a second travel time to the destination, and a time available at the intermediate destination are determined. At a block1550, the time available at the intermediate destination is communicated to a user. The method ends at a block1555.
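Under the same assumptions, the FIG.14adjustment can be sketched by deducting the replenishment overhead (5 minutes of added travel plus 10 minutes of refueling in the example) and re-applying the minimum-visit threshold. The sketch below is standalone; the names and the travel times for locations1332and1333remain hypothetical.

```python
BUDGET_MIN = 75                  # minutes until the selected arrival time
DEFAULT_MIN_VISIT_MIN = 30       # minimum worthwhile visit
REPLENISH_OVERHEAD_MIN = 5 + 10  # 5 min detour to station1401 + 10 min refueling

# (location, travel to it, onward travel to the destination), as in FIG.13.
candidates = [
    ("location1331", 5, 15),
    ("location1332", 10, 20),
    ("location1333", 20, 25),
]

for name, to_min, onward_min in candidates:
    remaining = BUDGET_MIN - to_min - onward_min - REPLENISH_OVERHEAD_MIN
    if remaining >= DEFAULT_MIN_VISIT_MIN:
        print(f"{name}: {remaining} minutes to spend here")
    else:
        print(f"{name}: dropped (only {remaining} minutes would remain)")
```

With these figures, location1331shows 40 minutes and location1332shows 30 minutes, while location1333falls below the 30-minute threshold and is eliminated, matching the behavior described for FIG.14.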
It will be appreciated that the detailed description set forth above is merely illustrative in nature and variations that do not depart from the gist and/or spirit of the claimed subject matter are intended to be within the scope of the claims. Such variations are not to be regarded as a departure from the spirit and scope of the claimed subject matter.
11859988 | DETAILED DESCRIPTION The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar parts. While several illustrative embodiments are described herein, modifications, adaptations and other implementations are possible. For example, substitutions, additions or modifications may be made to the components illustrated in the drawings, and the illustrative methods described herein may be modified by substituting, reordering, removing, or adding steps to the disclosed methods. Accordingly, the following detailed description is not limited to the disclosed embodiments and examples. Instead, the proper scope is defined by the appended claims. Disclosed embodiments of the present disclosure provide methods and systems for vehicle ridesharing and vehicle ridesharing management. The term “vehicle” or “ridesharing vehicle” as used herein refers to any kind of vehicle (e.g., car, van, SUV, truck, bus, etc.) suitable for human transportation, such as providing ride services. In some embodiments, a vehicle may be a taxi. In some embodiments, a vehicle may include an autonomous vehicle, wherein a control device integrated with the vehicle or a management system separate from the vehicle may send operational instructions and guide the vehicle to designated pick-up locations and drop-off locations. For the ease and conciseness of description, some embodiments disclosed herein may simply refer to a vehicle or a taxi as an example, which does not limit the scope of the disclosed embodiments. Consistent with some embodiments of the present disclosure, a ridesharing management system may receive a first ride request from a first user. The first ride request may include a starting point and a desired destination. The ridesharing management system may calculate a first estimated pick-up time based on a current location of a vehicle that is in the surrounding area. After sending a confirmation with the estimated pick-up time, the ridesharing management system may then guide the vehicle to a pick-up location for picking up the first user. The pick-up location may be a different location from the starting point included in the first ride request. The system may also guide the first user to the pick-up location. In some embodiments, the system may subsequently receive a second ride request from a second user, for example, while the first user is still in the vehicle. The second ride request may include a second starting point and a second desired destination. The system may calculate a second estimated pick-up time, provide a second confirmation to the second user, and guide the second user to a second pick-up location. In some embodiments, the second pick-up location may be a different location from the second starting point included in the second ride request. In some embodiments, the system may calculate the fare for each user based on the solo ride portion for the corresponding user and the shared portion of the ride. For example, the system may offer a discount for the shared portion of the ride. In some embodiments, the system may also calculate the fare amount for a particular user based on various service-related parameters such as user input regarding whether to use toll roads, the walking distance between the starting point and the pick-up location, and the walking distance between the desired destination and the drop-off location.
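As a rough illustration of the fare logic just outlined, the sketch below charges the solo portion of a ride at a full per-minute rate and the shared portion at a discounted rate, then adds any toll charges the user's selections permit. The per-minute rate, the discount, and the function name are assumptions for illustration, not values from the disclosure.

```python
def estimate_fare(solo_minutes: float, shared_minutes: float,
                  rate_per_minute: float = 0.50,
                  shared_discount: float = 0.30,
                  toll_charges: float = 0.0) -> float:
    """Full rate for the solo portion, discounted rate for the shared
    portion, plus tolls if the user's selections permit toll roads."""
    solo_cost = solo_minutes * rate_per_minute
    shared_cost = shared_minutes * rate_per_minute * (1.0 - shared_discount)
    return round(solo_cost + shared_cost + toll_charges, 2)

# Example: 10 solo minutes plus 20 shared minutes, no toll roads used.
print(estimate_fare(solo_minutes=10, shared_minutes=20))  # 12.0
```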
The embodiments herein further include computer-implemented methods, tangible non-transitory computer-readable mediums, and systems. The computer-implemented methods can be executed, for example, by at least one processor that receives instructions from a non-transitory computer-readable storage medium. Similarly, systems and devices consistent with the present disclosure can include at least one processor and memory, and the memory can be a non-transitory computer-readable storage medium. As used herein, a “non-transitory computer-readable storage medium” refers to any type of physical memory on which information or data readable by at least one processor can be stored. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage medium. Singular terms, such as “memory” and “computer-readable storage medium,” can additionally refer to multiple structures, such as a plurality of memories or computer-readable storage mediums. As referred to herein, a “memory” may comprise any type of computer-readable storage medium unless otherwise specified. A computer-readable storage medium may store instructions for execution by at least one processor, including instructions for causing the processor to perform steps or stages consistent with an embodiment herein. Additionally, one or more computer-readable storage mediums may be used in implementing a computer-implemented method. The term “computer-readable storage medium” should be understood to include tangible items and exclude carrier waves and transient signals. FIG.1is a diagram illustrating an example ridesharing management system, in which various implementations as described herein may be practiced, according to some embodiments of the present disclosure. As shown inFIG.1, ridesharing management system100includes one or more mobile communications devices120A-120F (collectively referred to as mobile communications devices120), a network140, a ridesharing management server150, and a database170. The plurality of mobile communications devices120A-120F may further include a plurality of user devices120A-120C associated with users130A-130C respectively, a plurality of driver devices120D and120E associated with drivers130D and130E, and a driving-control device120F associated with an autonomous vehicle130F. Consistent with some embodiments of the present disclosure, ridesharing management server150may communicate with driving-control device120F to direct autonomous vehicle130F to pick-up and drop-off users130A-130C. In one example, autonomous vehicles capable of detecting objects on the road and navigating to designated locations may be utilized for providing ridesharing services. The components and arrangements shown inFIG.1are not intended to limit the disclosed embodiments, as the system components used to implement the disclosed processes and features can vary. For example, ridesharing management system100may include multiple ridesharing management servers150, and each ridesharing management server150may handle a certain category of ridesharing services, ridesharing services associated with a certain category of service vehicles, or ridesharing services in a specific geographical region, such that a plurality of ridesharing management servers150may collectively provide a dynamic and integrated ridesharing service system.
Network140may facilitate communications between user devices120and ridesharing management server150, for example, receiving ride requests and other ride service related input from or sending confirmations to user devices, and sending ride service assignments to driver devices and driving-control devices. Network140may be any type of network that provides communications, exchanges information, and/or facilitates the exchange of information between ridesharing management server150and user devices120. For example, network140may be the Internet, a Local Area Network, a cellular network, a public switched telephone network (“PSTN”), or other suitable connection(s) that enables ridesharing management system100to send and receive information between the components of ridesharing management system100. Network140may support a variety of messaging formats, and may further support a variety of services and applications for user devices120. For example, network140may support navigation services for mobile communications devices120, such as directing the users and service vehicles to pick-up or drop-off locations. Ridesharing management server150may be a system associated with a communication service provider which provides a variety of data or services, such as voice, messaging, real-time audio/video, to users, such as users130A-130E. Ridesharing management server150may be a computer-based system including computer system components, desktop computers, workstations, tablets, handheld mobile communications devices, memory devices, and/or internal network(s) connecting the components. Ridesharing management server150may be configured to receive information from mobile communications devices120over network140, process the information, store the information, and/or transmit information to mobile communications devices120over network140. For example, in some embodiments, ridesharing management server150may be configured to: receive ride requests from user devices120A-120C, send ride confirmation and ride fare information to user devices120A-120C, and send ride service assignments (for example, including pick-up and drop-off location information) to driver devices120D and120E, and driving-control device120F. Further, ridesharing management server150may further be configured to receive user input from user devices120A-120C as to various ride service parameters, such as walking distance to a pick-up location, maximum delay of arrival/detour, and maximum number of subsequent pick-ups, etc. In some embodiments, ridesharing management server150may be further configured to: calculate ride fares based on a solo portion of a user's ride and a shared portion of the ride. Further, the ride fare calculation may further be based on various ride service parameters set by the user, such as the walking distance involved in the ride, and user selection regarding toll road usage, etc. Database170may include one or more physical or virtual storages coupled with ridesharing management server150. Database170may be configured to store user account information (including registered user accounts and driver accounts), corresponding user profiles such as contact information, profile photos, and associated mobile communications device information. With respect to users, user account information may further include ride history, service feedbacks, complaints, or comments. With respect to drivers, user account information may further include the number of ride service assignments completed, ratings, and ride service history information.
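For concreteness, the account and ride records that database170is described as holding here and in the following paragraph might be modeled as below. The field names and types are illustrative guesses, not a disclosed schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UserAccount:
    user_id: str
    contact_info: str
    profile_photo: Optional[str] = None
    ride_history: List[str] = field(default_factory=list)  # ride request IDs
    feedback: List[str] = field(default_factory=list)      # comments/complaints

@dataclass
class DriverAccount:
    driver_id: str
    contact_info: str
    assignments_completed: int = 0
    rating: float = 0.0

@dataclass
class RideRequest:
    request_id: str
    user_id: str
    starting_point: str       # address or coordinates
    desired_destination: str
```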
Database170may further be configured to store various ride requests received from user devices120A-120C and corresponding starting point and desired destination information, user input regarding various service parameters, pick-up and drop-off locations, time of pick-up and drop-off, ride fares, and user feedback, etc. Database170may further include traffic data, maps, and toll road information, which may be used for ridesharing service management. Traffic data may include historical traffic data and real-time traffic data regarding a certain geographical region, and may be used to, for example, calculate estimated pick-up and drop-off times, and determine an optimal route for a particular ride. Real-time traffic data may be received from a real-time traffic monitoring system, which may be integrated in or independent from ridesharing management system100. Maps may include map information used for navigation purposes, for example, for calculating potential routes and guiding the users to a pick-up or drop-off location. Toll road information may include toll charges regarding certain roads, and any change or updates thereof. Toll road information may be used to calculate ride fares, for example, in cases where the user permits use of toll roads. The data stored in database170may be transmitted to ridesharing management server150for accommodating ride requests. In some embodiments, database170may be stored in a cloud-based server (not shown) that is accessible by ridesharing management server150and/or mobile communications devices120through network140. While database170is illustrated as an external device connected to ridesharing management server150, database170may also reside within ridesharing management server150as an internal component of ridesharing management server150. As shown inFIG.1, users130A-130E may include a plurality of users130A-130C, and a plurality of drivers130D and130E, who may communicate with one another, and with ridesharing management server150using various types of mobile communications devices120. As an example, a mobile communications device120may include a display such as a television, tablet, computer monitor, video conferencing console, or laptop computer screen. A mobile communications device120may further include video/audio input devices such as a microphone, video camera, keyboard, web camera, or the like. For example, a mobile communications device120may include mobile devices such as a tablet or a smartphone having display and video/audio capture capabilities. A mobile communications device120may also include one or more software applications that enable the mobile communications device to engage in communications, such as IM, VoIP, video conferences. For example, user devices120A-120C may send requests to ridesharing management server150, and receive confirmations therefrom. Drivers130D and130E may use their respective devices to receive ride service assignments and navigation information from ridesharing management server150, and may contact the users with their respective devices120D and120E. In some embodiments, a user may directly hail a vehicle by hand gesture or verbal communication, such as traditional street vehicle hailing. In such embodiments, once a driver accepts the request, the driver may then use his device to input the ride request information.
Ridesharing management server150may receive such request information, and accordingly assign one or more additional ride service assignments to the same vehicle, for example, subsequent e-hail ride requests received from other mobile communications devices120through network140. In some embodiments, driver devices120D and120E, and driving-control device120F may be embodied in a vehicle control panel, as a part of the vehicle control system associated with a particular vehicle. For example, a traditional taxi company may install a driver device in all taxi vehicles managed by the taxi company. In some embodiments, driver devices120D and120E, and driving-control device120F, may be further coupled with a payment device, such as a card reader installed as a part of the vehicle control panel or as a separate device associated with the vehicle. A user may then use the payment device as an alternative payment mechanism. For example, a user who hails the taxi on the street may pay through the payment device, without using a user device providing ridesharing service. FIG.2is a diagram illustrating the components of an example mobile communications device200associated with a ridesharing management system, such as system100as shown inFIG.1, in accordance with some embodiments of the present disclosure. Mobile communications device200may be used to implement computer programs, applications, methods, processes, or other software to perform embodiments described in the present disclosure, such as mobile communications devices120A-120F. For example, user devices120A-120C may be installed with a user side ridesharing application, while driver devices120D and120E and driving-control device120F may be installed with a corresponding driver side ridesharing application. Mobile communications device200includes a memory interface202, one or more processors204such as data processors, image processors and/or central processing units, and a peripherals interface206. Memory interface202, one or more processors204, and/or peripherals interface206can be separate components or can be integrated in one or more integrated circuits. The various components in mobile communications device200may be coupled by one or more communication buses or signal lines. Sensors, devices, and subsystems can be coupled to peripherals interface206to facilitate multiple functionalities. For example, a motion sensor210, a light sensor212, and a proximity sensor214may be coupled to peripherals interface206to facilitate orientation, lighting, and proximity functions. Other sensors216may also be connected to peripherals interface206, such as a positioning system (e.g., GPS receiver), a temperature sensor, a biometric sensor, or other sensing device, to facilitate related functionalities. A GPS receiver may be integrated with, or connected to, mobile communications device200. For example, a GPS receiver may be included in mobile telephones, such as smartphone devices. GPS software may allow mobile telephones to use an internal or external GPS receiver (e.g., connecting via a serial port or Bluetooth). A camera subsystem220and an optical sensor222, e.g., a charge-coupled device (“CCD”) or a complementary metal-oxide semiconductor (“CMOS”) optical sensor, may be used to facilitate camera functions, such as recording photographs and video clips.
Communication functions may be facilitated through one or more wireless/wired communication subsystems224, which include an Ethernet port, radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of wireless/wired communication subsystem224may depend on the communication network(s) over which mobile communications device200is intended to operate. For example, in some embodiments, mobile communications device200may include wireless/wired communication subsystems224designed to operate over a GSM network, a GPRS network, an EDGE network, a Wi-Fi or WiMax network, and a Bluetooth® network. An audio subsystem226may be coupled to a speaker228and a microphone230to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions. I/O subsystem240may include touch screen controller242and/or other input controller(s)244. Touch screen controller242may be coupled to touch screen246. Touch screen246and touch screen controller242may, for example, detect contact and movement or break thereof using any of a plurality of touch sensitivity technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen246. While touch screen246is shown inFIG.2, I/O subsystem240may include a display screen (e.g., CRT or LCD) in place of touch screen246. Other input controller(s)244may be coupled to other input/control devices248, such as one or more buttons, rocker switches, thumb-wheel, infrared port, USB port, and/or a pointer device such as a stylus. Touch screen246may, for example, also be used to implement virtual or soft buttons and/or a keyboard. Memory interface202may be coupled to memory250. Memory250includes high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). Memory250may store an operating system252, such as DARWIN, RTXC, LINUX, iOS, UNIX, OS X, WINDOWS, or an embedded operating system such as VXWorks. Operating system252may include instructions for handling basic system services and for performing hardware dependent tasks. In some implementations, operating system252can be a kernel (e.g., UNIX kernel). Memory250may also store communication instructions254to facilitate communicating with one or more additional devices, one or more computers and/or one or more servers. Memory250can include graphical user interface instructions256to facilitate graphic user interface processing; sensor processing instructions258to facilitate sensor-related processing and functions; phone instructions260to facilitate phone-related processes and functions; electronic messaging instructions262to facilitate electronic-messaging related processes and functions; web browsing instructions264to facilitate web browsing-related processes and functions; media processing instructions266to facilitate media processing-related processes and functions; GPS/navigation instructions268to facilitate GPS and navigation-related processes and instructions; camera instructions270to facilitate camera-related processes and functions; and/or other software instructions272to facilitate other processes and functions.
In some embodiments, communication instructions254may include software applications to facilitate connection with ridesharing management server150that handles vehicle ridesharing requests. Graphical user interface instructions256may include a software program that enables a user associated with the mobile communications device to receive messages from ridesharing management server150, provide user input, and so on. For example, a user may send ride requests and ride service parameters to ridesharing management server150and receive ridesharing proposals and confirmation messages. A driver may receive ride service assignments from ridesharing management server150, and provide ride service status updates. Each of the above identified instructions and applications may correspond to a set of instructions for performing one or more functions described above. These instructions need not be implemented as separate software programs, procedures, or modules. Memory250may include additional instructions or fewer instructions. Furthermore, various functions of mobile communications device200may be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits. FIG.3is a diagram illustrating the components of an example automated ridesharing dispatch system300that includes ridesharing management server150associated with a ridesharing management system100, in accordance with some embodiments of the present disclosure. Ridesharing management server150may include a bus302(or other communication mechanism), which interconnects subsystems and components for transferring information within ridesharing management server150. As shown inFIG.3, automated ridesharing dispatch system300may include one or more processors310, one or more memories320storing programs330including, for example, server app(s)332, operating system334, and data340, and a communications interface360(e.g., a modem, Ethernet card, or any other interface configured to exchange data with a network, such as network140inFIG.1). Automated ridesharing dispatch system300may communicate with an external database170(which, for some embodiments, may be included within ridesharing management server150). Automated ridesharing dispatch system300may include a single server (e.g., ridesharing management server150) or may be configured as a distributed computer system including multiple servers, server farms, clouds, or computers that interoperate to perform one or more of the processes and functionalities associated with the disclosed embodiments. The term “cloud server” refers to a computer platform that provides services via a network, such as the Internet. When ridesharing management server150is a cloud server, it may use virtual machines that may not correspond to individual hardware. Specifically, computational and/or storage capabilities may be implemented by allocating appropriate portions of desirable computation/storage power from a scalable repository, such as a data center or a distributed computing environment. Processor310may be one or more processing devices configured to perform functions of the disclosed methods, such as a microprocessor manufactured by Intel™ or AMD™. Processor310may comprise a single core or multiple core processors executing parallel processes simultaneously. For example, processor310may be a single core processor configured with virtual processing technologies.
In certain embodiments, processor310may use logical processors to simultaneously execute and control multiple processes. Processor310may implement virtual machine technologies, or other technologies to provide the ability to execute, control, run, manipulate, store, etc. multiple software processes, applications, programs, etc. In some embodiments, processor310may include a multiple-core processor arrangement (e.g., dual, quad core, etc.) configured to provide parallel processing functionalities to allow ridesharing management server150to execute multiple processes simultaneously. It is appreciated that other types of processor arrangements could be implemented that provide for the capabilities disclosed herein. Memory320may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible or non-transitory computer-readable medium that stores one or more program(s)330such as server apps332and operating system334, and data340. Common forms of non-transitory media include, for example, a flash drive, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM or any other flash memory, NVRAM, a cache, a register, any other memory chip or cartridge, and networked versions of the same. Ridesharing management server150may include one or more storage devices configured to store information used by processor310(or other components) to perform certain functions related to the disclosed embodiments. For example, ridesharing management server150may include memory320that includes instructions to enable processor310to execute one or more applications, such as server apps332, operating system334, and any other type of application or software known to be available on computer systems. Alternatively or additionally, the instructions, application programs, etc., may be stored in an external database170(which can also be internal to ridesharing management server150) or external storage communicatively coupled with ridesharing management server150(not shown), such as one or more databases or memories accessible over network140. Database170or other external storage may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible or non-transitory computer-readable medium. Memory320and database170may include one or more memory devices that store data and instructions used to perform one or more features of the disclosed embodiments. Memory320and database170may also include any combination of one or more databases controlled by memory controller devices (e.g., server(s), etc.) or software, such as document management systems, Microsoft SQL databases, SharePoint databases, Oracle™ databases, Sybase™ databases, or other relational databases. In some embodiments, ridesharing management server150may be communicatively connected to one or more remote memory devices (e.g., remote databases (not shown)) through network140or a different network. The remote memory devices can be configured to store information that ridesharing management server150can access and/or manage. By way of example, the remote memory devices may include document management systems, Microsoft SQL databases, SharePoint databases, Oracle™ databases, Sybase™ databases, or other relational databases.
Systems and methods consistent with disclosed embodiments, however, are not limited to separate databases or even to the use of a database. Programs330may include one or more software modules causing processor310to perform one or more functions of the disclosed embodiments. Moreover, processor310may execute one or more programs located remotely from one or more components of the ridesharing management system100. For example, ridesharing management server150may access one or more remote programs that, when executed, perform functions related to disclosed embodiments. In the presently described embodiment, server app(s)332may cause processor310to perform one or more functions of the disclosed methods. For example, devices associated with users, drivers and autonomous vehicles may respectively be installed with user applications for vehicle ridesharing services, and driver applications for vehicle ridesharing services. Further, a mobile communications device may be installed with both the driver applications and the user applications, for uses in corresponding situations. In some embodiments, other components of ridesharing management system100may be configured to perform one or more functions of the disclosed methods. For example, mobile communications devices120may be configured to calculate estimated pick-up and drop-off times based on a certain ride request, and may be configured to calculate estimated ride fares. As another example, mobile communications devices120may further be configured to provide navigation services and location services, such as directing the user to a particular pick-up or drop-off location, and providing information about a current location of the respective user or vehicle to ridesharing management server150. In some embodiments, program(s)330may include operating system334performing operating system functions when executed by one or more processors such as processor310. By way of example, operating system334may include Microsoft™ Windows™, Unix™, Linux™, Apple™ operating systems, Personal Digital Assistant (PDA) type operating systems, such as Apple iOS, Google Android, Blackberry OS, Microsoft CE™, or other types of operating systems. Accordingly, the disclosed embodiments may operate and function with computer systems running any type of operating system334. Ridesharing management server150may also include software that, when executed by a processor, provides communications with network140through communications interface360and/or a direct connection to one or more mobile communications devices120. Specifically, communications interface360may be configured to receive ride requests (e.g., from user devices120A-120C) headed to differing destinations, and receive indications of the current locations of the ridesharing vehicles (e.g., from driver devices120D and120E or driving-control device120F). In one example, communications interface360may be configured to continuously or periodically receive current vehicle location data for the plurality of ridesharing vehicles that are part of ridesharing management system100. The current vehicle location data may include global positioning system (GPS) data generated by at least one GPS component of a mobile communications device120associated with each ridesharing vehicle. In some embodiments, data340may include, for example, profiles of users, such as user profiles or driver profiles.
Data340may further include ride requests from a plurality of users, user ride history and driver service records, and communications between a driver and a user regarding a particular ride request. In some embodiments, data340may further include traffic data, toll road information, and navigation information, which may be used for handling and accommodating ride requests. Automated ridesharing dispatch system300may also include one or more I/O devices350having one or more interfaces for receiving signals or input from devices and providing signals or output to one or more devices that allow data to be received and/or transmitted by automated ridesharing dispatch system300. For example, automated ridesharing dispatch system300may include interface components for interfacing with one or more input devices, such as one or more keyboards, mouse devices, and the like, that enable automated ridesharing dispatch system300to receive input from an operator or administrator (not shown). FIGS.4A and4Bare flowcharts of example processes410and420for vehicle ridesharing management, in accordance with some embodiments of the present disclosure. In one embodiment, all of the steps of processes410and420may be performed by a ridesharing management server, such as ridesharing management server150described above with reference toFIGS.1and3. Alternatively, at least some of the steps of processes410and420may be performed by a mobile communications device, such as the mobile communications devices120described above with reference toFIGS.1and2. In the following description, reference is made to certain components ofFIGS.1-3for purposes of illustration. It will be appreciated, however, that other implementations are possible and that other components may be utilized to implement example methods disclosed herein. At step411, ridesharing management server150may receive a first ride request from a first wireless mobile communications device of a first user, for example, a request from user130A sent through user device120A. The first ride request may include a first starting point and a first desired destination. A ride request may refer to a request from a user needing transportation service from a certain location to another. A starting point may refer to a current location of the user, as input by the user through an input device of an associated user device, or as determined by a location service application installed on the user device. In some embodiments, the starting point may be a location different from the current location of the user, for example, a location where the user will subsequently arrive (e.g., entrance of a building). A desired destination may refer to a location where the user requests to be taken to. In some embodiments, the actual pick-up location and the actual drop-off location may be different from the starting point and the desired destination. For example, the pick-up location may be of a certain distance from the starting point, where the user may be directed to for pick-up. By encouraging the user to walk to a pick-up location nearby, consistent with some embodiments, the vehicle may more easily and quickly locate the user without excessive detour, or causing excessive delay for users who are in the vehicle. Similarly, by encouraging the user to walk from a drop-off location different from but within a certain distance from the desired destination, the vehicle may be able to accommodate subsequent pick-ups, or arrive at the subsequent pick-up locations more quickly.
The vehicle ridesharing service management system may provide incentives or rewards for users who are willing to walk a certain distance. For example, the ridesharing management system may offer certain discounts based on the number and distances of the walks involved in a particular ride. Alternatively, the ridesharing management system may offer ride credits corresponding to the number and distance of the walks undertaken by the user during his rides. The user may use the credits for subsequent ride payment, or redeem the credit for money, free rides, or other rewards. Further, advantages of such embodiments may include more efficient vehicle use and management, more user flexibility, and less air pollution associated with vehicle use. In some embodiments, prior to or after the user sends a ride request to ridesharing management server150, the user may further input ride service parameters through, for example, a settings component provided on a user interface. Ride service parameters refer to user preference parameters regarding a vehicle ridesharing service, for example, a maximum walking distance from the starting point to a pick-up location, a maximum walking distance from a drop-off location to a desired destination, a total maximum walking distance involved in a ride, a maximum number of subsequent pick-ups, maximum delay of arrival/detour incurred by subsequent pick-ups during a ride, and a selection whether to permit toll road usage during the ride, etc. Ride service parameters may be transmitted to ridesharing management server150for processing the request and assignment of an available vehicle based on the ride service parameters. For example, a ride request may be associated with a maximum walking distance of 300 meters from a starting point to a pick-up location. When assigning an available vehicle to pick-up the user, ridesharing management server150may include in the assignment an assigned pick-up location within 300 meters or less of the starting point. Similarly, a ride request may be associated with a maximum walking distance of 0.5 mile from a drop-off location to a desired destination. When assigning an available vehicle to pick-up the user, ridesharing management server150may include in the assignment an assigned drop-off location within 0.5 mile or less from the desired destination. For requests associated with a maximum total walking distance of one mile during the ride, when assigning an available vehicle to pick-up the user, ridesharing management server150may include in the assignment an assigned pick-up location and an assigned drop-off location such that the total of the distance from the starting point to the assigned pick-up location and the distance from the assigned drop-off location to the desired destination is equal to or less than one mile. In the above examples, the values regarding the walking distances are only exemplary. Other embodiments consistent with the present disclosure may use different options of the distances and may provide a list of options. The distances may further be measured in different units, for example, miles, meters, kilometers, blocks, and feet, etc., which are not limited by the disclosed embodiments herein. In some embodiments, the distance may further be represented by an average walking time from a certain location to another, based on average walking speed, for example, ten minutes, five minutes, etc.
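These user-set limits lend themselves to simple guard checks when the server evaluates a candidate assignment. The sketch below is a minimal illustration, assuming hypothetical names, distances in meters, and delays in minutes; it screens the walking legs against the limits described above and also guards the subsequent-pick-up caps discussed in the next paragraph.

```python
from dataclasses import dataclass

@dataclass
class RideServiceParameters:
    max_pickup_walk_m: float = 300.0    # starting point -> assigned pick-up
    max_dropoff_walk_m: float = 800.0   # assigned drop-off -> destination (~0.5 mile)
    max_total_walk_m: float = 1600.0    # roughly one mile in total
    max_subsequent_pickups: int = 2
    max_arrival_delay_min: float = 10.0

def walking_ok(pickup_walk_m: float, dropoff_walk_m: float,
               p: RideServiceParameters) -> bool:
    # Each walking leg, and both legs combined, must respect the user's limits.
    return (pickup_walk_m <= p.max_pickup_walk_m
            and dropoff_walk_m <= p.max_dropoff_walk_m
            and pickup_walk_m + dropoff_walk_m <= p.max_total_walk_m)

def can_add_pickup(current_pickups: int, extra_delay_min: float,
                   p: RideServiceParameters) -> bool:
    # Guard a candidate subsequent pick-up against the count and delay caps.
    return (current_pickups + 1 <= p.max_subsequent_pickups
            and extra_delay_min <= p.max_arrival_delay_min)

p = RideServiceParameters()
print(walking_ok(250.0, 700.0, p))   # True: both legs and the total fit
print(can_add_pickup(2, 6.0, p))     # False: a third pick-up exceeds the cap
```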
With respect to parameters regarding subsequent pick-ups, such as a maximum number of subsequent pick-ups, and maximum delay of arrival incurred by subsequent pick-ups, ridesharing management server150may assign subsequent pick-ups accordingly, without exceeding the parameters set by the user. For example, a ride request may be associated with a maximum number of two subsequent pick-ups during the ride. Ridesharing management server150may monitor the service status of the vehicle assigned to pick-up the user, and refrain from assigning a third subsequent pick-up before the vehicle arrives at a drop-off location for dropping off the user. As another example, for a ride request associated with a maximum delay of arrival of ten minutes, when assigning subsequent ride requests, ridesharing management server150may calculate an estimated delay that may occur to the user if the same vehicle was to undertake the subsequent ride request. If the estimated delay that may occur to the user is more than ten minutes, ridesharing management server150may assign the subsequent ride request to other available vehicles. In some embodiments, the user may also input selection of toll road usage through the associated user device, to allow or disallow use of toll roads. Ridesharing management server150may then take the user's selection into account when assigning an available vehicle for accommodating the ride request, determining a travel route, and calculating ride fare for the user. For example, ridesharing management server150may adjust the ride fare amount for a corresponding user based on the toll roads selection input and toll charges involved. For another example, if a first user does not permit toll road usage, before any subsequent pick-ups during the ride, ridesharing management server150may send a route to an assigned vehicle that does not include toll roads. For another example, if a subsequent user sharing the ride permits usage of toll roads, ridesharing management server150may not charge the first user for any overlap portion of the ride where toll roads are used, change the route to include toll roads after the first user is dropped off, or assign the second user to a ridesharing vehicle with users that permit toll road usage. In some embodiments, the ride request information may also be input from the driver device, for example, driver device120D, or from a device associated with the vehicle. In the case of street hailing, where the user hails a vehicle on the street without using a vehicle ridesharing service application on a mobile communications device, the driver, for example, driver130D, may input information such as the starting point/pick-up information and destination information through driver device120D, which may then be transmitted to ridesharing management server150. At step413, ridesharing management server150may calculate an estimated pick-up time, for example, based on a current location of an assigned vehicle and the first starting point included in the first ride request. An estimated pick-up time may refer to a time period before an assigned vehicle arrives at a pick-up location for picking up the user. The assigned vehicle may refer to the vehicle that is assigned to undertake the first ride request, for example, a taxi in a taxi fleet, one of a plurality of vehicles managed by a transportation service system, or one of a plurality of vehicles owned by a plurality of owners and used to provide ridesharing services.
The pick-up location may be the same as the starting point, or an assigned pick-up location associated with the starting point. The estimated pick-up time may be determined based on a distance between a current location of the assigned vehicle and the pick-up location, and an estimated speed of traveling along the route between the two locations. The current location of the assigned vehicle may be determined by a location service application installed on a driver device, a driving-control device, or by a location determination component in the ridesharing management system100, which may be a part of or separate from ridesharing management server150. In some embodiments, the estimated pick-up time may further be determined based on historical or real-time traffic data, and a route currently followed by the vehicle. In some embodiments, process410may further include locating one or a plurality of potential available vehicles, and selecting an assigned vehicle therefrom. For example, potential available vehicles may include vacant vehicles in the surrounding areas of the first starting point, and vehicles heading to a location close to the first starting point for assigned pick-ups or drop-offs. Ridesharing management server150may filter potential available vehicles by ride service parameters set by the users who are inside the vehicle, for example, removing occupied vehicles where a user inside the vehicle does not permit subsequent pick-ups, or occupied vehicles where the user requires a minimal delay. In some embodiments, ridesharing management server150may filter potential assignment vehicles by choosing a vehicle that would involve minimal walking of the user, or walking without the need of crossing the street. In some embodiments, ridesharing management server150may further filter potential assignment vehicles by choosing a vehicle that would involve minimal detour for the vehicle to arrive at the pick-up location. In some embodiments, the assigned vehicle may be selected by applying multiple filter criteria, or by applying multiple filter criteria in a certain order. In some embodiments, the pick-up location may be an assigned pick-up location different from the first starting point, for example, half a block or further away from the first starting point. Ridesharing management server150may assign a pick-up location based on ride service parameters set by the first user, as described above at step411. Ridesharing management server150may further assign a pick-up location which is along a main street where an assigned vehicle can easily locate the user, or a location which would not require an assigned vehicle to take a U-turn. In cases where there are one or more other users in the vehicle, ridesharing management server150may assign a pick-up location close to the vehicle's next assigned drop-off, or on the side of a street that the vehicle will soon go through. In some embodiments, ridesharing management server150may adjust selection of the pick-up location based on filtering results of potential assignment vehicles, or vice versa. The two selection processes may complement each other to reach one or more optimal combinations. In some embodiments, where there are multiple potential assignment vehicles, each with a corresponding potential pick-up location, an estimated pick-up time may be respectively calculated corresponding to each of the potential assignment vehicles.
Ridesharing management server150may then choose the vehicle with the shortest estimated pick-up time to be the assigned vehicle. At step415, ridesharing management server150may send a first message to a user device associated with the first user, which is, in this example, user device120A. The first message may be configured to cause an indication of the calculated first estimated pick-up time to appear on a display of user device120A. The message may appear in different formats, for example, a text message including the estimated pick-up time, an audio message, or an image, the specific implementation of which is not limited by the disclosed embodiments herein. In one embodiment, the message includes a confirmation that the ridesharing request is accepted. If ridesharing management server150assigns a pick-up location different from the starting point, the message may further cause the display of an indication of the assigned pick-up location. Ridesharing management server150may further provide a navigation option which may be displayed on a user interface. A selection of the navigation option may then provide walking directions to guide the user to the assigned pick-up location for pick-up. The message may further cause a display of an indication of an estimated walking distance from the starting point to the assigned pick-up location. In addition, the message may include an estimated walking distance from the assigned drop-off location to the desired destination. The assigned drop-off location may be a location close to the desired destination, within the maximum walking distance parameters set by the first user. For example, the drop-off location may be at a location half a block away or further from the desired destination, and may be along a main street that the vehicle may easily locate and access. For another example, the drop-off location may be determined based on a route towards the next pick-up location, such that the vehicle may easily drop-off the first user on its way to the next pick-up location, thereby avoiding an extra detour. In another embodiment, the message may include one or more proposals associated with different vehicles. Each proposal may include information about the proposed pick-up location. The information about the proposed pick-up location may include the distance from the user to the proposed pick-up location. Each proposal may include a price of the ride associated with the type of the ride, and an estimation of a pick-up time. The estimate may be presented as a range. In one example, each proposal may include different pick-up locations, different prices, and/or different estimations of a pick-up time. According to this embodiment, step415may also include receiving a proposal selection reflective of a selected pick-up vehicle and sending an additional message that includes information about the selected vehicle, and the driver associated with the vehicle. For example, the vehicle information may include the license plate number, brand, color, and/or model of the vehicle. The driver information may include a name, nickname, profile photo, ratings, number of previous rides, and/or contact information of the driver. The message may further include a contact option allowing the user to contact the driver, for example, a “contact the driver” button, which the user may select to initiate a communication session with the driver. At step417, ridesharing management server150may guide the assigned vehicle to the first pick-up location for picking up the first user.
At step417, ridesharing management server150may guide the assigned vehicle to the first pick-up location for picking up the first user. For example, ridesharing management server150may transmit direction information to the driver device associated with the assigned vehicle, for example, driver device120D or driving-control device120F. In some embodiments, a navigation component of the driver device, or the driving-control device, may perform the step of guiding the vehicle to the first pick-up location. Correspondingly, ridesharing management server150or a navigation component of the user device120A may guide the user to the first pick-up location, in cases where the pick-up location is an assigned pick-up location different from the first starting point. For example, for autonomous vehicles used for ridesharing services, such as autonomous vehicle130F as shown inFIG.1, the vehicle itself may be capable of using a variety of techniques to detect its surroundings, identify feasible paths, and navigate without direct human input. In some embodiments, once the vehicle is assigned to pick-up the user, ridesharing management server150may assign a communication channel for the driver associated with the assigned vehicle to communicate with the user, for example, a masked phone number. In some embodiments, a user interface of a driver device, such as driver device120D, may include an option to send notification messages to the user, for example, a pre-defined message button of “I'm here.” Once the vehicle arrives at the pick-up location, the driver may click the message button to send the message to the user. This way, the driver may not need to dial out or type a message in order to notify the user of the vehicle's arrival, reducing driver distraction and associated safety hazards. At step419, ridesharing management server150may receive a second ride request from a second user. In some embodiments, the second ride request may be a street hailing request received directly by the vehicle while the first user is still inside, namely, before dropping off the first user. The vehicle may then undertake the second ride request, if the first user permits subsequent pick-ups. In some embodiments, the driver of the vehicle may input the second ride request information through a driver device, for example, driver device120D associated with driver130D. The input may inform ridesharing management server150that the vehicle has undertaken a second ride request, or may further include the pick-up location and destination information of the second user. Ridesharing management server150may then accordingly determine whether to assign additional pick-ups to the same vehicle, and may further send direction information guiding the vehicle to the second user's destination. In some embodiments, the second ride request may be received by ridesharing management server150from a second wireless mobile communications device, for example, user device120B associated with user130B as shown inFIG.1. The second ride request may further include a second starting point, and a second desired destination. Ridesharing management server150may then assign a corresponding ride service to an available vehicle, which may be the vehicle that has picked up the first user, before dropping off the first user. In processing the second ride request, the example process420as shown inFIG.4Bmay be performed. At step422, ridesharing management server150may calculate a second estimated pick-up time, for example, based on a second current location of the vehicle and the second starting point.
The second estimated pick-up time may refer to an estimated time period before the vehicle arrives at a second pick-up location for picking up the second user. The second pick-up location may be an assigned pick-up location different from, but associated with, the second starting point. Assignment of the second pick-up location may include similar steps as described above with reference toFIG.4A, details of which are not repeated herein. At step424, ridesharing management server150may send a second message to the second wireless mobile communication device, which is user device120B in this example. The second message may be configured to cause an indication of the calculated second estimated pick-up time to appear on a display of the second wireless mobile communication device. As described above with reference toFIG.4A, the message may appear in different formats, and may further cause a display of multiple proposals with multiple options for the second pick-up location, walking distance, walking directions from the second starting point to the second pick-up location, etc., the details of which are not repeated herein. In some embodiments, ridesharing management server150may set the second pick-up location at substantially the same location as the first pick-up location, for example, half a block away, or 100 meters away from the first pick-up location. This way, the vehicle may pick-up both users at about the same time at substantially the same location, further improving service efficiency. In some embodiments, ridesharing management server150may set the second pick-up location at substantially the same location as the first drop-off location, wherein the vehicle may drop-off the first user, and pick-up the second user at about the same time, without extra traveling. Further, in some embodiments, the second drop-off location may be set at substantially the same location as the first drop-off location, such that the vehicle may drop-off multiple users at the same time. In some embodiments, ridesharing management server150may set the first pick-up location to substantially differ from the first starting point, and the second pick-up location to substantially differ from the second starting point, for example, to ensure both pick-up locations are along the same side of the same street where the vehicle may go through. Ridesharing management server150may then send respective directions to the first user device and the second user device, to guide the users to the respective pick-up locations. In some embodiments, ridesharing management server150may set the first pick-up location at substantially the same location as the first starting point, and set the second pick-up location to substantially differ from the second starting point. For example, the selection of the pick-up locations may be made such that the first pick-up location and the second pick-up location are close to one another, both pick-up locations are along the same street, or the second pick-up location is close to the first drop-off location. Ridesharing management server150may then send respective directions to the first user device and the second user device, to guide the users to the respective pick-up locations.
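The co-location of pick-up and drop-off points described above can be pictured as a simple distance test. The sketch below uses the standard haversine formula; the 150-meter threshold and the function names are assumptions introduced here, not values from the disclosure.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (standard haversine formula)."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def colocate_second_pickup(first_stop, second_start, threshold_m=150.0):
    """Reuse the first user's pick-up (or drop-off) point as the second
    pick-up location when the second starting point is close enough."""
    if haversine_m(*first_stop, *second_start) <= threshold_m:
        return first_stop    # both users served at substantially the same spot
    return second_start      # otherwise keep the second starting point

first_pickup = (40.7484, -73.9857)
second_start = (40.7490, -73.9852)   # roughly 80 m away
print(colocate_second_pickup(first_pickup, second_start))  # reuses first_pickup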
At step426, ridesharing management server150may guide the vehicle to a second pick-up location for picking up the second user. As described above with reference toFIG.4A, this step may also be performed by a navigation component of the driver's device (e.g., driver device120D or driving-control device120F associated with autonomous vehicle130F). In some embodiments, ridesharing management server150may change the first drop-off location after receiving the second ride request, and the change may be made without pre-approval of the first user. The first drop-off location refers to a location for dropping off the first user. As described above with reference toFIG.4A, the first drop-off location may be the same as the first desired destination, or at a location different from the first desired destination. For example, the second pick-up location may be set at a location close to the first desired destination, included in the first ride request. When assigning the second ride request to the vehicle, ridesharing management server150may change the first drop-off location to a location closer to or at the first desired destination, thus reducing the walking distance for the first user to arrive at his desired destination. For another example, the first drop-off location may be changed to a location where the first user does not need to cross the street to arrive at his desired destination, without causing or increasing detour for the vehicle to arrive at the second pick-up location. In some embodiments, ridesharing management system100may subsequently receive a plurality of subsequent ride requests. These additional ride requests may either be received by ridesharing management server150and assigned to the vehicles, or received by the vehicles in the form of street hailing. Steps described above with reference toFIGS.4A and 4Bmay similarly be used to process a third ride request. For example, ridesharing management server150may receive a third ride request from a third user device, for example, user device120C associated with user130C, as shown inFIG.1. Ridesharing management server150may process the request and assign the request to the vehicle while at least one of a first user and a second user is still in the vehicle. The third ride request may further include a third starting point and a third desired destination. Ridesharing management server150may calculate a third estimated pick-up time, and send a confirmation to a user's device (e.g., user device120C). Ridesharing management server150may transmit direction and route information to the driver's device associated with the vehicle (e.g., driver device120D as shown inFIG.1), to guide the vehicle to pick-up and drop-off user130C. As described above with reference toFIGS.4A and 4B, processing of subsequent ride requests may take into account the ride service parameters set by the users whose requests have previously been received and assigned. For example, if both the first user and the second user are still in the vehicle, and one of them has set a maximum delay of arrival, ridesharing management server150may not assign the third request to the same vehicle if such assignment would cause a delay longer than the set value. For example, if the first user has set a maximum delay of arrival of 10 minutes, ridesharing management server150may calculate an estimated time period it takes for the vehicle to pick-up (and/or drop-off) the third user before dropping off the first user. If the estimated time would cause a total delay of arrival for the first user to exceed 10 minutes, ridesharing management server150may assign the third ride request to another vehicle instead; a minimal sketch of this check follows.
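The sketch assumes delays accumulate additively; the function and parameter names are illustrative only, not the patent's implementation.

def violates_max_delay(extra_pickup_minutes: float,
                       accumulated_delay_minutes: float,
                       max_delay_minutes: float) -> bool:
    """True if adding the new pick-up (and/or drop-off) would push a seated
    rider past the maximum delay of arrival that rider configured."""
    return accumulated_delay_minutes + extra_pickup_minutes > max_delay_minutes

# The first user allows at most 10 minutes of delay and has already
# accumulated 6; a detour adding 5 more minutes must go to another vehicle.
print(violates_max_delay(5.0, 6.0, 10.0))  # True -> assign elsewhere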
For another example, if the second user has set a maximum number of one co-rider and the second user will be dropped off earlier than the first user, ridesharing management server150may not assign the third ride request to the same vehicle, as such assignment may cause violation of the parameter (maximum number of one co-rider) set by the second user. FIG.5is a diagram of three example timelines showing ridesharing arrangements, in accordance with some embodiments of the present disclosure. As shown in example timelines510,520, and530, for a particular assigned vehicle undertaking a first ride request from a first user and a second ride request from a second user, the order of pick-ups and drop-offs for the second user may vary. For example, ridesharing management server150may receive a plurality of ride requests, design an optimal path and pick-up/drop-off order for a particular assigned vehicle undertaking multiple requests, and assign additional pick-ups as the vehicle completes a part of or all of the ride requests. For example, as shown in example timeline510, a vehicle may receive a second ride request after picking up the first user, and drop-off the first user before dropping off the second user. A corresponding shared ride portion may be the portion of ride between the pick-up of the second user and drop-off of the first user. As shown in example timeline520, the vehicle may receive a second ride request after picking up the first user, and drop-off the second user before dropping off the first user. A corresponding shared ride portion may be the portion of ride between the pick-up of the second user and drop-off of the second user. As another example, as shown in example timeline530, the vehicle may receive the first ride request and the second ride request before any pick-up. The vehicle may then pick-up the second user before picking up the first user, and drop-off the second user before dropping off the first user. A corresponding shared ride portion may be the portion of ride between pick-up of the first user and drop-off of the second user. Depending on the order of pick-ups and drop-offs, the ridesharing management server may then determine a corresponding shared ride portion, and calculate ride fare for each user based on, for example, the shared portion, solo portion of each user, and/or other factors such as the ride service parameters set by each user.

Under-Utilization of Vehicle Capacity

Embodiments of the present disclosure may allow for the implementation of capacity blocks on rideshare vehicles. For example, the capacity blocks may ensure that the full capacity of the rideshare vehicles is not used in regular operation. This may enhance the experience of users who might otherwise feel cramped in vehicles at or near capacity. Indeed, some embodiments may implement the threshold block across a fleet of ridesharing vehicles (e.g., by applying a threshold block to each vehicle in the fleet, whether the same threshold or different thresholds, and/or by applying an aggregate threshold block to one or more parts of the fleet). In addition, the capacity block may be adjusted based on tracking of passengers' physical conditions capable of impacting capacity of a ridesharing vehicle, tracking of passengers' luggage capable of impacting capacity of a ridesharing vehicle, or the like. Additionally or alternatively, the capacity block may be overridden in particular circumstances (e.g., inclement weather, special events, etc.).
Such dynamic reassessment of the capacity block may increase the efficiency of a fleet of rideshare vehicles on the whole. FIG.6depicts an example of a memory module600for under-utilizing vehicle capacity. Although depicted as a single memory inFIG.6, memory600may comprise one or more non-volatile (e.g., hard disk drive, flash memory, etc.) and/or volatile (e.g., RAM or the like) memories. In some embodiments, memory600may be included in ridesharing management server150. For example, memory600may comprise, at least in part, a portion of memory320. As depicted inFIG.6, memory600may include assignment module610. Assignment module610may receive requests for shared rides from a plurality of users. For example, assignment module610may receive the requests using a communications interface. The communications interface may comprise, for example, one or more network interface controllers (NICs). These one or more NICs may communicate over one or more computer networks, such as the Internet, a local area network (LAN), or the like. Assignment module610may further assign the users to rideshare vehicles in a fleet. For example, assignment module610may assign a first user to a first rideshare vehicle. In addition, assignment module610may assign a second user to the first rideshare vehicle. For example, assignment module610may combine the first user and the second user based on the closeness of a pick-up location of the first user to a pick-up location and/or a destination of the second user, the closeness of a destination of the first user to the pick-up location and/or the destination of the second user, overlap between a first predicted route from the pick-up location of the first user to the destination of the first user and a second predicted route from the pick-up location of the second user to the destination of the second user, or the like. The predicted routes may be calculated using one or more maps, optionally in combination with traffic information. The one or more maps may be retrieved from one or more memories and/or using the communications interface. Similarly, the traffic information may be retrieved from one or more memories and/or using the communications interface. As further depicted inFIG.6, memory600may include capacity tracking module620. Capacity tracking module620may track a current utilized capacity in the rideshare fleet. For example, capacity tracking module620may track the current utilized capacity of each specific ridesharing vehicle in the fleet. For example, capacity tracking module620may track capacity using assignments from assignment module610. Additionally or alternatively, capacity tracking module620may track capacity when users are picked up by rideshare vehicles, e.g., using signals received from one or more devices (such as mobile communications device200) associated with drivers of the rideshare vehicles and/or signals received from one or more devices (such as mobile communications device200) associated with the users. In some embodiments, capacity tracking module620may account for other factors that impact capacity. For example, capacity tracking module620may track passengers' physical condition capable of impacting capacity of a ridesharing vehicle (e.g., based on signals from the one or more devices associated with the users and/or from the one or more devices associated with the drivers indicating whether a passenger has a wheelchair, crutches, or the like). 
In another example, capacity tracking module620may track passengers' luggage capable of impacting capacity of a ridesharing vehicle (e.g., based on signals from the one or more devices associated with the users and/or from the one or more devices associated with the drivers indicating amount and/or size of luggage, or the like). As depicted inFIG.6, memory600may include a threshold block module630. Threshold block module630may implement a threshold block when a ridesharing vehicle's current utilized capacity is above a threshold. For example, threshold block module630may receive the current utilized capacity from capacity tracking module620. In some embodiments, the threshold may be less than the ridesharing vehicle's passenger-capacity. For example, the threshold may be at least 10%, at least 15%, at least 20%, at least 25%, or the like of the specific vehicle's capacity. In a similar example, the threshold may be one seat less than the specific vehicle's capacity, two seats less, three seats less, or the like. The threshold block may be implemented, for example, by sending a block signal to assignment module610to prevent assignment of additional users to the ridesharing vehicle. In some embodiments, threshold block module630may determine a value for the threshold. For example, threshold block module630may access stored information about the ridesharing vehicle to determine the value. The stored information may be in memory600and/or in one or more additional memories. Additionally or alternatively, the stored information may be received over the communications interface. In certain aspects, the stored information may include a model of the vehicle, a make of the vehicle, a year of the vehicle, one or more passengers' reviews of the vehicle, or the like. In some embodiments, threshold block module630may override the threshold block. For example, threshold block module630may override the threshold block in response to a received indication of an inclement weather condition, such as rain, snow, hail, or the like. In such an example, the indication may be received from one or more memories and/or using the communications interface. In another example, threshold block module630may override the threshold block in response to a received indication of a special event condition, such as a sporting event, a festival, a marathon, or the like. In such an example, the indication may be received from one or more memories and/or using the communications interface. In yet another example, threshold block module630may override the threshold block when an estimated time in which the ridesharing vehicle's utilized capacity is above the threshold is less than a predefined period of time, such as 1 minute, 3 minutes, 5 minutes, 10 minutes, or the like. The predefined period of time may be fixed or may be dynamic (e.g., determined based on stored information about the ridesharing vehicle). In a fourth example, threshold block module630may override the threshold block in response to a received indication of an unscheduled-user condition (e.g., an indication that three passengers entered the van when only one passenger scheduled the ride). In such an example, the indication may be received from one or more devices (such as mobile communications device200) associated with the drivers and/or signals received from one or more devices (such as mobile communications device200) associated with the users. These override conditions lend themselves to a simple predicate, sketched below.
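A minimal sketch of such an override predicate follows; the OverrideContext fields and the five-minute brief-excess cutoff are assumptions introduced here, not the patent's implementation.

from dataclasses import dataclass

@dataclass
class OverrideContext:
    inclement_weather: bool
    special_event: bool
    minutes_above_threshold: float  # estimated time capacity stays above threshold
    unscheduled_riders: int         # riders who boarded beyond the scheduled count

def override_threshold_block(ctx: OverrideContext,
                             max_brief_excess_min: float = 5.0) -> bool:
    """Mirror the override conditions module 630 describes: weather,
    special events, a brief excess, or unscheduled riders already aboard."""
    return (ctx.inclement_weather
            or ctx.special_event
            or ctx.minutes_above_threshold < max_brief_excess_min
            or ctx.unscheduled_riders > 0)

print(override_threshold_block(OverrideContext(False, False, 3.0, 0)))  # True: brief excess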
Memory600may further include a database access module640, and may also include database(s)650. Database access module640may include software instructions executable to interact with database(s)650, to store and/or retrieve information (e.g., information about the ridesharing vehicle as described above, weather information, traffic information, one or more maps, or the like). FIG.7is a diagram of example timelines showing the use of threshold blocks in a rideshare fleet, in accordance with some embodiments of the present disclosure. As shown in example timeline710, for a particular rideshare vehicle, ridesharing management server150may receive a first request and a second request and assign both to the particular rideshare vehicle. In example timeline710, ridesharing management server150may implement a threshold block when the capacity of the particular rideshare vehicle reaches 2. Accordingly, in example timeline710, a third request is assigned to another rideshare vehicle on account of the threshold block. Although depicted as tracking the capacity upon assignment of the requests, ridesharing management server150may additionally or alternatively track the capacity upon pick-up of the users. As shown in example timeline720, for a particular rideshare vehicle, ridesharing management server150may receive an Nth request when the particular rideshare vehicle is at capacity 1 and assign the Nth request to the particular rideshare vehicle. In example timeline720, ridesharing management server150may implement a threshold block after assigning the Nth request (i.e., when the capacity of the particular rideshare vehicle reaches 2). Accordingly, in example timeline720, the (N+1)th request is assigned to another rideshare vehicle on account of the threshold block. When the Nth user is dropped off, the capacity may return to 1 and, accordingly, the (N+2)th request is assigned to the particular rideshare vehicle. Similar to timeline710, although depicted as tracking the capacity upon assignment of the requests in timeline720, ridesharing management server150may additionally or alternatively track the capacity upon pick-up of the users. As shown in example timeline730, for a particular rideshare vehicle, ridesharing management server150may receive an Nth request when the particular rideshare vehicle is at capacity 1 and assign the Nth request to the particular rideshare vehicle. In addition, ridesharing management server150may receive an (N+1)th request when the particular rideshare vehicle is at capacity 2 and assign the (N+1)th request to the particular rideshare vehicle. In example timeline730, ridesharing management server150may implement a threshold block after assigning the (N+1)th request (i.e., when the capacity of the particular rideshare vehicle reaches 3). Accordingly, in example timeline730, the (N+2)th request is assigned to another rideshare vehicle on account of the threshold block. When the (N+1)th user is dropped off, the capacity may return to 2 and, accordingly, the (N+2)th request may be re-assigned to the particular rideshare vehicle. Similar to timelines710and720, although depicted as tracking the capacity upon assignment of the requests in timeline730, ridesharing management server150may additionally or alternatively track the capacity upon pick-up of the users. As shown in example timeline740, for a particular rideshare vehicle, ridesharing management server150may receive a first request and a second request and assign both to the particular rideshare vehicle.
In example timeline740, the first user may have an additional rider, one or more pieces of luggage, and/or a physical condition that uses 3 seats of capacity rather than 1. Accordingly, because ridesharing management server150may implement a threshold block when the capacity of the particular rideshare vehicle reaches 3, the third user is re-assigned to another rideshare vehicle on account of the threshold block activated upon pick-up of the first user. Although the examples ofFIG.7use a threshold block of 2, various thresholds may be used, as explained above with regards to threshold block module630. For example, the threshold may be at least 10%, at least 15%, at least 20%, at least 25%, or the like of the specific vehicle's capacity. In a similar example, the threshold may be one seat less than the specific vehicle's capacity, two seats less, three seats less, or the like. FIG.8depicts example method800for managing a fleet of ridesharing vehicles. Method800may, for example, be implemented by ridesharing management server150ofFIG.3. At step811, server150may assign, to ridesharing vehicles already transporting users, additional users for simultaneous transportation in the ridesharing vehicles. For example, as explained above with regards to assignment module610, server150may make assignments based on the closeness of pick-up locations of the users and the additional users, the closeness of destinations of the users and the additional users, the closeness of pick-up locations of the users to destinations of the additional users (or vice versa), overlap between predicted routes from the pick-up locations of the users to the destinations of the users and predicted routes from the pick-up locations of the additional users to the destinations of the additional users, or the like. At step813, server150may track a current utilized capacity of each specific ridesharing vehicle. For example, as explained above with respect to capacity tracking module620, server150may track the current utilized capacity using the assignments. Additionally or alternatively, server150may track the current utilized capacity when the users and/or the additional users are picked up by rideshare vehicles, e.g., using signals received from one or more devices (such as mobile communications device200) associated with a driver of the specific ridesharing vehicle and/or signals received from one or more devices (such as mobile communications device200) associated with the users and/or the additional users. At step815, server150may implement a threshold block that prevents assignment of additional users to a ridesharing vehicle when the ridesharing vehicle's current utilized capacity is above a threshold being less than the ridesharing vehicle's passenger-capacity. For example, server150may implement the threshold block to prevent at least 10% (or at least 15%, at least 20%, at least 25%, or the like) of the specific vehicle's capacity from being utilized. For example, if the specific vehicle is an eight-seat van, at least one seat or at least two seats may remain empty. In some embodiments, server150may implement the threshold block across a fleet of ridesharing vehicles. For example, server150may apply a threshold block to each rideshare vehicle in the fleet. Additionally or alternatively, server150may implement one or more threshold blocks to one or more groups of vehicles.
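A minimal sketch of tracking utilized capacity and gating assignments behind a threshold block, in the spirit of steps 813 and 815, follows; the CapacityTracker class and the two-seat reservation are illustrative assumptions.

class CapacityTracker:
    """Track utilized capacity per vehicle and gate new assignments
    behind a threshold block (steps 813 and 815)."""

    def __init__(self, passenger_capacity: int, reserved_seats: int = 2):
        # e.g., an eight-seat van with two seats always left empty
        self.capacity = passenger_capacity
        self.threshold = passenger_capacity - reserved_seats
        self.utilized = 0

    def can_assign(self, seats_needed: int = 1) -> bool:
        return self.utilized + seats_needed <= self.threshold

    def pick_up(self, seats: int = 1) -> None:
        self.utilized += seats  # luggage or a wheelchair may also count as seats

    def drop_off(self, seats: int = 1) -> None:
        self.utilized -= seats

van = CapacityTracker(passenger_capacity=8)  # threshold block at 6 of 8 seats
van.pick_up(5)
print(van.can_assign(1), van.can_assign(2))  # True False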
In some embodiments, server150may store information about the ridesharing vehicle. For example, server150may store static information such as a year of the vehicle (e.g., 1999, 2005, 2017, etc.), a make of the vehicle (e.g., GM, Honda, Hyundai, Lincoln, etc.), and/or a model of the vehicle (e.g., Camry, Malibu, etc.). Additionally or alternatively, server150may store dynamic information such as one or more reviews of the vehicle by passengers. For example, server150may receive reviews (such as a rating, like 4 out of 5 or “Good,” optionally coupled with comments from the user) from one or more devices (such as mobile communications device200) associated with the users and/or the additional users. In such an example, server150may couple the received reviews with a capacity associated with the reviews. For example, if a review is received from a user that rode in the vehicle when the capacity of the vehicle was at 3, server150may associate the review with a capacity of 3. In an example where a user rode in a vehicle during different capacities (e.g., began the ride at a capacity of 2 and ended the ride at a capacity of 3), server150may associate the review with an average of the different capacities, a minimum of the different capacities, a maximum of the different capacities, or the like. Based on the stored information, server150may determine a value for the threshold, the value being specific to each ridesharing vehicle. For example, server150may determine a passenger-capacity of the specific ridesharing vehicle based on the particular year, make, and/or model of the specific ridesharing vehicle. Server150may then determine a value for the threshold such that the determined value for a specific ridesharing vehicle is one seat less than the passenger-capacity of the specific ridesharing vehicle, two seats less, three seats less, or the like. In a similar example, when a specific ridesharing vehicle has more than ten seats, the determined value for the specific ridesharing vehicle may be less than ten seats. Additionally or alternatively, server150may determine a value for the threshold based on the reviews. For example, server150may determine the value based on the majority of reviews being above a score threshold when the associated capacities are below the threshold and the majority of reviews being below the score threshold when the associated capacities are above the threshold. In another example, server150may determine the value based on detected sentiment of comments included with the reviews. In such an example, server150may determine the value for the threshold based on a value of the associated capacities at which the detected sentiment changes from positive to negative or based on a value of the associated capacities at which a negativity of the detected sentiments exceeds a negativity threshold.
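A minimal sketch of deriving a threshold value from capacity-tagged reviews follows; the score floor and the (capacity, score) representation are assumptions introduced here, and real review data would be noisier.

from statistics import mean

def threshold_from_reviews(reviews: list, score_floor: float = 3.5) -> int:
    """Pick the largest capacity whose average review score stays at or
    above a score floor; each review is (capacity_during_ride, score)."""
    by_capacity: dict = {}
    for capacity, score in reviews:
        by_capacity.setdefault(capacity, []).append(score)
    acceptable = [c for c, scores in by_capacity.items() if mean(scores) >= score_floor]
    return max(acceptable, default=0)

reviews = [(2, 4.8), (2, 4.5), (3, 4.0), (4, 2.9), (4, 3.1)]
print(threshold_from_reviews(reviews))  # 3: riders rated capacity-4 rides poorly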
Method800may further include additional steps. For example, method800may include overriding the threshold block in response to a received indication of an inclement weather condition. For example, server150may receive an indication of rain, snow, hail, or other inclement weather conditions and override the threshold block in response. Such an indication may be retrieved from one or more memories and/or received using the communications interface (e.g., from a weather server and/or weather update service using the Internet). In another example, method800may include overriding the threshold block in response to a received indication of a special event condition. For example, server150may receive an indication of a sporting event, a holiday, a festival, or other special event and override the threshold block in response. Such an indication may be retrieved from one or more memories and/or received using the communications interface (e.g., from a global calendar, a local calendar of events, a sports calendar, a holiday database, or the like using the Internet). In yet another example, method800may include overriding the threshold block when an estimated time in which the ridesharing vehicle's utilized capacity is above the threshold is less than a predefined period of time. For example, server150may estimate the time based on an overlap between the routes of the users and the routes of the additional users. The predefined period of time may be, for example, 3 minutes, 5 minutes, 10 minutes, or the like. In a fourth example, method800may include overriding the threshold block in response to a received indication of an unscheduled-user condition. For example, server150may receive an indication that more passengers entered the vehicle than initially indicated when the ride was scheduled (e.g., 3 passengers enter when only 1 passenger requested a ride). Such an indication may be received from one or more devices (such as mobile communications device200) associated with the users. Method800may further include cancelling the assignment of a first rideshare vehicle and re-assigning a second rideshare vehicle in order to enable the first rideshare vehicle to pick-up a passenger not originally assigned to the first rideshare vehicle. For example, if more passengers enter the first vehicle than initially indicated when the ride was scheduled (e.g., 2 passengers enter when only 1 passenger requested a ride), server150may cancel another assignment to the first vehicle and re-assign that request to a second vehicle. In some embodiments, the cancellation and rescheduling may be performed any time that more passengers enter than originally indicated. In other embodiments, the cancellation and rescheduling may only be performed when the extra passengers would cause the threshold block to be exceeded. For example, if 2 passengers enter the first vehicle when only 1 passenger requested a ride, and the other assignment to the first vehicle is for 2 passengers, server150may only cancel and re-assign the other assignment if the threshold block for the first vehicle is less than 4. In another example, if 3 passengers enter the first vehicle when only 2 passengers requested a ride, and the other assignment to the first vehicle is for 1 passenger, server150may only cancel and re-assign the other assignment if the threshold block for the first vehicle is less than 4. The re-assignment may be communicated to one or more devices (such as mobile communications device200) associated with the additional users. Server150may also account for factors other than passenger count that may affect capacity. For example, server150may track passengers' luggage capable of impacting capacity of the ridesharing vehicle. In such an example, server150may receive an indication that a user has one or more suitcases, a bicycle, a musical instrument, or the like.
Based on this indication, server150may increase the tracked utilized capacity of the vehicle to account for the luggage or otherwise assign fewer passengers to the vehicle. Such an indication may be received from one or more devices (such as mobile communications device200) associated with the user. Additionally or alternatively, such an indication may be received from one or more devices (such as mobile communications device200) associated with a driver of the vehicle (e.g., if the user failed to indicate s/he had any luggage when submitting a ride request). In such an embodiment, server150may cancel one or more additional assignments to the vehicle and re-assign a second rideshare vehicle in order to enable the vehicle to pick-up the luggage. Additionally or alternatively, server150may track a passenger's physical condition capable of impacting capacity of the ridesharing vehicle. For example, server150may track if the user has a wheelchair, a baby, crutches, an injury, or the like, or is obese or has any other physical condition that requires additional space. In such an example, server150may receive an indication of the physical condition. Based on this indication, server150may increase the tracked utilized capacity of the vehicle to account for the physical condition or otherwise assign fewer passengers to the vehicle. Such an indication may be retrieved from one or more memories (e.g., a user database storing information associated with user accounts including indications of physical conditions) and/or received from one or more devices (such as mobile communications device200) associated with the user. Additionally or alternatively, such an indication may be received from one or more devices (such as mobile communications device200) associated with a driver of the vehicle (e.g., if the user failed to indicate s/he had a physical condition when registering for a user account and/or submitting a ride request). In such an embodiment, server150may cancel one or more additional assignments to the vehicle and re-assign a second rideshare vehicle in order to enable the vehicle to accommodate the physical condition.

Dynamic Re-Assignment of Vehicles

Embodiments of the present disclosure may allow for the dynamic re-assignment of rideshare vehicles. For example, passenger assignments may be changed between a time of initial assignment and a time of picking up the passenger. This may enhance the experience of users when a vehicle to which they are initially assigned is delayed, for example, on account of traffic, weather, wrong turns, or the like. In addition, the dynamic re-assignment may be used to allow the fleet of rideshare vehicles to handle urgent requests. For example, users having a medical emergency, family emergency, running late for a flight, or the like, may be prioritized to increase aggregate satisfaction across all users. FIG.9depicts an example of a memory module900for dynamic re-assignment of rideshare vehicles. Although depicted as a single memory inFIG.9, memory900may comprise one or more non-volatile (e.g., hard disk drive, flash memory, etc.) and/or volatile (e.g., RAM or the like) memories. In some embodiments, memory900may be included in ridesharing management server150. For example, memory900may comprise, at least in part, a portion of memory320. As depicted inFIG.9, memory900may include location module910. Location module910may determine pick-up locations for users assigned to ridesharing vehicles.
For example, location module910may receive location information (e.g., using GPS) from a first plurality of communication devices (such as mobile communications device200) associated with a plurality of users. In such an example, the received location information may be included in ride requests as starting points. Additionally, location module910may receive location information from a second plurality of communication devices (such as mobile communications device200) associated with a plurality of ridesharing vehicles. In such an embodiment, location module910may continuously receive location information from the first and second pluralities of communication devices. As used herein, “continuously” does not necessarily mean without interruption but may refer to the receipt of information in a discretized manner having spacings (and/or interruptions) each below a threshold of time, such as 50 ms, 100 ms, 500 ms, 1 sec, 5 sec, 10 sec, 30 sec, or the like. Location module910may determine pick-up locations for a first group of users assigned to a first ridesharing vehicle. For example, location module910may determine the pick-up locations based on one or more optimization models run on one or more predicted routes between starting points and destinations of the users. The one or more optimization models may include a shortest distance optimization, a shortest travel time optimization (e.g., accounting for speed limits, traffic, wrong turns, etc.), a combination of distance and travel time optimization, a fuel efficiency optimization (e.g., based on known fuel ratings of the ridesharing vehicle), an electric battery charge optimization, or the like. For example, any solution to the P v. NP problem later derived may also be incorporated into the optimization models. For at least some of the first group of users, the determined pick-up locations may differ from the starting points. Additionally or alternatively, location module910may determine drop-off locations for the first group of users assigned to the first ridesharing vehicle. For example, location module910may determine the drop-off locations based on one or more optimization models run on one or more predicted routes between starting points and destinations of the users. The one or more optimization models may include a shortest distance optimization, a shortest travel time optimization (e.g., accounting for speed limits, traffic, wrong turns, etc.), a fuel efficiency optimization (e.g., based on known fuel ratings of the ridesharing vehicle), or the like. For example, any solution to the P v. NP problem later derived may also be incorporated into the optimization models. For at least some of the first group of users, the determined drop-off locations may differ from the destinations. The pick-up locations and/or drop-off locations may be changed depending on cancellations and re-assignments performed by assignment module920. For example, location module910may change at least one drop-off location of at least one second user in a second ridesharing vehicle after assignment of the second ridesharing vehicle to a first user. In a similar example, location module910may change a pick-up location of the first user after cancellation of an assignment of a first ridesharing vehicle to the first user and/or after re-assignment of the second ridesharing vehicle to the first user.
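Returning to the optimization models described above, a minimal sketch of a shortest-combined-travel-time choice among candidate pick-up points follows; the walking and driving speeds and the candidate representation are assumptions introduced here, not the patent's implementation.

def choose_pickup_point(candidates, walk_speed_kmh=5.0, drive_speed_kmh=30.0):
    """Choose, among candidate points, the one minimizing the combined
    walking time for the user and driving time for the vehicle; a stand-in
    for the shortest-travel-time optimization model described above.
    Each candidate is (name, walk_km_for_user, drive_km_for_vehicle)."""
    def combined_minutes(c):
        _, walk_km, drive_km = c
        return walk_km / walk_speed_kmh * 60 + drive_km / drive_speed_kmh * 60
    return min(candidates, key=combined_minutes)

candidates = [
    ("user's starting point", 0.0, 3.0),     # vehicle must detour to the door
    ("corner of the main street", 0.2, 1.0), # short walk, much shorter drive
]
print(choose_pickup_point(candidates)[0])    # the corner wins overall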
In some embodiments, location module910may also send data to at least some of the first group of users to guide each user to a respective pick-up location different from a corresponding starting point of each said user. For example, location module910may send GPS coordinates of the pick-up locations, physical addresses of the pick-up locations, or the like to devices of the first plurality of communication devices (such as mobile communications device200) associated with the at least some of the first group of users. Each device may use received coordinates, a received address, or the like to route (e.g., via walking) the associated user from a current location of the device to the pick-up location. In another example, location module910may determine routes (e.g., via walking) from current locations received from the devices to the pick-up locations and send the route to the devices. Accordingly, location module910may send to the at least some of the first group of users walking directions to the respective pick-up locations. Additionally or alternatively, location module910may send to the at least some of the first group of users at least one of a location and an address of the respective pick-up locations. For example, location module910may send GPS coordinates of the respective pick-up locations, physical addresses of the pick-up locations, or the like to devices of the first plurality of communication devices (such as mobile communications device200) associated with the at least some of the first group of users. Guiding the at least some of the first group of users (e.g., using routes and/or walking directions as described above) may be performed by the devices. As further depicted inFIG.9, memory900may include assignment module920. Assignment module920may receive ride requests from a first plurality of communication devices (such as mobile communications device200) associated with a plurality of users. For example, assignment module920may receive the requests using a communications interface. The communications interface may comprise, for example, one or more network interface controllers (NICs). These one or more NICs may communicate over one or more computer networks, such as the Internet, a local area network (LAN), or the like. Assignment module920may further assign the rideshare vehicles in a fleet to pick-up a plurality of users. For example, assignment module920may assign a first ridesharing vehicle to pick-up a first group of the plurality of users. For example, assignment module920may combine users to form the first group based on the closeness of the pick-up location of one user in the first group to a pick-up location and/or a destination of another user in the first group, the closeness of a destination of the user in the first group to the pick-up location and/or the destination of the other user in the first group, overlap between a first predicted route from the pick-up location of the user to the destination of the user and a second predicted route from the pick-up location of the other user to the destination of the other user, or the like. The predicted routes may be calculated using one or more maps, optionally in combination with traffic information. The one or more maps may be retrieved from one or more memories and/or using the communications interface. Similarly, the traffic information may be retrieved from one or more memories and/or using the communications interface. 
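A minimal sketch of grouping riders by route overlap follows, modeling predicted routes as lists of road-segment identifiers; the overlap ratio and its 0.5 cutoff are illustrative assumptions, not values from the disclosure.

def route_overlap_ratio(route_a: list, route_b: list) -> float:
    """Fraction of route A's segments that also appear in route B; routes
    are modeled here as lists of road-segment identifiers."""
    if not route_a:
        return 0.0
    shared = set(route_a) & set(route_b)
    return len(shared) / len(route_a)

def can_group(route_a, route_b, min_overlap=0.5) -> bool:
    """Combine two riders into one group when their predicted routes
    overlap sufficiently."""
    return route_overlap_ratio(route_a, route_b) >= min_overlap

first = ["seg-1", "seg-2", "seg-3", "seg-4"]
second = ["seg-9", "seg-2", "seg-3", "seg-4", "seg-5"]
print(can_group(first, second))  # True: 3 of 4 segments shared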
In some embodiments, assignment module920may cooperate with location module910to perform dynamic re-assignment. For example, prior to a first user arriving at a first pick-up location, assignment module920may cancel the assignment of a first ridesharing vehicle to the first user while maintaining the assignment of the first ridesharing vehicle to others of the first group of users. Additionally or alternatively, assignment module920may cooperate with prediction module930to perform dynamic re-assignment. For example, assignment module920may re-assign the first user to a second ridesharing vehicle, e.g., when a predicted passing time when a second ridesharing vehicle may pass the first pick-up location is after a predicted arrival time when the first user will arrive at the first pick-up location. In some embodiments, the second ridesharing vehicle may be carrying at least one second user while being assigned to the first user. For example, assignment module920may assign the first user to a second ridesharing vehicle (e.g., a van) already transporting at least four second users for simultaneous transportation with the first user. In another example, assignment module920may assign the first user to a second ridesharing vehicle (e.g., a taxi) already transporting one or two second users for simultaneous transportation with the first user. The re-assignment may be performed similar to the initial assignment. For example, assignment module920may assign the first user to the second ridesharing vehicle based on a current location of the second ridesharing vehicle and a desired destination of the at least one second user. Additionally or alternatively, assignment module920may assign the first user to the second ridesharing vehicle based on an overlap between a current route of the second ridesharing vehicle and a predicted route from the first pick-up location to the destination of the first user, or the like. Additional dynamic re-assignments may be performed by assignment module920. For example, assignment module920may cancel the assignment of the first rideshare vehicle when an estimated arrival time of the first ridesharing vehicle is before an estimated arrival time of the first user. In certain aspects, assignment module920may cancel the assignment of the first rideshare vehicle when the estimated arrival time of the first ridesharing vehicle is more than a predetermined period of time (e.g., 0.5 minutes, 1 minute, 2 minutes, 3 minutes, 5 minutes, or the like) before the estimated arrival time of the first user. In another example, assignment module920may cancel the assignment of the first rideshare vehicle when a delay in an arrival of the first ridesharing vehicle at the first pick-up location is predicted. In certain aspects, assignment module920may cancel the assignment when the predicted delay is more than a predetermined period of time (e.g., 5 minutes, 10 minutes, 15 minutes, or the like) as compared to an original estimated arrival time of the first ridesharing vehicle at the first pick-up location, and the second ridesharing vehicle that may be reassigned to the first user is predicted to pass the first pick-up location earlier than the first ridesharing vehicle. As explained above with respect toFIG.6, assignment module920may cancel the assignment of the first rideshare vehicle and re-assign the second rideshare vehicle to enable the first rideshare vehicle to pick-up a passenger not originally assigned to the first rideshare vehicle (e.g., 4 passengers enter when only 2 passengers requested a ride).
For example, if more passengers board the first rideshare vehicle than were initially requested prior to picking up the first user, the second rideshare vehicle may be re-assigned to the first user. In certain aspects, the second rideshare vehicle may be re-assigned only if the total passengers in the first rideshare vehicle exceed a threshold (e.g., 2 passengers, 4 passengers, a threshold block as described above, etc.). In general, assignment module920may re-assign the second rideshare vehicle in order to minimize a total waiting time of the plurality of users. For example, assignment module920may determine that re-assigning the second rideshare vehicle to one or more users initially assigned to the first rideshare vehicle (such as the first user) results in a lower total waiting time (i.e., a total waiting time for each user assigned to the first rideshare vehicle and each user assigned to the second rideshare vehicle) and then perform the re-assignment; a minimal sketch of this comparison appears after this paragraph. The predicted total waiting time may depend on routes between starting locations of the users and pick-up locations of the users as well as predicted arrival times for the first rideshare vehicle and/or the second rideshare vehicle at the pick-up locations. Additionally or alternatively, assignment module920may re-assign the second rideshare vehicle in order to minimize a total travel time of the plurality of users. For example, assignment module920may determine that re-assigning the second rideshare vehicle to one or more users initially assigned to the first rideshare vehicle (such as the first user) results in a lower total travel time (i.e., a total travel time for each user assigned to the first rideshare vehicle and each user assigned to the second rideshare vehicle) and then perform the re-assignment. The predicted total travel time may depend on routes between pick-up locations of the users and drop-off locations of the users as well as predicted arrival times for the first rideshare vehicle and/or the second rideshare vehicle at the drop-off locations and may change in real time due to wrong turns and/or changes in traffic conditions. Any time the assignment of the first ridesharing vehicle to the first user is cancelled, the route of the first ridesharing vehicle may be automatically updated. For example, the first ridesharing vehicle may be re-routed to bypass the pick-up location of the first user and therefore reach another pick-up location or drop-off location at an earlier time and/or after traversing a shorter distance. Assignment module920may further guide the first ridesharing vehicle to the other pick-up location or drop-off location. For example, assignment module920may send the updated route to one or more devices of the second plurality of communication devices (such as mobile communications device200) associated with the first ridesharing vehicle. Similarly, any time the first user is re-assigned to the second ridesharing vehicle, the route of the second ridesharing vehicle may be automatically updated. For example, the second ridesharing vehicle may be re-routed to pass the pick-up location of the first user, optionally before another pick-up location or drop-off location of the at least one second user.
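The total-waiting-time comparison referenced above might look like the following minimal sketch; the dictionaries of vehicle ETAs and user arrival times are hypothetical stand-ins for the predictions discussed in this disclosure.

def total_waiting_minutes(assignment, vehicle_etas, user_arrivals):
    """Sum each user's wait: how long the user stands at the pick-up point
    after arriving there, given the assigned vehicle's ETA."""
    total = 0.0
    for user, vehicle in assignment.items():
        total += max(vehicle_etas[vehicle][user] - user_arrivals[user], 0.0)
    return total

def best_assignment(options, vehicle_etas, user_arrivals):
    """Among candidate assignments (user -> vehicle mappings), keep the
    one minimizing total waiting time, re-assigning users if that helps."""
    return min(options, key=lambda a: total_waiting_minutes(a, vehicle_etas, user_arrivals))

vehicle_etas = {"v1": {"u1": 12.0, "u2": 6.0}, "v2": {"u1": 7.0, "u2": 15.0}}
user_arrivals = {"u1": 5.0, "u2": 5.0}  # minutes until each user reaches the pick-up
options = [{"u1": "v1", "u2": "v2"}, {"u1": "v2", "u2": "v1"}]
print(best_assignment(options, vehicle_etas, user_arrivals))  # {'u1': 'v2', 'u2': 'v1'}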
Assignment module920may guide the second ridesharing vehicle to the first pick-up location. For example, assignment module920may send a route (or updated route) to one or more devices of the second plurality of communication devices (such as mobile communications device200) associated with the second ridesharing vehicle. As further depicted inFIG.9, memory900may include prediction module930. Prediction module930may use the location information from the first and second pluralities of communication devices to estimate arrival times at respective pick-up locations. For example, prediction module930may use information received from a first communications device (e.g., derived from a GPS of the first communications device) of a first user to predict when the first user will arrive at the assigned first pick-up location. Additionally or alternatively, information used for predicting when the first user will arrive at the assigned first pick-up location may be derived from the ride request (e.g., which may include a current location of the first user). In another example, prediction module930may use information received from a second communications device (e.g., derived from a GPS of the second communications device) of the first ridesharing vehicle to predict when the first ridesharing vehicle will arrive at the assigned first pick-up location. Similarly, prediction module930may use information received from a second communications device (e.g., derived from a GPS of the second communications device) of a second ridesharing vehicle to predict when the second ridesharing vehicle may pass the first pick-up location. Prediction module930may then compare the predicted passing time of the second ridesharing vehicle with the arrival time of the first user. Prediction module930may make additional or alternative comparisons. For example, prediction module930may compare a predicted passing time of the first ridesharing vehicle with the arrival time of the first user and/or any other user. In another example, prediction module930may compare a predicted passing time of a third ridesharing vehicle with the arrival time of the first user and/or any other user. In embodiments where location module910continuously receives location information from the first and second pluralities of communication devices, prediction module930may use the continuously received location information to estimate arrival times at respective pick-up locations, similar to the examples explained above. In such embodiments, prediction module930may additionally or alternatively use the continuously received location information to predict a delay in an arrival of the first ridesharing vehicle at the first pick-up location. For example, prediction module930may use weather, traffic information, and/or information about emergency (e.g., fire, police, medical, etc.) activity (e.g., received using the communications interface and/or retrieved from one or more memories) to predict the delay. Additionally or alternatively, prediction module930may compare the continuously received location information with a predicted route to determine if any wrong turns, unexpected slowdowns, or the like, cause the first ridesharing vehicle to be at a different portion of the route than expected. Additionally or alternatively, prediction module930may receive information from one or more second communication devices regarding a malfunctioning of an associated ridesharing vehicle and predict a delay therefrom. Any of the above examples may similarly be used to predict a delay in arrival of one or more users and/or one or more additional ridesharing vehicles; a sketch of the route-progress comparison follows.
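This sketch expresses progress as kilometers along the predicted route and assumes a fixed average speed; both are simplifying assumptions introduced here, not the patent's method.

def predict_delay_minutes(expected_progress_km: float,
                          reported_progress_km: float,
                          avg_speed_kmh: float = 30.0) -> float:
    """Estimate delay from the gap between where the vehicle should be
    along its predicted route and where its communication device reports
    it actually is (wrong turns and slowdowns show up as a shortfall)."""
    shortfall_km = max(expected_progress_km - reported_progress_km, 0.0)
    return shortfall_km / avg_speed_kmh * 60.0

# The vehicle should be 10 km along its route but has only covered 7 km.
print(predict_delay_minutes(10.0, 7.0))  # 6.0 minutes of predicted delay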
Memory900may further include a database access module940, and may also include database(s)950. Database access module940may include software instructions executable to interact with database(s)950, to store and/or retrieve information (e.g., information used to perform any of the predictions described above, weather information, traffic information, one or more maps, or the like). FIG.10Ais a diagram of example timelines showing the use of dynamic re-assignment in a rideshare fleet, in accordance with some embodiments of the present disclosure. As shown in example timeline1010, ridesharing management server150may receive a first request from a first user, a second request from a second user, and a third request from a third user. Ridesharing management server150may assign the first request, the second request, and the third request to a first ridesharing vehicle. Accordingly, the first user, the second user, and the third user may form a first group of users. After assignment, ridesharing management server150may determine a first pick-up location for the first user, a second pick-up location for the second user, and a third pick-up location for the third user. For at least one of the users, the corresponding pick-up location may be different from a starting point included in the corresponding request. Additionally or alternatively, ridesharing management server150may determine a first drop-off location for the first user, a second drop-off location for the second user, and a third drop-off location for the third user. For at least one of the users, the corresponding drop-off location may be different from a desired destination included in the corresponding request. In example timeline1010, ridesharing management server150may predict an arrival time for the first user at the first pick-up location, an arrival time for the second user at the second pick-up location, and an arrival time for the third user at the third pick-up location. For example, ridesharing management server150may use information received from a first communications device of the first user to predict the arrival time for the first user, may use information received from a first communications device of the second user to predict the arrival time for the second user, and/or may use information received from a first communications device of the third user to predict the arrival time for the third user. As further shown in example timeline1010, prior to the first user arriving at the first pick-up location, ridesharing management server150may cancel the assignment of the first ridesharing vehicle to the first user while maintaining the assignment of the first ridesharing vehicle to others of the first group of users. Further, ridesharing management server150may predict when a second ridesharing vehicle may pass the first pick-up location. For example, ridesharing management server150may use information received from a second communications device associated with the second ridesharing vehicle. In example timeline1010, because the predicted passing time of the second ridesharing vehicle is after the predicted arrival time of the first user, ridesharing management server150may re-assign the first user to the second ridesharing vehicle. As shown in example timeline1020, ridesharing management server150may receive a first request from a first user and a second request from a second user.
Ridesharing management server150may assign the first request and the second request to a first ridesharing vehicle. Accordingly, the first user and the second user may form a first group of users. After assignment, ridesharing management server150may determine a first pick-up location for the first user and a second pick-up location for the second user. For at least one of the users, the corresponding pick-up location may be different from a starting point included in the corresponding request. Additionally or alternatively, ridesharing management server150may determine a first drop-off location for the first user and a second drop-off location for the second user. For at least one of the users, the corresponding drop-off location may be different from a desired destination included in the corresponding request. In example timeline1020, ridesharing management server150may predict an arrival time for the first user at the first pick-up location, an arrival time for the second user at the second pick-up location, and an arrival time of the first ridesharing vehicle at the first pick-up location and/or the second pick-up location. For example, ridesharing management server150may use information received from a first communications device of the first user to predict the arrival time for the first user, may use information received from a first communications device of the second user to predict the arrival time for the second user, and/or may use information received from a second communications device associated with the first ridesharing vehicle. Ridesharing management server150may continuously receive location information to estimate arrival times at respective pick-up locations. Accordingly, in example timeline1020, ridesharing management server150may calculate an updated arrival time of the first ridesharing vehicle at the first pick-up location and/or the second pick-up location. Although not depicted inFIG.10A, ridesharing server150may additionally or alternatively calculate an updated arrival time for the first user at the first pick-up location and/or an updated arrival time for the second user at the second pick-up location. As further shown in example timeline1020, ridesharing management server150may cancel the assignment of the first rideshare vehicle when the estimated arrival time of the first ridesharing vehicle is before the estimated arrival time of the first user. Accordingly, ridesharing management server150cancels the assignment when the updated arrival time of the first ridesharing vehicle at the first pick-up location and/or the second pick-up location is before the arrival time for the first user at the first pick-up location. In some embodiments, ridesharing management server150may cancel the assignment of the first rideshare vehicle when the estimated arrival time of the first ridesharing vehicle is more than a predetermined period of time (e.g., 0.5 minutes, 1 minute, 2 minutes, 3 minutes, 5 minutes, etc.) before the estimated arrival time of the first user. Although not depicted inFIG.10A, ridesharing management server150may further predict when a second ridesharing vehicle may pass the first pick-up location, compare the predicted passing time of the second ridesharing vehicle with the arrival time of the first user, and re-assign the first user to the second ridesharing vehicle when the predicted passing time is after the predicted arrival time.
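The cancellation rule of timeline1020 may be sketched as below; the margin default is one of the example values above, and the function name is a hypothetical, illustrative choice.

```python
from datetime import datetime, timedelta

def cancel_for_early_vehicle(vehicle_eta: datetime, user_eta: datetime,
                             margin: timedelta = timedelta(minutes=2)) -> bool:
    """Cancel when the vehicle would reach the pick-up location more than
    `margin` before the user is predicted to arrive there."""
    return user_eta - vehicle_eta > margin
```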
As shown in example timeline1030, ridesharing management server150may receive a first request from a first user and a second request from a second user. Ridesharing management server150may assign the first request and the second request to a first ridesharing vehicle. Accordingly, the first user and the second user may form a first group of users. After assignment, ridesharing management server150may determine a first pick-up location for the first user and a second pick-up location for the second user. For at least one of the users, the corresponding pick-up location may be different from a starting point included in the corresponding request. Additionally or alternatively, ridesharing management server150may determine a first drop-off location for the first user and a second drop-off location for the second user. For at least one of the users, the corresponding drop-off location may be different from a desired destination included in the corresponding request. In example timeline1030, ridesharing management server150may predict an arrival time for the first user at the first pick-up location, an arrival time for the second user at the second pick-up location, and an arrival time of the first ridesharing vehicle at the first pick-up location and/or the second pick-up location. For example, ridesharing management server150may use information received from a first communications device of the first user to predict the arrival time for the first user, may use information received from a first communications device of the second user to predict the arrival time for the second user, and/or may use information received from a second communications device associated with the first ridesharing vehicle. Ridesharing management server150may continuously receive location information to estimate arrival times at respective pick-up locations. Accordingly, in example timeline1030, ridesharing management server150may calculate an updated arrival time of the first ridesharing vehicle at the first pick-up location and/or the second pick-up location. Although not depicted inFIG.10A, ridesharing server150may additionally or alternatively calculate an updated arrival time for the first user at the first pick-up location and/or an updated arrival time for the second user at the second pick-up location. In the example of timeline1030, ridesharing management server150has predicted a delay in an arrival of the first ridesharing vehicle at the first pick-up location and/or the second pick-up location. For example, the delay may be due to traffic, weather, vehicle malfunction, police activity, wrong turns, or the like. As further shown in example timeline1030, ridesharing management server150may cancel the assignment of the first rideshare vehicle because the delay is predicted. Accordingly, ridesharing management server150cancels the assignment when the updated arrival time of the first ridesharing vehicle is after the original estimated arrival time of the first ridesharing vehicle. In some embodiments, ridesharing management server150may cancel the assignment of the first rideshare vehicle when the estimated arrival time of the first ridesharing vehicle is more than a predetermined period of time (e.g., 5 minutes, 10 minutes, 15 minutes, etc.) later than the original estimated arrival time of the first ridesharing vehicle.
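The delay-based cancellation of timeline1030 may similarly be sketched; the default threshold below is one of the example values given above and is an assumption, not a fixed parameter of the disclosure.

```python
from datetime import datetime, timedelta

def cancel_for_delay(original_eta: datetime, updated_eta: datetime,
                     max_delay: timedelta = timedelta(minutes=10)) -> bool:
    """Cancel when the continuously updated ETA slips more than
    `max_delay` past the originally estimated arrival time."""
    return updated_eta - original_eta > max_delay
```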
Although not depicted inFIG.10A, ridesharing management server150may further predict when a second ridesharing vehicle may pass the first pick-up location, compare the predicted passing time of the second ridesharing vehicle with the arrival time of the first user, and re-assign the first user to the second ridesharing vehicle when the predicted passing time is after the predicted arrival time. In addition, in some embodiments, ridesharing management server150may cancel the assignment of the first user to the first ridesharing vehicle only if reassignment to the second ridesharing vehicle succeeds. FIG.10Bis a diagram of additional example timelines showing the use of dynamic re-assignment in a rideshare fleet, in accordance with some embodiments of the present disclosure. As shown in example timeline1040, ridesharing management server150may receive a first request from a first user, a second request from a second user, and a third request from a third user. Ridesharing management server150may assign the first request, the second request, and the third request to a first ridesharing vehicle. Accordingly, the first user, the second user, and the third user may form a first group of users. After assignment, ridesharing management server150may determine a first pick-up location for the first user, a second pick-up location for the second user, and a third pick-up location for the third user. For at least one of the users, the corresponding pick-up location may be different from a starting point included in the corresponding request. Additionally or alternatively, ridesharing management server150may determine a first drop-off location for the first user, a second drop-off location for the second user, and a third drop-off location for the third user. For at least one of the users, the corresponding drop-off location may be different from a desired destination included in the corresponding request. In example timeline1040, ridesharing management server150may predict an arrival time for the first user at the first pick-up location, an arrival time for the second user at the second pick-up location, and an arrival time for the third user at the third pick-up location. For example, ridesharing management server150may use information received from a first communications device of the first user to predict the arrival time for the first user, may use information received from a first communications device of the second user to predict the arrival time for the second user, and/or may use information received from a first communications device of the third user to predict the arrival time for the third user. As further shown in example timeline1040, the first ridesharing vehicle may pick up the first user (e.g., from the first pick-up location). Thereafter, the first ridesharing vehicle may pick up the third user (e.g., from the third pick-up location). In the example of timeline1040, although the third user has only requested a ride for a single passenger, two passengers actually board the first ridesharing vehicle. In one example, a second communications device associated with the first ridesharing vehicle may send a signal to ridesharing management server150regarding the passenger not originally assigned to the first rideshare vehicle. In response, ridesharing management server150cancels the assignment of the first rideshare vehicle in order to enable the first rideshare vehicle to pick up a passenger not originally assigned to the first rideshare vehicle.
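The trigger in timeline1040 amounts to a simple occupancy check, sketched below; the signal format and the function name are assumptions for illustration only.

```python
def extra_passenger_signal(assigned_count: int, boarded_count: int) -> bool:
    """True when the vehicle's communications device reports more boarded
    passengers than were assigned, prompting the server to free capacity
    by cancelling a not-yet-picked-up assignment."""
    return boarded_count > assigned_count
```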
Although not depicted inFIG.10B, ridesharing management server150may further re-assign a second ridesharing vehicle to the second user. For example, ridesharing management server150may predict when the second ridesharing vehicle may pass the second pick-up location and re-assign the second ridesharing vehicle when the predicted passing time is after the predicted arrival time of the second user or after the planned pickup time of the second user. As shown in example timeline1050, ridesharing management server150may receive a first request from a first user and a second request from a second user. Ridesharing management server150may assign the first request and the second request to a first ridesharing vehicle. Accordingly, the first user and the second user may form a first group of users. After assignment, ridesharing management server150may determine a first pick-up location for the first user and a second pick-up location for the second user. For at least one of the users, the corresponding pick-up location may be different from a starting point included in the corresponding request. Additionally or alternatively, ridesharing management server150may determine a first drop-off location for the first user and a second drop-off location for the second user. For at least one of the users, the corresponding drop-off location may be different from a desired destination included in the corresponding request. In example timeline1050, ridesharing management server150may predict an arrival time for the first user at the first pick-up location, an arrival time for the second user at the second pick-up location, and an arrival time of the first ridesharing vehicle at the first pick-up location and/or the second pick-up location. For example, ridesharing management server150may use information received from a first communications device of the first user to predict the arrival time for the first user, may use information received from a first communications device of the second user to predict the arrival time for the second user, and/or may use information received from a second communications device associated with the first ridesharing vehicle. Ridesharing management server150may further determine a total waiting time of the plurality of users. For example, each difference between the predicted arrival time of a user and the predicted arrival time of the first ridesharing vehicle may be summed. In embodiments where the predicted arrival time of the first ridesharing vehicle is before a predicted arrival time of one or more of the users, ridesharing management server150may either subtract the difference between the arrival times from the total waiting time or may ignore the one or more of the users in determining the total waiting time. As further shown in example timeline1050, ridesharing management server150may cancel the assignment of the first rideshare vehicle to the second user and re-assign the second rideshare vehicle in order to minimize a total waiting time of the plurality of users. For example, ridesharing management server150may predict an arrival time of the second ridesharing vehicle at the second pick-up location and, therefrom, predict an updated total waiting time if the second ridesharing vehicle were to be re-assigned to the second user. If the updated total waiting time is less than the initial total waiting time, ridesharing management server150may cancel the assignment of the first rideshare vehicle to the second user and re-assign the second rideshare vehicle.
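The total-waiting-time computation just described (summing per-user waits, here using the "ignore" option for users who arrive after the vehicle) might look like the following sketch; the per-user cap, which is discussed below, is included as an assumed optional parameter.

```python
from datetime import datetime

def total_waiting_minutes(user_etas: list[datetime],
                          vehicle_etas: list[datetime]) -> float:
    """Sum of each user's wait for the vehicle; a user predicted to
    arrive after the vehicle contributes zero (the 'ignore' option)."""
    total = 0.0
    for user_eta, vehicle_eta in zip(user_etas, vehicle_etas):
        total += max(0.0, (vehicle_eta - user_eta).total_seconds())
    return total / 60.0

def accept_reassignment(initial_total: float, updated_total: float,
                        per_user_waits: list[float],
                        cap_minutes: float = 15.0) -> bool:
    """Re-assign only if the fleet-wide total wait decreases and no
    individual user's predicted wait exceeds the assumed cap."""
    return updated_total < initial_total and all(
        w <= cap_minutes for w in per_user_waits)
```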
Additionally or alternatively, the second user may also be re-assigned to a new pick-up location during re-assignment. In such an embodiment, ridesharing management server150may predict an arrival time of the second ridesharing vehicle at the updated pick-up location and predict an arrival time of the second user at the updated pick-up location. Based on the arrival times at the updated pick-up location, ridesharing management server150may predict an updated total waiting time if the second ridesharing vehicle were to be re-assigned to the second user. If the updated total waiting time is less than the initial total waiting time, ridesharing management server150may cancel the assignment of the first rideshare vehicle to the second user, re-assign the second rideshare vehicle, and send the updated pick-up location to the second user. Although not depicted inFIG.10B, ridesharing management server150may decline to re-assign the second ridesharing vehicle even if the updated total waiting time is less than the initial total waiting time. For example, one or more thresholds (e.g., 10 minutes, 15 minutes, or the like) may be applied to a predicted waiting time for an individual user. In this example, ridesharing management server150may decline to re-assign the second ridesharing vehicle if the re-assignment would result in a predicted waiting time for a user exceeding the threshold. Accordingly, inconveniences to individual users may be capped in order to encourage such users to become repeat riders and enjoy the advantages of fleet-wide optimization on one or more future trips. Any of the examples ofFIGS.10A and10Bmay be combined. For example, the minimization of total wait time in example timeline1050may be applied to any of the example timelines1010,1020,1030, and/or1040. In another example, the enablement of a rideshare vehicle to pick up a passenger not originally assigned to the rideshare vehicle in example timeline1040may be applied to any of the example timelines1010,1020,1030, and/or1050. FIGS.11A and11Bdepict example method1100for managing a fleet of ridesharing vehicles. Method1100may, for example, be implemented by ridesharing management server150ofFIG.3. At step1107, server150may receive ride requests from a first plurality of communication devices associated with a plurality of users. For example, server150may receive the ride requests using one or more communications interfaces (such as communications interface360). In some embodiments, each ride request includes a starting point and a desired destination corresponding to each of the plurality of users. The starting point may be included as GPS coordinates, a physical address, or the like. Similarly, the desired destination may be included as GPS coordinates, a physical address, or the like. At step1109, server150may receive location information from a second plurality of communication devices associated with a plurality of ridesharing vehicles. For example, server150may receive the location information using one or more communications interfaces (such as communications interface360). In some embodiments, the location information may include information derived from one or more GPS devices of the second communications devices. At step1111, server150may assign a first ridesharing vehicle to pick up a first group of the plurality of users. For example, the first group may include a first user, a second user, a third user, etc.
Some users may be included in the same request (e.g., if a first user and a second user are included in a first request). As explained above with regards to assignment module920, server150may assemble the first group of the plurality of users based on the closeness of starting points of the assembled users, the closeness of the desired destinations of the assembled users, the closeness of the starting points of some of the first group to desired destinations of others of the first group, overlap between predicted routes from the starting points to the desired destinations of some of the first group and predicted routes from the starting points to the desired destinations of others of the first group, or the like. At step1113, server150may determine pick-up locations for the first group of users. For example, server150may determine a pick-up location for each user in the first group of users at which the corresponding user will meet the first ridesharing vehicle. For at least some of the first group of users, the determined pick-up locations may differ from the starting points. At step1115, server150may send data to the at least some of the first group of users to guide each user to a respective pick-up location different from a corresponding starting point of each said user. For example, server150may send to at least some of the first group of users at least one of a location and an address of the respective pick-up locations. In this example, server150may send the location (e.g., GPS coordinates) and/or the address to a corresponding first communications device associated with the corresponding user. Additionally or alternatively, server150may send to at least some of the first group of users walking directions to the respective pick-up locations. At step1117, server150may use information received from a first communications device of a first user to predict when the first user will arrive at the assigned first pick-up location. For example, the information used for predicting when the first user will arrive at the assigned first pick-up location may be derived from the ride request. Additionally or alternatively, the information used for predicting when the first user will arrive at the assigned first pick-up location may be derived from a GPS of the first communications device. At step1119, prior to the first user arriving at a first pick-up location, server150may cancel the assignment of the first ridesharing vehicle to the first user while maintaining the assignment of the first ridesharing vehicle to others of the first group of users. For example, server150may send the cancellation to a second communications device associated with the first ridesharing vehicle and/or to the first communications device associated with the first user. Additionally or alternatively, server150may cancel the assignment of the first rideshare vehicle when an estimated arrival time of the first ridesharing vehicle is before the estimated arrival time of the first user. For example, as explained above with respect to prediction module930, server150may estimate an arrival time for the first ridesharing vehicle in addition to predicting the arrival time for the first user in step1117. Optionally, as explained above with respect to assignment module920, server150may cancel the assignment when an estimated arrival time of the first ridesharing vehicle is more than a predetermined period of time before the estimated arrival time of the first user.
Server150may optionally automatically update a route of the first ridesharing vehicle after cancelling the assignment of the first ridesharing vehicle to the first user. For example, as explained above with respect to location module910, the updated route may omit a pick-up location and a drop-off location associated with the first user. At step1121, server150may predict when a second ridesharing vehicle may pass the first pick-up location. For example, as explained above with respect to prediction module930, server150may estimate an arrival time for the second ridesharing vehicle in addition to predicting the arrival time for the first user in step1117. At step1123, server150may compare the predicted passing time of the second ridesharing vehicle with the arrival time of the first user. For example, server150may perform an absolute comparison (e.g., a difference of 2 minutes, a difference of 5 minutes, etc.), a relative comparison (e.g., a difference of 10%, a difference of 20%, etc.), or the like. In some embodiments, the second ridesharing vehicle may be carrying at least one second user (e.g., one second user, two second users, four second users, etc.) while being assigned to the first user. At step1125, server150may re-assign the first user to the second ridesharing vehicle when the predicted passing time is after the predicted arrival time. Optionally, server150may re-assign the first user only when the predicted passing time is within one or more thresholds after the predicted arrival time. For example, server150may re-assign the first user if the predicted passing time is less than 5 minutes, 10 minutes, etc. after the predicted arrival time and/or may re-assign the first user if the predicted passing time is more than 1 minute, 2 minutes, etc. after the predicted arrival time. In some embodiments, server150may guide the second ridesharing vehicle to the first pick-up location. For example, as explained above with respect to location module910, server150may send a location (e.g., GPS coordinates) and/or an address of the first pick-up location to a second communications device associated with the second ridesharing vehicle and/or send driving directions to the first pick-up location to the second communications device. Additionally or alternatively, server150may assign the first user to the second ridesharing vehicle based on a current location of the second ridesharing vehicle and a desired destination of the at least one second user. For example, as explained above with regards to assignment module920, server150may assign the first user based on the closeness of starting points of the first user and the at least one second user, the closeness of the desired destinations of the first user and the at least one second user, the closeness of the starting point of the first user to desired destinations of the at least one second user (or vice versa), overlap between a predicted route from the starting point to the desired destination of the first user and predicted routes from the starting points to the desired destinations of the at least one second user, or the like.
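The windowed test of steps1123and1125 may be sketched as follows; the minimum and maximum gap defaults are the illustrative threshold values mentioned above, not fixed parameters of the disclosure.

```python
from datetime import datetime, timedelta

def within_reassignment_window(user_eta: datetime, passing_eta: datetime,
                               min_gap: timedelta = timedelta(minutes=1),
                               max_gap: timedelta = timedelta(minutes=10)) -> bool:
    """Re-assign only when the second vehicle passes after the user
    arrives, but not so long after that the user waits excessively."""
    gap = passing_eta - user_eta
    return min_gap <= gap <= max_gap
```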
Additionally or alternatively, server150may re-assign the second rideshare vehicle in order to minimize a total waiting time of the plurality of users. For example, as explained above with regards to prediction module930, server150may calculate an initial total wait time and predict an updated total wait time and perform the (cancellation and) re-assignment if the updated total wait time is less than the initial total wait time. Optionally, as explained above with regards to prediction module930, server150may perform the (cancellation and) re-assignment only if the difference between the updated total wait time and the initial total wait time is above a threshold (e.g., 2 minutes, 5 minutes, 10 minutes, etc.) and/or only if a predicted wait time of the re-assigned user remains below a threshold (such as 10 minutes, 15 minutes, or the like). Server150may optionally automatically update a route of the second ridesharing vehicle after the re-assigning of the first user. For example, as explained above with respect to location module910, the updated route may include a pick-up location and a drop-off location associated with the first user. Moreover, server150may optionally change at least one drop-off location of the at least one second user after assignment of the second ridesharing vehicle to the first user. Additionally or alternatively, server150may optionally change a pick-up location of the first user after assignment of the second ridesharing vehicle to the first user. Method1100may further include additional steps. For example, as explained above with respect to location module910, method1100may include continuously receiving location information from the first and second pluralities of communication devices to estimate arrival times at respective pick-up locations. In such embodiments, as explained above with respect to assignment module920and prediction module930, server150may cancel the assignment of the first rideshare vehicle when the estimated arrival time of the first ridesharing vehicle is before the estimated arrival time of the first user. Additionally or alternatively, in such embodiments, as explained above with respect to assignment module920and prediction module930, server150may predict a delay in an arrival of the first ridesharing vehicle at the first pick-up location and may cancel the assignment of the first rideshare vehicle when a delay is predicted. For example, server150may cancel the assignment of the first rideshare vehicle when the predicted delay in arrival of the first ridesharing vehicle at the first pick-up location is more than a predetermined period of time compared to an original estimated arrival time of the first ridesharing vehicle at the first pick-up location. In another example, method1100may further include (as an additional step or in combination with or as an alternative to step1119) cancelling the assignment of the first rideshare vehicle and re-assigning the second rideshare vehicle in order to enable the first rideshare vehicle to pick up a passenger not originally assigned to the first rideshare vehicle. For example, as explained above with respect to example timeline1040, more passengers may board the first ridesharing vehicle than initially requested or anticipated, and server150may re-assign the first user to accommodate such additional passengers. FIGS.11C and11Ddepict another example method1150for managing a fleet of ridesharing vehicles. Method1150may, for example, be implemented by ridesharing management server150ofFIG.3. At step1157, server150may receive ride requests from a first plurality of communication devices associated with a plurality of users.
For example, server150may receive the ride requests using one or more communications interfaces (such as communications interface360). In some embodiments, each ride request includes a starting point and a desired destination corresponding to each of the plurality of users. The starting point may be included as GPS coordinates, a physical address, or the like. Similarly, the desired destination may be included as GPS coordinates, a physical address, or the like. At step1159, server150may receive location information from a second plurality of communication devices associated with a plurality of ridesharing vehicles. For example, server150may receive the location information using one or more communications interfaces (such as communications interface360). In some embodiments, the location information may include information derived from one or more GPS devices of the second communications devices. At step1161, server150may assign a first ridesharing vehicle to pick up a group of the plurality of users. For example, the group may include a first user, a second user, a third user, etc. Some users may be included in the same request (e.g., if a first user and a second user are included in a first request). As explained above regarding assignment module920, server150may assemble the group of the plurality of users based on the closeness of starting points of the assembled users, the closeness of the desired destinations of the assembled users, the closeness of the starting points of some of the group to desired destinations of others of the group, overlap between predicted routes from the starting points to the desired destinations of some of the group and predicted routes from the starting points to the desired destinations of others of the group, or the like. At step1163, server150may determine pick-up locations for the group of users. For example, server150may determine a pick-up location for each user in the group of users at which the corresponding user will meet the first ridesharing vehicle. In some embodiments, the determined pick-up locations may differ from the starting points. At step1165, server150may send data to the group of the plurality of users indicating appointed pick-up times at the determined pick-up locations. For example, server150may send to the group of users at least one of a location and an address of the respective pick-up locations. In this example, server150may send the location (e.g., GPS coordinates) and/or the address to a corresponding first communications device associated with the corresponding user. Additionally or alternatively, server150may send to the group of users walking directions to the respective pick-up locations. Moreover, as explained above with respect to prediction module930, server150may estimate arrival times for the first ridesharing vehicle at the determined pick-up locations and/or arrival times for corresponding users at corresponding pick-up locations. Based on predicted arrival times, server150may appoint corresponding pick-up times to users. For example, the pick-up times may correspond to predicted arrival times of the first ridesharing vehicle, to predicted arrival times of the corresponding users, or any combination thereof (e.g., by selecting the maximum of two predicted arrival times). In some embodiments, the pick-up times may correspond to predicted arrival times with one or more buffers added.
For example, server150may add an absolute buffer, such as one minute, two minutes, five minutes, or the like, and/or a relative buffer, such as 5%, 10%, 15%, or the like, to a predicted arrival time to determine a corresponding pick-up time. At step1167, server150may use information received from at least one of the plurality of ridesharing vehicles to predict when the first ridesharing vehicle will arrive at a first pick-up location assigned to a first user. For example, as explained above with respect to prediction module930, server150may estimate an arrival time for the first ridesharing vehicle. In some embodiments, the information used to predict when the first ridesharing vehicle will arrive at the first pick-up location may include GPS data from the first ridesharing vehicle and/or real-time traffic updates from the plurality of ridesharing vehicles. At step1169, prior to a first pick-up time associated with the first user, server150may estimate that the first ridesharing vehicle is going to be late to the first pick-up location by more than a time threshold. For example, as explained above with respect to prediction module930, server150may estimate an arrival time for the first user in addition to predicting the arrival time for the first ridesharing vehicle in step1167. Accordingly, server150may estimate lateness based on a comparison of the estimated arrival time for the first ridesharing vehicle and the estimated arrival time for the first user. Additionally or alternatively, server150may estimate lateness based on a comparison of the estimated arrival time for the first ridesharing vehicle and the appointed pick-up time for the first user. In some embodiments, the time threshold may have a value between 2 minutes and 20 minutes (e.g., after the first pick-up time). At step1171, server150may identify a second ridesharing vehicle to be assigned to pick up the first user. For example, server150may identify the second ridesharing vehicle based on which vehicles in the plurality of ridesharing vehicles have no current passengers and/or are near the first pick-up location. Server150may use information from the second plurality of communication devices associated with the plurality of ridesharing vehicles to perform the identification. At step1173, server150may cancel the assignment of the first ridesharing vehicle to the first user while maintaining the assignment of the first ridesharing vehicle to others of the group of the plurality of users. Server150may optionally automatically update a route of the first ridesharing vehicle after cancelling the assignment of the first ridesharing vehicle to the first user. For example, as explained above with respect to location module910, the updated route may omit a pick-up location and a drop-off location associated with the first user. At step1175, server150may assign the second ridesharing vehicle to pick up the first user. Optionally, server150may determine that the second ridesharing vehicle can pick up the first user before the first ridesharing vehicle. For example, as explained above with respect to prediction module930, server150may predict a passing time for the first ridesharing vehicle and a passing time for the second ridesharing vehicle at the first pick-up location and compare the predicted passing times to make the determination.
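A sketch of the buffered pick-up-time appointment of step1165 and the lateness test of step1169 follows; the buffer sizes, the threshold default, and the choice to take the larger of the two buffers are assumptions within the ranges given above, not the disclosed implementation.

```python
from datetime import datetime, timedelta

def appoint_pickup_time(vehicle_eta: datetime, user_eta: datetime,
                        now: datetime,
                        abs_buffer: timedelta = timedelta(minutes=2),
                        rel_buffer: float = 0.10) -> datetime:
    """Take the later of the two predicted arrivals and pad it with the
    larger of an absolute buffer and a relative buffer on the lead time."""
    base = max(vehicle_eta, user_eta)
    pad = max(abs_buffer, (base - now) * rel_buffer)
    return base + pad

def vehicle_late(vehicle_eta: datetime, pickup_time: datetime,
                 threshold: timedelta = timedelta(minutes=5)) -> bool:
    """Lateness by more than a time threshold (disclosed range:
    2 to 20 minutes after the appointed pick-up time)."""
    return vehicle_eta - pickup_time > threshold
```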
Method1150may include additional steps. For example, method1150may include determining that, by assigning the second ridesharing vehicle to pick up the first user, a first time delay of at least one passenger riding in the first ridesharing vehicle is lower than a second time delay of the at least one passenger when the first user is assigned to the first ridesharing vehicle. For example, as explained above with respect to prediction module930, server150may predict the second time delay (e.g., a delay in arrival of the first ridesharing vehicle at a corresponding pick-up location, a delay in arrival of the first ridesharing vehicle at a corresponding drop-off location, and/or a delay in the length of drive between the corresponding pick-up location and the corresponding drop-off location) for at least one passenger (e.g., not the first user) riding in (and/or assigned to) the first ridesharing vehicle. The time delay may be caused, for example, by the lateness of the first ridesharing vehicle to the first pick-up location (and corresponding and/or cascading lateness to other pick-up locations). Moreover, as explained above with respect to prediction module930, server150may predict the first time delay for the at least one passenger based on a hypothesis that the first user is assigned to the second ridesharing vehicle rather than the first ridesharing vehicle. Accordingly, server150may assign the first user to the second ridesharing vehicle only if the first time delay is predicted to be lower than the second time delay (i.e., that assignment of the first user to the second ridesharing vehicle will reduce a time delay for at least one passenger riding in and/or assigned to the first ridesharing vehicle). In some embodiments, server150may perform the assignment only if the first time delay is lower than the second time delay by more than a threshold, e.g., six minutes, twelve minutes, or the like. Method1150may optionally be implemented in combination with method1100. For example, server150may re-assign the first user to the second ridesharing vehicle when the predicted passing time is after the predicted arrival time and when the first ridesharing vehicle is going to be late to the first pick-up location by more than a time threshold. In other words, server150may combine methods1100and1150such that steps1125and1175are combined.
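The co-passenger delay comparison just described might be sketched as follows; the margin default is one of the example values given above, and the function name is hypothetical.

```python
from datetime import timedelta

def reassignment_reduces_delay(delay_if_reassigned: timedelta,
                               delay_if_kept: timedelta,
                               margin: timedelta = timedelta(minutes=6)) -> bool:
    """Move the first user to the second vehicle only if the predicted
    delay for a passenger already riding in the first vehicle drops by
    more than `margin`."""
    return delay_if_kept - delay_if_reassigned > margin
```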
Sub-Optimization of Individual Routes
Embodiments of the present disclosure may allow for the sub-optimization of individual routes within a ridesharing fleet. For example, individual routes for one or more users may be sub-optimized in order to allow for greater optimization of the fleet (or at least a portion of the fleet) as a whole. This may enhance the overall experience of users while incurring insignificant costs to individual users. Users may particularly experience the effects of fleet-wide optimization if they are repeat customers. In some embodiments, the sub-optimization may be limited by one or more constraints. For example, an individual sub-optimization may be rejected if it results in a user waiting more than 10 minutes, 15 minutes, or the like to be picked up by a ridesharing vehicle, even if such a sub-optimization would allow for greater optimization of the fleet as a whole. Such constraints balance the needs of individual users with the benefits of fleet-wide optimization. FIG.12depicts an example of a memory module1200for sub-optimization of individual routes. Although depicted as a single memory inFIG.12, memory1200may comprise one or more non-volatile (e.g., hard disk drive, flash memory, etc.) and/or volatile (e.g., RAM or the like) memories. In some embodiments, memory1200may be included in ridesharing management server150. For example, memory1200may comprise, at least in part, a portion of memory320. As depicted inFIG.12, memory1200may include request module1210. Request module1210may receive a first request for a shared ride from a first user. For example, the first request may be received via a communications interface, such as communications interface360. The communications interface may comprise, for example, one or more network interface controllers (NICs). These one or more NICs may communicate over one or more computer networks, such as the Internet, a local area network (LAN), or the like. In some embodiments, the first request may include information related to a first pick-up location of the first user and a first desired destination of the first user. The information related to the first pick-up location may include GPS coordinates, a physical address, or the like of the first pick-up location. Similarly, the information related to the first desired destination may include GPS coordinates, a physical address, or the like of the first desired destination. Additionally or alternatively, the information related to the first pick-up location may include a current location of the first user (e.g., derived from a GPS device of a first communications device associated with the first user) and/or a user-requested pick-up location. For example, the first user may request to be picked up at a certain location (e.g., certain GPS coordinates, a certain physical address, or the like), e.g., by inputting the certain location into the first communications device associated with the first user. Optionally, the information related to the first pick-up location may also include a requested time for pick-up, e.g., based on input to the first communications device associated with the first user. Request module1210may further receive a second request for a shared ride from a second user. For example, the second request may be received via the communications interface, such as communications interface360. In some embodiments, the second request may include information related to a second pick-up location of the second user and a second desired destination of the second user. The information related to the second pick-up location may include GPS coordinates, a physical address, or the like of the second pick-up location. Similarly, the information related to the second desired destination may include GPS coordinates, a physical address, or the like of the second desired destination. Additionally or alternatively, the information related to the second pick-up location may include a current location of the second user (e.g., derived from a GPS device of a first communications device associated with the second user) and/or a user-requested pick-up location. For example, the second user may request to be picked up at a certain location (e.g., certain GPS coordinates, a certain physical address, or the like), e.g., by inputting the certain location into the first communications device associated with the second user. Optionally, the information related to the second pick-up location may also include a requested time for pick-up, e.g., based on input to the first communications device associated with the second user.
Request module1210may further receive a third request for a shared ride from a third user. For example, the third request may be received via the communications interface, such as communications interface360. In some embodiments, the third request may include information related to a third pick-up location of the third user and a third desired destination of the third user. The information related to the third pick-up location may include GPS coordinates, a physical address, or the like of the third pick-up location. Similarly, the information related to the third desired destination may include GPS coordinates, a physical address, or the like of the third desired destination. Additionally or alternatively, the information related to the third pick-up location may include a current location of the third user (e.g., derived from a GPS device of a first communications device associated with the third user) and/or a user-requested pick-up location. For example, the third user may request to be picked up at a certain location (e.g., certain GPS coordinates, a certain physical address, or the like), e.g., by inputting the certain location into the first communications device associated with the third user. Optionally, the information related to the third pick-up location may also include a requested time for pick-up, e.g., based on input to the first communications device associated with the third user. In some embodiments, request module1210may receive the requests for shared rides from a plurality of first mobile communications devices (such as mobile communications device200) associated with the plurality of users. As explained above, such first mobile communications devices may send requests to request module1210via the communications interface including information input into the mobile communications device by the associated user, information derived from a GPS device of the mobile communications device, or the like. Additionally or alternatively, request module1210may receive from a plurality of second mobile communication devices associated with a plurality of ridesharing vehicles, information about a current location of each of the second mobile communications devices. For example, the current location may be derived from a location circuit within each of the second mobile devices. The plurality of second mobile communication devices may include a plurality of handheld devices (such as mobile communications device200) associated with drivers of at least a part of the fleet of ridesharing vehicles and/or a plurality of transmitters (such as driving-control device120F) embedded in autonomous vehicles that are a part of the fleet of ridesharing vehicles. In some embodiments, request module1210(and/or route module1220, described below) may continuously receive location information from the plurality of first mobile communications devices and/or the plurality of second mobile communication devices. As used herein, "continuously" does not necessarily mean without interruption but may refer to the receipt of information in a discretized manner having spacings (and/or interruptions) each below a threshold of time, such as 50 ms, 100 ms, 500 ms, 1 sec, 5 sec, 10 sec, 30 sec, or the like. Request module1210may further identify a first ridesharing vehicle and a second ridesharing vehicle that are currently without passengers. For example, request module1210may identify the vehicles using signals received from associated second mobile communication devices.
In such an example, the second mobile communication devices may send an indicator of how many passengers are in the associated vehicle based on input from a driver of the associated vehicle and/or based on output from an application (or other software module) on the second mobile communication device that tracks the number of passengers in the associated vehicle. Additionally or alternatively, request module1210may identify the vehicles based on centralized tracking of capacity at ridesharing server150(e.g., as explained above with respect to capacity tracking module620). In some embodiments, the third request may be received while both the first user and the second user are riding in the first ridesharing vehicle. In such an embodiment, request module1210may schedule picking up the third user before dropping off the first user. Examples of this embodiment are depicted inFIGS.13E and13F. Alternatively, request module1210may schedule picking up the third user after dropping off the first user and before dropping off the second user. Examples of this embodiment are depicted inFIGS.13C and13D. As further depicted inFIG.12, memory1200may include route module1220. Route module1220may assign a first user and a second user to a first ridesharing vehicle. For example, route module1220may assign the users to the first ridesharing vehicle when request module1210identifies the first ridesharing vehicle as currently without passengers. Additional or alternative factors may be considered by route module1220when assigning the users to the first ridesharing vehicle. For example, route module1220may assign the first user and the second user to the first ridesharing vehicle based on the closeness of a starting point and/or the pick-up location of the first user to a starting point, the pick-up location, the desired destination, and/or a drop-off location of the second user, the closeness of the desired destination and/or a drop-off location of the first user to a starting point, the pick-up location, the desired destination, and/or a drop-off location of the second user, overlap between a first predicted route from the pick-up location of the first user to the desired destination of the first user and a second predicted route from the pick-up location of the second user to the desired destination of the second user, or the like. The predicted routes may be calculated using one or more maps, optionally in combination with traffic information. The one or more maps may be retrieved from one or more memories and/or using the communications interface. Similarly, the traffic information may be retrieved from one or more memories and/or using the communications interface. Route module1220may also generate a route to the first ridesharing vehicle for picking up and dropping off each of the first user and the second user. For example, route module1220may generate the route based on one or more optimization models run on the pick-up locations of the first user and the second user as well as the desired destinations of the first user and the second user. One or more maps (e.g., retrieved from one or more memories and/or using the communications interface), optionally with traffic and/or weather information (e.g., retrieved from one or more memories and/or using the communications interface) may also be fed into the optimization model(s).
The one or more optimization models may include a shortest distance optimization, a shortest travel time optimization (e.g., accounting for speed limits, traffic, etc.), a fuel efficiency optimization (e.g., based on known fuel ratings of the ridesharing vehicle), or the like. For example, any solution to the P v. NP problem later derived may also be incorporated into the optimization models. In some embodiments, route module1220may determine the pick-up locations and/or drop-off locations for at least one of (or each of) the first, second, and third users. For example, route module1220may determine the pick-up location(s) and/or the drop-off location(s) based on one or more optimization models run on one or more predicted routes between starting points and destinations of the users. The one or more optimization models may include a shortest distance optimization, a shortest travel time optimization (e.g., accounting for speed limits, traffic, wrong turns, etc.), a fuel efficiency optimization (e.g., based on known fuel ratings of the ridesharing vehicle), or the like. For example, any solution to the P v. NP problem later derived may also be incorporated into the optimization models. For at least one of (or each of) the first, second, and third users, the determined drop-off locations may differ from the desired destinations. Similarly, for at least one of (or each of) the first, second, and third users, the determined pick-up locations may differ from current locations of the users. One or more of the pick-up and/or drop-off locations may be sub-optimized. For example, route module1220may sub-optimize the drop-off location of the first user in order to minimize a total waiting time of the third user. Examples of this embodiment are depicted inFIGS.13C and13F. The sub-optimized drop-off location may be determined initially or may be determined as an updated drop-off location. Additionally or alternatively, route module1220may sub-optimize the third pick-up location of the third user to minimize a total travel time of the first and second users. Examples of this embodiment are depicted inFIGS.13D and13E. The sub-optimized pick-up location may be determined initially or may be determined as an updated pick-up location. Any of the sub-optimizations described above may be subject to one or more thresholds. For example, route module1220may decline a sub-optimization if it would result in a walking time and/or distance for a user above a threshold (e.g., 10 minutes, 15 minutes, etc., 0.25 miles, 0.5 miles, 1 kilometer, or the like). In embodiments where one or more determined pick-up locations differ from starting points, route module1220may cause notices of the determined pick-up locations to be sent to the mobile communications devices of each of the first, second, and third users. For example, route module1220may transmit data associated with the notices to the mobile communications devices of each of the first, second, and third users. For example, the data may include GPS coordinates of the pick-up locations, physical addresses of the pick-up locations, or the like. Each mobile communication device may use received coordinates, a received address, or the like to route (e.g., via walking) the associated user from a current location of the device to the pick-up location. In another example, the data may include walking directions to the determined pick-up locations.
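The threshold test that guards the sub-optimizations described above may be sketched as below; the default values are the example thresholds given above, and the function name is hypothetical.

```python
def suboptimization_allowed(walk_minutes: float, walk_km: float,
                            max_minutes: float = 15.0,
                            max_km: float = 1.0) -> bool:
    """Reject a sub-optimized pick-up or drop-off point if it would push
    a user's walk past either the time or the distance threshold."""
    return walk_minutes <= max_minutes and walk_km <= max_km
```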
Similarly, in embodiments where one or more determined drop-off locations differ from desired destinations, route module1220may cause notices of the determined drop-off locations to be sent to the mobile communications devices of each of the first, second, and third users. For example, route module1220may transmit data associated with the notices to the mobile communications devices of each of the first, second, and third users. For example, the data may include GPS coordinates of the drop-off locations, physical addresses of the drop-off locations, or the like. Each mobile communication device may use received coordinates, a received address, or the like to route (e.g., via walking) the associated user (e.g., after being dropped off) to the drop-off location. In another example, the data may include walking directions to the determined drop-off locations. The pick-up locations and/or drop-off locations may be changed depending on cancellations and re-assignments performed by arrival time module1230. For example, route module1220may change at least one pick-up location and/or drop-off location of at least one of (or each of) the first, second, and third users if a user's assignment is cancelled and/or if a user is re-assigned. Additionally or alternatively, route module1220may change at least one pick-up location and/or drop-off location of at least one of (or each of) the first, second, and third users if route module1220determines that such a change would further optimize the generated route for the ridesharing vehicle. Such optimization may be rejected if the new pick-up location and/or drop-off location would be too inconvenient for a corresponding user (e.g., by exceeding a threshold walking distance and/or time, or the like). Corresponding users may be notified of updates to drop-off locations and/or pick-up locations as described above. In embodiments where the third user is assigned to the first ridesharing vehicle, route module1220may generate an updated route for the first ridesharing vehicle to pick up the third user. The updated route may be calculated like the original route described above while accounting for a pick-up location and a drop-off location of the third user. Alternatively, route module1220may generate a route for the second ridesharing vehicle to send the second ridesharing vehicle toward an area with predicted imminent passenger demand. For example, the area with predicted demand may be identified using a request history (e.g., stored in database1250) and/or real-time information (e.g., using event information retrieved from one or more memories and/or using the communications interface). For example, route module1220may determine that requests are expected in an area near a stadium after a sporting event concludes. As depicted inFIG.12, memory1200may further include arrival time module1230. Arrival time module1230may calculate a first expected arrival time of the first ridesharing vehicle at one or more pick-up locations (such as the third pick-up location). The expected arrival time may depend on a predicted route (e.g., calculated by route module1220as explained above) for the first ridesharing vehicle. Arrival time module1230may also account for weather, traffic information, and/or information about emergency (e.g., fire, police, medical, etc.) activity, wrong turns, or the like (e.g., received using the communications interface and/or retrieved from one or more memories).
Similarly, arrival time module1230may calculate a second expected arrival time of the second ridesharing vehicle at one or more pick-up locations (such as the third pick-up location). The second expected arrival time may be calculated similar to the first expected arrival time, described above. In embodiments where the second expected arrival time is sooner than the first expected arrival time and both the first expected arrival time and the second expected arrival time are below a predetermined threshold (e.g., 15 minutes, 10 minutes, or the like), arrival time module1230may assign the third user to the first ridesharing vehicle. Additionally or alternatively, arrival time module1230may assign the third user to the first ridesharing vehicle when an estimated delay for each of the first user and the second user is below another predetermined threshold (e.g., 30 minutes, 20 minutes, or the like). Accordingly, the assignment of the third user may be rejected if such an assignment is too inconvenient for the first user and/or the second user. Additionally or alternatively, arrival time module1230may assign the third user to the first ridesharing vehicle when the third desired destination of the third user is in a same neighborhood as the second desired destination of the second user and/or the first desired destination of the first user. For example, the third desired destination may be within a particular range (e.g., 10 miles, 20 kilometers, etc.) of the second desired destination and/or the first desired destination, within a zone defining the neighborhood of the second desired destination and/or the first desired destination (e.g., a square, a rectangle, a parallelogram, other regular shapes, irregular figures, or the like), etc. Memory1200may further include a database access module1240, and may also include database(s)1250. Database access module1240may include software instructions executable to interact with database(s)1250, to store and/or retrieve information (e.g., a request history as described above, weather information, traffic information, one or more maps, or the like). FIGS.13A and13Billustrate an example process for managing a fleet of ridesharing vehicles. At step1300, a first user and a second user may be assigned to a first ridesharing vehicle. Accordingly, a route may be generated for the first ridesharing vehicle including a pick-up location of the first user, a pick-up location of the second user, a desired destination of the first user, and a desired destination of the second user. In addition, at step1300, a third ride request from a third user having a third pick-up location may be received. At step1301, a first expected arrival time of the first ridesharing vehicle at the third pick-up location is determined (e.g., 10 min in the example ofFIG.13A). Similarly, in step1303, a second expected arrival time of the second ridesharing vehicle at the third pick-up location is determined (e.g., 5 min in the example ofFIG.13A). The arrival times may be calculated as explained above with respect to arrival time module1230. In the example ofFIG.13A, the second expected arrival time is sooner than the first expected arrival time. At step1305, the third user is assigned to the first ridesharing vehicle and an updated route is generated for the first ridesharing vehicle to pick up the third user.
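The decision illustrated inFIGS.13A and13B (assigning the third user to the occupied first vehicle even though the empty second vehicle would arrive sooner, so that the second vehicle remains free for redirection) may be sketched as follows; the function name, the caps, and the treatment of rider delays are illustrative assumptions.

```python
from datetime import timedelta

def assign_third_to_first_vehicle(first_eta: timedelta, second_eta: timedelta,
                                  rider_delays: list[timedelta],
                                  eta_cap: timedelta = timedelta(minutes=15),
                                  delay_cap: timedelta = timedelta(minutes=20)) -> bool:
    """Keep the third user on the (occupied) first vehicle when the empty
    second vehicle would arrive sooner, both ETAs are acceptable, and no
    current rider would be delayed beyond the cap."""
    if second_eta >= first_eta:  # this rule applies only when the second vehicle is sooner
        return False
    if first_eta > eta_cap or second_eta > eta_cap:
        return False
    return all(d <= delay_cap for d in rider_delays)

# Example mirroring FIG.13A: first vehicle 10 min away, second 5 min away,
# no rider delays estimated -> the third user joins the first vehicle.
# assign_third_to_first_vehicle(timedelta(minutes=10), timedelta(minutes=5), [])  # True
```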
Although not depicted inFIG.13B, the second ridesharing vehicle may be sent toward an area with predicted imminent passenger demand (e.g., as explained above with respect to route module1220). Although depicted without current locations of the first user, the second user, and the third user, the example ofFIGS.13A and13Bmay be modified to use one or more pick-up locations that differ from current locations of users and/or one or more drop-off locations that differ from the desired destinations of the users. FIG.13Cillustrates an example1320of scheduling picking up a third user after dropping off a first user and before dropping off a second user. Moreover, in the example ofFIG.13C, a drop-off location of the first user is sub-optimized to minimize a total waiting time of the third user. FIG.13Dillustrates an example1330of scheduling picking up a third user after dropping off a first user and before dropping off a second user. Moreover, in the example ofFIG.13D, a third pick-up location of the third user is sub-optimized to minimize a total travel time of the first and second users. FIG.13Eillustrates an example1340of scheduling picking up a third user before dropping off a first user. Moreover, in the example ofFIG.13E, a third pick-up location of the third user is sub-optimized to minimize a total travel time of the first and second users. FIG.13Fillustrates an example1350of scheduling picking up a third user before dropping off a first user. Moreover, in the example ofFIG.13F, a drop-off location of the first user is sub-optimized to minimize a total waiting time of the third user. FIGS.14A and14Bdepict an example method1400for managing a fleet of ridesharing vehicles. Method1400may, for example, be implemented by ridesharing management server150ofFIG.3. At step1411, server150may identify a first ridesharing vehicle and a second ridesharing vehicle that are currently without passengers. For example, as explained above with respect to request module1210, server150may identify the vehicles using signals received from second mobile communication devices associated with the ridesharing vehicles. Additionally or alternatively, server150may identify the vehicles based on centralized tracking of capacity at ridesharing server150(e.g., as explained above with respect to capacity tracking module620). Optionally, server150may receive, from a plurality of second mobile communication devices associated with a plurality of ridesharing vehicles (e.g., including the first ridesharing vehicle and the second ridesharing vehicle), information about a current location of each of the second mobile communications devices, derived from a location circuit (e.g., a GPS locator) within each of the second mobile devices. The plurality of second mobile communication devices may include a plurality of handheld devices associated with drivers of at least a part of the fleet of ridesharing vehicles and/or a plurality of transmitters embedded in autonomous vehicles (e.g., autonomous vehicle130F) that are a part of the fleet of ridesharing vehicles. At step1413, server150may receive a first request for a shared ride from a first user. For example, server150may receive the first request using one or more communications interfaces (such as communications interface360). In some embodiments, the first request may include information related to a first pick-up location of the first user and a first desired destination of the first user.
For example, the information related to the first pick-up location may include a current location of the first user or a user-requested pick-up location. At step1415, server150may receive a second request for a shared ride from a second user. For example, server150may receive the second request using one or more communications interfaces (such as communications interface360). In some embodiments, the second request may include information related to a second pick-up location of the second user and a second desired destination of the second user. For example, the information related to the second pick-up location may include a current location of the second user or a user-requested pick-up location. Optionally, server150may receive the requests for shared rides from a plurality of first mobile communications devices associated with the plurality of users. In some embodiments, server150may also determine pick-up locations and/or drop-off locations for each of the first and second users. As explained above with respect to route module1220, the pick-up locations may differ from current locations of users and/or the drop-off locations may differ from the desired destinations of the users. In such embodiments, server150may cause notices of the determined pick-up locations to be sent to the mobile communications devices of each of the first and second users. For example, as described above with respect to route module1220, server150may transmit data associated with the notices to the mobile communications devices of each of the first and second users, and the data may include walking directions to the determined pick-up locations. At step1417, server150may assign the first user and the second user to the first ridesharing vehicle. At step1419, server150may generate a route to the first ridesharing vehicle for picking up and dropping off each of the first user and the second user. For example, as described above with respect to route module1220, server150may generate the route based on one or more optimization models run on the pick-up locations of the first user and the second user as well as the desired destinations (and/or drop-off locations) of the first user and the second user. At step1421, server150may receive, via the communications interface, a third request for a shared ride from a third user. For example, server150may receive the third request using one or more communications interfaces (such as communications interface360). In some embodiments, the third request may include information related to a third pick-up location of the third user and a third desired destination of the third user. For example, the information related to the third pick-up location may include a current location of the third user or a user-requested pick-up location. Optionally, as explained above with respect to steps1413and1415, server150may receive the third request from a first mobile communications device associated with the third user. In some embodiments, as explained above with respect to steps1413and1415, server150may also determine a pick-up location and/or a drop-off location for the third user. As explained above with respect to route module1220, the pick-up location may differ from the current location of the third user and/or the drop-off location may differ from the desired destination of the third user. In such embodiments, server150may cause a notice of the determined pick-up location to be sent to the mobile communications device of the third user.
For example, as described above with respect to route module1220, server150may transmit data associated with the notice to the mobile communications device of the third user, and the data may include walking directions to the determined pick-up location. In some embodiments, server150may receive the third request while both the first user and the second user are riding in the first ridesharing vehicle. In such embodiments, server150may schedule picking up the third user before dropping off the first user or may schedule picking up the third user after dropping off the first user and before dropping off the second user. At step1423, server150may calculate a first expected arrival time of the first ridesharing vehicle at the third pick-up location. For example, as explained above with respect to arrival time module1230and depicted in the example ofFIG.13A, server150may determine the first expected arrival time based on a predicted route (e.g., calculated by route module1220as explained above) for the first ridesharing vehicle; weather, traffic information, and/or information about emergency (e.g., fire, police, medical, etc.) activity (e.g., received using the communications interface and/or retrieved from one or more memories); or the like. At step1425, server150may calculate a second expected arrival time of the second ridesharing vehicle at the third pick-up location. For example, as explained above with respect to arrival time module1230and depicted in the example ofFIG.13A, server150may determine the second expected arrival time based on a predicted route (e.g., calculated by route module1220as explained above) for the second ridesharing vehicle; weather, traffic information, and/or information about emergency (e.g., fire, police, medical, etc.) activity (e.g., received using the communications interface and/or retrieved from one or more memories); wrong turns; or the like. The second expected arrival time may be sooner than the first expected arrival time. At step1427, when both the first expected arrival time and the second expected arrival time are below a predetermined threshold, server150may assign the third user to the first ridesharing vehicle. For example, as explained above with respect to arrival time module1230, the predetermined threshold may be less than twenty minutes. Additionally or alternatively, server150may assign the third user to the first ridesharing vehicle when an estimated delay for each of the first user and the second user is below another predetermined threshold. For example, as explained above with respect to arrival time module1230, the other predetermined threshold may be less than ten minutes. Additionally or alternatively, server150may assign the third user to the first ridesharing vehicle when the third desired destination of the third user is in a same neighborhood as the second desired destination of the second user. For example, as explained above with respect to arrival time module1230, the third desired destination may be within a particular range (e.g., 10 miles, 20 kilometers, etc.) of the second desired destination, within a zone defining the neighborhood of the second desired destination (e.g., a square, a rectangle, a parallelogram, other regular shapes, irregular figures, or the like), etc. If any or all of the above conditions are not satisfied, server150may assign the third user to the second ridesharing vehicle.
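By way of illustration only, the same-neighborhood determination described above may be sketched in Python as a range test over great-circle distance; a zone-polygon test could be substituted, and the function names and default range are illustrative assumptions:

    import math

    def haversine_km(a, b):
        # Great-circle distance between two (latitude, longitude) points, in km.
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2)
             * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * math.asin(math.sqrt(h))

    def same_neighborhood(dest_a, dest_b, range_km=20.0):
        # True when the two desired destinations fall within the given range.
        return haversine_km(dest_a, dest_b) <= range_km

    print(same_neighborhood((32.07, 34.78), (32.12, 34.85)))  # True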
If the second ridesharing vehicle is assigned to the third user, server150may generate a route to the second ridesharing vehicle for picking up and dropping off the third user. In any of the above embodiments, server150may sub-optimize the drop-off location of the first user in order to minimize a total waiting time of the third user (e.g., as depicted inFIGS.13C and13F) or may sub-optimize the third pick-up location of the third user to minimize a total travel time of the first and second users (e.g., as depicted inFIGS.13D and13E), as explained above with respect to arrival time module1230. At step1429, server150may generate an updated route for the first ridesharing vehicle to pick up the third user. For example, as explained above with respect to route module1220, server150may generate the route based on one or more optimization models run on the pick-up locations of the first, second, and third users as well as the desired destinations (and/or drop-off locations) of the first, second, and third users. Method1400may further include additional steps. For example, method1400may include generating a route for the second ridesharing vehicle to send the second ridesharing vehicle toward an area with predicted imminent passenger demand. As explained above with respect to route module1220, the area with predicted demand may be identified using a request history (e.g., stored in database1250) and/or real-time information (e.g., using event information retrieved from one or more memories and/or using the communications interface). Server150may generate such a route if the third user is not assigned to the second ridesharing vehicle.
Prepositioning Empty Vehicles Based on Predicted Future Demand
In some embodiments, ridesharing management system100may collect a large volume of information over time related to the demand for ridesharing vehicles in geographical areas at particular times, places, etc. This historical data may be stored by ridesharing management system100, for example, in database170for future use. However, some existing ridesharing management systems may encounter the technical problem of how to process the large amount of historical data that is collected and to use the historical data to provide an improved user experience for the riders and/or drivers in the ridesharing network. This problem may be particularly prevalent in systems for which the number and type of ridesharing vehicles present in a geographical area at a given point in time fluctuates based on human behavior, choices, traffic conditions, weather conditions, seasonality, etc. Some of the presently disclosed embodiments may address these technical problems by collecting and processing historical data to make predictions of future demand in general zones of a geographical area. Further, in some embodiments, the ridesharing management system100may use the predicted future demand for ridesharing vehicles to preposition vehicles with capacity to transport passengers in areas proximate to the expected passengers. Further, the historical data may be used to selectively position user-driven vehicles and autonomous vehicles within a geographical area to better meet an expected demand based on historical patterns. For example, in one embodiment, the user-driven vehicles may be positioned in a holding zone proximate a general zone with an expected high demand while the autonomous vehicles may be assigned a route (e.g., a circular route) through the general zone.
Presently disclosed embodiments may offer one or more advantages over ridesharing systems driven solely by present demand. For example, some embodiments may direct one or more vehicles to maintain a presence proximate an area of expected high demand such that the high demand will be met even though present demand exists in another geographical area. These and other features of presently disclosed embodiments are discussed in more detail below. FIG.15is a diagram illustrating an example of memory320storing a plurality of modules, consistent with the disclosed embodiments. The modules may be executable by at least one processor to perform various methods and processes disclosed herein. Further, it should be noted that memory320may store more or fewer modules than those shown inFIG.15, depending on implementation-specific considerations. As illustrated inFIG.15, memory320may store software instructions to execute a historical data module1501, a holding zone selection module1502, an instruction generation module1503, a database access module1504, and may also include database(s)1505. Historical data module1501may include software instructions for storing, receiving, and/or using historical data associated with past demand for ridesharing vehicles, e.g., in a geographical area. Holding zone selection module1502may include software instructions for selecting a holding zone for prepositioning one or more empty (i.e., without passengers) ridesharing vehicles in preparation for a predicted imminent demand for ridesharing services. Instruction generation module1503may include software instructions for generating control signals for directing one or more ridesharing vehicles to one or more holding zones despite other ridesharing demand in the geographical area. Database access module1504may include software instructions executable to interact with database(s)1505, to store and/or retrieve information (e.g., historical data associated with past demand). Historical data module1501may include software instructions for storing, receiving, and/or accessing historical data associated with past demand for ridesharing vehicles in a geographical area. The historical data may include any information associated with demand for ridesharing vehicles at a prior point in time. For example, the historical data may include any historical information that may be used to analyze, estimate, or determine past demand for ridesharing vehicles, such as data collected about pick-up locations, days, times, etc. of prior ride requests. The historical data may also include information collected about weather conditions associated with a prior ride request. For example, the historical data may indicate that fewer rides are requested on a sunny day than on a rainy day in the same month. The historical data may also include raw data tracking previous ridesharing requests, such as a log created contemporaneously as past requests for a ride were initiated. The log may include, for example, an indication of the time and/or date of the request, the location of the user when the request was initiated, the time it took for the user to begin a trip in a ridesharing vehicle, etc. In other embodiments, however, the historical data may include analyzed and/or compiled data, such as a ride request frequency defined by a total number of ride requests received over a given period of time in a given area.
For further example, the historical data may include data analyzed based on proximity to a given venue when a show or event ends, begins, or is occurring. For example, the historical data may include an average or median number of rides requested when a concert, play, etc. ends at a given venue on a weekend night. The geographical area may include any physical region, depending on implementation-specific considerations. For example, in one embodiment, the geographical area may be a legally defined area, such as a city, a state, a county, a country, etc. However, in other embodiments, the geographical area may be defined by the ridesharing management system100to include, for example, a certain number of streets, square miles, landmarks, etc. Indeed, the geographical area may be any physical area with boundaries assigned based on any suitable criteria for the given implementation. In some embodiments, historical data module1501may store the historical data associated with past demand for one or more ridesharing vehicles in a fleet of ridesharing vehicles in database1505. The historical data may be stored contemporaneously with its collection (e.g., within 1-2 minutes of its collection), at specific time intervals (e.g., once a day, once a month, biweekly, etc.), when initiated by a user, or in any other suitable manner. The stored historical data may then be accessed by historical data module1501, e.g., through database access module1504, at a later point in time than the data was collected. In one embodiment, the historical data may be used to predict imminent demand of ridesharing requests. As used herein, predicted imminent demand refers to demand that is predicted to occur within a predetermined time period from a given point in time. The predetermined time period may be within seconds (e.g., 10 seconds, 20 seconds, 30 seconds, etc.), within 1 minute, 5 minutes, 10 minutes, 15 minutes, 20 minutes, 25 minutes, 30 minutes, 35 minutes, 40 minutes, 45 minutes, 50 minutes, or 1 hour from a given point in time. Further, in some embodiments, to predict imminent demand, the amount of demand may need to meet or exceed a predetermined threshold. For example, the demand may need to meet a predicted number of ride requests in a given period of time, such as greater than 10 requests over a 10-minute period of time, greater than 100 requests over 10 minutes, etc. For example, in one embodiment, predicting imminent demand of ridesharing requests may include predicting general zones in a geographical area associated with the imminent demand. The general zones may be any physical region located within a geographical area, depending on implementation-specific considerations. For example, in one embodiment, the geographical area may be a city, and the general zone may be a portion of the city where imminent demand is predicted to occur within the next 15 minutes. In another embodiment, the general zone may be defined to be an area within a certain distance from a venue, such as a concert hall, movie theatre, school, workplace, mall, airport, etc. For example, the general zone may be defined to be a certain number of square miles surrounding the venue. Holding zone selection module1502may include software instructions for selecting a holding zone for prepositioning one or more empty ridesharing vehicles in order to expedite satisfaction of the predicted imminent demand.
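By way of illustration only, predicting imminent demand from a request history, as described above, may be sketched in Python; the data layout, the zone function, and the default window and thresholds are illustrative assumptions:

    from collections import defaultdict
    from datetime import timedelta

    def predict_imminent_demand(request_log, zone_of, now,
                                window=timedelta(minutes=10), threshold=10,
                                lookback_weeks=4):
        # request_log: list of (timestamp, (lat, lon)) pairs; zone_of maps a
        # point to a zone identifier. A zone is flagged when its average
        # request count in the same weekly time slot meets the threshold.
        counts = defaultdict(int)
        for ts, point in request_log:
            for k in range(1, lookback_weeks + 1):
                slot_start = now - timedelta(weeks=k)
                if slot_start <= ts < slot_start + window:
                    counts[zone_of(point)] += 1
                    break
        return {zone: count / lookback_weeks
                for zone, count in counts.items()
                if count / lookback_weeks >= threshold}

    # Usage sketch: a coarse grid cell can serve as the zone identifier.
    zone_of = lambda p: (round(p[0], 1), round(p[1], 1))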
As used herein, empty ridesharing vehicles may refer to vehicles that are not carrying a passenger who requested a ride, including vehicles driven by a user and/or autonomous vehicles. Such vehicles may be prepositioned before a predicted imminent demand materializes (i.e., before the ride requests are made). In this way, the satisfaction of the users requesting rides when the demand materializes may be increased compared to systems that do not include prepositioned vehicles. As used herein, a holding zone may be any physical area in the geographical area where one or more ridesharing vehicles may be located. For example, the holding zone may be a neighborhood (e.g., an area bounded by a set of streets), a specific location (e.g., a parking lot, parking garage, etc.), or a combination thereof. Further, the holding zone may include parking spots, predetermined holding patterns/routes, or any other areas for ridesharing vehicles to congregate. For example, in one embodiment, a holding zone may include parking spots for vehicles driven by users to wait for the predicted imminent demand to materialize. In some embodiments, a holding zone may include a continuous route for autonomous vehicles to follow while waiting for the imminent demand to materialize. Still further, one or more of the holding zones may include a location where one or more partially or fully electrically-powered vehicles may charge an energy storage device (e.g., a battery) while waiting for a user assignment. In some embodiments, the holding zone for a particular ridesharing vehicle may be selected from a plurality of pre-identified holding zones stored in memory, e.g., database1505. The pre-identified holding zones may be areas where the ridesharing management system100provider has pre-negotiated for the ridesharing vehicles to be located. For example, the ridesharing management system100provider may have agreements with owners of certain garages, parking lots, etc. Further, the holding zone for a specific ridesharing vehicle may be selected using real-time data (i.e., data collected within 10 seconds, 20 seconds, 30 seconds, 40 seconds, 50 seconds, a minute, 5 minutes, 10 minutes, 20 minutes, etc., of when the data is analyzed or used). For example, the holding zone for a given vehicle may be selected to be close to downtown office buildings during rush hour, a football stadium when a football game is expected to end, etc. Indeed, when selecting a holding zone for a specific ridesharing vehicle, holding zone selection module1502may take into account one or more implementation-specific considerations. In one embodiment, the holding zone for a given vehicle may be selected using data about passenger-capacity of the vehicle (e.g., vans or high capacity vehicles may be sent to holding zones where a large number of passengers will likely need a ride together, such as near a concert venue). In other embodiments, a holding zone may be selected for a specific ridesharing vehicle using data about a shift of a driver of the vehicle. For example, a vehicle driven by a driver for a longer period of time (e.g., 4 hours) may be selected to go to the holding zone instead of a vehicle driven by a driver for a shorter period of time (e.g., 15 minutes). Further, in some embodiments, holding zone selection module1502may be configured to identify a plurality of holding zones and direct a plurality of empty ridesharing vehicles to the plurality of holding zones.
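By way of illustration only, scoring candidate holding zones for a particular empty vehicle, taking into account the passenger-capacity and driver-shift considerations described above, may be sketched in Python; the weights and field names (expected_riders, typical_party_size, driver_shift_min) are illustrative assumptions:

    def select_holding_zone(vehicle, zones):
        # Favor zones with high expected demand and short deadhead distance,
        # give a modest bonus to drivers who have been on shift longer, and
        # weight demand by how well the vehicle's seats fit typical parties.
        def score(zone):
            demand = zone["expected_riders"]
            deadhead = zone["distance_km"][vehicle["id"]]
            shift_bonus = min(vehicle["driver_shift_min"] / 60.0, 4.0)
            capacity_fit = min(vehicle["seats"], zone["typical_party_size"])
            return demand * capacity_fit + shift_bonus - 2.0 * deadhead
        return max(zones, key=score)

    van = {"id": "v1", "seats": 6, "driver_shift_min": 240}
    zones = [
        {"name": "concert_hall", "expected_riders": 20,
         "typical_party_size": 4, "distance_km": {"v1": 3.0}},
        {"name": "office_park", "expected_riders": 6,
         "typical_party_size": 1, "distance_km": {"v1": 1.0}},
    ]
    print(select_holding_zone(van, zones)["name"])  # "concert_hall"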
In some embodiments, the plurality of empty ridesharing vehicles may be selectively paired with the plurality of holding zones. For example, based on the predicted general zones in the geographical area, a single empty ridesharing vehicle may be directed to a first holding zone (e.g., near a location where demand is expected to be low, such as a small office building) and at least two empty ridesharing vehicles to a second holding zone (e.g., near a location where demand is expected to be comparably higher, such as a large office building). Instruction generation module1503may include software instructions for generating and/or sending instructions to at least one ridesharing vehicle. For example, in some embodiments, instructions directing one or more ridesharing vehicles to one or more holding zones identified for the respective vehicle(s) may be sent to mobile communication device(s) in the respective vehicles. In some embodiments, the instructions may include an indication that the one or more ridesharing vehicles should maintain a presence in the one or more holding zones despite other ridesharing demand in the geographical area. That is, a ridesharing vehicle may be instructed to stay in a holding zone in anticipation of predicted imminent demand instead of picking up a passenger currently demanding a ride in the geographical area. In this way, predicted surges in demand may be accommodated and/or prioritized over contemporaneous and/or unexpected demand. Database1505may be configured to store any type of information of use to modules1501-1504, depending on implementation-specific considerations. For example, in embodiments in which historical data module1501is configured to store historical data associated with past demand for ridesharing vehicles, database1505may store the historical data. Further, modules1501-1504may be implemented in software, hardware, firmware, a mix of any of those, or the like. For example, if the modules are implemented in software, they may be stored in memory320. However, in some embodiments, any one or more of modules1501-1504and data associated with database1505, may, for example, be stored in processor310and/or located on ridesharing management server150, which may include one or more processing devices. Processing devices of server150may be configured to execute the instructions of modules1501-1504. In some embodiments, aspects of modules1501-1504may include software, hardware, or firmware instructions (or a combination thereof) executable by one or more processors, alone or in various combinations with each other. For example, modules1501-1504may be configured to interact with each other and/or other modules of server150to perform functions consistent with disclosed embodiments. FIG.16illustrates an example environment including a geographical area1600in which ridesharing vehicles130F,1624, and1626are dispatched and under control of ridesharing management system100. In the illustrated embodiment, processor310has determined that the geographical area1600includes a first general zone1602, a second general zone1604, and a third general zone1606. In the illustrated embodiment, the first and second general zones1602,1604have been predicted to be associated with imminent demand for rides. Therefore, the first general zone1602includes a first holding zone1608that enables prepositioning of one or more ridesharing vehicles in the first general zone1602.
Likewise, the second general zone1604includes a second holding zone1610that enables prepositioning of one or more ridesharing vehicles in the second general zone1604. In the illustrated embodiment, the third general zone1606includes only a parking lot1612. Therefore, the third general zone1606has been associated with a lack of imminent demand and, accordingly, has not been assigned a holding zone. In some embodiments, the processor310may be configured to receive a rideshare request from a mobile communications device of a user in a vicinity of a specific ridesharing vehicle driving toward a selected holding zone, but assign the ride to another vehicle farther away from the user to pick up the user. For example, in the first general zone1602, user130A may request a ride via device120A while standing outside the user's house1614. Vehicle130F may be driving toward the first holding zone1608, as indicated by arrows1616, when the ride request of user130A is received. However, vehicle1618, which is farther from user130A when the ride request is received, may be assigned to pick up user130A while vehicle130F may continue to the first holding zone1608. The foregoing feature may offer one or more advantages over systems that connect user130A to vehicle130F on the basis of closest proximity. For example, by directing vehicle130F to the first holding zone1608, the processor310may reduce or eliminate the likelihood that a demand surge occurring when a concert ends at concert hall1620will go unmet. In other words, user130A may experience a delay in being picked up for a ride to expedite satisfaction of the users who may be in concert hall1620for a concert that is about to end. In other embodiments, however, the processor310may receive a rideshare request from a mobile communications device of a user in a vicinity of a specific ridesharing vehicle driving toward a selected holding zone and send a message to the specific ridesharing vehicle to pick up the user when the desired destination of the user is in proximity to the selected holding zone. For example, in some embodiments, the processor310may direct vehicle130F to pick up user130A on its way to the first holding zone1608when the user's destination is ice cream shop1622, which is proximate the first holding zone1608. In some embodiments, one or more vehicles in a holding zone may be directed to leave a holding zone to pick up a passenger. For example, the processor310may be configured to receive a rideshare request from mobile communications device120A of user130A in a vicinity of a selected holding zone1608. The processor310may then send a message to an empty rideshare vehicle130F that has been positioned in the selected holding zone1608to pick up user130A. The instructions may include routing instructions to a pick-up location, such as house1614, in a vicinity of the selected holding zone1608. In some embodiments, the processor310may assign passengers to one or more ridesharing vehicles, such as vehicle1624and vehicle1626. In one embodiment, each of vehicles1624and1626may be already transporting one or more users and be assigned one or more additional users for simultaneous transportation. However, in another embodiment, each of vehicles1624and1626may be transporting a single user. Further, the processor310may track assignments of vehicle1624and vehicle1626to identify that vehicles1624and1626are about to be without passengers and without future assignments. Vehicle1624may be directed toward second holding zone1610, as indicated by arrow1628.
The processor310may direct vehicle1624to second holding zone1610based on the current location of the vehicle (e.g., on a road headed toward second holding zone1610) and a predicted imminent demand proximate second holding zone1610. In the illustrated embodiment, imminent demand may be predicted because storm clouds1630indicate that it is likely to rain in second general zone1604. Vehicle1626may be directed to another holding zone (not illustrated inFIG.16) in a direction1632away from second holding zone1610. The processor310may direct vehicle1626to another holding zone other than second holding zone1610based on the current location of vehicle1626(e.g., on a road headed away from second holding zone1610) and a predicted imminent demand proximate another holding zone. However, in other embodiments, vehicles1624and1626may be directed to separate holding zones based on a variety of implementation-specific considerations. For example, vehicle1624may be closer to second holding zone1610than to first holding zone1608such that vehicle1624may be assigned to meet the demand in second general zone1604instead of first general zone1602. For further example, in some embodiments, one or more of the vehicles may be directed to another holding zone if second holding zone1610is at full capacity (i.e., it cannot accommodate any more vehicles). In some embodiments, one or more of the holding zones may include a location where one or more ridesharing vehicles may park while waiting for a user assignment. For example, second holding zone1610includes parking spots1634. In some embodiments, one or more of the holding zones may include a route along which a vehicle may be directed to drive while awaiting a pick-up assignment. For example, second holding zone1610includes circular route1636. In one embodiment, manually-drivable vehicles may be directed to parking spots1634and autonomous vehicles may be directed to route1636. In the illustrated embodiment, route1636is shown as a circular route proximate to parking spots1634. However, in other embodiments, route1636may be a series of streets in a neighborhood, or any other suitable path one or more vehicles can follow while waiting for a passenger assignment.
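By way of illustration only, the assignment rule illustrated by the FIG.16example above (keeping a vehicle bound for a holding zone unless the rider's destination is near that zone) may be sketched in Python; the names and the distance stand-in are illustrative assumptions:

    def assign_pickup(requester_dest, nearby_vehicle, farther_vehicle,
                      holding_zone, dist_km, near_km=2.0):
        # A vehicle en route to a holding zone keeps going unless the rider's
        # destination is close to that zone, in which case it may take the
        # rider on the way (as with ice cream shop1622 above).
        if dist_km(requester_dest, holding_zone) <= near_km:
            return nearby_vehicle
        return farther_vehicle  # preserve prepositioning at the holding zone

    # Usage with a trivial one-dimensional distance stand-in:
    dist = lambda a, b: abs(a - b)
    print(assign_pickup(requester_dest=10.0, nearby_vehicle="130F",
                        farther_vehicle="1618", holding_zone=10.5,
                        dist_km=dist))  # "130F"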
Holding zone selection module1502may select a holding zone for prepositioning at least one ridesharing vehicle to address the predicted imminent demand at block1708. For example, the processor310may select second holding zone1610for vehicle1624based on predicted imminent demand in second general zone1604. The imminent demand may be predicted, for example, based on the likelihood of rain given storm clouds1630. The processor310may send instructions to at least one rideshare vehicle (e.g., vehicle1624) to travel to the selected holding zone (e.g., second holding zone1610) at block1710. FIG.17Billustrates a flowchart of an exemplary method1712for dispatching a plurality of ridesharing vehicles, in accordance with some embodiments of the present disclosure. The method1712may be carried out, for example, by processor310. For exemplary purposes only, method1712for dispatching at least one ridesharing vehicle is described herein with respect to processing device310cooperating with memory320to execute modules1501-1504. In accordance with method1712, processor310may receive location information from a plurality of ridesharing vehicles at block1714. For example, processor310may receive location information from vehicle1624and vehicle1626. The location information may include any information that enables the processor310to determine vehicle location within geographical area1600. For example, the location information may include global positioning system (“GPS”) coordinates, current speed, current direction of travel, etc. The processor310may assign additional users to one or more of the plurality of ridesharing vehicles for simultaneous transportation with users already being transported in the ridesharing vehicles at block1716. For example, vehicle1624may be carrying user130A when processor310assigns vehicle1624an additional user130B to transport. In this way, user130A and user130B may be simultaneously transported by vehicle1624because at least a portion of the trip of user130A overlaps with at least a portion of the trip of user130B such that they are in vehicle1624at the same time for a portion of their respective trips. The processor310may identify first and second ridesharing vehicles about to be without passengers and without future assignments at block1718. For example, the processor310may determine that vehicle1624has stopped at the final destination of its passengers and has not accepted another trip request. Likewise, the processor310may determine that vehicle1626is 0.1 miles from the final destination of its passengers and has not accepted another trip request. Based on this information, the processor310may direct the first ridesharing vehicle to a first holding zone at block1720and the second ridesharing vehicle to a second holding zone at block1722. In some embodiments, the first ridesharing vehicle and the second ridesharing vehicle may be directed to the holding zones to which each vehicle is closest. In other embodiments, the ridesharing vehicles may be directed based on a variety of implementation-specific considerations, such as holding zone capacity, holding zone occupancy rate, expected level of demand in a given general zone, length of time the vehicle has been driven, etc.
Dynamic Route Planning
In some embodiments, ridesharing management system100may collect a large volume of information over time related to available routes for ridesharing vehicles in geographical areas at particular times, places, etc.
This data may be stored by ridesharing management system100, for example, in database170for future use. However, some existing ridesharing management systems may encounter the technical problem of how to process the large amount of stored data and large number of possible routes and to use the data to provide an improved user experience for the riders and/or drivers in the ridesharing network. This problem may be particularly prevalent in systems for which the number and type of ridesharing vehicles present in a geographical area at a given point in time fluctuates based on human behavior, choices, traffic conditions, weather conditions, etc. Some of the presently disclosed embodiments may address these technical problems by collecting and processing past route data and/or current variables affecting possible vehicle routes to determine an optimal route for a particular ridesharing vehicle through a given geographical area. For example, presently disclosed embodiments may take into account a capacity of a given ridesharing vehicle and how much of that capacity is being utilized at a given point in time to determine a vehicle route. For instance, in one embodiment, the ridesharing management system100may route a particular ridesharing vehicle along a route that results in a later arrival time for one or more passengers, as compared to another available route, when the ridesharing vehicle is operating at a different (e.g., higher) level of capacity utilization. In this way, a greater number of users may be serviced more quickly compared to systems that route based only on expected arrival time and not other variables. FIG.18is a diagram illustrating an example of memory320storing a plurality of modules, consistent with the disclosed embodiments. The modules may be executable by at least one processor to perform various methods and processes disclosed herein. Further, it should be noted that memory320may store more or fewer modules than those shown inFIG.18, depending on implementation-specific considerations. As illustrated inFIG.18, memory320may store software instructions to execute an input data collection module1801, a capacity status determination module1802, a vehicle routing module1803, a database access module1804, and may also include database(s)1805. Input data collection module1801may include software instructions for receiving input data (e.g., user ride requests, current location of ridesharing vehicles, etc.) from one or more sources. Capacity status determination module1802may include software instructions for determining a capacity status for one or more ridesharing vehicles based on a known passenger capacity. Vehicle routing module1803may include software instructions for sending one or more ridesharing vehicles to pick up user(s) and directing the vehicles along a determined route. Database access module1804may include software instructions executable to interact with database(s)1805, to store and/or retrieve information (e.g., geographical maps associated with a geographical area in which a ridesharing vehicle is operating). Input data collection module1801may include software instructions for receiving input data related to ridesharing vehicle routing. The input data may be any data relevant to directing one or more ridesharing vehicles along a route. For example, the input data may include ride requests from a plurality of users headed to different destinations.
The ride requests may include information such as a starting point, a desired destination, an identity of the user, a rating of the user, etc. Moreover, the input data may be a current location of one or more ridesharing vehicles. The current location of a ridesharing vehicle may be received, for example, from a mobile communications device (e.g., a smartphone, tablet, etc.) associated with the ridesharing vehicle (e.g., located in the passenger cabin). Capacity status determination module1802may include software instructions for determining a capacity status for one or more ridesharing vehicles. The capacity status of a vehicle at a given point in time may be any variable that captures the relative available capacity of the vehicle compared to a known capacity of the vehicle when empty. For example, the capacity status and/or known capacity may be measured with respect to a numerical value for each passenger. For example, if the known passenger capacity of a vehicle is 4 riders and 1 rider is in the vehicle, the capacity status of the vehicle is 3. Further, the capacity status and/or known capacity of the vehicle may include the vehicle's driver or may be computed without counting the vehicle's driver (e.g., in the case of an autonomous vehicle). In some embodiments, the capacity status may be adjusted based on factors other than the number of passengers currently in the vehicle but that affect the available capacity of the vehicle. For example, if a passenger has a suitcase that is taking up space in the vehicle, the capacity status may be reduced by 2 passengers instead of 1. As another example, if a passenger takes up more than one seat in the vehicle, the capacity status may be reduced accordingly. Additionally, the capacity status may be adjusted automatically based on, for example, metadata associated with a user's ride request, and/or manual inputs, such as the driver's observations when the passenger is picked up. Capacity status determination module1802may also include software instructions for retrieving, receiving, and/or determining a capacity threshold for a given ridesharing vehicle. The capacity threshold may be any variable that captures the lack of further availability of a vehicle to accommodate transport of additional passengers and/or items. For example, the capacity threshold may be a percentage of the known passenger capacity of the given ridesharing vehicle. In one embodiment, the capacity threshold may be set at 75% of the known passenger capacity such that if 3 of the 4 available seats in a vehicle are full, the capacity threshold is met. In other embodiments, the capacity threshold may be determined based on a vehicle ride type selected and/or paid for by a given rider. For example, in one embodiment, a user may select a private ride such that the capacity threshold is set to one passenger. In some embodiments, the capacity threshold may be set based on the passenger-capacity of a given ridesharing vehicle. For example, in some embodiments, the capacity threshold may be one person less than a passenger-capacity of a ridesharing vehicle. In other embodiments, the capacity threshold may be two persons less than a passenger-capacity of the ridesharing vehicle. In other embodiments, the capacity threshold may be three persons less than a passenger-capacity of the ridesharing vehicle. In another embodiment, the capacity threshold may be four persons less than a passenger-capacity of the ridesharing vehicle.
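By way of illustration only, the capacity-status computation described above, including adjustments for luggage or other space-consuming items, may be sketched in Python; the field names are illustrative assumptions:

    def capacity_status(known_capacity, passengers):
        # Remaining capacity of a vehicle; each passenger record may carry an
        # adjustment for extra seats consumed (a suitcase, a wheelchair, etc.).
        used = 0
        for p in passengers:
            used += 1 + p.get("extra_seats", 0)
        return max(known_capacity - used, 0)

    # One rider whose suitcase occupies a seat reduces a 4-seat vehicle's
    # capacity status to 2 rather than 3, mirroring the example above.
    print(capacity_status(4, [{"name": "rider1", "extra_seats": 1}]))  # 2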
Further, the capacity threshold may be any given number of passengers, such as two passengers, three passengers, four passengers, etc. Further, in some embodiments, a particular vehicle type may have a known passenger capacity (e.g., a four-door vehicle may accommodate 4 passengers other than the driver). However, certain sub-types of the vehicle type may nevertheless be assigned a capacity status or have a capacity threshold adjusted up or down for the particular vehicle type. For example, a small vehicle with reduced room inside may be assigned a lower known passenger capacity or may be found to meet a capacity threshold sooner than a larger vehicle. Likewise, a large vehicle with increased room inside may be assigned a higher known passenger capacity or may be found to reach a capacity threshold later than a smaller vehicle. Capacity status determination module1802may also include software instructions for determining whether the capacity status of a ridesharing vehicle meets the capacity threshold. For example, the module1802may compare a normalized capacity status to a normalized capacity threshold to determine if the threshold is met. The capacity status and capacity threshold may be normalized to both be represented as a whole number, percentage, ratio, etc., to enable comparison. In some embodiments, if the capacity status of the vehicle is below the capacity threshold, the ridesharing vehicle may be directed to pick up one or more additional passengers. Further, if the capacity status of the vehicle meets or exceeds the capacity threshold, the ridesharing vehicle may be directed to a route that transports the existing passengers to their respective destinations as quickly as possible. Vehicle routing module1803may include software instructions for routing a ridesharing vehicle to pick up and/or transport one or more users. For example, in response to the ride requests received by input data collection module1801from the plurality of users headed to different destinations, vehicle routing module1803may send the ridesharing vehicle to pick up those users. That is, vehicle routing module1803may direct the ridesharing vehicle along one or more routes through the surrounding environment based on the current state of one or more variables. Further, the route to which the ridesharing vehicle is assigned may be dynamically adjusted during transportation of the plurality of users to redirect the ridesharing vehicle to optimize one or more performance variables. For example, in one embodiment, vehicle routing module1803may direct and/or redirect the ridesharing vehicle along one or more routes based on the capacity status of the ridesharing vehicle and/or one or more additional variables. In one embodiment, the ridesharing vehicle may be directed along a first route resulting in a first set of arrival times for the plurality of users (e.g., if the capacity status of the ridesharing vehicle is below the capacity threshold). The ridesharing vehicle may also be directed along a second route resulting in a second set of arrival times for the plurality of users (e.g., if the capacity threshold is met). In some embodiments, the second set of arrival times may be earlier than the first set of arrival times. This may occur, for example, because the second route includes a toll road that is more direct than a non-toll road, a highway that is faster than side streets or streets with traffic signals, or any other factor affecting trip length.
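By way of illustration only, the normalized comparison of a capacity status against a capacity threshold described above may be sketched in Python; expressing the threshold as a fraction is an illustrative assumption (an absolute "persons less than capacity" rule could be normalized the same way):

    def meets_threshold(occupied_seats, known_capacity,
                        threshold_fraction=0.75):
        # Normalize occupancy to a fraction of the known passenger capacity
        # and compare it with the capacity threshold.
        return occupied_seats / known_capacity >= threshold_fraction

    print(meets_threshold(3, 4))  # True: 3 of 4 seats meets a 75% threshold
    print(meets_threshold(2, 4))  # False: the vehicle may pick up more riders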
In one embodiment, each of the respective arrival times of each respective passenger may be earlier for the second route than the first route. However, in other embodiments, the second route may result in a set of arrival times that are generally earlier than the first route. That is, each respective arrival time of each passenger is not necessarily earlier on the second route than the first route, but at least one passenger may arrive earlier for the second route than the first route. In one embodiment, the ridesharing vehicle may be directed or redirected to the second route when the capacity status of the ridesharing vehicle is below the capacity threshold but imminent demand is predicted in an area near at least one drop-off location associated with at least one passenger. For example, only one passenger may be in a vehicle that can accommodate three passengers. However, the current passenger's destination may be a concert hall near a sports arena that is hosting a sporting event that is expected to end within a few minutes of the passenger's estimated arrival time at the concert hall. Accordingly, in order to better meet the expected demand proximate the sports arena, ridesharing management system100may direct the ridesharing vehicle to take the second, faster route. In another embodiment, the ridesharing vehicle may be directed or redirected to the second route when the capacity status of the ridesharing vehicle is below the capacity threshold but another ridesharing vehicle is driving along a similar route to the first route. A "similar route" may be any route that partially or fully overlaps with the first route. For example, if the first route includes driving on portions of streets A, B, C, and D, and another route includes driving on the same portions of streets A and B, the routes may be similar routes. The foregoing feature may enable greater efficiencies in the ridesharing system because the likelihood that multiple vehicles are traveling along the same or similar routes may be reduced, thus enabling duplicative routes to be reduced or eliminated. Database1805may be configured to store any type of information of use to modules1801-1804, depending on implementation-specific considerations. For example, in embodiments in which vehicle routing module1803is configured to access one or more prior-stored maps of geographical areas, database1805may store the geographical maps. Further, modules1801-1804may be implemented in software, hardware, firmware, a mix of any of those, or the like. For example, if the modules are implemented in software, they may be stored in memory320. However, in some embodiments, any one or more of modules1801-1804and data associated with database1805, may, for example, be stored in processor310and/or located on ridesharing management server150, which may include one or more processing devices. Processing devices of server150may be configured to execute the instructions of modules1801-1804. In some embodiments, aspects of modules1801-1804may include software, hardware, or firmware instructions (or a combination thereof) executable by one or more processors, alone or in various combinations with each other. For example, modules1801-1804may be configured to interact with each other and/or other modules of server150to perform functions consistent with disclosed embodiments.
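By way of illustration only, the similar-route determination described above may be sketched in Python, treating each route as an ordered list of street segments; the overlap fraction is an illustrative assumption:

    def similar_routes(route_a, route_b, overlap_fraction=0.5):
        # Call the routes similar when enough of the shorter route's segments
        # also appear in the other route.
        shared = set(route_a) & set(route_b)
        shorter = min(len(route_a), len(route_b)) or 1
        return len(shared) / shorter >= overlap_fraction

    # Streets A and B are common to both routes, so the routes are similar.
    print(similar_routes(["A", "B", "C", "D"], ["A", "B", "X"]))  # True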
FIG.19illustrates a schematic of an example environment including a geographical area1900in which autonomous ridesharing vehicle130F and user-driven vehicle1902(not shown to scale for illustrative purposes) are dispatched and under control of ridesharing management system100. In the illustrated embodiment, processor310has received ride requests from the plurality of users130A-C via communications devices120A-C. In the illustrated example, user130A has requested a ride from the starting point at house1904to desired destination at school1906. Further, user130B has requested a ride from the starting point at house1908to desired destination at library1910. Similarly, user130C has requested a ride from the starting point at house1912to desired destination at park1914. In the illustrated example, ridesharing management system100has determined a first route1920resulting in a first set of arrival times for users130A-C. For example, first route1920may result in user130A arriving at school1906at 9:00 am, user130B arriving at library1910at 9:05 am, and user130C arriving at park1914at 9:10 am. Similarly, ridesharing management system100has determined a second route1922resulting in a second set of arrival times for users130A-B. For example, second route1922may result in user130A arriving at school1906at 8:55 am, user130B arriving at library1910at 9:00 am, and user130C being picked up by another ridesharing vehicle. In this embodiment, the second set of arrival times may be earlier than the first set of arrival times because user130A arrives at 8:55 am when route1922is taken but at 9:00 am when route1920is taken. Similarly, the second set of arrival times may be earlier than the first set of arrival times because user130B arrives at 9:00 am when route1922is taken but at 9:05 am when route1920is taken. In some embodiments, processor310may selectively direct vehicle1902along the first route1920and/or the second route1922based on the capacity status of the vehicle1902. For example, vehicle1902in the illustrated embodiment includes driver130D having mobile communications device120E mounted in vehicle1902. A passenger1924is being transported by vehicle1902. However, seats1926and1928, as well as a non-illustrated middle seat in the back, remain unoccupied. Therefore, vehicle1902may be determined to be below a capacity threshold for the vehicle, which may be set at 4 passengers. Accordingly, vehicle1902may be determined to have availability to pick up 3 passengers to fill the remaining seats in vehicle1902. Thus, in one embodiment, vehicle1902may be directed along the first route1920to pick up passengers130A-C. In another embodiment, however, vehicle1902may be directed along second route1922. For example, passenger1924may have a large bag or suitcase taking up one seat in vehicle1902such that only two additional passengers can be transported simultaneously with passenger1924and his belongings. In such an embodiment, vehicle1902may be directed along second route1922to pick up passengers130A and130B, but not passenger130C. As another example, vehicle1902may be directed along second route1922when the capacity status of vehicle1902is below the capacity threshold but at least one of the users130A-C booked an expedited ride (e.g., by paying a higher price for the ride). For example, user130A may book an expedited ride to school1906such that vehicle1902is directed along second route1922to get user130A to school1906more quickly than if first route1920was taken.
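By way of illustration only, comparing two sets of predicted arrival times, as in the FIG.19example above, may be sketched in Python; the pairwise definition of an "earlier" set is an illustrative assumption:

    def earlier_set(times_a, times_b):
        # Times are minutes past a common reference; set A is earlier when no
        # rider arrives later and at least one rider arrives strictly earlier.
        pairs = list(zip(sorted(times_a), sorted(times_b)))
        return (all(a <= b for a, b in pairs)
                and any(a < b for a, b in pairs))

    # Route1922 vs. route1920 arrival times for users130A-B (minutes past 8:00).
    print(earlier_set([55, 60], [60, 65]))  # True: 8:55/9:00 beats 9:00/9:05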
In another embodiment, vehicle1902may be selectively directed or redirected along the first route1920and the second route1922based on feedback received from one or more tracking systems. For example, in one embodiment, ridesharing management system100may receive real time traffic data (e.g., data from governmental traffic tracking cameras) updated every 30 seconds, every minute, every 5 minutes, every 10 minutes, etc., as traffic changes. In one embodiment, vehicle1902may be directed along the first route1920when the capacity threshold is met but traffic congestion is identified along the second route1922. In some embodiments, the traffic congestion may be atypical for the given day, time, and/or route. In another embodiment, vehicle1902may be directed along the second route1922based on real time traffic data. For example, vehicle1902may be directed to the second route when the capacity status of vehicle1902is below the capacity threshold and traffic congestion is identified along first route1920. In some embodiments, the traffic congestion along first route1920may be atypical for the given day, time, and/or route. In some embodiments, ridesharing management system100may receive inputs from mobile communications devices associated with users130A-C, vehicle130F, and/or vehicle1902throughout the continuous operation of the ridesharing dispatch system. These inputs may be used to direct one or more of the ridesharing vehicles in the system to different routes, to pick up different passengers, etc. For example, in one embodiment, processor310is configured to continuously receive location information from the mobile communications device120E associated with the ridesharing vehicle1902to estimate arrival time of vehicle1902at the pick-up location1904of one of the plurality of users130A and to reassign a different ridesharing vehicle (e.g.,130F) when the processor310predicts a delay in arrival of the ridesharing vehicle1902at the pick-up location1904. For example, a delay may be expected because vehicle1902needs to stop for fuel, has encountered traffic, got into an accident, experienced mechanical malfunctions, got pulled over by the police, made a wrong turn, etc. In another embodiment, the processor310is configured to continuously receive location information from mobile communications device120A associated with one of the plurality of users130A, to estimate arrival time at the pick-up location1904of the user130A and to reassign a different ridesharing vehicle (e.g.,130F) when the processor310estimates that user130A will arrive after the ridesharing vehicle1902. For example, if user130A requests a ride while visiting a neighbor, mobile communications device120A may reflect that user130A is not at pick-up location1904. Processor310may further compute that it will take user130A more time to reach house1904than vehicle1902given the current locations of user130A and vehicle1902. Therefore, processor310may send a different vehicle, such as autonomous vehicle130F, to pick up user130A. In another embodiment, the processor310is configured to continuously receive location information from a communications device (e.g., mobile communications device120E) associated with the ridesharing vehicle1902, to estimate arrival time at the pick-up location1904and to reassign a different ridesharing vehicle (e.g.,130F) when the processor estimates that the ridesharing vehicle1902will arrive at the pick-up location1904later than a time originally estimated.
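By way of illustration only, the delay-triggered reassignment described above may be sketched in Python; the slack tolerance and the eta_fn callback are illustrative assumptions standing in for the arrival-time estimation discussed earlier:

    def maybe_reassign(vehicle_eta_min, promised_eta_min, idle_vehicles,
                       pickup, eta_fn, slack_min=3.0):
        # If the assigned vehicle is predicted to arrive later than promised
        # (plus slack), hand the pick-up to the idle vehicle with the best ETA.
        if vehicle_eta_min <= promised_eta_min + slack_min or not idle_vehicles:
            return None  # keep the current assignment
        return min(idle_vehicles, key=lambda v: eta_fn(v, pickup))

    etas = {"130F": 4.0, "1618": 7.0}
    replacement = maybe_reassign(18.0, 10.0, ["130F", "1618"], "house1904",
                                 lambda v, p: etas[v])
    print(replacement)  # "130F"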
In some embodiments, processor310may be configured to track one or more variables that impact the capacity status of the ridesharing vehicle(s) and change the capacity status and/or capacity threshold accordingly. For example, in one embodiment, processor310may track the luggage of one or more passengers that can have an effect on the capacity status of the ridesharing vehicle and change the capacity status and/or capacity threshold accordingly. Luggage of one or more passengers may be tracked in any suitable manner, such as user input, sensor input, etc. For example, in one embodiment, the passenger and/or driver may input that he/she has a suitcase, bicycle, musical instrument, or other space-consuming item to transport. As a further example, in another embodiment, processor310is further configured to track a passenger's physical condition that can have an effect on the capacity status of the ridesharing vehicle and to change the capacity status and/or capacity threshold accordingly. For instance, the passenger, driver, and/or one or more sensors may indicate that the passenger has a wheelchair, a baby, or any other item that takes up space, or that the passenger is of a size that will take up more than one seat. In some embodiments, vehicle1902may be directed or redirected along the first route1920when the capacity threshold is met but an additional user is assigned to the ridesharing vehicle1902with a pick-up location along the first route1920. For example, in one embodiment, the capacity threshold for vehicle1902may be set to 3 adult passengers so that no adult passenger is assigned to sit in the middle back seat. However, the capacity threshold may be overridden if a user (e.g., user130A) has a pick-up location (e.g., house1904) along the first route1920. FIG.20illustrates a flow chart of an exemplary method2000for dispatching at least one ridesharing vehicle, in accordance with some embodiments of the present disclosure. For exemplary purposes only, method2000for dispatching at least one ridesharing vehicle is described with respect to processing device310cooperating with memory320to execute modules1801-1804. In accordance with method2000, processor310may receive ride requests from a plurality of users, as described in detail above, at block2002. Further, processor310may receive location information from one or more communications devices associated with a ridesharing vehicle at block2004. These inputs may be collected as discussed above with respect to input data collection module1801. Capacity status determination module1802may determine a capacity status of the ridesharing vehicle available to pick up the plurality of users at block2006. For example, processor310may receive an input regarding the known passenger capacity of the ridesharing vehicle (e.g., the vehicle is an extended SUV that seats 6 passengers, a four-door car that seats 4 passengers, etc.). Processor310may then receive an input regarding the current number of passengers in the ridesharing vehicle. Based on these inputs, processor310may determine the capacity status of the vehicle. The capacity status may be expressed in terms of absolute numbers (e.g., the vehicle can accommodate 2 passengers), a binary representation (e.g., availability or no availability), or any other suitable representation. Vehicle routing module1803may send the ridesharing vehicle to pick up the plurality of users at block2008.
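As a concrete illustration of the capacity status determination at block2006, the following Python sketch computes both the absolute and the binary representations while accounting for seat-consuming items such as luggage or a wheelchair; the function and field names are assumptions introduced for this example.

def capacity_status(total_seats: int, passengers: int, seat_items: int) -> dict:
    # Seat-consuming items (luggage, wheelchairs, etc.) reduce capacity just
    # as additional passengers would.
    free = max(total_seats - passengers - seat_items, 0)
    return {
        "free_seats": free,      # absolute representation
        "available": free > 0,   # binary representation
    }

# A 4-seat car carrying 1 passenger and 1 large suitcase can take 2 more riders.
print(capacity_status(4, 1, 1))  # {'free_seats': 2, 'available': True}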
Processor310may then query whether the capacity status of the ridesharing vehicle is below the capacity threshold at block2010. Based on the result of the query at block2010, processor310may direct the ridesharing vehicle along a selected route. For example, in the illustrated embodiment, if the capacity status of the ridesharing vehicle is not below the capacity threshold, the ridesharing vehicle is directed along the second route1922at block2012. However, if the capacity status of the ridesharing vehicle is below the capacity threshold, the ridesharing vehicle is directed along the first route1920at block2014. In this way, the ridesharing vehicle may be directed to a route to pick up additional passengers when the capacity status indicates room in the vehicle for additional passengers, or to an alternate route to drop off the existing passengers when the capacity status indicates a lack of room for additional passengers. Further, method2000includes a query as to whether a rerouting condition has been met at block2016. The rerouting condition may be any factor that renders the current route of the ridesharing vehicle more or less desirable at a given point in time. For example, the rerouting condition may be a traffic report, identified construction, a road hazard, a weather alert, imminent expected demand in a nearby area, a user request for an expedited ride, a driver request to stop driving, etc. If the rerouting condition is met, the ridesharing vehicle may be redirected to an alternate route at block2020. However, if the rerouting condition is not met, processor310may not make any change to the directed route at block2018. For instance, in one embodiment, vehicle1902may be directed along first route1920at block2014. However, after the ride has begun, imminent demand may be predicted near library1910due to a reading group ending soon. Accordingly, vehicle1902may be redirected along an alternate route that includes library1910to meet the expected imminent demand.

Purposefully Selecting Longer Routes to Improve User Satisfaction

In the context of ridesharing, the quality of the passengers' user experience is typically not simply a function of the arrival time, in part because there are other available transportation alternatives that may be faster than ridesharing. Instead, a complicated combination of factors other than the fastest time of arrival may affect the satisfaction of a typical ridesharing passenger. In one example, as discussed above with reference toFIGS.6-8, automated ridesharing dispatch system300may avoid using the full capacity of its vehicles in regular operation. This enhances the experience of users who might otherwise feel cramped in vehicles at or near capacity. In another example, as discussed in greater detail below, ridesharing management system100may avoid directing the ridesharing vehicle to a fastest route when the fastest route would violate a principle associated with expected user satisfaction. Specifically, certain navigation maneuvers (e.g., backtracking, U-turns, traversing certain roads) can negatively impact the user experience, even if resulting in a faster arrival time. Therefore, as described below, ridesharing management system100may determine a driving route longer than alternative driving routes, but nevertheless a route that is substantially devoid of navigation maneuvers that negatively impact the user experience. FIG.21Aillustrates an exemplary embodiment of a memory2100containing software modules consistent with the present disclosure.
In particular, as shown, memory2100may include a data capture module2102, a traffic module2104, a vehicle selection module2106, a route determination module2108, a transmission module2110, a database access module2112, and a database2114. Modules2102,2104,2106,2108,2110, and2112may contain software instructions for execution by at least one processing device, e.g., processor310, included with automated ridesharing dispatch system300. Data capture module2102, traffic module2104, vehicle selection module2106, route determination module2108, transmission module2110, database access module2112, and database2114may cooperate to perform multiple operations. For example, data capture module2102may receive ride requests from a plurality of users and receive indications of current locations of the plurality of ridesharing vehicles. Traffic module2104may receive real time traffic data and enable estimation of the durations of alternative driving routes. Vehicle selection module2106may select a ridesharing vehicle to pick up the plurality of users. Route determination module2108may determine a route for the selected ridesharing vehicle. Transmission module2110may use a communications interface for sending information to the plurality of users about the pick-up location, and for sending driving directions to the selected ridesharing vehicle based on the determined route. Database access module2112may interact with database2114, which may store a plurality of rules for determining the driving route and any other information associated with the functions of modules2102-2112. The plurality of rules may include a rule to select a fastest route for guiding a ridesharing vehicle and a rule for reducing backtracking even in instances where backtracking would result in shorter travel time. In some embodiments, memory2100may be included in, for example, memory320. Alternatively or additionally, memory2100may be stored in an external database170(which may also be internal to ridesharing management server150) or external storage communicatively coupled with ridesharing management server150, such as one or more databases or memories accessible over network140. Further, in other embodiments, the components of memory2100may be distributed in more than one server. In some embodiments, data capture module2102may receive ride requests from a plurality of users, and each ride request may include a starting point and a desired destination. A starting point may refer to a current location of the user, as input by each user through an input device of an associated user device, or as determined by a location service application installed on the user device. A desired destination may refer to a location where the user desires to be taken to, for example, an entrance of a shopping center. In some embodiments, data capture module2102may also receive from a plurality of communication devices associated with a plurality of ridesharing vehicles indications of current locations of the plurality of ridesharing vehicles. The current location of the plurality of ridesharing vehicles may be determined by a location service application installed on a driver device, a driving-control device, or by a location determination component in the ridesharing management system100, which may be a part of or separate from ridesharing management server150.
For example, the indications of current locations of the plurality of ridesharing vehicles may include global positioning system (GPS) data generated by at least one GPS component associated with each ridesharing vehicle. In some embodiments, traffic module2104may include instructions configured to receive historical and/or real time traffic data, including information about at least one of street blockages and atypical congestion. Traffic data may include real-time traffic data regarding a certain geographical region, and may be used to, for example, calculate estimated times of arrival for pick-up locations. The traffic data may also be used for determining the driving route for a particular ride. Real-time traffic data may be received from a real-time traffic monitoring system, which may be integrated in or independent from ridesharing management system100. In one embodiment, traffic module2104may determine the real time traffic data from information received from the plurality of ridesharing vehicles. In some embodiments, traffic module2104may also identify an existence of an area of traffic obstruction in a vicinity of the driving route. Traffic obstructions may include scheduled events (e.g., a parade, an infrastructure repair, construction work, etc.) and unscheduled events (e.g., a road closure, an accident, a public safety incident, or any related environmental condition, such as a fallen tree or powerline, etc.). In another embodiment, traffic module2104is configured to predict traffic conditions based on historic traffic data records. In some embodiments, vehicle selection module2106may select a ridesharing vehicle to pick up the plurality of users. In other words, vehicle selection module2106may assign the plurality of users to a common ridesharing vehicle. For example, ride service parameters may be transmitted to ridesharing management server150for processing the ride request and selecting an available ridesharing vehicle based on one or more ride service parameters. The ride service parameters may include user preference parameters regarding a vehicle ridesharing service, for example, a maximum walking distance from a starting point to a pick-up location, a maximum walking distance from a drop-off location to a desired destination, a total maximum walking distance involved in a ride, a maximum number of subsequent pick-ups, a maximum delay of arrival/detour incurred by subsequent pick-ups during a ride, and a selection whether to permit toll road usage during the ride, etc. Ridesharing management server150may further be configured to receive user input from user devices (e.g., user devices120A-120C) as to various ride service parameters and may select a ridesharing vehicle to pick up the user accordingly, as illustrated in the sketch below. In some embodiments, route determination module2108may determine a route for the selected ridesharing vehicle. The determined route may include a plurality of pick-up and drop-off locations associated with the starting points and desired destinations of the plurality of users. Consistent with the present disclosure, determining the driving route may include selecting pick-up locations and drop-off locations for each of the plurality of users (commonly referred to as “virtual bus stops”), and determining the path between the virtual bus stops. The determined route may pass between all the determined pick-up points.
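A minimal Python sketch of checking candidate vehicle assignments against such ride service parameters follows; the RideParams and Candidate structures and their field names are hypothetical, introduced only for illustration.

from dataclasses import dataclass

@dataclass
class RideParams:
    max_walk_to_pickup_m: int   # maximum walk from starting point to pick-up
    max_detours: int            # maximum subsequent pick-ups tolerated
    allow_toll_roads: bool

@dataclass
class Candidate:
    walk_to_pickup_m: int
    planned_detours: int
    uses_toll_road: bool

def acceptable(c: Candidate, p: RideParams) -> bool:
    # A candidate assignment is acceptable only if it satisfies every
    # user preference parameter.
    return (c.walk_to_pickup_m <= p.max_walk_to_pickup_m
            and c.planned_detours <= p.max_detours
            and (p.allow_toll_roads or not c.uses_toll_road))

params = RideParams(max_walk_to_pickup_m=300, max_detours=2, allow_toll_roads=False)
print(acceptable(Candidate(250, 1, False), params))  # True
print(acceptable(Candidate(250, 1, True), params))   # False: toll roads not permitted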
When selecting a virtual bus stop, route determination module2108may confirm that the pick-up location is within a maximum walking distance (e.g., 300 meters or less) from the starting point, and that the drop-off location is within a maximum walking distance (e.g., 500 meters or less) of a desired destination. The virtual bus stops for the plurality of users and the driving route may be determined to minimize at least one of: a time duration each user spends in the ridesharing vehicle, a time duration each user spends waiting in the pick-up location, a distance each user needs to walk from the starting point to the pick-up location, a distance each user needs to walk from the drop-off location to the desired destination, and the number of empty seats in the ridesharing vehicle. When determining the path between two virtual bus stops, route determination module2108may determine a route for the ridesharing vehicle other than the fastest route. Specifically, route determination module2108may determine a reduced-backtracking route. The term “reduced-backtracking route” means a route in which nonessential deviations from a trajectory of an average direction of the passengers' desired destinations are minimized. Although an absolute non-backtracking route may yield the most user satisfaction, in some cases (e.g., at an exit from a highway, at a specific road formation, due to certain traffic rules and/or conditions, etc.), selecting an absolute non-backtracking route may not be a feasible option. The reduced-backtracking route is a route in which unnecessary navigation maneuvers (e.g., U-turns, three consecutive left turns, three consecutive right turns, and more) are avoided compared to alternative driving routes. An example of a non-reduced-backtracking route is depicted inFIG.21Band an example of a reduced-backtracking route is depicted inFIG.21C. In some embodiments, although route determination module2108may give more weight to the rule for reducing backtracking than to the rule for the fastest route, in some cases route determination module2108may override the backtracking rule. In some embodiments, transmission module2110may communicate, based on instructions from vehicle selection module2106, with ridesharing management server150to send to the selected ridesharing vehicle, via a communications interface (e.g., communications interface360), driving directions according to the determined route. As discussed above, communications interface360may include a modem, Ethernet card, or any other interface configured to exchange data with a network, such as network140inFIG.1. For example, ridesharing management server150may include software that, when executed by a processor, provides communications with network140through communications interface360to one or more mobile communications devices120A-F. In some embodiments, transmission module2110may further send to the user, via the communications interface, information that causes a display of walking directions from a starting point to a pick-up location and from a drop-off location to a desired destination. Transmission module2110may further send (e.g., via a communications interface) messages to the passengers of a ridesharing vehicle when a route other than the reduced-backtracking route has been selected. The messages may appear in different formats, for example, a text message, an audio message, or a graphical image, which may include text.
The messages may specify how much time each passenger is estimated to save by selecting the driving route other than the reduced-backtracking route. In some embodiments, database access module2112may cooperate with database2114to retrieve a plurality of rules for determining the driving route, map information, traffic data, environmental condition data, and/or any associated stored data or metadata. For example, database access module2112may send a database query to database2114which may be associated with database170. Database2114may be configured to store any type of information of use to modules2102-2112, depending on implementation-specific considerations. For example, route determination module2108may be configured to determine a route for the ridesharing vehicle using a plurality of rules stored in database2114. Route determination module2108may further be configured to determine the driving route for the ridesharing vehicle using data stored in database2114. The stored data may be prior-collected information. Prior-collected information may include ride request information from users and indications of locations of a plurality of ridesharing vehicles received from data capture module2102. Prior-collected information may also include received real time traffic data and information providing a description of the nature, time, and/or date of any traffic conditions and/or environmental conditions received from traffic module2104. In some embodiments, database2114may include separate databases, including, for example, a vector database, raster database, tile database, viewport database, and/or a user input database, configured to store data. The data stored in database2114may be received from modules2102-2112, ridesharing management server150, from user devices120A-F and/or may be provided as input using data entry, data transfer, or data uploading. The data stored in the database2114may represent multiple data forms including, for example, general mapping and geographic information, latitude and longitude (Lat/Lon) values, world coordinates, tile coordinates, pixel coordinates, Mercator and/or other map projection data, user identifier data, driver identifier data, vehicle identifier data, device type data, viewport data, device orientation data, user input data, geographical scale data, and a variety of other electronic data. Database2114may also include, for example, street, city, state, and country data including landmark identifiers and other related information. Database2114may also include search logs, cookies, web pages, and/or social network content, etc. Modules2102-2112may be implemented in software, hardware, firmware, a mix of any of those, or the like. For example, if the modules are implemented in software, the modules may be stored in a server (e.g., ridesharing management server150) or distributed over a plurality of servers. In some embodiments, any one or more of modules2102-2112and data associated with database2114may, for example, be stored in processor310and/or located on ridesharing management server150, which may include one or more processing devices. Processing devices of ridesharing management server150may be configured to execute the instructions of modules2102-2112. In some embodiments, aspects of modules2102-2112may include software, hardware, or firmware instructions (or a combination thereof) executable by one or more processors, alone or in various combinations with each other.
For example, modules2102-2112may be configured to interact with each other and/or other modules of server150and/or ridesharing management system100to perform functions consistent with disclosed embodiments. FIG.21BandFIG.21Care schematic illustrations of an example map including different driving route alternatives for a ridesharing vehicle2150, according to disclosed embodiments. The map includes a pick-up location2156for a passenger with a starting point2158and a drop-off location2152for a passenger with a desired destination2154. Drop-off location2152is not necessarily for the same passenger that was picked up at pick-up location2156. The map inFIG.21Bincludes a route2160that, based on traffic conditions, is the fastest route, while the map inFIG.21Cincludes a route2162that, based on traffic conditions, would take more time than route2160but consistent with the present disclosure is considered a reduced-backtracking route. In some embodiments, ridesharing management system100is configured to identify multiple alternative route segments from pick-up location2156to drop-off location2152. Thereafter, ridesharing management system100is configured to use real-time traffic data to calculate a time estimation for each of the alternative route segments. In the example illustrated inFIGS.21B and21C, route2160is longer (distance-wise) than route2162, but ridesharing management system100estimates that it would take 5 minutes to drive the route segment from drop-off location2152to pick-up location2156using route2160, and 8 minutes to drive the route segment from drop-off location2152to pick-up location2156using route2162. The reason route2160is estimated to be faster than route2162is traffic congestion determined from, for example, traffic data received by ridesharing management system100. However, consistent with the present disclosure, route2162is considered a reduced-backtracking route compared to route2160because it does not have three consecutive left turns. In one embodiment, ridesharing management system100may be configured to select the appropriate route for ridesharing vehicle2150based on a plurality of parameters. In one example, when ridesharing vehicle2150is carrying five passengers, ridesharing management system100is configured to direct ridesharing vehicle2150via route2162because route2160has three consecutive left turns and may negatively impact the user experience. In another example, when ridesharing vehicle2150is carrying only the passenger picked up at pick-up location2156, ridesharing management system100is configured to direct ridesharing vehicle2150via route2160, which is faster. With reference to the example described above,FIG.22depicts a flowchart of an example process2200used by ridesharing management system100to select between the different route alternatives. Process2200begins when ridesharing management system100receives ride requests from a plurality of users (block2202), receives GPS locations of the plurality of ridesharing vehicles (block2204), and receives real time traffic data (block2206). Thereafter, and based on the received data, ridesharing management system100may identify a plurality of routes associated with different travel-times (block2207), and assign the plurality of users to ridesharing vehicle2150. In this example, ridesharing management system100has identified only two relevant routes: the fastest route (e.g., route2160) and the reduced-backtracking route (e.g., route2162).
However, a person skilled in the art would recognize that ridesharing management system100may identify more than two route alternatives, and that process2200specified below may be modified to enable selection among numerous routes. As mentioned above, the selection of the appropriate route for ridesharing vehicle2150is based on different parameters. Process2200continues when ridesharing management system100determines whether the number of passengers currently riding ridesharing vehicle2150is less than a certain threshold, e.g., two (decision block2208). In some embodiments, in the context of process2200, a group of passengers riding to the same drop-off location is considered a single passenger. When fewer than, e.g., two passengers are riding ridesharing vehicle2150, ridesharing management system100may direct ridesharing vehicle2150along the fastest route (block2218). On the other hand, when two or more passengers are riding ridesharing vehicle2150, ridesharing management system100may determine whether an expected travel-time change is greater than a backtracking threshold. In other words, ridesharing management system100may be configured to estimate how much faster the fastest route is. The backtracking threshold may be predetermined (e.g., 10 minutes) or dynamic (e.g., 5 minutes in rush hour and 10 minutes in regular hours). For example, at 3:00 am when traffic is typically light, passengers may not mind that the ridesharing vehicle backtracks, and the backtracking threshold may be lower (e.g., two minutes). When the expected travel-time change is greater than the backtracking threshold, ridesharing management system100may direct ridesharing vehicle2150along the fastest route (block2218). In the example above, when ridesharing vehicle2150carried five passengers, route2160was estimated to be 3 minutes faster than route2162, and ridesharing management system100determined to use the reduced-backtracking route. But if, for example, due to road construction, route2160was estimated to be 24 minutes faster than route2162, ridesharing management system100may determine to use the faster route. Process2200continues when ridesharing management system100determines whether there is an indication of imminent high demand (decision block2212). In one embodiment, the imminent high demand for ridesharing vehicles may be the result of an inclement weather condition. When there is an indication of imminent high demand, ridesharing management system100may direct ridesharing vehicle2150along the fastest route (block2218). On the other hand, when there is no indication of imminent high demand, ridesharing management system100may determine whether there is a report or other indication of an event that would affect the traffic on the reduced-backtracking route. In one embodiment, the event may be a scheduled event; for example, when the reduced-backtracking route passes next to a school, ridesharing management system100may direct ridesharing vehicle2150along the fastest route near the end of the school day. In another embodiment, the event may be an unscheduled event; for example, when the reduced-backtracking route passes next to a building that is on fire, ridesharing management system100may direct ridesharing vehicle2150along the fastest route.
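The cascade of decision blocks2208-2218may be expressed compactly in code. The following Python sketch is illustrative only; the function name select_route and its parameters are assumptions, and a deployed system would derive the inputs from live ride, traffic, and event data.

def select_route(passenger_groups: int, fastest_saving_min: float,
                 backtracking_threshold_min: float,
                 imminent_high_demand: bool, event_on_reduced_route: bool) -> str:
    # Block 2208: with fewer than two passenger groups, comfort matters less.
    if passenger_groups < 2:
        return "fastest"
    # Block 2210: large enough time savings justify backtracking.
    if fastest_saving_min > backtracking_threshold_min:
        return "fastest"
    # Block 2212: imminent high demand favors freeing the vehicle quickly.
    if imminent_high_demand:
        return "fastest"
    # Event report (e.g., school letting out, building fire) affecting
    # traffic on the reduced-backtracking route.
    if event_on_reduced_route:
        return "fastest"
    return "reduced-backtracking"

# Five passengers, the fastest route saves only 3 minutes against a
# 10-minute threshold, and no demand or event indications.
print(select_route(5, 3, 10, False, False))  # 'reduced-backtracking'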
Accordingly, when ridesharing management system100determines that the number of passengers currently riding ridesharing vehicle2150is two or more, that the expected travel-time change is less than the backtracking threshold, that there is no indication of imminent high demand, and that there is no report or other indication of an event that would affect the traffic on the reduced-backtracking route, the system may direct ridesharing vehicle2150along the reduced-backtracking route. Reference is now made toFIG.23, which depicts an exemplary method2300for managing a fleet of ridesharing vehicles consistent with the present disclosure. In one embodiment, the steps of method2300may be performed by automated ridesharing dispatch system300. In the following description, reference is made to certain components of ridesharing management server150for purposes of illustration. It will be appreciated, however, that other implementations are possible and that other components may be utilized to implement the exemplary method. It will be readily appreciated that the illustrated method can be altered to modify the order of steps, delete steps, or further include additional steps. At step2302, a communications interface (e.g., communications interface360) may receive ride requests from a plurality of users. Consistent with the present disclosure, each ride request may include a starting point and a desired destination. As mentioned above, the starting point may refer to a current location of the user, as input by the user through an associated user device, or as determined by a location service application installed on the associated user device. In some embodiments, the starting point may be a location different from the current location of the user, for example, a location where the user will subsequently arrive (e.g., an entrance of a building). A desired destination may refer to a location where the user requests to be taken to. In another embodiment, each ride request may include additional information, for example, information identifying the user, a selection of a type of ridesharing service, an indication of a maximum walking distance, etc. At step2304, the communications interface may receive from a plurality of communication devices associated with a plurality of ridesharing vehicles, indications of current locations of the plurality of ridesharing vehicles. Consistent with the present disclosure, the indications of the current locations of the plurality of ridesharing vehicles may include global positioning system (GPS) data generated by at least one GPS component associated with each ridesharing vehicle. In one example, the plurality of communication devices may include mobile devices such as tablets or smartphones that belong to the drivers of the ridesharing vehicles. In another example, the plurality of ridesharing vehicles includes multiple smart vehicles and each communication device may be a component of a smart vehicle. The term “smart vehicles” refers to vehicles (autonomously and/or manually-driven) with computing resources, location determination components, and communication devices. A smart vehicle may communicate with ridesharing management system100independently of the driver.
At step2306, a processing device (e.g., processor310) may access memory (e.g., database170) configured to store a plurality of rules including a rule to select a fastest route for guiding a ridesharing vehicle and a rule for reducing backtracking, even in instances where backtracking would result in a shorter travel time. In addition to the fastest route rule and the reduced-backtracking rule, the memory may further be configured to store additional rules for determining the driving route for the ridesharing vehicle, for example, a rule to avoid specific roads, a rule to minimize a time duration in which each assigned user spends in the ridesharing vehicle, a rule to minimize a time duration in which each assigned user spends waiting, a rule to minimize a distance each assigned user needs to walk from the starting point to the pick-up location, a rule to minimize a distance each assigned user needs to walk from the drop-off location to the desired destination, and a rule to minimize the number of empty seats in the ridesharing vehicle. At step2308, the processing device may assign the plurality of users to a common ridesharing vehicle (e.g., ridesharing vehicle2150). Consistent with the present disclosure, the processing device is further configured to assign a user to the ridesharing vehicle based on at least some of the following parameters: a location of the ridesharing vehicle, a driving direction of the ridesharing vehicle, a driving route of the ridesharing vehicle, a number of passengers riding the ridesharing vehicle, a number of users assigned to the ridesharing vehicle and scheduled to be picked up, the virtual bus stops at which the ridesharing vehicle is scheduled to stop, the desired destinations of all the users assigned to the ridesharing vehicle, real time traffic data, the user's starting point, the user's desired destination, the user's personal preferences, and the type of service the user selected. At step2310, the processing device may use the stored plurality of rules to determine a route for the ridesharing vehicle other than the fastest route. The determined route may include a plurality of pick-up and drop-off locations associated with the starting points and desired destinations of the plurality of users. Consistent with the present disclosure, the processing device may select the determined route to account for the rule for reducing backtracking. In one embodiment, applying the rule for reducing backtracking means routing the ridesharing vehicle in a manner avoiding a trajectory opposite to an average direction of the plurality of users' desired destinations. For example, in some instances, the system may avoid a trajectory of 120-180 degrees away from the user's desired destination. In another embodiment, applying the rule for reducing backtracking means routing the ridesharing vehicle in a manner avoiding three consecutive left turns. In another embodiment, applying the rule for reducing backtracking means routing the ridesharing vehicle in a manner avoiding three consecutive right turns. In another embodiment, applying the rule for reducing backtracking means routing the ridesharing vehicle in a manner reducing U-turns. In another embodiment, applying the rule for reducing backtracking means routing the ridesharing vehicle in a manner reducing unnecessary navigation maneuvers. In another embodiment, applying the rule for reducing backtracking may include routing the ridesharing vehicle in a manner that takes into consideration a combination of duration and distance.
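A candidate route can be screened against these backtracking criteria mechanically. The following Python sketch is one possible formulation under assumed conventions: turns are encoded as 'L', 'R', 'U', or 'S' (straight), and bearings are compass degrees; none of these names come from the disclosure.

def violates_backtracking_rule(turns: list, heading_deg: float,
                               avg_destination_bearing_deg: float) -> bool:
    s = "".join(turns)
    # Maneuver criteria: three consecutive left or right turns, or any U-turn.
    if "LLL" in s or "RRR" in s or "U" in s:
        return True
    # Trajectory criterion: heading 120-180 degrees away from the average
    # direction of the passengers' desired destinations.
    diff = abs((heading_deg - avg_destination_bearing_deg + 180) % 360 - 180)
    return diff >= 120

print(violates_backtracking_rule(["L", "L", "L"], 90, 90))   # True: three left turns
print(violates_backtracking_rule(["L", "S", "R"], 250, 90))  # True: 160 degrees off course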
In some embodiments, the processing device may receive real time traffic data and may calculate an expected travel-time change associated with users currently riding in the ridesharing vehicle when the ridesharing vehicle is directed along a route with backtracking as compared to a route with reduced backtracking. In other words, the processing device may estimate how much time the passengers will save if the ridesharing vehicle takes the driving route with backtracking. In one embodiment, the expected travel-time change may be calculated separately for each of the users currently riding the ridesharing vehicle. Alternatively, the expected travel-time change may be calculated collectively for the users currently riding the ridesharing vehicle. Consistent with the present disclosure, the processing device may be further configured to override the backtracking rule when the received traffic data is indicative of at least one of street blockages and atypical congestion. For example, the traffic data may be indicative of a road closure, a parade, an accident, a public safety incident, and an infrastructure repair. Additionally, the processing device may be further configured to override the backtracking rule in response to a received indication of imminent high demand for rides. Additionally, the processing device may be further configured to override the backtracking rule when an expected travel-time change is higher than a backtracking threshold. As described above, the value of the backtracking threshold may be dynamic and may be determined based on at least one of a time of day and a type of users currently riding the ridesharing vehicle (e.g., regular users, VIP users, students, and more). At step2312, the processing device may direct the ridesharing vehicle along the determined route other than the fastest route in order to reduce backtracking. In the example above, ridesharing vehicle2150may be directed via route2162and not route2160. However, as discussed above, consistent with disclosed embodiments the processing device may be further configured to override the backtracking rule. In one scenario, the processing device may determine an updated route along which to direct the ridesharing vehicle, and change at least one drop-off location of the plurality of users after determining the updated route. In a similar scenario, the processing device may be further configured to override the backtracking rule, determine an updated route along which to direct the ridesharing vehicle, and reassign a user scheduled to be picked up by the ridesharing vehicle to another ridesharing vehicle. Typically, these scenarios happen when one of the parameters described above has changed. For example, with reference toFIGS.21B and21C, the following scenario may occur: the processing device may receive traffic information that an accident occurred on a particular street and that it would take 18 minutes to drive from drop-off location2152to pick-up location2156using route2162(instead of 8 minutes). Accordingly, the processing device may override the backtracking rule and either change at least one drop-off location of the plurality of users after determining the updated route or reassign a user scheduled to be picked up by the ridesharing vehicle to another ridesharing vehicle. In some embodiments, the processing device may receive at least one additional ride request from at least one additional user and change the determined route to pick up the at least one additional user.
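One way to realize a dynamic backtracking threshold is sketched below in Python. The specific hour windows and minute values are illustrative assumptions, not values taken from the disclosure; rider type (e.g., VIP users) could scale the result further.

def backtracking_threshold_min(hour: int) -> float:
    # Late night (e.g., 3:00 am): traffic is light and riders may not mind
    # backtracking, so even small savings justify the fastest route.
    if 0 <= hour < 5:
        return 2.0
    # Rush hour: a tighter threshold than regular hours.
    if 7 <= hour <= 9 or 16 <= hour <= 19:
        return 5.0
    return 10.0  # regular hours

print(backtracking_threshold_min(3), backtracking_threshold_min(8))  # 2.0 5.0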
Route Planning Based on Environmental Conditions

In some embodiments, ridesharing management server150may receive a ride request from, for example, user130A sent through user device120A. The ride request may include a starting point and a desired destination of the user. When processing such ride requests in order to timely navigate the user from the starting point to the desired destination, the system may need to take into account traffic or other environmental conditions. For example, existing systems may not be equipped to determine an optimum route that avoids traffic or environmental conditions. Moreover, although existing systems may provide a user a choice of different route options, they may nevertheless fail, given the increasing frequency of environmental disturbances, to provide real-time route planning based on traffic or environmental conditions. Presently disclosed embodiments, on the other hand, address this technical problem by providing route planning based on received user ride requests and detected or anticipated environmental conditions. For example, in one embodiment, ridesharing management server150may receive real time traffic data, including information about at least one of street blockages and atypical congestion, from a plurality of driver devices (e.g., driver devices120D and120E) associated with drivers (e.g., drivers130D and130E) operating vehicles. Ridesharing management server150may be configured to send, based on the real time traffic data, ride service assignments (e.g., including pick-up and drop-off location information) to the plurality of driver devices associated with the drivers, and/or a driving-control device (e.g., driving-control device120F) associated with an autonomous vehicle (e.g., autonomous vehicle130F), to substantially avoid the street blockages and atypical congestion. In some embodiments, ridesharing management server150may identify a change in the area of traffic obstruction, based on detected traffic data and related environmental conditions, and may send updated driving directions to a user, a driver device, and/or a driving-control device associated with an autonomous vehicle to substantially avoid the area of traffic obstruction. Ridesharing management server150may determine an alternative pick-up location to accommodate the change in the area of traffic obstruction. Ridesharing management server150may send to the user information about the alternative pick-up location and update the driving directions to accommodate the change in the area of traffic obstruction. In some examples, ridesharing management server150may also predict an area that may have traffic obstruction in the near future, based on traffic data and environmental conditions stored, for example, in database170, and may determine pick-up and drop-off locations and send corresponding walking instructions to one or more users sending ride requests to ridesharing management server150. A user may then follow the walking instructions to move to a pick-up location that avoids the anticipated traffic obstruction. As discussed above, user devices120A-120C, driver devices120D and120E, and driving-control device120F may respectively be installed with a user-side ridesharing application and a corresponding driver-side ridesharing application.
Mobile communications device200may be installed with the user-side ridesharing application, the corresponding driver-side ridesharing application, and/or other software to perform one or more disclosed embodiments, such as on mobile communications devices120A-F. Mobile communications device200may retrieve GPS/navigation instructions268from memory250and may facilitate GPS and navigation-related processes or routes associated with drivers130D and130E in communication with ridesharing management server150. Ridesharing management server150may receive real time traffic data including any traffic obstruction in a vicinity of a user's starting point, select a vehicle-for-hire to pick up the user, identify a pick-up location, send to users130A-C information about the pick-up location, and send to driver devices120D and120E associated with drivers130D and130E, and driving-control device120F, driving directions to the pick-up location, as described in greater detail below. In some embodiments, ridesharing management server150may transmit information to user devices120A-C, which may be, for example, smartphones or tablets having a dedicated application installed therein. A graphical user interface (GUI) including a plurality of user-adjustable settings for the user-side or driver-side ridesharing application may be included on a display of mobile communications devices120A-120C to visibly output information to one or more users130A-C and drivers130D and130E. FIG.24illustrates an exemplary embodiment of a memory2400containing software modules consistent with the present disclosure. In particular, as shown, memory2400may include a data capture module2402, a traffic module2404, a vehicle and pick-up location selection module2406, a transmission module2408, a database access module2410, and a database2412. Modules2402,2404,2406,2408, and2410may contain software instructions for execution by at least one processing device, e.g., processor310, included with automated ridesharing dispatch system300. Data capture module2402, traffic module2404, vehicle and pick-up location selection module2406, transmission module2408, database access module2410, and database2412may cooperate to perform multiple operations. For example, data capture module2402may receive a ride request from a user and receive indications of current locations of the plurality of vehicles-for-hire. Traffic module2404may receive real time traffic data and identify an existence of an area of traffic obstruction. Vehicle and pick-up location selection module2406may select a vehicle-for-hire to pick up the user and identify a pick-up location. Transmission module2408may send to the user, via a communications interface, information about the pick-up location, and may send to the selected vehicle-for-hire, via a communications interface, driving directions to the pick-up location. Database access module2410may interact with database2412which may store any information associated with the functions of modules2402-2408. In some embodiments, memory2400may be included in, for example, memory320storing programs330including, for example, server app(s)332, operating system334, and data340, and a communications interface360discussed above. Alternatively or additionally, memory2400may be stored in an external database170(which may also be internal to ridesharing management server150) or external storage communicatively coupled with ridesharing management server150, such as one or more databases or memories accessible over network140.
Further, in other embodiments, the components of memory2400may be distributed in more than one server. In some embodiments, data capture module2402may receive a ride request from a user, and the ride request may include a starting point and a desired destination. A starting point may refer to a current location of the user, as input by the user through an input device of an associated user device, or as determined by a location service application installed on the user device. In some embodiments, the starting point may be a location different from the current location of the user, for example, a location where the user will subsequently arrive (e.g., an entrance of a building after walking a predetermined distance). A desired destination may refer to a location where the user requests to be taken to, including, for example, a drop-off point located at or near a particular destination point (e.g., an entrance of a different building). In some embodiments, data capture module2402may also receive from a plurality of communication devices associated with a plurality of vehicles-for-hire indications of current locations of the plurality of vehicles-for-hire. The current location of the plurality of vehicles-for-hire may be determined by a location service application installed on a driver device, a driving-control device, or by a location determination component in the ridesharing management system100, which may be a part of or separate from ridesharing management server150. In some embodiments, data capture module2402may also include software instructions for categorizing data obtained by ridesharing management server150(and obtained from other servers and/or user devices120A-C over network140) into a plurality of categories including, for example, ride request route information and vehicle-for-hire locations. Data received may include audio and image data captured by, for example, an image sensor or a microphone associated with vehicles-for-hire and a plurality of user devices. Data received from ridesharing management server150may also include GPS data and/or other user130A-C or driver130D-E device identifiers related to mobile communication devices120A-F. In some embodiments, image data, audio data, GPS data, and user130A-C or driver130D-E data may be preprocessed by data capture module2402. Preprocessing may include, for example, sorting, filtering, and/or automatically storing or categorizing data relating to ride requests, route information, and/or vehicle-for-hire locations in database170. In some embodiments, traffic module2404may include instructions configured to receive historical and/or real time traffic data, including information about at least one of street blockages and atypical congestion. Traffic data may include real-time traffic data regarding a certain geographical region, and may be used to, for example, calculate estimated pick-up and drop-off times, and determine an optimal route for a particular ride. Real-time traffic data may be received from a real-time traffic monitoring system, which may be integrated in or independent from ridesharing management system100. Traffic module2404may determine real time traffic data from information received from the plurality of communication devices associated with the plurality of vehicles-for-hire. In some embodiments, traffic module2404may also identify an existence of an area of traffic obstruction in a vicinity of the user's starting point.
Traffic obstructions may include a road closure, a parade, an accident, a public safety incident, an infrastructure repair, a car accident, construction work, or any related environmental condition, such as a fallen tree or powerline. The area of traffic obstruction may be a region where traffic flow is slower than in an adjacent region. Other types of obstructions contemplated by one of ordinary skill in the art are consistent with the disclosed embodiments. In some embodiments, vehicle and pick-up location selection module2406may select a vehicle-for-hire to pick up the user. For example, ride service parameters may be transmitted to ridesharing management server150for processing the request and selecting an available vehicle-for-hire based on the ride service parameters. Ride service parameters may include user preference parameters regarding a vehicle ridesharing service, for example, a maximum walking distance from a starting point to a pick-up location, a maximum walking distance from a drop-off location to a desired destination, a total maximum walking distance involved in a ride, a maximum number of subsequent pick-ups, maximum delay of arrival/detour incurred by subsequent pick-ups during a ride, and a selection whether to permit toll road usage during the ride, etc. Ridesharing management server150may further be configured to receive user input from user devices (e.g., user devices120A-120C) as to various ride service parameters and may select a vehicle-for-hire to pick up the user, accordingly. In some embodiments, vehicle and pick-up location selection module2406may also identify a pick-up location, which may be remote from the user's starting point, and peripheral to the area of traffic obstruction. Vehicle and pick-up location selection module2406may select the pick-up location such that a path from a current location of the selected vehicle-for-hire to the pick-up location avoids the area of traffic obstruction. For example, a ride request may be associated with a maximum walking distance (e.g., 300 meters) from a starting point to a pick-up location that is remote from the user's starting point, as discussed above. When selecting an available vehicle to pick up the user, vehicle and pick-up location selection module2406may also include in the assignment an assigned pick-up location within the maximum walking distance (e.g., 300 meters or less from the starting point). Similarly, a ride request may be associated with a maximum walking distance (e.g., 500 meters) from a drop-off location to a desired destination. When selecting an available vehicle to pick up the user, vehicle and pick-up location selection module2406may also include in the assignment an assigned drop-off location within the maximum walking distance (e.g., 500 meters or less from the desired destination). For requests associated with a maximum total walking distance relative to both the pick-up location and the drop-off location (e.g., a user is willing to walk up to a combined distance of one kilometer to both reach the pick-up location and to reach a desired destination from the drop-off location), when assigning an available vehicle to pick up the user, vehicle and pick-up location selection module2406may select an assigned pick-up location and an assigned drop-off location accordingly (e.g., the combined distance from the user's starting point to the pick-up location and from the drop-off location to the desired destination is equal to or less than one kilometer). 
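These walking-distance constraints can be checked directly once candidate pick-up and drop-off locations are proposed. A minimal Python sketch follows, assuming pre-computed walking distances in meters; the function name and default limits (300 m, 500 m, and a combined one kilometer) mirror the examples above but are otherwise illustrative.

def assignment_ok(walk_to_pickup_m: float, walk_from_dropoff_m: float,
                  max_pickup_m: float = 300, max_dropoff_m: float = 500,
                  max_total_m: float = 1000) -> bool:
    # Each leg must respect its own limit, and the two legs together must
    # respect the combined maximum total walking distance.
    return (walk_to_pickup_m <= max_pickup_m
            and walk_from_dropoff_m <= max_dropoff_m
            and walk_to_pickup_m + walk_from_dropoff_m <= max_total_m)

print(assignment_ok(250, 450))  # True: each leg and the 700 m total are within limits
print(assignment_ok(250, 800))  # False: the drop-off walk exceeds 500 m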
In some embodiments, transmission module2408may communicate, based on instructions from vehicle and pick-up location selection module2406, with ridesharing management server150to send to the user, via a communications interface (e.g., communications interface360), information about the pick-up location. As discussed above, communications interface360may include a modem, Ethernet card, or any other interface configured to exchange data with a network, such as network140inFIG.1. For example, ridesharing management server150may include software that, when executed by a processor, provides communications with network140through communications interface360to one or more mobile communications devices120A-F. In some embodiments, transmission module2408may also send to the selected vehicle-for-hire, via the communications interface, driving directions to the pick-up location. The transmitted driving directions may substantially avoid an area of traffic or other obstruction or environmental condition. In some embodiments, transmission module2408may further send to the user, via the communications interface, walking directions from the drop-off location to the desired destination. In some embodiments, transmission module2408may also communicate with ridesharing management server150to send, via a communications interface, a first message to user device120A to cause an indication of a calculated estimated pick-up time to appear on a display of user device120A. Transmission module2408may also send a second message to user device120A including walking directions from the drop-off location to the desired destination. Transmission module2408may further send, via a communications interface, a message to the selected vehicle-for-hire including driving directions and an estimated time of travel to the pick-up location. The messages may appear in different formats, for example, a text message including an estimated pick-up time, an audio message, or a graphical image, which may include text. Transmission module2408may also communicate confirmation messages and notifications and/or alerts based on detected changes in real-time traffic data. Transmission module2408may also transmit selected maps for mobile devices120A-F in accordance with instructions determined by vehicle and pick-up location selection module2406. In some embodiments, database access module2410may cooperate with database2412to retrieve map information, traffic data, environmental condition data, and/or any associated stored data or metadata. For example, database access module2410may send a database query to database2412which may be associated with database170. Database2412may include a map vector-based database or a map raster-based database, and database access module2410may be configured to extract a map image from a larger pre-assembled map image, which may be delivered to, for example, user devices120A-C or driver devices120D-F for display. In some embodiments, instead of a vector-based or raster-based system, a tile-based system may be implemented from database2412. For example, database access module2410may instruct processor310to send a request for map data to an external map tile server, and mobile devices120A-F may receive a set of map tiles corresponding to a ride request. In other embodiments, database access module2410may instruct a tile maker program module to divide raster images into a plurality of discrete map tiles from a painter library or rich map engine library that is commercially available.
Database access module2410may instruct processor310to compile a received set of cut map tiles in a grid, position the tile grid with respect to a clipping shape, and may output the grid as a single map as part of a user-side or driver-side ridesharing application displayed within a GUI or web browser of mobile devices120A-F. Database access module2410may select map information in accordance with GPS data and determined pick-up and drop-off locations specified via user devices120A-C, vehicle-for-hire driver130D-E locations, and identified locations of traffic obstructions. Database2412may be configured to store any type of information of use to modules2402-2410, depending on implementation-specific considerations. For example, in embodiments in which traffic module2404is configured to provide information about traffic conditions to the driver of a vehicle-for-hire, database2412may also retrieve stored prior-collected information. Prior-collected information may include ride request information from users and indications of locations of a plurality of vehicles-for-hire received from data capture module2402. Prior-collected information may also include received real time traffic data and information providing a description of the nature, time, and/or date of any traffic conditions and/or environmental conditions received from traffic module2404. The description may include words and/or images (e.g., photographs, icons, symbols, etc.) representing the conditions. In some embodiments, database2412may store one or more images received from traffic module2404that include traffic data including congestion and/or any environmental conditions. Prior-collected information may also include pick-up locations received from vehicle and pick-up location selection module2406or any transmitted information received from transmission module2408. In some embodiments, database2412may include separate databases, including, for example, a vector database, raster database, tile database, viewport database, and/or a user input database, configured to store data. The data stored in database2412may be received from modules2402-2410, ridesharing management server150, from user devices120A-F and/or may be provided as input using data entry, data transfer, or data uploading. The data stored in the database2412may represent multiple data forms including, for example, general mapping and geographic information, latitude and longitude (Lat/Lon) values, world coordinates, tile coordinates, pixel coordinates, Mercator and/or other map projection data, user identifier data, driver identifier data, vehicle identifier data, device type data, viewport data, device orientation data, user input data, geographical scale data, and a variety of other electronic data. Database2412may also include, for example, street, city, state, and country data including landmark identifiers and other related information. Database2412may also include search logs, cookies, web pages, and/or social network content, etc. Modules2402-2410may be implemented in software, hardware, firmware, a mix of any of those, or the like. For example, if the modules are implemented in software, the modules may be stored in a server (e.g., ridesharing management server150) or distributed over a plurality of servers. In some embodiments, any one or more of modules2402-2410and data associated with database2412may, for example, be stored in processor310and/or located on ridesharing management server150, which may include one or more processing devices.
Processing devices of ridesharing management server150may be configured to execute the instructions of modules2402-2410. In some embodiments, aspects of modules2402-2410may include software, hardware, or firmware instructions (or a combination thereof) executable by one or more processors, alone or in various combinations with each other. For example, modules2402-2410may be configured to interact with each other and/or other modules of server150and/or a system100to perform functions consistent with disclosed embodiments. FIG.25is a schematic illustration of an example of a map2500including map information used for ridesharing purposes according to a disclosed embodiment. Map2500may include a map implemented on user-side and/or driver-side ridesharing applications. In the example shown inFIG.25, map2500includes a map of the city of Las Vegas including McCarran Airport, Las Vegas Blvd., and Downtown Las Vegas. Map2500may include a user2502located, for example, on Las Vegas Blvd. near McCarran Airport and a traffic obstruction2504located nearby on Las Vegas Blvd. prior to the intersection of Flamingo Rd. Traffic obstruction2504may be representative of traffic-based environmental conditions including, for example, construction work, traffic, and/or both. In some embodiments, other environmental conditions (not shown) such as at least one of a road closure, a parade, an accident, a public safety incident, and an infrastructure repair may be included as icons or graphics displayed on map2500as alternatives to or in addition to traffic obstruction2504. Further, although the indication of the traffic obstruction shown inFIG.25is an icon, in some embodiments, in addition to the icon or as an alternative to the icon, the indication of the traffic obstruction may include words describing the traffic obstruction and/or images of the traffic obstruction. Map2500may be displayed as part of a GUI visible on, for example, mobile devices120A-F. In the example shown inFIG.25, user2502may send a request for pick up at a present location along Las Vegas Blvd. near McCarran Airport and may request to be dropped off at a requested destination, such as on Las Vegas Blvd. at the intersection of Desert Inn Rd. Alternatively, as another example, user2502may request to be dropped off at another destination, such as Downtown Las Vegas. Consistent with the disclosure, different user pick-up points and different destination points may be inputted by user2502operating user device120A-C and sent to ridesharing management server150. In some embodiments, a plurality of vehicles for hire located in the vicinity of user2502may be displayed and/or updated in real-time on map2500. For example, map2500shows a vehicle2508located on Sahara Ave. at the time user2502submits a ride request. In some embodiments, data related to distances of one or more vehicles-for-hire to user2502and/or estimated times to reach user2502may be displayed and/or updated in real-time on map2500according to preferences of user2502. In addition, map2500may be zoomed-in or zoomed-out based on preferences of user2502. In some embodiments, map2500may identify a user location, allow a user to subsequently identify a desired pick-up or drop-off point for ridesharing, and/or identify a traffic or environmental condition in relation to the pick-up and drop-off points so as to avoid the obstruction during ridesharing. Additionally, map2500may include buttons (not shown) for user input to facilitate a pick-up request based on a user's location.
For example, based on user2502selection of a button, a prompt to a user may ask for permission to access the current GPS location of smartphone120A. In response to user approval enabling access to a current GPS location, processor310may then zoom the displayed map2500image to fit the map data to the boundaries of smartphone120A viewport, and/or may surround the map data around an origin aligned with the current geographic location of smartphone120A. In some embodiments, selection of buttons may provide a dialogue box to a user to allow for entry of text indicating a desired zip code (e.g., 88901) or a geographic area (e.g., Las Vegas), or a current location or landmark (e.g., McCarran Airport) and an intended destination address (e.g., Downtown Las Vegas). Further, the user may then identify a desired pick-up and/or drop-off point within the confines of the displayed map2500image by making selections (e.g., selections made on a touch screen) and/or through user input (e.g., spoken commands, text, etc.). In some embodiments, at least one processor (e.g., processor310) may be configured to receive information from an external source, predict an area that will have a traffic obstruction in the near future, and use the predicted area in determining the pick-up location. For example, in some embodiments, the viewport of smartphone120A may be configured to implement maps from other sources available over the network or from another digital mapping software application. GUIs may include display of a web browser including a search tool bar (not shown) configured to receive and process search queries related to displayed map data received from an external source or other sources available over the network. The search tool bar may allow for user2502to search a displayed map area for one or more landmarks (e.g., Caesar's Palace), including but not limited to hotels, gas stations, etc. In some embodiments, a scale may be displayed on map2500and may indicate distance between streets and landmarks. In some embodiments, a request for map data may include a request based on a selection of a button to use the current location of smartphone120A, as discussed above. The request for map data may further include a request for at least one of a road map, satellite map, hybrid map, and terrain type map formats. As discussed above, selection module2406may determine one or more routes based on a ride request, including preferred pick-up and drop-off locations, and detected traffic and/or environmental conditions in the vicinity. For example, selection module2406may determine a direct route for navigating vehicle2508to user2502by traveling on Sahara Ave. to Las Vegas Blvd, and proceeding to McCarran Airport. However, when taking into account traffic obstruction2504, consistent with disclosed embodiments, ridesharing management server150may instead determine a different pick-up point in order to avoid traffic obstruction2504. For example, ridesharing management server150may determine a different pick-up point completely remote from the user's starting point, which in this example, may be peripheral to the area of traffic obstruction2504. In some embodiments, transmission module2408may instruct ridesharing management server150to send to user2502, via the GUI communications interface of map2500, information about the pick-up location, which may be represented by icon2506(e.g., an “X” marking the pick-up location) on map2500. Icon2506designates a location that is within walking distance to the current location of user2502.
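The "within walking distance" determination just described can be pictured as a simple geodesic check between the user's current position and a candidate pick-up point such as icon2506. The following is a minimal sketch under that reading; the haversine approximation, the function names, and the 500-meter default are illustrative assumptions, not part of the disclosure.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius in meters

def haversine_m(p, q):
    """Approximate great-circle distance in meters between (lat, lon) points."""
    lat1, lat2 = math.radians(p[0]), math.radians(q[0])
    dlat = lat2 - lat1
    dlon = math.radians(q[1] - p[1])
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def within_walking_distance(user_pos, pickup_pos, max_walk_m=500.0):
    """Hypothetical check that a candidate pick-up point lies within the
    user's maximum walking distance preference."""
    return haversine_m(user_pos, pickup_pos) <= max_walk_m
```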
As shown, the pick-up location at icon2506is past traffic obstruction2504. Walking instructions to the pick-up point may be provided. Ridesharing management server150may also send driver instructions along route2514to avoid traffic so as to pick up user2502in a more expedited fashion. Further, as discussed earlier, ridesharing management server150may determine the pick-up point taking into account a maximum walking distance according to a preference of user2502. In some embodiments, vehicle and pick-up location selection module2406may select a vehicle-for-hire in accordance with real time traffic data received at traffic module2404. For example, if a vehicle has a closest route to a desired destination, but has to backtrack to avoid a traffic or environmental condition, vehicle and pick-up location selection module2406may instead select another vehicle, and have it take a longer route. Alternatively, vehicle and pick-up location selection module2406may select a vehicle based on other service parameters, including for example, a maximum walking distance from the starting point to a pick-up location, a maximum walking distance from a drop-off location to a desired destination, a total maximum walking distance involved in a ride, a maximum number of subsequent pick-ups, maximum delay of arrival/detour incurred by subsequent pick-ups during a ride, and a selection whether to permit toll road usage during the ride, as discussed above. FIG.26is a flowchart of an example of a method2600for directing a vehicle-for-hire and a prospective passenger to a remote pick-up location to avoid traffic congestion. Steps of method2600may be performed by one or more processors of ridesharing management server150and/or memory320and memory modules2400. Further, as discussed earlier, although the following example is in the context of traffic congestion, the disclosed embodiments may determine a pick-up location to avoid any traffic and/or environmental condition. At step2610, data capture module2402may receive a ride request from a user130A-C. The ride request may include a starting point and a desired destination. For example, data capture module2402may receive a ride request from user130A-C including a starting point (e.g., Las Vegas Blvd. at McCarran Airport) and a desired destination (e.g., Downtown Las Vegas) and current GPS locations of a plurality of vehicles-for-hire. Data received from ridesharing management server150may include GPS data and/or other user130A-C or driver130D-E device identifiers related to mobile communication devices120A-F. In some embodiments, pick-up locations, drop-off locations, associated GPS data, and user130A-C or driver130D-E data may be processed by data capture module2402. At step2612, data capture module2402may receive current locations of vehicles of a plurality of vehicles-for-hire. The current locations of the plurality of vehicles-for-hire may be determined by a location service application installed on a driver device, a driving-control device, or by a location determination component in the ridesharing management system100, which may be a part of or separate from ridesharing management server150. At step2614, traffic module2404may receive real-time traffic data and may identify an area of traffic obstruction2504. Traffic data may include real-time traffic data regarding a certain geographical region, and may be used to, for example, calculate estimated pick-up and drop-off times, and determine an optimal route for a particular ride.
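One simplified way to picture how such traffic data and the service parameters listed above could combine during selection is as a filter-then-score pass over candidate (vehicle, pick-up location) pairs, as sketched below before continuing with the traffic data example. The field names, weights, and linear cost are invented for illustration and do not reflect the disclosed modules' actual logic.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    vehicle_id: str
    pickup_walk_m: float       # walk from starting point to pick-up location
    dropoff_walk_m: float      # walk from drop-off location to destination
    drive_time_s: float        # estimated drive time given real time traffic
    crosses_obstruction: bool  # True if the route crosses the obstruction

def select_candidate(candidates, max_total_walk_m=500.0, walk_weight=0.5):
    """Filter candidates by service parameters, then pick the lowest cost."""
    feasible = [
        c for c in candidates
        if not c.crosses_obstruction
        and c.pickup_walk_m + c.dropoff_walk_m <= max_total_walk_m
    ]
    if not feasible:
        return None  # e.g., relax the walking threshold, as described above
    return min(
        feasible,
        key=lambda c: c.drive_time_s
        + walk_weight * (c.pickup_walk_m + c.dropoff_walk_m),
    )
```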
For example, real-time traffic data may be received from a real-time traffic monitoring system, which may be integrated in or independent from ridesharing management system100. Based on detection of atypical congestion, traffic module2404may include software instructions for receiving data indicative of an area of a traffic obstruction from ridesharing management server150. In some embodiments, image data, audio data, GPS data, and/or user data may be processed and/or analyzed by traffic module2404in order to identify the traffic obstruction. At step2616, vehicle and pick-up location selection module2406may select a vehicle-for-hire to pick up the user. For example, ride service parameters may be transmitted to ridesharing management server150for processing the request and selection of an available vehicle-for-hire based on the ride service parameters, as discussed above. Ridesharing management server150may further be configured to receive user input from user devices120A-120C as to various ride service parameters and may select a vehicle-for-hire to pick up the user according to the specified parameters. Ridesharing management server150may communicate with devices associated with one or more of drivers130D-E and a plurality of vehicles-for-hire across network140, and may select a vehicle closest to the pick-up point of the user and/or located on a route that avoids the traffic obstruction. Other parameters of vehicle selection may be considered, including, for example, the location and availability of other vehicles-for-hire in the vicinity of the user, and, for example, that other users may be simultaneously sending pick-up requests in a location proximate to the user. Additionally, in some embodiments, ridesharing management server150may assign a plurality of users to concurrently share a vehicle-for-hire, and/or may determine differing pick-up locations and differing drop-off locations for the plurality of users. At step2618, vehicle and pick-up location selection module2406may identify a pick-up or drop-off location apart from the user's starting point. For example, vehicle and pick-up location selection module2406may identify a pick-up location that is remote from the user's starting point and peripheral to the area of the traffic obstruction. In some embodiments, a ride request may be associated with a maximum walking distance (e.g., 300 meters, 500 meters, etc.) from a starting point to a pick-up location that is remote from the user's starting point, as discussed above. In some embodiments, a ride request may be associated with a walking distance higher than the maximum threshold in cases where a closer point cannot be found due to one or more conditions of the congested area. The pick-up location may or may not be inputted by the user or may be determined based on avoiding the traffic obstruction. In some embodiments, the pick-up location may be selected in an area currently walkable from a determined GPS position of the user's current location when sending a request for pick-up. For example, vehicle and pick-up location selection module2406may identify a pick-up location along a route that avoids the traffic obstruction. At step2620, transmission module2408may send to the user information about the pick-up location. For example, server150may transmit to user device120A-C information relating to the pick-up location which may be displayed as an icon on a map, as discussed above.
As part of the displayed information, walking directions to the pick-up location may be provided in visual, textual, and/or audio form so that the user can easily find the pick-up point. Consistent with this disclosure, transmission module2408may communicate with ridesharing management server150to send a first message to a user device120A-C to provide information about the pick-up location for display on user device120A-C. The message may appear in different formats, for example, a text message including the estimated pick-up time, an audio message, or an image. Transmission module2408may communicate confirmation messages and notifications and/or alerts based on detected changes in real-time traffic data, which may then change the pick-up and/or drop-off locations for the user to alternative locations. For example, ridesharing management server150may send information about the pick-up location to the user including walking directions to a location that is, for example, at least one block away from the user's starting point. Further, ridesharing management server150may select the pick-up location and the driving directions so that the user arrives at the pick-up location before arrival of the vehicle-for-hire. At step2622, transmission module2408may send to the selected vehicle-for-hire driving directions to the pick-up location. For example, ridesharing management server150may transmit to a vehicle-for-hire, using any number of electronic devices120, information relating to the pick-up location. The information relating to the pick-up location may be displayed as an icon on a map, as discussed above. As part of the displayed information, driving directions to the pick-up location may also be provided in visual, textual, and/or audio form so that the driver may easily drive to the pick-up point. Consistent with this disclosure, transmission module2408may be configured to send, based on real time traffic data, ride service assignments (for example, including pick-up and drop-off location information) to the plurality of driver devices120D and120E associated with drivers130D and130E and/or driving-control device120F, to substantially avoid the traffic obstruction. Detecting the Number of Vehicle Passengers In some embodiments, ridesharing management server150may receive ride requests for a plurality of users and schedule more than one user to share the same vehicle-for-hire. In some situations, existing systems may encounter the technical problem of how to process the ride requests while taking into account vehicle occupancy levels in order to transport passengers without exceeding a capacity of a ridesharing vehicle. For example, existing systems may have difficulty accurately detecting a changing occupancy of a vehicle-for-hire as it travels from one location to another, at which passengers may enter and/or exit. Some systems may provide different vehicle types to account for different occupancies, given that the number of vehicle passengers may unpredictably change, but fail to provide real-time detection of ridesharing vehicle passengers. Presently disclosed embodiments, on the other hand, address this problem by providing capacity information to a server based on detected sensor information. For example, in some embodiments, ridesharing management server150may receive from at least one sensor associated with ridesharing vehicles operated by drivers130D and130E, and/or driving-control device120F, information indicative of a current number of passengers or users130A-130C in the ridesharing vehicles.
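A minimal sketch of the occupancy comparison that such sensed information enables is given below, previewing the determination described next. Treating capacity as a plain seat count, and the function names themselves, are simplifying assumptions for illustration only.

```python
def may_assign_additional_users(sensed_passengers: int,
                                capacity_threshold: int,
                                requested_seats: int = 1) -> bool:
    """Hypothetical server-side check: assign further users to a ridesharing
    vehicle only if sensed occupancy leaves room under its capacity threshold."""
    return sensed_passengers + requested_seats <= capacity_threshold

def exceeds_capacity(sensed_passengers: int, capacity_threshold: int) -> bool:
    """True when sensed occupancy already exceeds the vehicle's capacity,
    a condition that could trigger reassignment of subsequent passengers."""
    return sensed_passengers > capacity_threshold
```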
Ridesharing management server150may then determine whether to assign additional users to the ridesharing vehicles based on the received information from the sensor and capacity thresholds associated with the ridesharing vehicles. In some embodiments, ridesharing management server150may compare the sensor data associated with a ridesharing vehicle with a capacity threshold of the ridesharing vehicle, and may determine whether a number of actual passengers within the ridesharing vehicle exceeds a capacity threshold of the ridesharing vehicle. If, based on at least the sensor data, the number of detected passengers exceeds the capacity threshold of the ridesharing vehicle, ridesharing management server150may reassign one or more subsequent passengers. In some embodiments, a threshold block that prevents assignment of the additional users to the ridesharing vehicle when the ridesharing vehicle's current utilized capacity is above a threshold being less than the ridesharing vehicle's capacity threshold may also be implemented. In some embodiments, a discrepancy between an actual number of passengers entering a ridesharing vehicle at a specific pick-up location and the number of passengers expected to enter the ridesharing vehicle at that pick-up location may be determined and reported to ridesharing management server150, thereby causing a change in the route of the ridesharing vehicle. As discussed above, user devices120A-120C, driver devices120D and120E, and/or driving-control device120F may respectively be installed with a user side ridesharing application, and a corresponding driver side ridesharing application. Mobile communications device200may be installed with a user side ridesharing application, a corresponding driver side ridesharing application, and/or other software to perform one or more disclosed embodiments described in the present disclosure, such as on mobile communications devices120A-120F. Mobile communications device200may retrieve GPS/navigation instructions268from memory250and may facilitate GPS and navigation-related processes or routes associated with drivers130D and130E in communication with ridesharing management server150. Ridesharing management server150may receive a ride request from one or more users130A-130C, may receive data from sensors including a number of passengers currently located in or expected to occupy a ridesharing vehicle, and may take an action, such as reassigning passengers or changing a vehicle route, based on a comparison between the actual number of passengers detected and the vehicle occupancy, as described in greater detail below. In some embodiments, ridesharing management server150may transmit information to user device120A-C, which may be, for example, a smartphone or tablet having a dedicated application installed therein. A graphical user interface (GUI) including a plurality of user-adjustable user side or driver side ridesharing application settings may be included on a display of mobile communications devices120A-120C to visibly output information to one or more users130A-C and/or drivers130D and130E in relation to anticipated or detected vehicle occupancy. FIG.27illustrates an exemplary embodiment of a memory2700containing software modules consistent with the present disclosure.
In particular, as shown, memory2700may include a ride request module2702, a route module2704, a detection and assignment module2706, a transmission module2708, a database access module2710, and a database2712. Modules2702,2704,2706,2708, and2710may contain software instructions for execution by at least one processing device (e.g., processor310), included with automated ridesharing dispatch system300. Ride request module2702, route module2704, detection and assignment module2706, transmission module2708, database access module2710, and database2712may cooperate to perform multiple operations. For example, ride request module2702may include a communications interface configured to electronically receive ride requests from a plurality of users, may access memory, and may process the received ride requests. Route module2704may determine a route for the ridesharing vehicle. Detection and assignment module2706may receive information indicative of a current number of passengers in the ridesharing vehicle, and may determine whether to assign additional users to the ridesharing vehicle. Transmission module2708may send instructions to pick up and drop off users based on an assignment from detection and assignment module2706. Database access module2710may interact with database2712which may store any information associated with the functions of modules2702-2708. In some embodiments, memory2700may be included in, for example, memory320storing programs330including, for example, server app(s)332, operating system334, and data340, and a communications interface360, discussed above. Alternatively or additionally, memory2700may be stored in an external database170(which can also be internal to ridesharing management server150) or external storage communicatively coupled with ridesharing management server150(not shown), such as one or more databases or memories accessible over network140. Further, in other embodiments, the components of memory2700may be distributed in more than one server. In some embodiments, ride request module2702may receive, via a communications interface, ride requests from a plurality of users. As discussed above, the communications interface (e.g., communications interface360) may include a modem, Ethernet card, or any other interface configured to exchange data with a network, such as network140. Communications interface360may receive ride requests from a plurality of users130A-C, and ride requests may include multiple pick-up and drop-off locations, and may be initiated from a user side ridesharing application on one or more user devices120A-C. In some embodiments, ride request module2702may also access memory320to store a capacity threshold for each of a plurality of ridesharing vehicles. The capacity threshold may include a total available number of seats present in a ridesharing vehicle. Alternatively, the capacity threshold may include a total amount of volumetric space available to accommodate a plurality of passengers in a ridesharing vehicle. In some embodiments, ride request module2702may process the ride requests received from the communications interface and assign to a ridesharing vehicle the plurality of users for pick up at a plurality of differing pick-up locations and for delivery to a plurality of differing drop-off locations. Ride request module2702may also receive software instructions for clustering a plurality of users to a single ridesharing vehicle based on a common pick-up point or a common drop-off point.
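The clustering described in the preceding sentence could, for instance, be approximated by greedily grouping requests whose pick-up points fall within some small radius of one another. The sketch below is one invented rendering of that idea; the 150-meter radius, the planar distance approximation, and all names are illustrative assumptions only.

```python
import math

def approx_dist_m(p, q):
    """Rough planar distance in meters between two (lat, lon) points."""
    dy = (q[0] - p[0]) * 111_320.0
    dx = (q[1] - p[1]) * 111_320.0 * math.cos(math.radians(p[0]))
    return math.hypot(dx, dy)

def cluster_by_pickup(requests, same_point_m=150.0):
    """Greedily group (request_id, (lat, lon)) pairs whose pick-up points are
    close enough to be served as a common pick-up point."""
    clusters = []
    for rid, point in requests:
        for cluster in clusters:
            if approx_dist_m(cluster["anchor"], point) <= same_point_m:
                cluster["riders"].append(rid)
                break
        else:
            # No existing cluster is close enough: start a new one.
            clusters.append({"anchor": point, "riders": [rid]})
    return clusters
```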
Data received from ridesharing management server150may include GPS data indicating a location of a plurality of users, user devices120A-C including user identifier data, and vehicle-for-hire data indicating a plurality of vehicles-for-hire available for ridesharing. Data received from ridesharing management server150may also include a stored threshold capacity of each of a plurality of vehicles-for-hire. In some embodiments, route module2704may determine, based on processed ride requests from ride request module2702, a route for the ridesharing vehicle. For example, route module2704may determine an optimal route for a particular ride based on the number of differing pick-up locations and differing drop-off locations, and based on map information and environmental conditions, including for example, traffic or congestion. Route module2704may also calculate potential routes and guide users to a pick-up or drop-off location based on the received and processed ride requests. Route module2704may also utilize GPS/navigation instructions268to facilitate GPS and navigation-related processes and instructions and plan an optimum route for a plurality of passengers occupying a single ridesharing vehicle. In some embodiments, detection and assignment module2706may receive, from at least one sensor within the ridesharing vehicle, information indicative of a current number of passengers in the ridesharing vehicle. For example, detection and assignment module2706may detect, based on received sensor information, a current number of passengers positioned in each of the ridesharing vehicles, and may determine whether a number of identified passengers exceeds a stored threshold or capacity for a particular vehicle. Detection and assignment module2706may also determine whether to assign additional users to a particular ridesharing vehicle based on the received information from sensors and the capacity threshold associated with the ridesharing vehicle. Detection and assignment module2706may also determine whether to assign existing passengers to another ridesharing vehicle. In some embodiments, detection and assignment module2706may detect a discrepancy between an actual and an expected number of passengers entering a particular ridesharing vehicle. Detection and assignment module2706may also calculate a difference and, based on the difference, change a route based on route module2704instructions of a particular vehicle so as to allow for or prevent pick-up of additional passengers. For example, a route of a particular vehicle may be extended to allow for pick-up of additional passengers when an actual number of passengers entering a vehicle detected by detection and assignment module2706is less than an expected number. Conversely, a route of a particular vehicle may be shortened to prevent pick-up of additional passengers when an actual number of passengers entering a vehicle detected by detection and assignment module2706exceeds an expected number. Other route variations and changes to allow for passenger drop-off may also be contemplated. In some embodiments, detection and assignment module2706may receive information from a plurality of sensors to detect vehicle occupancy and entry of passengers into a ridesharing vehicle. For example, detection and assignment module2706may receive from ridesharing management server150audio and image data, captured by, for example, an image sensor or a microphone associated with a ridesharing vehicle.
Image and audio data may be used to determine an actual number of vehicle occupants and may be configured to detect the current number of passengers in the ridesharing vehicle. In some embodiments, the ridesharing vehicle may include one or more of a plurality of sensors that may also include LIDAR, proximity sensors, seat pressure sensors, thermal sensors, and/or other sensors to collect information related to vehicle occupancy. Detection and assignment module2706may receive detected information from each sensor placed internally or externally to a ridesharing vehicle and may determine vehicle occupancy based on any combination of sensor information. Detection and assignment module2706may then make a vehicle assignment corresponding to the detected vehicle occupancy. In some embodiments, transmission module2708may communicate, based on assignment instructions from detection and assignment module2706, to ridesharing management server150a message to pick up and drop off users. For example, transmission module2708may communicate pick-up and drop-off locations. In some examples, transmission module2708may communicate with ridesharing management server150to send a first message to a user device120A to cause an indication of a calculated estimated pick-up time to appear on a display of user device120A. The message may appear in different formats, for example, a text message including the estimated pick-up time, an audio message, or an image. Transmission module2708may communicate confirmation messages and notifications and/or alerts based on detected changes in vehicle assignment so that users can be notified that they are assigned to a different ridesharing vehicle. Transmission module2708may also transmit selected vehicle assignment and reassignment instructions for mobile devices120A-C in accordance with detection and assignment module2706instructions. In some embodiments, database access module2710may cooperate with database2712to retrieve information. Database2712may be configured to store any type of information of use to modules2702-2710, depending on implementation-specific considerations. For example, in embodiments in which database access module2710is configured to provide a recommendation to add users, remove users (based on detection and assignment module2706), and/or reroute a vehicle (based on route module2704) based on a detected discrepancy amongst a number of vehicle passengers, database access module2710may retrieve prior-collected vehicle or map information from database2712in order to reassign passengers or change the route of a ridesharing vehicle (or request re-assignment of at least one of the plurality of users scheduled to be picked up by the ridesharing vehicle to a different ridesharing vehicle). The change in route may also include a change in pick-up or drop-off location. Further, database2712may store metadata associated with pick-ups or drop-offs (based on ride request module2702). In some embodiments, database2712may store one or more images of the plurality of captured images and/or receive sensor data from a plurality of sensors. Modules2702-2710may be implemented in software, hardware, firmware, a mix of any of those, or the like. For example, if the modules are implemented in software, they may be stored in a server or one or more servers.
However, in some embodiments, any one or more of modules2702-2710and data associated with database2712, may, for example, be stored in processor310and/or located on ridesharing management server150, which may include one or more processing devices. Processing devices of ridesharing management server150may be configured to execute the instructions of modules2702-2710. In some embodiments, aspects of modules2702-2710may include software, hardware, or firmware instructions (or a combination thereof) executable by one or more processors, alone or in various combinations with each other. For example, modules2702-2710may be configured to interact with each other and/or other modules of ridesharing management server150and/or a system100to perform functions consistent with disclosed embodiments. FIG.28Ais a schematic illustration of an example of an interior of a vehicle2800used for ridesharing purposes according to a disclosed embodiment. Vehicle2800may include a plurality of seats to accommodate multiple vehicle passengers. One vehicle passenger may be the driver. In some embodiments, vehicle2800may include an autonomous driving vehicle (e.g., autonomous vehicle130F) without a driver and/or seat designated for a driver. The interior of vehicle2800may include a plurality of sensors2802and2804to detect a current vehicle occupancy level. For example, sensors2802and2804may include one or more imaging and/or proximity sensors. The interior of vehicle2800may also include specific sensors2802a-dcorresponding to each of the plurality of seats. For example, sensors2802a-dmay include pressure sensors or thermal sensors that are activated when a passenger sits down and occupies a seat. Other types of sensors are contemplated. As shown inFIG.28A, vehicle2800is empty and does not include any passengers. Accordingly, sensors2802,2804, and2802a-dmay communicate to server150a state in which no vehicle passengers are detected. In some embodiments, the plurality of sensors2802a-dequipped with each seat may include weight sensors, thermometers to measure body temperatures, or may include seat belt sensors. The seat belt sensors (not shown) may include sensors located on seat belt buckles provided for each seat in the vehicle. The seat belt sensors may determine whether a seat belt is fastened or unfastened to detect if a passenger seat is occupied. In some examples, cameras may be mounted on each seat or each headrest, and heart pulse sensors, electric field sensors, or other biometric sensors may be positioned on each seat to determine individual seat occupancy. Detection and assignment module2706may aggregate information for each seat including signals of detection and non-detection to calculate a number of ridesharing occupants. FIG.28Bis a schematic illustration of an example of an interior of a vehicle used for ridesharing purposes according to a disclosed embodiment. As shown inFIG.28B, vehicle2800includes four passengers2812a-dpositioned in the interior of the vehicle and each occupying an individual seat. Sensors2802,2804, and/or2802a-dmay communicate detection of each of the four passengers2812a-dto ridesharing management server150. Although the example shown includes four passenger seats in an autonomous vehicle, as discussed earlier, in some embodiments, a driver may instead occupy one of the seats. Further, any appropriate number of passenger seats (e.g., 1, 2, 3, 4, 5, etc.)
and/or seat configurations (e.g., additional or fewer seats and/or additional or fewer rows of seats) are consistent with the disclosed embodiments. In some embodiments, sensors2802and2804may be configured to be positioned at the exterior of the vehicle in order to detect vehicle passengers in the proximity of and entering vehicle2800. Such a detection may inform ridesharing management server150of an anticipated number of vehicle passengers2812a-dthat may enter vehicle2800. Detection and assignment module2706may also determine an actual number of users entering the ridesharing vehicle by communicating with mobile devices of the users. A short-range transceiver configured to determine an actual number of users entering the ridesharing vehicle by communicating with mobile devices of the users may also be contemplated. Detection and assignment module2706may also compare the detection of vehicle passengers2812a-dexternal and internal to the vehicle to identify any discrepancies and change a route or vehicle2800trajectory. Consistent with this disclosure, detection and assignment module2706may aggregate information for each seat2802a-dincluding signals of detection and non-detection to calculate a number of ridesharing passengers2812a-d. As shown inFIG.28B, each of passengers2812a-dmay also have mobile devices. Detection of the mobile devices corresponding to each of vehicle passengers2812a-dmay be contemplated as a means to detect vehicle2800occupancy. For example, the radio-frequency (RF) signals emitted by electrically powered mobile radio-emitting devices such as smartphones or similar personal wearable communication devices may be detected by a sensor included in vehicle2800. In some embodiments, a sensor, such as an image sensor associated with a mobile communications device within the ridesharing vehicle, may be configured (e.g., via software instructions included in a ridesharing application) to detect the current number of passengers in the ridesharing vehicle and to transmit the detected number to, for example, ridesharing management server150. In some embodiments, each passenger may not be directly positioned in each seat. For example, in some cases, vehicle passengers may be standing, sharing a seat, sitting on another's lap, or otherwise not confined to a seat. In such examples, other means of detection are contemplated. For example, a LIDAR system implemented either internal or external to vehicle2800may calculate distance and/or location data of passengers2812a-dto determine occupancy. LIDAR systems may include a transmitter and a receiver emitting light pulses internal to or through windows of vehicle2800to gather distance data for passengers2812a-dlocated internal to vehicle2800. LIDAR wavelengths may include infrared, near-infrared, or ultraviolet wavelengths, and may include periodic or continuous pulses. In some embodiments, the LIDAR system may emit light pulses that reflect on seats, headrests, a dashboard, steering wheel, and passengers. Captured distance information based on reflected light may indicate an existence or absence of passengers inside vehicle2800or external to vehicle2800, and detection and assignment module2706may incorporate this detected information to identify occupancy and make a passenger assignment. In some embodiments, an occupant can be identified based on a point cloud image.
For example, a LIDAR system may obtain a point cloud image, and detection and assignment module2706may compare the point cloud image to a default template including a silhouette of at least one passenger positioned internal to a vehicle to determine whether the point cloud image identifies a vehicle passenger. If there is a match or a substantially close match between the template and the point cloud image, then a passenger is determined to be inside vehicle2800. Alternatively, if there is no match between a point cloud image and a template, then no passenger is determined to be within vehicle2800. In some embodiments, a three-dimensional image volume in conjunction with a volumetric threshold may be determined and computed to determine the occupancy of vehicle2800. For example, an average volume for each of seats2802a-dand a corresponding volume for a typical vehicle passenger2812a-dmay be determined. When one or more volumes exceed a predetermined threshold, a plurality of vehicle passengers2812a-dmay be detected in vehicle2800. When one or more volumes fall below a predetermined threshold or threshold blocks, one or more passengers2812a-dmay not be in vehicle2800. Alternately, a series of volumetric ranges may be contemplated and correspond to a particular number of passengers2812a-d. For example, a low volume range may only indicate a single vehicle passenger2812a-d, whereas a high volume range may include three or four vehicle passengers2812a-d. In some embodiments, detection may be temporal in nature. For example, occupancy detection may begin when a driver begins or stops driving. If a detected vehicle speed is almost zero (e.g., vehicle2800is stationary), then a vehicle occupancy may not be detected. However, when a vehicle speed is detected at a value greater than zero (e.g., vehicle2800is in motion), then a vehicle occupancy may be detected. Such temporal detection constraints may enable detection and assignment module2706to eliminate false detection results when passengers are moving in and out of the vehicle before a ride starts.
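To make the combination of these signals concrete, the sketch below fuses per-seat sensor readings, a volumetric estimate of the kind just described, and the speed-based temporal gating. All thresholds and volume ranges here are invented placeholders rather than calibrated values, and the fusion rule (taking the larger of the two counts) is only one possible choice.

```python
def passengers_from_volume(volume_m3, ranges=((0.00, 0.12, 0),
                                              (0.12, 0.30, 1),
                                              (0.30, 0.55, 2),
                                              (0.55, 0.80, 3),
                                              (0.80, 9.99, 4))):
    """Map a measured three-dimensional image volume to a passenger count
    using a series of volumetric ranges (placeholder values)."""
    for lo, hi, count in ranges:
        if lo <= volume_m3 < hi:
            return count
    return 0

def detect_occupancy(seat_signals, lidar_volume_m3, speed_mps, min_speed_mps=0.5):
    """Hypothetical fusion of seat sensors and a LIDAR-derived volume,
    gated on vehicle motion to suppress boarding/alighting noise."""
    if speed_mps < min_speed_mps:
        return None  # vehicle ~stationary: defer detection, as described above
    seat_count = sum(1 for occupied in seat_signals if occupied)
    return max(seat_count, passengers_from_volume(lidar_volume_m3))

# Example: three occupied seats, but a volume consistent with four passengers
# (e.g., two passengers sharing one seat).
print(detect_occupancy([True, True, True, False], 0.85, speed_mps=8.0))
```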
Ride request module2702may also receive software instructions for clustering a plurality of users to a single ridesharing vehicle based on a common pick-up point or a common drop-off point. Data received from ridesharing management server150may include GPS data indicating a location of a plurality of users, user devices120A-C including user identifier data, and vehicle-for hire data indicating a plurality of vehicle-for-hire available for ridesharing. Data received from ridesharing management server150may also include a stored capacity threshold for each of a plurality of ridesharing vehicles. At step2906, route module2704may determine, based on the processed ride requests, an optimum route for the ridesharing vehicle. As discussed above, route module2704may also calculate potential routes and guide users to a pick-off or drop-off location based on the received and processed ride requests. Route module2704may also use GPS/navigation instructions268to facilitate GPS and navigation-related processes and instructions and plan an optimum route for picking up and dropping off a plurality of passengers occupying a single ridesharing vehicle. At step2908, detection and assignment module2706may receive from at least one sensor within the ridesharing vehicle, information indicative of a current number of passengers in the ridesharing vehicle. For example, detection and assignment module2706may detect a current number of passengers positioned in each of the ridesharing vehicles, and may determine whether a number of identified users exceeds a stored threshold or capacity for a particular vehicle. This information may be stored in database2712. In some embodiments, this information may be based on one or more sensors (e.g., sensors2802a-d,2802, and2804). As discussed earlier, the sensors may be proximity sensors, pressure sensors, thermal sensors, image sensors, audio sensors, LIDAR-based sensors, or any other detection mechanism. Sensor data may be transmitted to server150and may be compared to occupancy data in database2712. At step2910, detection and assignment module2706may determine whether to assign additional users to the ridesharing vehicle. In some embodiments, detection and assignment module2706may compare the sensor data from the particular vehicle with the capacity threshold of the particular vehicle and may determine whether to assign existing passengers to another ridesharing vehicle based on this comparison. For example, if the detected number of passengers is the same as the capacity threshold, as shown inFIG.28B, then no additional users will be assigned to the ridesharing vehicle. Conversely, if the detected number of passengers is less than the capacity threshold, then additional users may be assigned to the ridesharing vehicle. Transmission module2708may communicate that additional users may be assigned to a ridesharing vehicle. FIG.29Bis a flowchart of an example of another method2920for automatically dispatching ridesharing vehicles. Steps of method2920may be performed by one or more processors of server150and/or memory320and memory modules2700, which may receive data from one or more user devices and one or more sensors. At step2922, ride request module2702may store a capacity threshold for each of a plurality of ridesharing vehicles. As discussed earlier, the capacity threshold may include a total available number of seats present in a ridesharing vehicle, as shown inFIG.28A. 
Alternatively, the capacity threshold may include a total amount of volumetric space available to accommodate a plurality of passengers in a ridesharing vehicle. For example, a capacity threshold may include 4 seats, such as seats2802a-dshown inFIG.28A. At step2924, ride request module2702may receive ride requests from a plurality of users. As discussed earlier at step2902, ride request module2702may electronically receive ride requests via communications interface360from a plurality of users. Ride requests may indicate a plurality of differing pick-up and drop-off locations from the plurality of users. Ride request module2702may include software instructions for receiving data from ridesharing management server150, and may include software instructions for receiving ride requests from a user side ridesharing application installed on each of the multiple user devices120A-C. As shown inFIG.28B, passengers2812a-dmay supply ride requests to ride request module2702, and are shown occupying vehicle2800. At step2926, ride request module2702may process the ride requests received from the communications interface and assign to a single ridesharing vehicle the plurality of users for pick up at a plurality of differing pick-up locations and for delivery to a plurality of differing drop-off locations, as discussed earlier in step2904. As shown inFIG.28B, passengers2812a-dwere assigned to share a single ridesharing vehicle2800. Processed data received from ridesharing management server150may include GPS data indicating a location of a plurality of users, user devices120A-C including user identifier data, and vehicle-for-hire data indicating a plurality of vehicles-for-hire available for ridesharing to enable ride request module2702to assign a ridesharing vehicle. At step2928, route module2704may determine, based on processed ride requests from ride request module2702, an optimum route for the ridesharing vehicle. As discussed earlier in step2906, route module2704may also use GPS/navigation instructions268to facilitate GPS and navigation-related processes and instructions and plan an optimum route for a plurality of passengers occupying a single ridesharing vehicle. At step2930, detection and assignment module2706may receive information indicative of a current number of passengers in the ridesharing vehicle. As discussed earlier in step2908, detection and assignment module2706may receive, from at least one sensor within the ridesharing vehicle, information indicative of a current number of passengers in the ridesharing vehicle. In some embodiments, the at least one sensor may include LIDAR, proximity sensors, pressure sensors, thermal sensors, and/or other sensors to collect information related to vehicle occupancy. At step2932, detection and assignment module2706may compare sensor data from a particular vehicle with the capacity threshold data from the particular vehicle. Detection and assignment module2706and ridesharing management server150may compare the sensor data associated with ridesharing vehicles operated by drivers130D and130E, and driving-control device120F with the capacity threshold of the ridesharing vehicles. Sensor data may be transmitted to server150and may be compared to occupancy data in database2712. At step2934, detection and assignment module2706may determine whether an actual number of users within the particular vehicle exceeds a number of users assigned to the particular vehicle.
Detection and assignment module2706may detect a current number of passengers positioned in each of the ridesharing vehicles, and may determine whether a number of identified users exceeds a stored threshold or capacity for a particular vehicle. This information may be stored in database2712, and may be retrieved to formulate the comparison. At step2936, if the number of users within the particular vehicle exceeds the number of users assigned to the particular vehicle, detection and assignment module2706may reassign one or more users to another ridesharing vehicle. Detection and assignment module2706may communicate with database access module2710to retrieve prior-collected vehicle or map information from database2712in order to reassign passengers or change the route of a ridesharing vehicle. Transmission module2708may communicate the reassignment. For example, if the detected number of passengers exceeds the capacity threshold, which is met as shown inFIG.28B, then additional users will be assigned to another ridesharing vehicle. Conversely, if the detected number of passengers does not exceed the capacity threshold, then additional users may not be assigned to another ridesharing vehicle. FIG.29Cis a flowchart of an example of a method2940for changing a route for an autonomous ridesharing vehicle. Steps of method2940may be performed by one or more processors of server150and/or memory320and memory modules2700, which may receive data from one or more user devices and one or more sensors. At step2942, ride request module2702may receive a desired route according to instructions from route module2704and based on a plurality of pick-up locations and a plurality of drop-off locations for delivering the users. The route may include a plurality of pick-up locations for picking up users, a number of the users expected to enter the ridesharing vehicle at each pick-up location, and a plurality of drop-off locations for delivering the users. As discussed earlier, route module2704may also determine, based on the received route from ride request module2702, an optimum route for the ridesharing vehicle. At step2944, detection and assignment module2706may determine a discrepancy between an actual and expected number of passengers entering a ridesharing vehicle. For example, detection and assignment module2706may determine a discrepancy between an actual number of passengers entering the ridesharing vehicle at a specific pick-up location and the number of users expected to enter the ridesharing vehicle at the specific pick-up location. Detection and assignment module2706may then calculate a difference based on received sensor information from a plurality of sensors, discussed above. Different sensors may be utilized to make the detection of an actual number of passengers entering a ridesharing vehicle at different pick-up locations. At step2946, route module2704may change a route, based on the determined discrepancy, for the ridesharing vehicle. For example, route module2704may change a route so as to allow for or prevent pick-up of additional passengers. For example, a route of a particular vehicle may be extended to allow for pick-up of additional passengers when an actual number of passengers entering a vehicle is less than an expected number. Conversely, a route of a particular vehicle may be shortened to prevent pick-up of additional passengers when an actual number of passengers entering a vehicle exceeds an expected number.
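Under simplifying assumptions, steps2944and2946 can be summarized as mapping the sign of the detected discrepancy to a route action. The sketch below is an illustrative rendering only; the function name and the returned labels are invented.

```python
def route_action(expected_boarding: int, actual_boarding: int) -> str:
    """Map the discrepancy of step 2944 to the route change of step 2946."""
    discrepancy = actual_boarding - expected_boarding
    if discrepancy < 0:
        # Fewer passengers boarded than expected: capacity was freed.
        return "extend route to allow additional pick-ups"
    if discrepancy > 0:
        # More passengers boarded than expected: capacity was consumed.
        return "shorten route to prevent further pick-ups (or reassign users)"
    return "keep planned route"
```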
Other route variations and changes to allow for passenger pick-up and drop-off may also be contemplated, and may be implemented in real-time based on dynamic usage of user side ridesharing applications implemented on user devices120A-C. Transmission module2708may communicate the change in route to vehicle passengers. The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to the precise forms or embodiments disclosed. Modifications and adaptations will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments. Additionally, although aspects of the disclosed embodiments are described as being stored in memory, one skilled in the art will appreciate that these aspects can also be stored on other types of computer readable media, such as secondary storage devices, e.g., hard disks or CD ROM, or other forms of RAM or ROM, USB media, DVD, Blu-ray, Ultra HD Blu-ray, or other optical drive media. Computer programs based on the written description and disclosed methods are within the skills of an experienced developer. The various programs or program modules can be created using any of the techniques known to one skilled in the art or can be designed in connection with existing software. For example, program sections or program modules can be designed in or by means of .Net Framework, .Net Compact Framework (and related languages, such as Visual Basic, C, etc.), Java, C++, Objective-C, HTML, HTML/AJAX combinations, XML, or HTML with included Java applets. Moreover, while illustrative embodiments have been described herein, the scope of the disclosure includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations and/or alterations as would be appreciated by those skilled in the art based on the present disclosure. The examples are to be construed as non-exclusive. Furthermore, the steps of the disclosed methods may be modified in any manner, including by reordering steps and/or inserting or deleting steps. It is intended, therefore, that the specification and examples be considered as illustrative only. | 338,003 |
11859989 | DETAILED DESCRIPTION Overview The technology involves pullover maneuvers for vehicles operating in an autonomous driving mode. For instance, based on an upcoming destination the vehicle's computing system may continuously look for locations at which to stop. This may be done to pick up and/or drop off passengers, make a delivery, pull over for emergency vehicles, etc. Waypoints are sampled locations in the continuous physical space along the roadway. In many instances, the vehicle will have full visibility of the space (from the driving lane to the road edge). However, it is difficult to see through vehicles, around corners, behind moving cars, or through buildings. Thus, the computing system may make a lot of predictions (e.g., learned and heuristic) about where parked vehicles and moving road users will be. Using on-board sensors, maps, and real-time and historical data, the system can keep track of and predict where the vehicle can and cannot “see”, to identify suitable pullover locations. In some instances, potential pullover locations may be partially or fully occluded at a given point in time as the vehicle is operating. It is also desirable to avoid slowing prematurely for a pullover spot, as this can confuse or frustrate other drivers. Thus, according to aspects of the technology, the vehicle's computing system should adjust the vehicle's speed based on what the vehicle can and cannot detect in its environment. Candidate locations should be some minimum distance away from the vehicle, and the vehicle's dynamics should be taken into account to avoid excessively fast lateral shifting and potential hard braking of the vehicle. For instance, rather than limiting pullover locations based on fixed distance, they can instead be limited based on deceleration feasibility (e.g., the ability to decelerate to a stop at a given deceleration limit). Moreover, because these deceleration limits can be fixed in advance, they can be selected in order to ensure that passengers and other road users (other vehicles, pedestrians, bicyclists, etc.) are not made uncomfortable. The dynamics may include the vehicle's current speed, current deceleration (or acceleration), and current rate of deceleration (or acceleration). In addition, vehicle dynamics may also take into consideration additional factors such as actuator delays, current actuator state, controllability, road surface conditions, etc. As such, the vehicle dynamics may be both current and estimated. For example, the vehicle's computing system may predict how quickly the vehicle will achieve a certain deceleration in the future. Most deceleration values (e.g., −2 m/s2acceleration) cannot be achieved instantaneously, so the process of reaching a desired deceleration and the traveled distance during that time could be estimated or predicted. As one simple example, using the vehicle's current speed, deceleration, and rate of deceleration, the vehicle's computing system may estimate a distance between the vehicle and a pullover location for the vehicle, or a distance required for the vehicle to come to a complete stop within a pullover deceleration limit. Deceleration limits are typically used to determine whether certain trajectories are feasible for an autonomous vehicle. For example, when determining trajectories, the vehicle's planner system may identify a number of objects in the vehicle's environment and may generate a corresponding set of constraints for each of these objects.
The planner system may then attempt to generate a trajectory that avoids the constraints (e.g., does not intersect with or come within a threshold distance to any of the identified objects), while at the same time does not exceed the deceleration limit. In some instances, if the planner system is unable to generate a trajectory, the deceleration limit may be adjusted. However, when pulling over, rather than adjusting the deceleration limit for the purposes of the pullover, the vehicle's computing devices may simply identify another pullover location. The vehicle's computing devices may utilize an appropriate pullover deceleration limit or discomfort threshold to choose a pullover location further away that the vehicle can reach more comfortably for any passengers or other persons. In this regard, the vehicle's computing devices may utilize a pullover deceleration limit which may be different from the deceleration limit for nominal driving. As the vehicle approaches a destination, it may begin to evaluate pullover locations. In some cases, pullover locations may be known in advance. In other cases, pullover locations may be identified from received sensor data, which may indicate a given spot is currently vacant or will shortly become vacant. And in still other cases, potential pullover locations may be partly or fully occluded. Each potential pullover location may be evaluated, for example, to determine which location has the lowest “cost”. In addition, for each pullover location, the vehicle's computing devices may determine a pullover start location at which the vehicle will need to begin to adjust its lateral position in the road in order to pull into the pullover location. The location may be determined based on the difference between the lateral offset at the vehicle's current location and the lateral offset at the planned pullover location. In other words, the lateral offset may correspond to how far to the left or right the vehicle would need to adjust itself from its current location to the pullover location. The vehicle's computing devices may then determine whether a pullover location is feasible based on the vehicle dynamics, the pullover start location, and the pullover deceleration limit. For example, using the vehicle's current speed, deceleration, and rate of deceleration and various other factors as discussed further below, the vehicle's computing system may estimate a first distance required for the vehicle to come to a complete stop within the pullover deceleration limit. The vehicle's computing devices may then compare this distance with a second distance between the vehicle's current location and the pullover start location to determine whether the pullover location is feasible. The vehicle's computing devices may constantly continue looking for the “best” pullover location, such as the one with the lowest cost. This may continue, for example, until some point after the vehicle reaches a pullover start location of a selected pullover location and begins to laterally shift towards the selected pullover location. In this regard, the vehicle's computing devices may still have time to abort a pullover before the vehicle begins to laterally shift towards the selected pullover location. As such, the vehicle's computing devices will continue to assess whether the selected pullover location is feasible and/or any other nearby pullover location is a better option by identifying pullover locations and evaluating them as described above.
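By way of illustration only, this feasibility check can be sketched in a few lines of code. The sketch below assumes a simple two-phase braking model (deceleration builds at a constant jerk and is then held at the pullover deceleration limit); the function names, signature and constants are illustrative assumptions rather than the disclosed implementation.

```python
def stopping_distance(v0, a0, decel_limit, jerk, dt=0.01):
    """Estimate the distance needed to come to a complete stop.

    v0: current speed (m/s); a0: current acceleration (m/s^2, negative while
    decelerating); decel_limit: pullover deceleration limit (positive, m/s^2);
    jerk: rate at which deceleration can build (m/s^3). A desired deceleration
    cannot be achieved instantaneously, so it is ramped in and then held.
    """
    dist, v, a = 0.0, v0, a0
    while v > 0.0:
        a = max(a - jerk * dt, -decel_limit)  # ramp toward the limit
        v = max(v + a * dt, 0.0)
        dist += v * dt
    return dist


def pullover_feasible(v0, a0, decel_limit, jerk, dist_to_pullover_start):
    # The first distance (to a full stop within the pullover deceleration
    # limit) must not exceed the second (to the pullover start location).
    return stopping_distance(v0, a0, decel_limit, jerk) <= dist_to_pullover_start
```

A location failing this test would simply be skipped in favor of another candidate, consistent with the approach described above.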
According to aspects of the technology, using predictions and information such as the roadgraph, map, traffic predictions, parked vehicle predictions, sensor fields of view, etc., the planner module or other part of the vehicle's computing system may evaluate this situation: if the vehicle were able to pull over in an area it cannot currently see, how good would that pullover location be? If it would be better than the other pullover candidates that the vehicle can detect with its perception system or that are known in advance, it would be beneficial to slow slightly before approaching that occluded pullover location. This approach effectively compares the predictions of “what will likely happen” based on observed or known information versus a best-case “what could happen” given partial or no information about an occluded location. Here, when the best-case pullover at a location is determined by the vehicle to be significantly better (e.g., having a lower cost, such as 5-20% lower or more) than the other (likely to happen) predictions, the vehicle should slow down to give additional time to explore the area of the occluded pullover location. This effectively helps the planner module or other element of the computing system to strike a balance between quickness, quality, and convenience. EXAMPLE VEHICLE SYSTEMS FIG.1Aillustrates a perspective view of an example passenger vehicle100, such as a minivan, sedan, sport utility vehicle (SUV) or other vehicle.FIG.1Billustrates a top-down view of the passenger vehicle100. The passenger vehicle100may include various sensors for obtaining information about the vehicle's external environment. For instance, a roof-top housing102may include a lidar sensor as well as various cameras, radar units, infrared and/or acoustical sensors. Housing104, located at the front end of vehicle100, and housings106a,106bon the driver's and passenger's sides of the vehicle may each incorporate lidar, radar, camera and/or other sensors. For example, housing106amay be located in front of the driver's side door along a quarter panel of the vehicle. As shown, the passenger vehicle100also includes housings108a,108bfor radar units, lidar and/or cameras also located towards the rear roof portion of the vehicle. Additional lidar, radar units and/or cameras (not shown) may be located at other places along the vehicle100. For instance, arrow110indicates that a sensor unit (112inFIG.1B) may be positioned along the rear of the vehicle100, such as on or adjacent to the bumper. And arrow114indicates a series of sensor units116arranged along a forward-facing direction of the vehicle. In some examples, the passenger vehicle100also may include various sensors for obtaining information about the vehicle's interior spaces (not shown). By way of example, each sensor unit may include one or more sensors, such as lidar, radar, camera (e.g., optical or infrared), acoustical (e.g., microphone or sonar-type sensor), inertial (e.g., accelerometer, gyroscope, etc.) or other sensors (e.g., positioning sensors such as GPS sensors). While certain aspects of the disclosure may be particularly useful in connection with specific types of vehicles, the vehicle may be any type of vehicle including, but not limited to, cars, trucks, motorcycles, buses, recreational vehicles, etc. There are different degrees of autonomy that may occur for a vehicle operating in a partially or fully autonomous driving mode. The U.S.
National Highway Traffic Safety Administration and the Society of Automotive Engineers have identified different levels to indicate how much, or how little, the vehicle controls the driving. For instance, Level 0 has no automation and the driver makes all driving-related decisions. The lowest semi-autonomous mode, Level 1, includes some drive assistance such as cruise control. Level 2 has partial automation of certain driving operations, while Level 3 involves conditional (partial) automation that can enable a person in the driver's seat to take control as warranted. In contrast, Level 4 is a high-automation, fully autonomous level where the vehicle is able to drive without assistance in select conditions. And Level 5 is a fully autonomous mode in which the vehicle is able to drive without assistance in all situations. The architectures, components, systems and methods described herein can function in any of the semi or fully-autonomous modes, e.g., Levels 1-5, which are referred to herein as autonomous driving modes. Thus, reference to an autonomous driving mode includes both partial and full autonomy. FIG.2illustrates a block diagram200with various components and systems of an exemplary vehicle, such as passenger vehicle100, to operate in an autonomous driving mode. As shown, the block diagram200includes one or more computing devices202, such as computing devices containing one or more processors204, memory206and other components typically present in general purpose computing devices. The memory206stores information accessible by the one or more processors204, including instructions208and data210that may be executed or otherwise used by the processor(s)204. The computing system may control overall operation of the vehicle when operating in an autonomous driving mode. The memory206may be of any type capable of storing information accessible by the processor, including a computing device-readable medium. The memory is a non-transitory medium such as a hard-drive, memory card, optical disk, solid-state, etc. Systems may include different combinations of the foregoing, whereby different portions of the instructions and data are stored on different types of media. The instructions208may be any set of instructions to be executed directly (such as machine code) or indirectly (such as scripts) by the processor(s). For example, the instructions may be stored as computing device code on the computing device-readable medium. In that regard, the terms “instructions”, “modules” and “programs” may be used interchangeably herein. The instructions may be stored in object code format for direct processing by the processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. The data210may be retrieved, stored or modified by one or more processors204in accordance with the instructions208. In one example, some or all of the memory206may be an event data recorder or other secure data storage system configured to store vehicle diagnostics and/or detected sensor data, which may be on board the vehicle or remote, depending on the implementation. The processors204may be any conventional processors, such as commercially available CPUs. Alternatively, each processor may be a dedicated device such as an ASIC or other hardware-based processor.
AlthoughFIG.2functionally illustrates the processors, memory, and other elements of computing devices202as being within the same block, such devices may actually include multiple processors, computing devices, or memories that may or may not be stored within the same physical housing. Similarly, the memory206may be a hard drive or other storage media located in a housing different from that of the processor(s)204. Accordingly, references to a processor or computing device will be understood to include references to a collection of processors or computing devices or memories that may or may not operate in parallel. In one example, the computing devices202may form an autonomous driving computing system incorporated into vehicle100. The autonomous driving computing system may be capable of communicating with various components of the vehicle. For example, the computing devices202may be in communication with various systems of the vehicle, including a driving system including a deceleration system212(for controlling braking of the vehicle), acceleration system214(for controlling acceleration of the vehicle), steering system216(for controlling the orientation of the wheels and direction of the vehicle), signaling system218(for controlling turn signals), navigation system220(for navigating the vehicle to a location or around objects) and a positioning system222(for determining the position of the vehicle, e.g., including the vehicle's pose). The autonomous driving computing system may employ a planner/routing module223, in accordance with the navigation system220, the positioning system222and/or other components of the system, e.g., for determining a route from a starting point to a destination, planning a pullover maneuver, or for making modifications to various driving aspects in view of current or expected operating conditions. This module may be used by the computing system in order to generate short-term trajectories that allow the vehicle to follow routes. In this regard, the planner/routing module223may utilize stored detailed map information, real time traffic information (e.g., updated as received from a remote computing device), pullover spot information and/or other details when planning a route or a pullover maneuver. The trajectories may define the specific characteristics of acceleration, deceleration, speed, etc. to allow the vehicle to follow the route towards reaching a destination, pullover spot or other location. The trajectory may include a geometry component and a speed component. The geometry component may be determined based on various factors including the route from the routing system. The speed component may be determined using an iterative process based on a plurality of constraints. The constraints may be based on the predicted trajectories of other objects detected in the vehicle's environment (e.g., the vehicle must not come too close to these other objects) as well as characteristics of the vehicle and other limits, such as a maximum allowable deceleration limit. The planning system may attempt to determine a speed profile by starting with a fastest allowable speed which may then be reduced in order to satisfy all constraints of the set of constraints. For instance, if the planner portion of the planner/routing module is unable to find a solution, the maximum allowable deceleration limit (and/or other constraints) may be adjusted until a solution is found.
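By way of illustration only, the iterative speed solving and constraint relaxation described above might be sketched as follows; the uniform-deceleration backward pass, the per-waypoint speed caps and the fixed relaxation step are simplifying assumptions for illustration, not the module's actual logic.

```python
def solve_speed_profile(distances, v_current, caps, decel_limit,
                        relax_step=0.5, max_decel=6.0):
    """Toy version of the iterative speed solving described above.

    distances: waypoint positions along the trajectory (m); caps: per-waypoint
    speed upper bounds (m/s) derived from the set of constraints; v_current:
    the speed the vehicle already has at the first waypoint. Speeds start as
    fast as allowed, and a backward pass pulls them down so that every
    slowdown is reachable while braking at no more than decel_limit; if the
    vehicle is already too fast for that limit, the limit is relaxed and the
    solve is retried.
    """
    while decel_limit <= max_decel:
        profile = list(caps)
        for i in range(len(profile) - 2, -1, -1):  # backward pass
            ds = distances[i + 1] - distances[i]
            reachable = (profile[i + 1] ** 2 + 2.0 * decel_limit * ds) ** 0.5
            profile[i] = min(profile[i], reachable)
        if profile[0] >= v_current:  # current speed fits under the envelope
            profile[0] = v_current
            return profile, decel_limit
        decel_limit += relax_step  # no solution: relax the limit and retry
    return None, None
```

In the sketch, relaxing the limit mirrors the adjustment described above, and the solved profile supplies the speed component of the trajectory.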
The resulting trajectory may then be used to control the vehicle, for instance by controlling braking, acceleration and steering of the vehicle. The computing devices202are also operatively coupled to a perception system224(for detecting objects in the vehicle's environment), a power system226(for example, a battery and/or internal combustion engine) and a transmission system230in order to control the movement, speed, etc., of the vehicle in accordance with the instructions208of memory206in an autonomous driving mode which does not require or need continuous or periodic input from a passenger of the vehicle. Some or all of the wheels/tires228are coupled to the transmission system230, and the computing devices202may be able to receive information about tire pressure, balance and other factors that may impact driving in an autonomous mode. The computing devices202may control the direction and speed of the vehicle, e.g., via the planner module223, by controlling various components. By way of example, computing devices202may navigate the vehicle to a destination location completely autonomously using data from the map information and navigation system220. Computing devices202may use the positioning system222to determine the vehicle's location and the perception system224to detect and respond to objects when needed to reach the location safely. In order to do so, computing devices202may cause the vehicle to accelerate (e.g., by increasing fuel or other energy provided to the engine by acceleration system214), decelerate (e.g., by decreasing the fuel supplied to the engine, changing gears, and/or by applying brakes by deceleration system212), change direction (e.g., by turning the front or other wheels of vehicle100by steering system216), and signal such changes (e.g., by lighting turn signals of signaling system218). Thus, the acceleration system214and deceleration system212may be a part of a drivetrain or other type of transmission system230that includes various components between an engine of the vehicle and the wheels of the vehicle. Again, by controlling these systems, computing devices202may also control the transmission system230of the vehicle in order to maneuver the vehicle autonomously. Navigation system220may be used by computing devices202in order to determine and follow a route to a location. In this regard, the navigation system220and/or memory206may store the map information. As an example, these maps may identify the shape and elevation of roadways, lane markers, intersections, crosswalks, speed limits, traffic signal lights, buildings, signs, vegetation, or other such objects and information. The lane markers may include features such as solid or broken double or single lane lines, solid or broken lane lines, reflectors, etc. A given lane may be associated with left and/or right lane lines or other lane markers that define the boundary of the lane. Thus, most lanes may be bounded by a left edge of one lane line and a right edge of another lane line. The perception system224includes sensors232for detecting objects external to the vehicle. The detected objects may be other vehicles, obstacles in the roadway, traffic signals, signs, trees, etc. The sensors232may also detect certain aspects of weather conditions, such as snow, rain or water spray, or puddles, ice or other materials on the roadway. 
By way of example only, the perception system224may include one or more light detection and ranging (lidar) sensors, radar units, cameras (e.g., optical imaging devices, with or without a neutral-density (ND) filter), positioning sensors (e.g., gyroscopes, accelerometers and/or other inertial components), infrared sensors, acoustical sensors (e.g., microphones or sonar transducers), and/or any other detection devices that record data which may be processed by computing devices202. Such sensors of the perception system224may detect objects outside of the vehicle and their characteristics such as location, orientation, size, shape, type (for instance, vehicle, pedestrian, bicyclist, etc.), heading, speed of movement relative to the vehicle, etc. The perception system224may also include other sensors within the vehicle to detect objects and conditions within the vehicle, such as in the passenger compartment. For instance, such sensors may detect, e.g., one or more persons, pets, packages, etc., as well as conditions within and/or outside the vehicle such as temperature, humidity, etc. Still further, sensors232of the perception system224may measure the rate of rotation of the wheels228, an amount or a type of braking by the deceleration system212, lateral forces as the vehicle turns to the left or right, and other factors associated with the equipment of the vehicle itself. As discussed further below, the raw data obtained by the sensors can be processed by the perception system224and/or sent for further processing to the computing devices202periodically or continuously as the data is generated by the perception system224. Computing devices202may use the positioning system222to determine the vehicle's location and perception system224to detect and respond to objects when needed to reach the location safely, e.g., via adjustments made by planner module223, including adjustments in operation to deal with occlusions and other issues. In addition, the computing devices202may perform calibration of individual sensors, all sensors in a particular sensor assembly, or between sensors in different sensor assemblies or other physical housings. As illustrated inFIGS.1A-B, certain sensors of the perception system224may be incorporated into one or more sensor assemblies or housings. In one example, these may be integrated into the side-view mirrors on the vehicle. In another example, other sensors may be part of the roof-top housing102, or other sensor housings or units106a,b,108a,b,112and/or116. The computing devices202may communicate with the sensor assemblies located on or otherwise distributed along the vehicle. Each assembly may have one or more types of sensors such as those described above. Returning toFIG.2, computing devices202may include all of the components normally used in connection with a computing device such as the processor and memory described above as well as a user interface subsystem234. The user interface subsystem234may include one or more user inputs236(e.g., a mouse, keyboard, touch screen and/or microphone) and one or more display devices238(e.g., a monitor having a screen or any other electrical device that is operable to display information). In this regard, an internal electronic display may be located within a cabin of the vehicle (not shown) and may be used by computing devices202to provide information to passengers within the vehicle. Other output devices, such as speaker(s)240may also be located within the passenger vehicle.
The passenger vehicle also includes a communication system242. For instance, the communication system242may also include one or more wireless configurations to facilitate communication with other computing devices, such as passenger computing devices within the vehicle, computing devices external to the vehicle such as in another nearby vehicle on the roadway, and/or a remote server system. The network connections may include short range communication protocols such as Bluetooth™, Bluetooth™ low energy (LE), cellular connections, as well as various configurations and protocols including the Internet, World Wide Web, intranets, virtual private networks, wide area networks, local networks, private networks using communication protocols proprietary to one or more companies, Ethernet, WiFi and HTTP, and various combinations of the foregoing. EXAMPLE IMPLEMENTATIONS In view of the structures and configurations described above and illustrated in the figures, various aspects will now be described in accordance with aspects of the technology. As noted above, map information may be used by the planner/routing module.FIG.3Aillustrates an example300of map information for a section of roadway including intersection302.FIG.3Adepicts a portion of the map information that includes information identifying the shape, location, and other characteristics of lane markers or lane lines304,306,308, median areas310,312, traffic signals314,316, as well as stop lines318,320,322,324. The lane lines may also define various lanes326,328,330,332,334,336,338,340,342,344,346,348or these lanes may also be explicitly identified in the map information. In addition to these features, the map information may also include information that identifies the direction of traffic and speed limits for each lane as well as information that allows the computing device(s) to determine whether the vehicle has the right of way to complete a particular maneuver (e.g., to complete a turn or cross a lane of traffic or intersection), as well as other features such as curbs, buildings, waterways, vegetation, signs, etc. The map information may identify pullover locations, which may include areas where a vehicle is able to stop and wait to pick up or drop off passengers. These areas may correspond to parking spaces, waiting areas, shoulders, parking lots, etc. For instance,FIG.3Billustrates an example350that depicts parking spaces352a-c,354a-b,356a-band358. In this example, the spaces352-358may all be parallel with the adjacent portion of the roadway. AndFIG.3Cillustrates another example360depicting parking spaces362a-d,364a-c,366a-cand368. In this example, the spaces362-368may all be angled relative to the adjacent portion of the roadway. In other examples, there may be a mix of parallel and angled spaces. For simplicity, these pullover locations may correspond to parking spaces such as the aforementioned parking spaces, but may correspond to any type of area in which a vehicle is able to stop and wait to pick up or drop off passengers, such as an idling zone. Although the map information is depicted herein as an image-based map, the map information need not be entirely image based (for example, raster). For instance, the map information may include one or more roadgraphs, graph networks or road networks of information such as roads, lanes, intersections, and the connections between these features which may be represented by road segments.
Each feature in the map may also be stored as graph data and may be associated with information such as a geographic location and whether or not it is linked to other related features; for example, a stop sign may be linked to a road and an intersection, etc. In some examples, the associated data may include grid-based indices of a road network to allow for efficient lookup of certain road network features. In this regard, in addition to the aforementioned physical feature information, the map information may include a plurality of graph nodes and edges representing road or lane segments that together make up the road network of the map information. Here, each edge is defined by a starting graph node having a specific geographic location (e.g., latitude, longitude, altitude, etc.), an ending graph node having a specific geographic location (e.g., latitude, longitude, altitude, etc.), and a direction. This direction may refer to a direction the vehicle100must be moving in order to follow the edge; in other words, a direction of traffic flow. The graph nodes may be located at fixed or variable distances. For instance, the spacing of the graph nodes may range from a few centimeters to a few meters and may correspond to the speed limit of a road on which the graph node is located. In this regard, greater speeds may correspond to greater distances between graph nodes. The planner/routing module may use the roadgraph(s) to determine a route from a current location (e.g., a location of a current node) to a destination. Routes may be generated using a cost-based analysis which attempts to select a route to the destination with the lowest cost. Costs may be assessed in any number of ways such as time to the destination, distance traveled (in which each edge may be associated with a cost to traverse that edge), type(s) of maneuvers required, convenience to passengers or the vehicle, discomfort to the passengers or other road users, etc. Each route may include a list of a plurality of nodes and edges which the vehicle can use to reach the destination. Routes may be recomputed periodically as the vehicle travels to the destination. The map information used for routing may be the same or a different map as that used for planning trajectories. For example, the map information used for planning routes may require not only information on individual lanes, but also the nature of lane boundaries (e.g., solid white, dash white, solid yellow, etc.) to determine where lane changes are allowed. However, unlike the map used for planning trajectories, the map information used for routing need not include other details such as the locations of crosswalks, traffic lights, stop or yield signs, etc., though some of this information may be useful for routing purposes. For example, when comparing a route with a large number of intersections with traffic controls (such as stop signs or traffic signal lights) to one with no or very few traffic controls, the latter route may have a lower cost (e.g., because it is faster) and therefore be preferable. Sensors, such as long and short range lidars, radar sensors, cameras or other imaging devices, etc., are used in vehicles that are configured to operate in an autonomous driving mode to detect objects and conditions in the environment around the vehicle. Each sensor may have a particular field of view (FOV) including a maximum range, and for some sensors a horizontal resolution and a vertical resolution.
For instance, a panoramic lidar sensor may have a maximum scan range on the order of 70-100 meters, a vertical resolution of between 0.1°-0.3°, and a horizontal resolution of between 0.1°-0.4°, or more or less. A directional lidar sensor, for example to provide information about the nearby environment along a front, rear or side area of the vehicle, may have a maximum scan range on the order of 100-300 meters, a vertical resolution of between 0.05°-0.2°, and a horizontal resolution of between 0.01°-0.03°, or more or less. FIG.4Aprovides one example400of sensor fields of view relating to the sensors illustrated inFIG.1B. Here, should the roof-top housing102include a lidar sensor as well as various cameras, radar units, infrared and/or acoustical sensors, each of those sensors may have a different field of view. Thus, as shown, the lidar sensor may provide a 360° FOV402, while cameras arranged within the housing102may have individual FOVs404. A sensor within housing104at the front end of the vehicle has a forward facing FOV406, while a sensor within housing112at the rear end has a rearward facing FOV408. The housings106a,106bon the driver's and passenger's sides of the vehicle may each incorporate lidar, radar, camera and/or other sensors. For instance, lidars within housings106aand106bmay have a respective FOV410aor410b, while radar units or other sensors within housings106aand106bmay have a respective FOV411aor411b. Similarly, sensors within housings108a,108blocated towards the rear roof portion of the vehicle each have a respective FOV. For instance, lidars within housings108aand108bmay have a respective FOV412aor412b, while radar units or other sensors within housings108aand108bmay have a respective FOV413aor413b. And the series of sensor units116arranged along a forward-facing direction of the vehicle may have respective FOVs414,416and418. Each of these fields of view is merely exemplary and not to scale in terms of coverage range. With regard to the maximum sensor range of, e.g., a laser, not all laser scans (shots) are the same. For instance, some laser shots are designed to see farther away while some are designed to see closer. How far a shot is designed to see is called the maximum listening range.FIG.4Billustrates a scenario450, in which a set of one or more laser shots452represented by dashed lines has a first listening range, another set of shots454represented by dash-dot lines has a second listening range, and a third set of shots456represented by solid lines has a third listening range. In this example, set452has a close listening range (e.g., 2-10 meters) because these shots are arranged to point nearby toward the ground. The set454may have an intermediate listening range (e.g., 10-30 meters), for instance to detect nearby vehicles and other road users. And the set456may have an extended listening range (e.g., 30-200 meters) for objects that are far away. In this approach, the system can save resources (e.g., time). The type of sensor, its placement along the vehicle, its FOV, horizontal and/or vertical resolution, etc., may all affect the information that the sensor obtains. Sometimes, an object or a location in the environment around the vehicle may not be detectable by the on-board sensors. This may be due to occlusion by another object, a sensor blind spot (e.g., due to sensor placement, dirt covering the sensor, etc.), environmental conditions or other situations.
It is important for the on-board computer system to understand whether there may be an occlusion, because knowing this can impact driving, route planning or pullover decisions, as well as off-line training and analysis. For example,FIG.5illustrates scenario500, in which vehicle502uses a sensor, e.g., lidar or radar, to provide a 360° FOV, as shown by the circular dashed line504. Here, a motorcycle506approaching vehicle502in the opposite direction may be obscured by a sedan or other passenger vehicle508, while a truck510traveling in the same direction may be obscured by another truck512in between it and the vehicle502, as shown by shaded regions514and516, respectively. FIG.6is a top-down view600illustrating another occlusion example. Here, a vehicle602operating in an autonomous driving mode may be at a T-shaped intersection looking for a parking space. Side sensors604aand604bmay be arranged to have corresponding FOVs shown by respective dashed regions606aand606b. In this example, truck608is approaching the intersection from the left of the vehicle602. FOV606aencompasses a front portion of the truck608. However, as illustrated by shaded region610, the truck608obscures (occludes) parking space612, which is across the street from the vehicle602. On the other side of the vehicle602, FOV606bis occluded by a shrub614, which prevents recognition of another empty parking spot616. In such situations, the lack of information about an object in the surrounding environment (e.g., the location of another road user or the presence of an available parking spot) may lead to one driving decision, whereas if the vehicle were aware of a possible occlusion it might lead to a different driving decision. For instance,FIG.7illustrates an example700in which there is an available parking spot, but it is occluded. In this example, autonomous vehicle702approaches a set of stores704, for instance to pick up a rider, a package or a food delivery. As shown, a number of vehicles706(e.g., vehicles706a, . . .706g) are parked in front of the stores704, while another area708is not a permissible parking area as shown by no-parking sign710. In this example, there is one currently vacant parking spot712, but it is occluded from view by the sensors of vehicle702. Without knowing that there is a possibly open spot, the vehicle702may prematurely slow down some distance prior to the first spot, which is occupied by vehicle706a. Alternatively, the vehicle702may suddenly decelerate once its sensors are able to detect that the spot712is open. Prematurely slowing down may inconvenience other road users following behind the vehicle702. And heavy braking too close to the spot712may cause discomfort to riders in vehicle702or other road users that may need to quickly brake. However, when the vehicle702is able to determine that parking spot712is occluded and likely available, and that it is a better option than others (e.g., such as driving around the block to look for another spot or to pull over and idle while waiting for any of vehicles706a-706gto vacate a spot), then the vehicle702may begin to appropriately decelerate some distance d from the spot712. If it turns out that the spot712is actually vacant, then the vehicle702will be able to easily pull over into the spot.
And if it turns out that the spot712is not vacant or otherwise not suitable (e.g., due to a shopping cart, scooter, debris or other object in the spot), then the vehicle702can continue driving while looking for another pullover location without undue passenger discomfort or other road user inconvenience. EXAMPLE SCENARIOS In accordance with aspects of the technology, the vehicle's computing system is able to generate selective ranges for slow driving regions in order to perform pullovers when there may be viable parking spots that are occluded. This includes approaches for selectively slowing down the vehicle to maintain pullover quality (in comparison to an encompassing slow region), communicate intent to pull over to other road users, and perform the pullover maneuver at reduced speeds, all while reducing the time from starting to plan the pullover until the vehicle reaches the final pullover position. For instance, the system can model slow region locations heuristically around the current pullover location, and potential alternative locations based on FOV and an optimistically predicted pullover cost. Such slow regions can maintain pullover quality, as potential pullover spots can remain achievable due to the shortened braking distance. The intent to pull over can be communicated to other agents by going slow for the chosen pullover spot and potential secondary locations. The slow region around the chosen pullover spot and any potential secondary pullover spots can ensure that a maneuver, even after a late change in pullover location, should be performed at or below the speed of the slow region. Generally, the system seeks to identify the potential good pullover locations along the trajectory so that the computing system (e.g., planner/routing module223) causes the vehicle to slow down only around those locations. More particularly, the output of the discussed approach can include pairs of waypoint indexes that represent segments of the trajectory that contain the potential good pullover locations. In a typical system, when performing the pullover location selection (especially for parallel parking along the street), the system may consider many factors to determine the “goodness” of a pullover location (in particular, of a pullover waypoint). Such factors include the angle cost (what the car's final angle would be if the car pulled over in the location), the double-parking cost (whether the car would have to double park in the location), the hydrant cost (whether the car would be blocking a fire hydrant), the spatial-information related cost (e.g., how big the space is for pullover), etc. All the different components are then combined into a final aggregated cost that represents the holistic quality of a waypoint for pullover. Then, a final pullover location may be selected by searching for the location with the lowest cost. One important factor among these is the lateral gap available between the rightmost lane (in a US-type road configuration, or the leftmost lane in a UK-type road configuration) and the road curb (supposing the road curb is empty), which can be especially important when the vehicle considers parallel parking along the road. The available lateral gap is formulated as various types of “offset”, as explained herein. However, the exact size of the lateral gap is sometimes not visible due to occlusions, which can entail some reasoning logic to work with the uncertainty in some locations.
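By way of illustration only, such a cost aggregation might be sketched as below; the factor names follow the description above, but the weight values and the dictionary-based interface are invented for illustration rather than tuned parameters of the actual system.

```python
def pullover_cost(factors, weights=None):
    """Combine per-factor costs for one waypoint into a final aggregated cost.

    factors: dict of per-factor costs for the waypoint; weights: relative
    importance of each factor (made-up values, for illustration only).
    """
    weights = weights or {
        "angle": 1.0,           # final parked angle if pulling over here
        "double_parking": 4.0,  # whether the car would have to double park
        "hydrant": 10.0,        # whether a fire hydrant would be blocked
        "spatial": 2.0,         # how tight the space is for the pullover
    }
    return sum(weights[k] * factors.get(k, 0.0) for k in weights)


# Select the waypoint with the lowest aggregated cost.
candidates = [
    {"angle": 0.2, "double_parking": 0.0, "hydrant": 0.0, "spatial": 0.3},
    {"angle": 0.1, "double_parking": 1.0, "hydrant": 0.0, "spatial": 0.1},
]
best = min(range(len(candidates)), key=lambda i: pullover_cost(candidates[i]))
```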
On the other hand, as discussed above, the vehicle may encounter FOV limitations at some pullover locations due to occlusions. One example of this is shown in view800ofFIG.8. Here, bracketed region802indicates a side of a roadway, and bounding boxes804encompass vehicles806that are parked along the curb. Line808indicates the curb or other road edge, and line810indicates a baseline position offset some distance from the road edge, which encompasses the parked vehicles806. Line808may be known from stored map data and/or information from the vehicle's perception system. Dashed lines812are shown to indicate sensor signals (e.g., lidar, radar or image sensor signals) received by a sensor814of the vehicle (not shown). The solid (horizontal) arrows816illustrate the clearance offset at a particular location. The clearance offset is the offset around everything the vehicle's sensors can detect, where anything that cannot be detected is otherwise assumed to be unoccupied (e.g., no vehicle at that position). This represents an optimistic assumption. The dotted line818is the FOV restricted offset, which is the offset around everything the vehicle's sensors can detect, otherwise limited by how far there is a FOV (to the right, in this situation). Here, anything to the right of the dotted line818(between dotted line818and line808) can be considered to be occluded. The portions of the arrows816to the right of the dotted line818indicate regions with possible optimistic options for pullover locations that are currently occluded. During the cost calculation, the system uses the offset concept to model a location's closeness to the road curb (e.g., line808). The spatial-information related cost can be computed from the offset for a final cost calculation, as discussed further below. FIG.9Aillustrates a process900showing an approach for deciding whether to select an occluded pullover location instead of another pullover location, which may be performed by the planner/routing module223or another part of the computing system of the vehicle. At block902, the system obtains (e.g., calculates) clearance offsets to the side of the road/curb. In one scenario, when identifying blind-spot locations as possible pullover locations, the process may conservatively restrict the computed offsets to those locations because the true closeness to the road curb is unknown. Thus, those locations have the true potential to emerge as good (or even better) pullover locations as the vehicle approaches, which may result in a reduction or elimination of an occlusion of such locations. In order to systematically identify those potentially viable locations, at block904the system first removes the FOV restrictions from the clearance offsets, based on an optimistic assumption that those out-of-sight locations can be very close to road curbs. This results in a final set of offsets. Then, from the new offset(s), the system calculates new final costs to guide the selection of the potential good pullover locations. This includes calculating spatial information at block906, and using the spatial information to calculate costs across waypoints at block908. Once the costs across the waypoints have been calculated, at block910the system calculates costs at a selected pullover waypoint. In a parallel process, at block912the spatial information is calculated from the obtained clearance offsets, but without applying the FOV restriction. From this, at block914the system calculates costs across the waypoints, providing a baseline that does not take the FOV restriction into account.
Finally, at block916, a comparison is performed between the baseline from block914and the costs calculated at a selected pullover point from block910. Here, the baseline final cost at the selected pullover waypoint is used as a threshold for selection. In particular, those waypoints whose final costs from block910are lower than the threshold will be selected as the potential good pullover locations. While the cost may be marginally lower (e.g., on the order of 1-5% lower), in other instances the cost may be significantly lower (e.g., on the order of 10-30% lower). In one scenario, an occluded waypoint having a final cost marginally lower than the threshold may not be selected, while another occluded waypoint having a significantly lower final cost than the threshold would be selected (e.g., at least 7-10% lower). Alternatively, the system could tighten the baseline criterion by lowering the threshold by some fixed value. In view of this, consider a parallel parking scenario when picking up and dropping off riders. Here, the offsets determine the expected lateral gap for pullover on the side of the road. From the offsets and the system's driving capability, the system calculates how the vehicle would end up being positioned if the system decided to park the vehicle there (e.g., the final parking angle and how far the vehicle would be from the nearest lane if parking there). That information is encapsulated as the “spatial information” in block906and block912. From the spatial information, the system calculates the costs of the pullover in different aspects, for example the distance from the road edge, the parked angle, the distance from the requested pullover location, etc. The costs of such different aspects will be further aggregated into a final cost to express how good the location is. This calculation corresponds to what is performed in each of block908and block914. In one scenario, at block902three subtypes of offsets may be calculated: a clearance offset, a FOV limited offset, and a predicted offset, where the offset finally used for the cost calculations (at blocks908and914) is the combination of all three subtypes of offsets. As noted above, the clearance offset is an offset around everything the vehicle can detect via its perception system and otherwise is assumed to be unoccupied. The FOV limited offset is the offset around everything that can be detected, otherwise extending only as far as the vehicle has a FOV toward the (right) side of the roadway. The predicted offset is an offset around everything that can be detected by the perception system. The concept of the predicted offset is used by the system to estimate what the offset would be at the occluded waypoints. The system knows whether a particular waypoint is towards a driveway or not from the map information. Thus, if the waypoint is towards a driveway, the system can predict there are no parked cars and hence can extend the offset to the road curb. Otherwise, the system would use the detected offset that is within the FOV. Once the potential pullover location(s) has been selected, the system is able to compute one or more “slow” regions in which the vehicle will reduce speed, so that the vehicle will be able to pull into the pullover location without exceeding a pullover deceleration limit, a discomfort threshold or other criteria. The output of this computation may comprise pairs of waypoint indices that represent segments of the vehicle trajectory that are slow regions.
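By way of illustration only, one reading of the block916comparison is sketched below; the two cost arrays, the tightening parameter and the function shape are assumptions made for illustration, not the disclosed implementation.

```python
def potential_good_locations(optimistic_costs, baseline_costs, selected_idx,
                             tighten=0.0):
    """Select potentially good (possibly occluded) pullover waypoints.

    optimistic_costs[i]: final cost at waypoint i with the FOV restriction
    removed (occluded areas optimistically assumed to be near the curb).
    baseline_costs[i]: final cost without that optimistic treatment. The
    baseline cost at the currently selected pullover waypoint serves as the
    threshold; `tighten` optionally lowers it by a fixed value, as noted
    above.
    """
    threshold = baseline_costs[selected_idx] - tighten
    return [i for i, cost in enumerate(optimistic_costs) if cost < threshold]
```

The indices returned by such a routine could then be paired into trajectory segments, yielding the slow regions discussed next.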
This information can be applied to a speed solving pipeline of the planner/routing module. The system can use this information to control the steering, deceleration and/or acceleration systems of the vehicle accordingly. The computing system may constantly continue looking for the “best” pullover location using an updated current location, updated vehicle dynamics, updated sensor data, etc. This may continue, for example, until some point after the vehicle reaches a pullover start location of a selected pullover location and begins to laterally shift or otherwise turn towards the selected pullover location. Thus, the system may perform an iterative or subsequent evaluation by repeating the process ofFIG.9Aas the vehicle approaches potential pullover locations, as the vehicle should have more sensor information from the perception system. This may be done regularly (such as every 0.X seconds, e.g., every 0.2, 0.4, 0.6, 0.8 or 1.0 seconds, or more or less). Thus, a spot that was previously identified as a potential pullover location may be determined to be better than other locations. Or, a spot that was occluded but considered likely to be a very good pullover location may turn out to not be a viable option, for instance because more recent sensor information indicates that a shopping cart, scooter, debris or other object is in the spot. The process ofFIG.9Acan also be performed for all of the spots that might be good, rather than for just a single spot. The process could be performed concurrently for each spot. The system may maintain a set of potential pullover locations in memory, continually updating the set as the evaluation process described herein is repeated. FIG.9Billustrates a scenario950, in which a potential pullover location has been identified by the process ofFIG.9A. In this scenario, vehicle952is driving along a roadway and identifies spot954as possibly being available. Since all of the other visible spots are occupied, here spot954has a significantly lower cost than any other options on this portion of the roadway. Assume the speed limit on this portion of the roadway is 25 mph. Upon the selection of spot954, the system determines a slow region beginning at point956and ending at point958, which will allow the vehicle952to pull into the spot954in accordance with any deceleration or lateral turn limit, discomfort threshold or other criteria as it follows pullover trajectory960. By way of example, in the slow region the vehicle952may slow down from 25 mph to 5 mph. This may be accompanied by actuating a turn signal to indicate to other road users the intent to pull over.
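By way of illustration only, and using the scenario above (slowing from roughly 25 mph to 5 mph ahead of the spot), a back-of-the-envelope slow-region computation might look like the following; the comfort deceleration and margin values are illustrative assumptions.

```python
def slow_region(spot_pos, v_road, v_slow, comfort_decel=1.5, margin=5.0):
    """Estimate a slow region for a chosen pullover spot.

    The region must begin early enough that braking at comfort_decel (m/s^2)
    brings the vehicle from the road speed v_road down to v_slow (both m/s)
    before the spot, plus a small margin so the lateral shift happens at low
    speed. Returns (start, end) positions along the trajectory in meters.
    """
    brake_dist = (v_road ** 2 - v_slow ** 2) / (2.0 * comfort_decel)
    return spot_pos - brake_dist - margin, spot_pos


# 25 mph is about 11.2 m/s and 5 mph about 2.2 m/s, as in the scenario above:
start, end = slow_region(spot_pos=120.0, v_road=11.2, v_slow=2.2)
```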
Here, the system computes the slow regions by the aforementioned process (corresponding to block916). For instance, this module is configured to compute a pullover signal used by the vehicle's systems, which may abstract all of the redundant information into only geometry-related signals, such as desired lateral gaps from other road users and speed related signals such as slow regions. The pullover signal is provided to block1012, at which a module will further consolidate all speed-related signals, such as computing the overall slow region. The slow region information from generated at block1012is then passed to block1014, where the slow regions will be appropriately set up. Here, for instance, the system will put those speed signals into effect, such as putting up slow regions or stop fences that can be used by the planner/routing module to control the trajectory and route of the vehicle in an autonomous driving mode. Offboard systems may use the pullover-related information discussed above to perform autonomous simulations based on real-world or man-made scenarios, or metric analysis to evaluate pullover approaches or general speed planning that might be impacted by parking location occlusion. This information may be used in model training. It can also be shared across a fleet of vehicles to enhance the perception and route or trajectory planning for those vehicles. One such arrangement is shown inFIGS.11A and11B. In particular,FIGS.11A and11Bare pictorial and functional diagrams, respectively, of an example system1100that includes a plurality of computing devices1102,1104,1106,1108and a storage system1110connected via a network1116. System900also includes smaller vehicles such as cars1112and larger vehicles such as deliver trucks1114. Vehicles1112and/or vehicles1114may be part of a fleet of vehicles. Although only a few vehicles and computing devices are depicted for simplicity, a typical system may include significantly more. As shown inFIG.11B, each of computing devices1102,1104,1106and1108may include one or more processors, memory, data and instructions. Such processors, memories, data and instructions may be configured similarly to the ones described above with regard toFIG.2. The various computing devices and vehicles may communicate via one or more networks, such as network1116. The network1116, and intervening nodes, may include various configurations and protocols including short range communication protocols such as Bluetooth™, Bluetooth LE™, the Internet, World Wide Web, intranets, virtual private networks, wide area networks, local networks, private networks using communication protocols proprietary to one or more companies, Ethernet, WiFi and HTTP, and various combinations of the foregoing. Such communication may be facilitated by any device capable of transmitting data to and from other computing devices, such as modems and wireless interfaces. In one example, computing device1102may include one or more server computing devices having a plurality of computing devices, e.g., a load balanced server farm, that exchange information with different nodes of a network for the purpose of receiving, processing and transmitting the data to and from other computing devices. For instance, computing device1102may include one or more server computing devices that are capable of communicating with the computing devices of vehicles1112and/or1114, as well as computing devices1104,1106and1108via the network1116. 
For example, vehicles1112and/or1114may be a part of a fleet of vehicles that can be dispatched by a server computing device to various locations. In this regard, the computing device1102may function as a dispatching server computing system which can be used to dispatch vehicles to different locations in order to pick up and drop off passengers and/or to pick up and deliver cargo. In addition, server computing device1102may use network1116to transmit and present information to a user of one of the other computing devices or a passenger of a vehicle. In this regard, computing devices1104,1106and1108may be considered client computing devices. As shown inFIG.11A, each client computing device1104,1106and1108may be a personal computing device intended for use by a respective user1118, and have all of the components normally used in connection with a personal computing device including one or more processors (e.g., a central processing unit (CPU)), memory (e.g., RAM and internal hard drives) storing data and instructions, a display (e.g., a monitor having a screen, a touch-screen, a projector, a television, or other device such as a smart watch display that is operable to display information), and user input devices (e.g., a mouse, keyboard, touchscreen or microphone). The client computing devices may also include a camera for recording video streams, speakers, a network interface device, and all of the components used for connecting these elements to one another. Although the client computing devices may each comprise a full-sized personal computing device, they may alternatively comprise mobile computing devices capable of wirelessly exchanging data with a server over a network such as the Internet. By way of example only, client computing devices1106and1108may be mobile phones or devices such as a wireless-enabled PDA, a tablet PC, a wearable computing device (e.g., a smartwatch), or a netbook that is capable of obtaining information via the Internet or other networks. In some examples, client computing device1104may be a remote assistance workstation used by an administrator or operator to communicate with passengers of dispatched vehicles. Although only a single remote assistance workstation1104is shown inFIGS.11A-11B, any number of such workstations may be included in a given system. Moreover, although the workstation is depicted as a desktop-type computer, workstations may include various types of personal computing devices such as laptops, netbooks, tablet computers, etc. Storage system1110can be of any type of computerized storage capable of storing information accessible by the server computing devices1102, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, flash drive and/or tape drive. In addition, storage system1110may include a distributed storage system where data is stored on a plurality of different storage devices which may be physically located at the same or different geographic locations. Storage system1110may be connected to the computing devices via the network1116as shown inFIGS.11A-B, and/or may be directly connected to or incorporated into any of the computing devices. In a situation where there are passengers, the vehicle or remote assistance may communicate directly or indirectly with the passengers' client computing device. Here, for example, information may be provided to the passengers regarding current driving operations, changes to the route in response to the situation, modification to pullover locations, etc.
FIG.12illustrates an example method of operation1200of a vehicle. The method includes, at block1202, receiving sensor data obtained from an external environment of a vehicle configured to operate in an autonomous driving mode, the external environment including a roadway. At block1204, the method includes obtaining, based on the received sensor data, a clearance offset to a side of the roadway. At block1206, the method includes applying a field of view restriction to the clearance offset to obtain a final offset. The field of view restriction corresponds to one or more occluded areas along the side of the roadway. At block1208, the method includes calculating a cost for each pullover location of a set of possible pullover locations based on the final offset. At block1210, the method includes comparing the cost for each pullover location against a baseline set of costs that does not take into account the field of view restriction. And at block1212, the method includes selecting, based on the comparison, a pullover location along the roadway. Finally, as noted above, the technology is applicable for various types of vehicles, including passenger cars, motorcycles, vans, buses, RVs, delivery trucks, cargo vehicles or the like. Unless otherwise stated, the foregoing alternative examples are not mutually exclusive, but may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. In addition, the provision of the examples described herein, as well as clauses phrased as “such as,” “including” and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, the examples are intended to illustrate only one of many possible embodiments. Further, the same reference numbers in different drawings can identify the same or similar elements. The processes or other operations may be performed in a different order or simultaneously, unless expressly indicated otherwise herein. | 61,356
11859990 | DESCRIPTION Examples described herein are directed to systems and methods for routing autonomous vehicles using temporal data. Temporal data is data that describes a time-dependent condition of a roadway. For example, temporal data can describe traffic conditions on the roadway, weather conditions on the roadway, construction conditions on the roadway, or other related time-dependent conditions. In an autonomous or semi-autonomous vehicle (collectively referred to as an autonomous vehicle (AV)), a vehicle autonomy system, sometimes referred to as an AV stack, controls one or more of braking, steering, or throttle of the vehicle. In a fully-autonomous vehicle, the vehicle autonomy system assumes full control of the vehicle. In a semi-autonomous vehicle, the vehicle autonomy system assumes a portion of the vehicle control, with a human user (e.g., a vehicle operator) still providing some control input. Some autonomous vehicles can also operate in a manual mode, in which a human user provides all control inputs to the vehicle. Autonomous vehicles are programmed to execute trips. An autonomous vehicle executes a trip by traversing from a trip start point to a trip end point. For some trips, the vehicle picks up a passenger or cargo at the trip start point and drops off the passenger or cargo at the trip end point. Also, some trips include waypoints. Waypoints are positions where the autonomous vehicle passes and/or stops between the trip start point and the trip end point. In some examples, waypoints are implemented to execute a transportation service for more than one passenger or more than one cargo. For example, passengers and/or cargo may be picked up and/or dropped off at some or all of the waypoints. A vehicle stops at one or more waypoints to pick up or drop off passengers and/or cargo or can pass through a waypoint without stopping. Examples of cargo can include food, material goods, and the like. A routing engine can generate routes for autonomous vehicle trips. A route is a path that an autonomous vehicle takes, or plans to take, over one or more roadways to execute a trip. A routing engine can be on-board the autonomous vehicle or remote from the autonomous vehicle. In some examples, the functionality of the routing engine is split between an on-board component and a remote component. The routing engine can generate a route using a routing graph. A routing graph is a representation of roadways in a geographic area. The routing graph represents roadways as a set of route components, which are sometimes also referred to as lane segments. The routing graph indicates the connectivity of the components along with various costs. For example, the routing graph can indicate costs to traverse a particular route component and/or costs to transition from one route component to another. The routing engine applies a path planning algorithm to the routing graph to generate a route. The route includes a set of route components extending from the trip start point to the trip end point. In some examples, the selected route includes the set of route components between the trip start and end points that has the lowest total cost. Any suitable path planning algorithm can be used such as, for example, A*, D*, Focused D*, D* Lite, GD*, or Dijkstra's algorithm. A route generated by the routing engine is provided to an autonomous vehicle. The vehicle autonomy system of the vehicle controls the vehicle along the route. 
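The lowest-cost search described above can be made concrete with a short sketch. The following Python is purely illustrative and not taken from the disclosure; the graph, component names, and costs are invented, and each edge weight is assumed to combine the cost of traversing a route component with the cost of transitioning to the next one.

    import heapq

    # Hypothetical routing graph: each route component maps to a list of
    # (next_component, cost) pairs, where cost combines the cost to traverse
    # the component and the cost to transition to the next component.
    ROUTING_GRAPH = {
        "A": [("B", 2.0), ("C", 5.0)],
        "B": [("C", 1.0), ("D", 4.0)],
        "C": [("D", 1.0)],
        "D": [],
    }

    def lowest_cost_route(graph, start, goal):
        """Return (total_cost, [components]) for the lowest-cost route."""
        queue = [(0.0, start, [start])]   # (cost so far, component, path)
        best = {start: 0.0}
        while queue:
            cost, component, path = heapq.heappop(queue)
            if component == goal:
                return cost, path
            for nxt, edge_cost in graph.get(component, []):
                new_cost = cost + edge_cost
                if new_cost < best.get(nxt, float("inf")):
                    best[nxt] = new_cost
                    heapq.heappush(queue, (new_cost, nxt, path + [nxt]))
        return float("inf"), []           # goal unreachable

    print(lowest_cost_route(ROUTING_GRAPH, "A", "D"))  # (4.0, ['A', 'B', 'C', 'D'])

Dijkstra's algorithm is used here because all of the assumed costs are non-negative; any of the other algorithms named above could be substituted where heuristics or incremental replanning are wanted.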
When routing an autonomous vehicle, it is desirable to consider temporal data to increase the predictability of vehicle routing and to avoid routing the vehicle into unfavorable traffic, weather, or other temporary conditions. For example, the lowest-cost route between a first point and a second point may ordinarily include traversing roadways that are part of an interstate. If an autonomous vehicle is to complete a route at a time when traffic conditions lower the speed of travel on the interstate, routes including the interstate may no longer be fastest. Incorporating temporal data into autonomous vehicle routing, however, can present technical challenges that may not be present in other contexts. For example, common sources of temporal data may not map directly to route components of an autonomous vehicle routing graph. Consider Global Positioning System (GPS) trace data that tracks vehicles by location (e.g., latitude and longitude). A traffic, weather, or other temporal condition of a roadway can be indicated by the vehicle speed, which, in some examples, is derived from multiple positions for the same vehicle. Translating GPS traces to a routing graph, however, can be nontrivial. For example, as described further herein, different lanes of a roadway can have different conditions, but a GPS trace may not be accurate enough to specify the lane or lanes to which it refers. Also, some sources of temporal data are referenced to an alternative routing graph or other map having components that do not map to the route components of the routing graph used to route an autonomous vehicle. For example, some temporal data is collected from vehicles (e.g., non-autonomous vehicles) that are routed with a simplified map that does not distinguish between lanes of travel. Another example difficulty for incorporating temporal data into autonomous vehicle routing is related to costing. Routing autonomous vehicles using a routing graph, as described herein, can include assigning a cost to different route components and/or transitions between route components. One example way that a routing engine can account for temporal data is to modify one or more costs in the routing graph. For example, the cost of traversing route components experiencing high traffic density can be increased. These and other challenges may be addressed by the systems and methods described herein for routing autonomous vehicles. Various examples described herein utilize temporal data that indicates times, locations, and roadway conditions. Temporal data can include data items describing a roadway condition and a location. In some examples, temporal data items also describe a time when the roadway condition was encountered. One example temporal data item includes a GPS trace from a vehicle traversing a roadway. The GPS trace can describe the location of the vehicle, a speed of the vehicle (indicating the roadway condition) and, optionally, a time when the GPS trace was collected. Another example temporal data item can include a map segment, an indication of a roadway condition at the map segment and, optionally, an associated time. Temporal data items are correlated to route components of an autonomous vehicle routing graph and used to generate a constrained routing graph, which is then used to generate routes. The constrained routing graph includes changes to a routing graph based on the temporal data. 
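As a minimal sketch of the correlation step (all names and coordinates here are hypothetical; a production system would use full lane geometry and a proper map-matching algorithm, since, as noted above, a GPS trace may be too coarse to distinguish adjacent lanes):

    import math

    # Hypothetical route components, each summarized by a centerline point
    # (latitude, longitude). The naive planar distance below is acceptable
    # only for this toy example.
    ROUTE_COMPONENTS = {
        "lane_seg_1": (37.7749, -122.4194),
        "lane_seg_2": (37.7750, -122.4195),
        "lane_seg_3": (37.7800, -122.4100),
    }

    def correlate_trace(trace_lat, trace_lon, trace_speed_mps):
        """Attach a GPS trace (location + speed) to the nearest component."""
        def dist(point):
            return math.hypot(point[0] - trace_lat, point[1] - trace_lon)
        nearest = min(ROUTE_COMPONENTS, key=lambda c: dist(ROUTE_COMPONENTS[c]))
        return {"component": nearest, "observed_speed_mps": trace_speed_mps}

    print(correlate_trace(37.77492, -122.41942, 4.0))
    # {'component': 'lane_seg_1', 'observed_speed_mps': 4.0}

Once a trace is attached to a specific route component in this way, the observed speed becomes a temporal data item that can drive changes to the routing graph.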
Such changes can include, for example, changes to the cost of traversing one or more route components or moving between one or more sets of route components. Such changes can also include changes to the connectivity of the routing graph. For example, if the temporal data indicates that a route component is impassable, the constrained routing graph may eliminate connections to that route component. Constrained routing graphs can be repeatedly re-generated to account for changes in temporal data. In some examples, the changes to generate a constrained routing graph are expressed as route constraints. A route constraint indicates a route component or route component property and an associated modification to the routing graph. In some examples, the generation of route constraints based on temporal data can be separated from the generation of the constrained routing graph. For example, a remote routing system can receive temporal data and generate routing constraints. The routing constraints can be provided to an on-board routing engine for generating the constrained routing graph and routing based thereon. FIG.1is a diagram showing one example of an environment100for routing autonomous vehicles using temporal data. The environment100includes a remote routing system104and an example autonomous vehicle102. The vehicle102can be a passenger vehicle, such as a truck, a car, a bus or other similar vehicle. The vehicle102can also be a delivery vehicle, such as a van, a truck, a tractor trailer, and so forth. The vehicle102is a self-driving vehicle (SDV) or autonomous vehicle (AV). For example, the vehicle102includes a vehicle autonomy system, described in more detail herein, that is configured to operate some or all of the controls of the vehicle102(e.g., acceleration, braking, steering). In some examples, the vehicle102is operable in different modes where the vehicle autonomy system has differing levels of control over the vehicle102in different modes. For example, the vehicle102may be operable in a full autonomous mode in which the vehicle autonomy system has responsibility for all or most of the controls of the vehicle102. In some examples, the vehicle102is operable in a semiautonomous mode that is in addition to or instead of the full autonomous mode. In a semiautonomous mode, the vehicle autonomy system of the vehicle102is responsible for some of the vehicle controls while a human user or driver is responsible for other vehicle controls. In some examples, the vehicle102is operable in a manual mode in which the human user is responsible for all control of the vehicle102. The vehicle102includes one or more remote detection sensors106. Remote detection sensors106receive return signals from the environment100. Return signals may be reflected from objects in the environment100, such as the ground, buildings, trees, and so forth. The remote-detection sensors106may include one or more active sensors, such as light imaging detection and ranging (LIDAR), radio detection and ranging (RADAR), and/or sound navigation and ranging (SONAR) that emit sound or electromagnetic radiation in the form of light or radio waves to generate return signals. Information about the environment100is extracted from the return signals. In some examples, the remote-detection sensors106include one or more passive sensors that receive return signals that originated from other sources of sound or electromagnetic radiation. Remote-detection sensors106provide remote sensor data that describes the environment100. 
The vehicles102can also include other types of sensors, for example, as described in more detail herein. The example ofFIG.1includes a remote routing system104to generate routes for the autonomous vehicle102using temporal data. The remote routing system104generates routes using a routing graph120, temporal data from temporal data sources116A,116B,116N, and other constraint data such as, for example, vehicle capability data114and policy data118. As described herein, the routing graph120represents the roadways in a geographic area as a set of route components. The routing graph120indicates directionality, connectivity, and cost for the various route components making up the roadways. Directionality indicates the direction of travel in a route component. Connectivity describes possible transitions between route components. Cost describes the cost for an autonomous vehicle102to traverse a route component and/or to transition between two route components. InFIG.1, break-out window122shows example roadways that can be described by the routing graph120. Another break-out window124shows example route components making up part of the routing graph120. Route components in the break-out window124are illustrated as shapes with arrows indicating the directionality of the route components. Route components can be connected to one another according to their directionality. The temporal data sources116A,116B,116N can include any suitable computing hardware such as, for example, one or more servers. The temporal data sources116A,116B,116N can be or include any source that can provide temporal data in whole or in part. One example temporal data source116A,116B,116N can track GPS traces received from vehicles traversing roadways (e.g., autonomous and/or non-autonomous vehicles). Another example temporal data source116A,116B,116N can receive reports of roadway conditions from drivers or other observers. Another example temporal data source116A,116B,116N can include a weather reporting and/or forecasting service. Temporal data is received by a graph converter110of the remote routing system104. The graph converter110correlates temporal data items to specific route components of a routing graph120used by the remote routing system. In some examples, the graph converter110also generates routing constraints based on the temporal data items. The routing constraints describe modifications to the routing graph120in response to the temporal data. For example, if temporal data indicates that particular route components are experiencing heavy traffic, the graph converter110can generate a routing constraint identifying the route components and a modification to the routing graph120for those route components (e.g., an increase in the cost of traversing or transitioning to the route components). In some examples, a routing constraint generated from temporal data can include a change to a route component property. For example, if temporal data indicates that it is raining in a particular area, a route constraint generated by the graph converter110can update the properties of route components in the area to indicate rain. Routing constraints generated by the graph converter110are provided to a constrained routing graph engine108that generates a constrained routing graph. The routing graph120may be a general purpose routing graph that is usable to generate routes in different temporal conditions and, in some examples, for different types of autonomous vehicles. 
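The graph converter's role can be sketched as a simple translation from correlated temporal data items to routing constraints. The sketch below is hypothetical; the constraint structure, condition names, and the 3.0 cost multiplier are invented for illustration.

    # Hypothetical routing constraints produced by a graph converter. Each
    # constraint names the affected route components and a modification to
    # apply to the routing graph.
    def constraints_from_temporal_data(temporal_items):
        constraints = []
        for item in temporal_items:
            if item["condition"] == "heavy_traffic":
                constraints.append({
                    "components": item["components"],
                    # Assumed modification: scale traversal cost upward.
                    "modification": ("scale_cost", 3.0),
                })
            elif item["condition"] == "rain":
                constraints.append({
                    "components": item["components"],
                    # Assumed modification: update a route component property.
                    "modification": ("set_property", ("weather", "rain")),
                })
        return constraints

    items = [
        {"condition": "heavy_traffic", "components": ["lane_seg_1"]},
        {"condition": "rain", "components": ["lane_seg_2", "lane_seg_3"]},
    ]
    print(constraints_from_temporal_data(items))

Constraints of this shape would then be applied to the general purpose routing graph to produce a constrained routing graph.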
For example, the constrained routing graph engine108may generate the constrained routing graph by applying one or more routing constraints to the routing graph120. In some examples, the constrained routing graph engine108can consider routing constraints generated from temporal data as well as other routing constraints. For example, the constrained routing graph engine108can also receive vehicle capability data114, such as operational domain data (OD or ODD). Vehicle capability data114can include route constraints based on the capabilities of the vehicle102. In some examples, the remote routing system104routes multiple autonomous vehicles of different types. Different types of autonomous vehicles can have different hardware and/or different software components such as different vehicle autonomy systems, different remote or other sensors, and so forth. Accordingly, different types of autonomous vehicles can have different sets of corresponding vehicle capability data114. The constrained routing graph engine108can also receive policy data118. Policy data118can describe routing constraints that are based on human-generated policies. For example, it may be undesirable to route autonomous vehicles through school zones. Accordingly, policy data118can include a routing constraint that increases the cost of and/or removes connectivity to route components that include all or part of a school zone. In some examples, other routing constraints, such as those derived from vehicle capability data114or policy data118, are dependent on temporal data. For example, autonomous vehicles of one type may not be capable of operating in the rain. In this example, a routing constraint derived from temporal data may modify properties of route components where it is currently raining. A routing constraint derived from vehicle capability data114may increase the cost or eliminate connectivity to those route components for some or all autonomous vehicles routed by the remote routing system. A routing engine112receives the constrained routing graph generated by the constrained routing graph engine108and generates routes that are provided to the vehicle102. Although one vehicle102is shown inFIG.1, the remote routing system104may be configured to provide routes to a plurality of vehicles, as described herein. In some examples, the remote routing system104can also be and/or operate in conjunction with a dispatch system to dispatch trips to autonomous vehicles. For example, the remote routing system104can generate candidate routes for a set of candidate vehicles for executing a trip. The dispatch system can select a best vehicle for the trip, for example, based on the vehicle having the lowest-cost route. The dispatch system can then cause the selected autonomous vehicle to begin traversing its route by requesting that the selected vehicle execute the trip. In some examples, the selected vehicle can decline a trip, in which case the dispatch system may offer the trip to an alternate candidate vehicle. In the example ofFIG.1, routing using temporal data is implemented by the remote routing system104. In some examples, however, routing using temporal data can be executed locally onboard the vehicle102.FIG.2is a diagram showing one example of routing using temporal data implemented onboard the vehicle102. The vehicle includes a vehicle autonomy system201. The vehicle autonomy system201includes a navigator system202, a motion planner204, and vehicle controls206. 
The navigator system202implements a graph converter210, constrained routing graph engine208, and routing engine212similar to the graph converter110, constrained routing graph engine108, and routing engine112described inFIG.1. Routes generated by the navigator system202are provided to a motion planner204. The motion planner204converts the routes into commands that are provided to vehicle controls206. Additional details of example navigator systems, motion planner systems, and vehicle controls are provided herein with reference toFIG.4. FIG.3is a diagram showing one example of an environment300in which routing using temporal data is divided between a remote routing system304and an on-board routing engine305. In this example, the vehicle autonomy system201includes an on-board routing engine305that is configured to communicate with a remote routing system304to execute routing using temporal data. Routing can be divided between the remote routing system304and the on-board routing engine305in any suitable manner. In some examples, the remote routing system304receives temporal data and/or other constraint data as shown inFIG.1and generates routing constraints. The routing constraints are provided to the on-board routing engine305. The vehicle autonomy system201generates routes using the on-board routing engine305and a routing graph that may be stored on the vehicle102. In another example, the remote routing system generates a constrained routing graph, for example, as described with respect toFIG.1. The remote routing system304provides the constrained routing graph to the routing engine305, which uses it to generate routes. In yet another example, the on-board routing engine305performs local routing and the remote routing system304performs remote routing. An example of this concept is illustrated in window301. The on-board routing engine305uses a routing graph stored on-board the vehicle102to generate local routes from the vehicle's location348to one or more exit points350,352,354,356. The on-board routing engine305then sends a remote route request to the remote routing system. The remote route request indicates the exit points350,352,354,356. The remote routing system304generates remote routes from the respective exit points350,352,354,356to a trip end point358. The remote routing system provides the remote routes (and/or costs associated with the remote routes) to the on-board routing engine305. The on-board routing engine305considers the remote routes and/or corresponding costs and selects a local route. For example, the on-board routing engine305can select the combination of local and remote routes having the lowest overall cost. In some examples, temporal data can be considered differently at the on-board routing engine305than it is at the remote routing system304. In some examples, the remote routing system304generates remote routes using a constrained routing graph generated with up-to-date temporal data. The on-board routing engine305can generate local routes without considering temporal data and/or while considering less up-to-date temporal data. For example, the vehicle autonomy system201may receive limited temporal data updates. FIG.4depicts a block diagram of an example vehicle400, according to example aspects of the present disclosure. The vehicle400includes one or more sensors401, a vehicle autonomy system402, and one or more vehicle controls407. The vehicle400is an autonomous vehicle, as described herein. 
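Returning to the split-routing example of window301, the stitching of local and remote routes can be sketched as follows. The costs and exit-point names are invented; the only assumption is that the on-board engine knows the local cost to each exit point and the remote routing system reports the remaining cost from each exit point to the trip end point358.

    # Hypothetical costs for the split-routing example of window 301. The
    # on-board engine knows local costs to each exit point; the remote
    # routing system reports the cost from each exit point to the trip end.
    local_costs = {"exit_350": 12.0, "exit_352": 9.0,
                   "exit_354": 15.0, "exit_356": 11.0}
    remote_costs = {"exit_350": 40.0, "exit_352": 48.0,
                    "exit_354": 33.0, "exit_356": 39.0}

    def select_exit_point(local, remote):
        """Pick the exit point minimizing combined local + remote cost."""
        return min(local, key=lambda e: local[e] + remote[e])

    best = select_exit_point(local_costs, remote_costs)
    print(best, local_costs[best] + remote_costs[best])  # exit_354 48.0

The on-board engine would then follow its local route to the selected exit point while the remote route covers the remainder of the trip.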
The example vehicle400shows just one example arrangement of an autonomous vehicle. In some examples, autonomous vehicles of different types can have different arrangements. The vehicle autonomy system402includes a commander system411, a navigator system413, a perception system403, a prediction system404, a motion planning system405, and a localizer system430that cooperate to perceive the surrounding environment of the vehicle400and determine a motion plan for controlling the motion of the vehicle400accordingly. The vehicle autonomy system402is engaged to control the vehicle400or to assist in controlling the vehicle400. In particular, the vehicle autonomy system402receives sensor data from the one or more sensors401, attempts to comprehend the environment surrounding the vehicle400by performing various processing techniques on data collected by the sensors401, and generates an appropriate route through the environment. The vehicle autonomy system402sends commands to control the one or more vehicle controls407to operate the vehicle400according to the route. Various portions of the vehicle autonomy system402receive sensor data from the one or more sensors401. For example, the sensors401may include remote-detection sensors as well as motion sensors such as an inertial measurement unit (IMU), one or more encoders, or one or more odometers. The sensor data includes information that describes the location of objects within the surrounding environment of the vehicle400, information that describes the motion of the vehicle400, and so forth. The sensors401may also include one or more remote-detection sensors or sensor systems, such as a LIDAR, a RADAR, one or more cameras, and so forth. As one example, a LIDAR system of the one or more sensors401generates sensor data (e.g., remote-detection sensor data) that includes the location (e.g., in three-dimensional space relative to the LIDAR system) of a number of points that correspond to objects that have reflected a ranging laser. For example, the LIDAR system measures distances by measuring the Time of Flight (TOF) that it takes a short laser pulse to travel from the sensor to an object and back, calculating the distance from the known speed of light. As another example, a RADAR system of the one or more sensors401generates sensor data (e.g., remote-detection sensor data) that includes the location (e.g., in three-dimensional space relative to the RADAR system) of a number of points that correspond to objects that have reflected ranging radio waves. For example, radio waves (e.g., pulsed or continuous) transmitted by the RADAR system reflect off an object and return to a receiver of the RADAR system, giving information about the object's location and speed. Thus, a RADAR system provides useful information about the current speed of an object. As yet another example, one or more cameras of the one or more sensors401may generate sensor data (e.g., remote sensor data) including still or moving images. Various processing techniques (e.g., range imaging techniques such as structure from motion, structured light, stereo triangulation, and/or other techniques) can be performed to identify the location (e.g., in three-dimensional space relative to the one or more cameras) of a number of points that correspond to objects that are depicted in an image or images captured by the one or more cameras. Other sensor systems can identify the location of points that correspond to objects as well. 
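The time-of-flight relation described above for LIDAR reduces to a one-line calculation: the pulse travels to the target and back, so the one-way distance is half the round-trip time multiplied by the speed of light. The 400 nanosecond round trip below is an invented example value.

    # Time-of-flight ranging: distance = (speed of light x round-trip time) / 2.
    SPEED_OF_LIGHT = 299_792_458.0          # meters per second

    def tof_distance(round_trip_seconds):
        return SPEED_OF_LIGHT * round_trip_seconds / 2.0

    # A 400 nanosecond round trip corresponds to roughly 60 meters.
    print(tof_distance(400e-9))             # ~59.96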
As another example, the one or more sensors401can include a positioning system. The positioning system determines a current position of the vehicle400. The positioning system can be any device or circuitry for analyzing the position of the vehicle400. For example, the positioning system can determine a position by using one or more of inertial sensors, a satellite positioning system such as a GPS, based on Internet Protocol (IP) address, by using triangulation and/or proximity to network access points or other network components (e.g., cellular towers, WiFi access points), and/or other suitable techniques. The position of the vehicle400can be used by various systems of the vehicle autonomy system402. Thus, the one or more sensors401are used to collect sensor data that includes information that describes the location (e.g., in three-dimensional space relative to the vehicle400) of points that correspond to objects within the surrounding environment of the vehicle400. In some implementations, the sensors401can be positioned at various different locations on the vehicle400. As an example, in some implementations, one or more cameras and/or LIDAR sensors can be located in a pod or other structure that is mounted on a roof of the vehicle400while one or more RADAR sensors can be located in or behind the front and/or rear bumper(s) or body panel(s) of the vehicle400. As another example, camera(s) can be located at the front or rear bumper(s) of the vehicle400. Other locations can be used as well. The localizer system430receives some or all of the sensor data from sensors401and generates vehicle poses for the vehicle400. A vehicle pose describes a position and attitude of the vehicle400. The vehicle pose (or portions thereof) can be used by various other components of the vehicle autonomy system402including, for example, the perception system403, the prediction system404, the motion planning system405, and the navigator system413. The position of the vehicle400is a point in a three-dimensional space. In some examples, the position is described by values for a set of Cartesian coordinates, although any other suitable coordinate system may be used. The attitude of the vehicle400generally describes the way in which the vehicle400is oriented at its position. In some examples, attitude is described by a yaw about the vertical axis, a pitch about a first horizontal axis, and a roll about a second horizontal axis. In some examples, the localizer system430generates vehicle poses periodically (e.g., every second, every half second). The localizer system430appends time stamps to vehicle poses, where the time stamp for a pose indicates the point in time that is described by the pose. The localizer system430generates vehicle poses by comparing sensor data (e.g., remote sensor data) to map data426describing the surrounding environment of the vehicle400. In some examples, the localizer system430includes one or more pose estimators and a pose filter. Pose estimators generate pose estimates by comparing remote-sensor data (e.g., LIDAR, RADAR) to map data. The pose filter receives pose estimates from the one or more pose estimators as well as other sensor data such as, for example, motion sensor data from an IMU, encoder, or odometer. In some examples, the pose filter executes a Kalman filter or machine learning algorithm to combine pose estimates from the one or more pose estimators with motion sensor data to generate vehicle poses. 
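A vehicle pose of the kind generated by the localizer system430can be represented minimally as follows; the field names are invented, and the sketch simply assumes the Cartesian position and yaw/pitch/roll attitude angles described above, with the time stamp indicating the point in time the pose describes.

    from dataclasses import dataclass

    @dataclass
    class VehiclePose:
        """A timestamped pose: Cartesian position plus attitude angles."""
        timestamp: float      # seconds; the point in time the pose describes
        x: float              # position, meters
        y: float
        z: float
        yaw: float            # attitude, radians, about the vertical axis
        pitch: float          # about the first horizontal axis
        roll: float           # about the second horizontal axis

    pose = VehiclePose(timestamp=1_700_000_000.0,
                       x=10.0, y=-3.5, z=0.2,
                       yaw=1.57, pitch=0.0, roll=0.0)
    print(pose)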
In some examples, pose estimators generate pose estimates at a frequency less than the frequency at which the localizer system430generates vehicle poses. Accordingly, the pose filter generates some vehicle poses by extrapolating from a previous pose estimate utilizing motion sensor data. Vehicle poses and/or vehicle positions generated by the localizer system430are provided to various other components of the vehicle autonomy system402. For example, the commander system411may utilize a vehicle position to determine whether to respond to a call from a dispatch system440. The commander system411determines a set of one or more target locations that are used for routing the vehicle400. The target locations are determined based on user input received via a user interface409of the vehicle400. The user interface409may include and/or use any suitable input/output device or devices. In some examples, the commander system411determines the one or more target locations considering data received from the dispatch system440. The dispatch system440is programmed to provide instructions to multiple vehicles, for example, as part of a fleet of vehicles for moving passengers and/or cargo. Data from the dispatch system440can be provided via a wireless network, for example. The navigator system413receives one or more target locations from the commander system411and map data426. Map data426, for example, provides detailed information about the surrounding environment of the vehicle400. Map data426provides information regarding identity and location of different roadways and segments of roadways (e.g., lane segments or route components). A roadway is a place where the vehicle400can drive and may include, for example, a road, a street, a highway, a lane, a parking lot, or a driveway. Routing graph data is a type of map data426. From the one or more target locations and the map data426, the navigator system413generates route data describing a route for the vehicle to take to arrive at the one or more target locations. In some implementations, the navigator system413determines route data using one or more path planning algorithms based on costs for route components, as described herein. For example, a cost for a route can indicate a time of travel, risk of danger, or other factor associated with adhering to a particular candidate route. Route data describing a route is provided to the motion planning system405, which commands the vehicle controls407to implement the route or route extension, as described herein. The navigator system413can generate routes as described herein using a general purpose routing graph and constraint data. Also, in examples where route data is received from a dispatch system, that route data can also be provided to the motion planning system405. The perception system403detects objects in the surrounding environment of the vehicle400based on sensor data, map data426, and/or vehicle poses provided by the localizer system430. 
For example, map data426used by the perception system describes roadways and segments thereof and may also describe: buildings or other items or objects (e.g., lampposts, crosswalks, curbing); location and directions of traffic lanes or lane segments (e.g., the location and direction of a parking lane, a turning lane, a bicycle lane, or other lanes within a particular roadway); traffic control data (e.g., the location and instructions of signage, traffic lights, or other traffic control devices); and/or any other map data that provides information that assists the vehicle autonomy system402in comprehending and perceiving its surrounding environment and its relationship thereto. In some examples, the perception system403determines state data for one or more of the objects in the surrounding environment of the vehicle400. State data describes a current state of an object (also referred to as features of the object). The state data for each object describes, for example, an estimate of the object's current location (also referred to as position); current speed (also referred to as velocity); current acceleration; current heading; current orientation; size/shape/footprint (e.g., as represented by a bounding shape such as a bounding polygon or polyhedron); type/class (e.g., vehicle versus pedestrian versus bicycle versus other); yaw rate; distance from the vehicle400; minimum path to interaction with the vehicle400; minimum time duration to interaction with the vehicle400; and/or other state information. In some implementations, the perception system403determines state data for each object over a number of iterations. In particular, the perception system403updates the state data for each object at each iteration. Thus, the perception system403detects and tracks objects, such as other vehicles, that are proximate to the vehicle400over time. The prediction system404is configured to predict one or more future positions for an object or objects in the environment surrounding the vehicle400(e.g., an object or objects detected by the perception system403). The prediction system404generates prediction data associated with one or more of the objects detected by the perception system403. In some examples, the prediction system404generates prediction data describing each of the respective objects detected by the perception system403. Prediction data for an object is indicative of one or more predicted future locations of the object. For example, the prediction system404may predict where the object will be located within the next 5 seconds, 20 seconds, 200 seconds, and so forth. Prediction data for an object may indicate a predicted trajectory (e.g., predicted path) for the object within the surrounding environment of the vehicle400. For example, the predicted trajectory (e.g., path) can indicate a path along which the respective object is predicted to travel over time (and/or the speed at which the object is predicted to travel along the predicted path). The prediction system404generates prediction data for an object, for example, based on state data generated by the perception system403. In some examples, the prediction system404also considers one or more vehicle poses generated by the localizer system430and/or map data426. In some examples, the prediction system404uses state data indicative of an object type or classification to predict a trajectory for the object. 
As an example, the prediction system404can use state data provided by the perception system403to determine that a particular object (e.g., an object classified as a vehicle) approaching an intersection and maneuvering into a left-turn lane intends to turn left. In such a situation, the prediction system404predicts a trajectory (e.g., path) corresponding to a left turn for the object such that the object turns left at the intersection. Similarly, the prediction system404determines predicted trajectories for other objects, such as bicycles, pedestrians, parked vehicles, and so forth. The prediction system404provides the predicted trajectories associated with the object(s) to the motion planning system405. In some implementations, the prediction system404is a goal-oriented prediction system404that generates one or more potential goals, selects one or more of the most likely potential goals, and develops one or more trajectories by which the object can achieve the one or more selected goals. For example, the prediction system404can include a scenario generation system that generates and/or scores the one or more goals for an object and a scenario development system that determines the one or more trajectories by which the object can achieve the goals. In some implementations, the prediction system404can include a machine-learned goal-scoring model, a machine-learned trajectory development model, and/or other machine-learned models. The motion planning system405commands the vehicle controls based at least in part on the predicted trajectories associated with the objects within the surrounding environment of the vehicle400, the state data for the objects provided by the perception system403, vehicle poses provided by the localizer system430, map data426, and route or route extension data provided by the navigator system413. Stated differently, given information about the current locations of objects and/or predicted trajectories of objects within the surrounding environment of the vehicle400, the motion planning system405determines control commands for the vehicle400that best navigate the vehicle400along the route or route extension relative to the objects at such locations and their predicted trajectories on acceptable roadways. In some implementations, the motion planning system405can also evaluate one or more cost functions and/or one or more reward functions for each of one or more candidate control commands or sets of control commands for the vehicle400. Thus, given information about the current locations and/or predicted future locations/trajectories of objects, the motion planning system405can determine a total cost (e.g., a sum of the cost(s) and/or reward(s) provided by the cost function(s) and/or reward function(s)) of adhering to a particular candidate control command or set of control commands. The motion planning system405can select or determine a control command or set of control commands for the vehicle400based at least in part on the cost function(s) and the reward function(s). For example, the motion plan that minimizes the total cost can be selected or otherwise determined. In some implementations, the motion planning system405can be configured to iteratively update the route or route extension for the vehicle400as new sensor data is obtained from one or more sensors401. 
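The cost-and-reward selection just described can be sketched compactly; the candidate command sets, cost terms, and reward terms below are invented placeholders for whatever a motion planning system actually scores (e.g., collision risk, route adherence, comfort).

    # Hypothetical candidate command sets, each pre-scored by separate cost
    # and reward terms.
    candidates = [
        {"name": "keep_lane",  "costs": [0.2, 0.1], "rewards": [0.5]},
        {"name": "nudge_left", "costs": [0.4, 0.3], "rewards": [0.6]},
        {"name": "slow_down",  "costs": [0.1, 0.2], "rewards": [0.1]},
    ]

    def total_cost(candidate):
        # Rewards offset costs; the best plan minimizes the total.
        return sum(candidate["costs"]) - sum(candidate["rewards"])

    best = min(candidates, key=total_cost)
    print(best["name"], total_cost(best))   # keep_lane -0.2

In practice such candidates would be re-scored continually as new sensor data arrives.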
For example, as new sensor data is obtained from one or more sensors401, the sensor data can be analyzed by the perception system403, the prediction system404, and the motion planning system405to determine the motion plan. The motion planning system405can provide control commands to one or more vehicle controls407. For example, the one or more vehicle controls407can include throttle systems, brake systems, steering systems, and other control systems, each of which can include various vehicle controls (e.g., actuators or other devices that control gas flow, steering, braking) to control the motion of the vehicle400. The various vehicle controls407can include one or more controllers, control devices, motors, and/or processors. The vehicle controls407include a brake control module420. The brake control module420is configured to receive a braking command and bring about a response by applying (or not applying) the vehicle brakes. In some examples, the brake control module420includes a primary system and a secondary system. The primary system receives braking commands and, in response, brakes the vehicle400. The secondary system may be configured to determine a failure of the primary system to brake the vehicle400in response to receiving the braking command. A steering control system432is configured to receive a steering command and bring about a response in the steering mechanism of the vehicle400. The steering command is provided to a steering system to provide a steering input to steer the vehicle400. A lighting/auxiliary control module436receives a lighting or auxiliary command. In response, the lighting/auxiliary control module436controls a lighting and/or auxiliary system of the vehicle400. Controlling a lighting system may include, for example, turning on, turning off, or otherwise modulating headlights, parking lights, running lights, and so forth. Controlling an auxiliary system may include, for example, modulating windshield wipers, a defroster, and so forth. A throttle control system434is configured to receive a throttle command and bring about a response in the engine speed or other throttle mechanism of the vehicle. For example, the throttle control system434can instruct an engine and/or engine controller, or other propulsion system component, to control the engine or other propulsion system of the vehicle400to accelerate, decelerate, or remain at its current speed. Each of the perception system403, the prediction system404, the motion planning system405, the commander system411, the navigator system413, and the localizer system430can be included in or otherwise be a part of a vehicle autonomy system402configured to control the vehicle400based at least in part on data obtained from one or more sensors401. For example, data obtained by one or more sensors401can be analyzed by each of the perception system403, the prediction system404, and the motion planning system405in a consecutive fashion in order to control the vehicle400. WhileFIG.4depicts elements suitable for use in a vehicle autonomy system according to example aspects of the present disclosure, one of ordinary skill in the art will recognize that other vehicle autonomy systems can be configured to control an autonomous vehicle based on sensor data. The vehicle autonomy system402includes one or more computing devices, which may implement all or parts of the perception system403, the prediction system404, the motion planning system405and/or the localizer system430. 
Descriptions of hardware and software configurations for computing devices to implement the vehicle autonomy system402and/or the remote routing system104are provided herein atFIGS.12and13. FIG.5is a flowchart showing one example of a process flow500that can be executed by a routing system, such as the remote routing systems104or304ofFIGS.1and3, the navigator202ofFIG.2, and/or the routing engine305ofFIG.3to route an autonomous vehicle using temporal data. At operation502, the routing system receives temporal data. As described herein, temporal data describes a roadway condition and a location of the condition. Temporal data may also describe or imply a time when the roadway condition exists. Temporal data may be received from various different temporal data sources, such as116A,116B,116N described herein. At operation504, the routing system correlates temporal data to routing graph components. This can be performed, for example, as described in more detail herein atFIGS.6-10. Correlating temporal data to routing graph components can yield routing constraints. At operation506, the routing system generates a constrained routing graph. For example, generating the constrained routing graph can include applying one or more routing constraints to a general purpose routing graph, such as the example routing graph120. For example, generating the constrained routing graph can include modifying cost and/or connectivity of the input routing graph. At operation508, the routing system generates a route using the constrained routing graph. The routing system may generate the route, for example, using a path planning algorithm such as A*, D*, Focused D*, D* Lite, GD*, Dijkstra's algorithm, and so forth. At operation510, the routing system can cause a vehicle to begin traversing the route. In examples in which the routing system is onboard an autonomous vehicle, this can include providing the route to a motion planner. The motion planner can, in turn, generate control signals for the vehicle controls. In examples in which the routing system is not onboard the autonomous vehicle, this can include providing the route to the autonomous vehicle, for example, as a route offer. The autonomous vehicle may accept or decline the offer. When the autonomous vehicle accepts the offer, a vehicle autonomy system at the autonomous vehicle may begin to control the vehicle in accordance with the route. FIG.6is a diagram600illustrating one example of correlating sparse temporal data to route components of a routing graph. An example routing graph602is shown. An example portion604of the routing graph is shown in a window606. In the left side of the window606, a first version604A of the portion604is shown populated with temporal data indicating a speed of travel in the respective route components. Route components having associated temporal data include a number indicating the speed of travel for that route component. Route components for which no temporal data is available are shown with a dash instead of a number. In some examples, the first version604A of the portion604is generated from GPS traces gathered from one or more vehicles that previously traversed the roadway corresponding to the portion604. In this example, a first vehicle traversed route components labeled with “4” at a speed of 4 meters/second (m/s). A second vehicle traversed the route components labeled “8” at a speed of 8 m/s. 
In some examples, the labeled numbers indicate a combination of data from different vehicles, such as an average speed of vehicles traversing the route component, a median speed, and so forth. Route components marked with “−” do not have available temporal data in this example. For example, GPS trace data from vehicles traversing those route components may not be available. Also, in some examples, GPS trace data from the roadway corresponding to the portion604may not be accurate enough to discriminate between adjacent lanes of travel. In some examples, correlating temporal data with route components includes propagating temporal data from route components having associated temporal data to route components that lack associated temporal data. In the example ofFIG.6, route components lacking temporal data in the first version604A are correlated to adjacent route components having associated temporal data to generate the second version604B. Correlations can be parallel to or perpendicular to the direction of travel in a lane. For example, temporal data for a route component can be correlated to an adjacent route component that is perpendicular to the direction of travel in the route components. For example, speed data for one route component can be correlated to the route component corresponding to an adjacent lane. In some examples, temperature data from a route component can also be correlated to other route components along the direction of travel. In some examples, correlations are based on a threshold that depends on direction. For example, temporal data may be correlated to adjacent route components perpendicular to the direction of travel and/or correlated to route components within a threshold distance parallel to the direction of travel. FIG.7is a diagram700illustrating another example of correlating sparse temporal data to route components of a routing graph. Here, an example routing graph702is shown, along with a representation of temporal data708. In the example ofFIG.7, the temporal data708is received in a format of an alternative routing graph or map that does not have a one-to-one correlation to the routing graph702. In this example, one unit of the temporal data708corresponds to more than one route component of a portion704of the routing graph. The temporal data708can be correlated to route components, as shown at706, by setting the value of more than one route component to the temporal data708value indicated by a single unit of the alternate routing graph or map. FIG.8is a workflow800showing an example implementation of a graph converter803and a constrained routing graph engine802. The example ofFIG.8can be implemented at a remote routing system104, as shown inFIG.1, at an on-board navigator system202as shown inFIG.2, and/or distributed between on-board and remote components as shown inFIG.3. In the example ofFIG.8, the graph converter803accesses routing graph data804describing a routing graph, event data806, and log data808. Event data806indicates events that occurred related to vehicles, such as autonomous vehicles, while the autonomous vehicles were on trips. Various different types of events can occur on a trip and be described by event data806. One example event that can occur on a trip is an intervention. An intervention occurs when the vehicle autonomy system of an autonomous vehicle ceases to control the vehicle. 
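Returning to the propagation example ofFIG.6, filling gaps from adjacent lanes can be sketched as below; the two-lane grid and speed values are invented, and only the perpendicular (adjacent-lane) correlation is shown.

    # Hypothetical lane grid for the FIG. 6 example: rows are positions along
    # the roadway, columns are adjacent lanes. None marks components with no
    # temporal data; values are observed speeds in m/s.
    speeds = [
        [4.0, None],
        [4.0, 8.0],
        [None, 8.0],
    ]

    def propagate_across_lanes(grid):
        """Fill gaps from the adjacent lane (perpendicular to travel)."""
        filled = [row[:] for row in grid]
        for row in filled:
            for i, value in enumerate(row):
                if value is None:
                    neighbors = [v for j, v in enumerate(row)
                                 if v is not None and abs(i - j) == 1]
                    if neighbors:
                        row[i] = sum(neighbors) / len(neighbors)
        return filled

    print(propagate_across_lanes(speeds))
    # [[4.0, 4.0], [4.0, 8.0], [8.0, 8.0]]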
An intervention can occur, for example, if the vehicle autonomy system crashes, if the autonomous vehicle encounters a road condition through which it cannot direct the vehicle, if the autonomous vehicle encounters a route component that it cannot traverse, and so forth. In some examples, the autonomous vehicle carries a human user who can assume control upon the occurrence of an intervention. Also, in some examples, the autonomous vehicle is configured to pull to a safe stopping location upon the occurrence of an intervention. Another example event that can occur on a trip is receiving a passenger rating below a passenger rating threshold. In some examples, users can provide a passenger rating for a trip (e.g., after the trip is executed). A passenger rating indicates a user's level of satisfaction with a trip. Yet another example of an event that can occur on a trip is a deviation from a planned route. For example, when a remote routing system and/or associated dispatch system requests that an autonomous vehicle execute a route, it may provide the route generated by the remote routing system for the selected autonomous vehicle. The autonomous vehicle, for various reasons, may deviate from this route. For example, the autonomous vehicle can include and/or be in communication with another routing engine and/or other component that can route the vehicle. The autonomous vehicle can disregard the received route and/or deviate from the received route. Another example event that can occur with a trip is when an autonomous vehicle declines a trip. An autonomous vehicle can decline a trip for various reasons including, for example, if the autonomous vehicle determines that the trip will cause the autonomous vehicle to traverse a route component that is impassable or otherwise unfavorable to the autonomous vehicle. A further example of an event that can occur on a trip is a deviation from an estimated time of arrival. For example, the routes generated by a routing engine can include an estimated time of arrival for the various autonomous vehicles, where the estimated time of arrival indicates when the autonomous vehicle is estimated to complete the trip. Log data808describes trips taken by one or more vehicles, which may be autonomous vehicles or non-autonomous vehicles. For example, log data808can include GPS traces of vehicles while on trips, where the location associated with a data item is indicated by the GPS reading associated with the GPS trace. In some examples, log data808includes a reference to a routing graph or map, which may or may not be the routing graph804. The graph converter803correlates the event data806and the log data808to the routing graph804, for example, as described herein. The constrained routing graph engine802receives the correlated data and generates a probability model for different events and conditions at probability model training810. The probability model training810can include, for events, expressing a probability of one or more different events occurring at different route components. For log data808, probability model training can include describing a probability of roadway conditions indicated by log data808such as, for example, traffic conditions, weather conditions, and so forth. The probabilities of events and conditions determined at probability model training810can be expressed as a function of a particular route component and/or route component properties. 
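At its simplest, the probability model training810step can be approximated by empirical event frequencies grouped by route component property; the records and property buckets below are invented for illustration.

    from collections import defaultdict

    # Hypothetical log: each record is (component_property, event_occurred).
    # Here the property is a speed-limit bucket for the traversed component.
    records = [
        ("limit_gt_35mph", True), ("limit_gt_35mph", False),
        ("limit_gt_35mph", False), ("limit_le_35mph", False),
        ("limit_le_35mph", False), ("limit_le_35mph", True),
        ("limit_le_35mph", False), ("limit_le_35mph", False),
    ]

    def event_probability_by_property(log):
        counts = defaultdict(lambda: [0, 0])    # property -> [events, trials]
        for prop, occurred in log:
            counts[prop][1] += 1
            if occurred:
                counts[prop][0] += 1
        return {p: e / n for p, (e, n) in counts.items()}

    print(event_probability_by_property(records))
    # {'limit_gt_35mph': 0.333..., 'limit_le_35mph': 0.2}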
Results of the probability model training810include probability distribution data812describing the determined probabilities for different events and conditions as well as model error data814describing errors of the models. The model error data can be viewed, for example, by an administrative user, at a diagnostic tool826. At a transfer function operation818, the constrained routing graph engine802applies transfer function configuration data816to convert the probability distribution data812to unweighted costs for route components of the routing graph804. For example, the probability of an event or condition can be converted to a cost expressed as a time. The time can be generated in any suitable manner including, for example, an estimated time lost to an event or condition multiplied by the probability that the event or condition will occur at a given route component, as indicated by the probability distribution812. The results of applying the transfer function at operation818may be unweighted costs820for the routing graph. At operation822, the constrained routing graph engine802can weight the unweighted costs using, for example, weight configuration data824. The weight configuration data824indicates the importance of avoiding events of different types. For example, an event that is dangerous to a vehicle and/or its passengers or cargo may be weighted higher than just the time lost multiplied by the probability of the event's occurrence. Weight configuration data824can be received, for example, from an administrative user. The result of applying the weights is a constrained routing graph828. FIG.9is a flowchart showing one example of a process flow900for executing the workflow800ofFIG.8. At operation902, the constrained routing graph engine802accesses log data808. For example, the constrained routing graph engine802may request the log data808from the graph converter803. At operation904, the constrained routing graph engine802accesses event data. At operation906, the constrained routing graph engine802determines event probabilities, for example, as described herein with respect to probability model training810. Event probabilities can be expressed, for example, with respect to a routing graph component or components having a particular property. In one example, routing graph components having a speed limit of greater than 35 miles per hour have an X % probability of an intervention. At operation908, the constrained routing graph engine802converts probabilities determined at operation906to time costs. For example, an X % probability of an intervention at route components having a particular property or set of properties can be expressed as adding Y seconds to the cost of traversing the route components. In some examples, converting probabilities to time costs can be performed, for example, by applying transfer function configuration data816to the probability distribution data812. Optionally, at operation910, the constrained routing graph engine802applies weightings to the time costs to generate weighted time costs. At operation912, the constrained routing graph engine802determines route segment costs for the constrained routing graph828, for example, by applying the costs determined at operations908and/or910to a routing graph. FIG.10is a flowchart showing one example of a process flow1000for generating a constrained routing graph by applying one or more routing constraints. 
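Before walking through process flow1000, the probability-to-cost conversion of operations906-912(and the corresponding operations818and822ofFIG.8) can be sketched as a transfer function followed by a weighting step; the 3% probability, 120-second loss, and 5x weight below are invented values.

    def probability_to_cost(p_event, expected_seconds_lost):
        """Transfer function: expected time lost if the event occurs,
        scaled by the probability it occurs at a given route component."""
        return p_event * expected_seconds_lost

    def weight_cost(unweighted_cost, importance_weight):
        """Weighting: events that are dangerous or otherwise costly beyond
        the time they consume get a weight greater than 1.0."""
        return unweighted_cost * importance_weight

    # Hypothetical numbers: a 3% intervention probability and 120 seconds
    # lost per intervention, weighted 5x because interventions are
    # undesirable beyond the time they cost.
    unweighted = probability_to_cost(0.03, 120.0)   # 3.6 seconds
    print(weight_cost(unweighted, 5.0))             # 18.0 seconds added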
In process flow1000, at operation1002, a routing system considers a route component from a routing graph, such as the general purpose routing graph120described herein. At operation1004, the routing system considers a routing constraint. The routing constraint indicates a route component property or properties and a routing graph modification. In some examples, the routing constraint is generated according to the workflow800and/or process flow900. For example, the operations818and/or822can generate a cost to be added to or subtracted from a route component having a particular property or set of properties. The cost change indicates a modification to a routing graph that, in conjunction with the property or set of properties, can make up all or part of a routing constraint. At operation1006, the routing system determines whether the considered route component has the property or set of properties indicated by the considered routing constraint. If yes, the routing system, at operation1008, applies the routing graph modification indicated by the routing constraint. This can include, for example, modifying a cost associated with the route component, modifying a connectivity between the route component and other route components, and so forth. If the route component does not have the property or set of properties indicated by the routing constraint, or once the modification has been applied, the routing system determines, at operation1010, whether there are additional routing constraints to be applied. If yes, the routing system moves to the next constraint at operation1012and considers that constraint beginning at operation1004, as described herein. If there are no more constraints at operation1010, the routing system determines, at operation1014, whether there are additional route components from the routing graph to be considered. If yes, the routing system moves to the next route component at operation1016and considers the next route component beginning at operation1002, as described herein. If no route components remain to be considered, the process flow1000may be completed at operation1018. FIG.11is a flowchart showing one example of a process flow1100for generating an updated constrained routing graph based on new temporal data. At operation1102, the routing system determines if new temporal data has been received. New temporal data can be received from temporal data sources, such as sources116A,116B,116N, for example, as the conditions of roadways change. For example, a temporal data source116A,116B,116N providing traffic data may provide updated traffic data upon the detection of increased traffic congestion at a roadway or roadways. If no new temporal data is received, the process may return to operation1102, for example, periodically, to again determine whether new temporal data has been received. If updated temporal data is received at operation1102, the routing system may generate updated constrained routing graph data at operation1104. This can include, for example, generating new or updated routing constraints based on the new temporal data and applying the new or updated routing constraints to the general purpose routing graph and/or to the previously-generated constrained routing graph. At operation1106, the routing system applies the updated constrained routing graph data, for example, to generate routes for one or more vehicles as described herein. FIG.12is a block diagram1200showing one example of a software architecture1202for a computing device. 
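Before turning to the software architecture ofFIG.12, the polling-and-regeneration behavior of process flow1100can be sketched as follows; the fetch and rebuild functions are invented stand-ins for operations1102and1104, and the cost values are arbitrary.

    import time

    def run_update_loop(fetch_temporal_updates, rebuild_graph, base_graph,
                        poll_seconds=1.0, iterations=3):
        """Poll for new temporal data (operation 1102); when any arrives,
        regenerate the constrained routing graph (operation 1104)."""
        constrained = dict(base_graph)
        for _ in range(iterations):
            updates = fetch_temporal_updates()
            if updates:
                constrained = rebuild_graph(base_graph, updates)
            time.sleep(poll_seconds)
        return constrained

    # Hypothetical stand-ins for the demo: one update arrives on the
    # second poll and scales the cost of one route component.
    feed = [[], [{"component": "lane_seg_1", "cost_scale": 3.0}], []]
    fetch = lambda: feed.pop(0) if feed else []

    def rebuild(graph, updates):
        out = dict(graph)
        for u in updates:
            out[u["component"]] = out[u["component"]] * u["cost_scale"]
        return out

    print(run_update_loop(fetch, rebuild, {"lane_seg_1": 2.0},
                          poll_seconds=0))   # {'lane_seg_1': 6.0}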
FIG.12is a block diagram1200showing one example of a software architecture1202for a computing device. The software architecture1202may be used in conjunction with various hardware architectures, for example, as described herein.FIG.12is merely a non-limiting example of a software architecture1202and many other architectures may be implemented to facilitate the functionality described herein. A representative hardware layer1204is illustrated and can represent, for example, any of the above-referenced computing devices. In some examples, the hardware layer1204may be implemented according to an architecture1300ofFIG.13and/or the software architecture1202ofFIG.12. The representative hardware layer1204comprises one or more processing units1206having associated executable instructions1208. The executable instructions1208represent the executable instructions of the software architecture1202, including implementation of the methods, modules, components, and so forth ofFIGS.1-11. The hardware layer1204also includes memory and/or storage modules1210, which also have the executable instructions1208. The hardware layer1204may also comprise other hardware1212, which represents any other hardware of the hardware layer1204, such as the other hardware illustrated as part of the architecture700. In the example architecture ofFIG.12, the software architecture1202may be conceptualized as a stack of layers where each layer provides particular functionality. For example, the software architecture1202may include layers such as an operating system1214, libraries1216, frameworks/middleware1218, applications1220, and a presentation layer1244. Operationally, the applications1220and/or other components within the layers may invoke API calls1224through the software stack and receive a response, returned values, and so forth illustrated as messages1226in response to the API calls1224. The layers illustrated are representative in nature and not all software architectures have all layers. For example, some mobile or special-purpose operating systems may not provide a frameworks/middleware1218layer, while others may provide such a layer. Other software architectures may include additional or different layers. The operating system1214may manage hardware resources and provide common services. The operating system1214may include, for example, a kernel1228, services1230, and drivers1232. The kernel1228may act as an abstraction layer between the hardware and the other software layers. For example, the kernel1228may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so on. The services1230may provide other common services for the other software layers. In some examples, the services1230include an interrupt service. The interrupt service may detect the receipt of a hardware or software interrupt and, in response, cause the software architecture1202to pause its current processing and execute an interrupt service routine (ISR). The ISR may generate an alert. The drivers1232may be responsible for controlling or interfacing with the underlying hardware. For instance, the drivers1232may include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, NFC drivers, audio drivers, power management drivers, and so forth depending on the hardware configuration.
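The interrupt service described above (pause current processing, run an ISR, emit an alert) can be mimicked, loosely, at user level with POSIX signals. The sketch below is only an analogy under that assumption; real hardware and software interrupts are mediated by the kernel1228and drivers1232, which a few lines of Python cannot reproduce.

```python
# Loose userspace analogy for the interrupt service: a signal interrupts
# the program's current processing, a handler (the "ISR") runs and
# generates an alert, and ordinary processing then resumes. POSIX only;
# SIGUSR1 is not available on Windows.

import os
import signal
import time


def isr(signum, frame):
    # The "ISR" generates an alert; control then returns to the paused work.
    print(f"alert: interrupt {signum} received")


signal.signal(signal.SIGUSR1, isr)

if __name__ == "__main__":
    os.kill(os.getpid(), signal.SIGUSR1)  # simulate an interrupt arriving
    time.sleep(0.1)                       # ordinary processing resumes
    print("main processing resumed")
```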
The libraries1216may provide a common infrastructure that may be used by the applications1220and/or other components and/or layers. The libraries1216typically provide functionality that allows other software modules to perform tasks in an easier fashion than by interfacing directly with the underlying operating system1214functionality (e.g., kernel1228, services1230, and/or drivers1232). The libraries1216may include system libraries1234(e.g., C standard library) that may provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries1216may include API libraries1236such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG), graphics libraries (e.g., an OpenGL framework that may be used to render 2D and 3D graphic content on a display), database libraries (e.g., SQLite that may provide various relational database functions), web libraries (e.g., WebKit that may provide web browsing functionality), and the like. The libraries1216may also include a wide variety of other libraries1238to provide many other APIs to the applications1220and other software components/modules. The frameworks1218(also sometimes referred to as middleware) may provide a higher-level common infrastructure that may be used by the applications1220and/or other software components/modules. For example, the frameworks1218may provide various graphical user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks1218may provide a broad spectrum of other APIs that may be used by the applications1220and/or other software components/modules, some of which may be specific to a particular operating system or platform. The applications1220include built-in applications1240and/or third-party applications1242. Examples of representative built-in applications1240may include, but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, and/or a game application. The third-party applications1242may include any of the built-in applications1240as well as a broad assortment of other applications. In a specific example, the third-party application1242(e.g., an application developed using the Android™ or iOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as iOS™, Android™, Windows® Phone, or other computing device operating systems. In this example, the third-party application1242may invoke the API calls1224provided by the mobile operating system such as the operating system1214to facilitate functionality described herein. The applications1220may use built-in operating system functions (e.g., kernel1228, services1230, and/or drivers1232), libraries (e.g., system libraries1234, API libraries1236, and other libraries1238), or frameworks/middleware1218to create user interfaces to interact with users of the system. Alternatively, or additionally, in some systems, interactions with a user may occur through a presentation layer, such as the presentation layer1244. In these systems, the application/module “logic” can be separated from the aspects of the application/module that interact with a user. Some software architectures use virtual machines. For example, systems described herein may be executed using one or more virtual machines executed at one or more server computing machines.
In the example ofFIG.12, this is illustrated by a virtual machine1248. A virtual machine creates a software environment where applications/modules can execute as if they were executing on a hardware computing device. The virtual machine1248is hosted by a host operating system (e.g., the operating system1214) and typically, although not always, has a virtual machine monitor1246, which manages the operation of the virtual machine1248as well as the interface with the host operating system (e.g., the operating system1214). A software architecture executes within the virtual machine1248, such as an operating system1250, libraries1252, frameworks/middleware1254, applications1256, and/or a presentation layer1258. These layers of software architecture executing within the virtual machine1248can be the same as corresponding layers previously described or may be different. FIG.13is a block diagram illustrating a computing device hardware architecture1300, within which a set or sequence of instructions can be executed to cause a machine to perform examples of any one of the methodologies discussed herein. The hardware architecture1300describes a computing device for executing the vehicle autonomy system, described herein. The architecture1300may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the architecture1300may operate in the capacity of either a server or a client machine in server-client network environments, or it may act as a peer machine in peer-to-peer (or distributed) network environments. The architecture1300can be implemented in a personal computer (PC), a tablet PC, a hybrid tablet, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing instructions (sequential or otherwise) that specify operations to be taken by that machine. The example architecture1300includes a processor unit1302comprising at least one processor (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both, processor cores, compute nodes). The architecture1300may further comprise a main memory1304and a static memory1306, which communicate with each other via a link1308(e.g., bus). The architecture1300can further include a video display unit1310, an input device1312(e.g., a keyboard), and a UI navigation device1314(e.g., a mouse). In some examples, the video display unit1310, input device1312, and UI navigation device1314are incorporated into a touchscreen display. The architecture1300may additionally include a storage device1316(e.g., a drive unit), a signal generation device1318(e.g., a speaker), a network interface device1320, and one or more sensors (not shown), such as a GPS sensor, compass, accelerometer, or other sensor. In some examples, the processor unit1302or another suitable hardware component may support a hardware interrupt. In response to a hardware interrupt, the processor unit1302may pause its processing and execute an ISR, for example, as described herein. The storage device1316includes a machine-readable medium1322on which is stored one or more sets of data structures and instructions1324(e.g., software) embodying or used by any one or more of the methodologies or functions described herein. 
The instructions1324can also reside, completely or at least partially, within the main memory1304, within the static memory1306, and/or within the processor unit1302during execution thereof by the architecture1300, with the main memory1304, the static memory1306, and the processor unit1302also constituting machine-readable media. Executable Instructions and Machine-Storage Medium The various memories (i.e.,1304,1306, and/or memory of the processor unit(s)1302) and/or storage device1316may store one or more sets of instructions and data structures (e.g., instructions)1324embodying or used by any one or more of the methodologies or functions described herein. These instructions, when executed by processor unit(s)1302, cause various operations to implement the disclosed examples. As used herein, the terms “machine-storage medium,” “device-storage medium,” “computer-storage medium” (referred to collectively as “machine-storage medium1322”) mean the same thing and may be used interchangeably in this disclosure. The terms refer to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions and/or data, as well as cloud-based storage systems or storage networks that include multiple storage apparatus or devices. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media, and/or device-storage media1322include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), field programmable gate array (FPGA), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms machine-storage media, computer-storage media, and device-storage media1322specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium” discussed below. Signal Medium The term “signal medium” or “transmission medium” shall be taken to include any form of modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. Computer-Readable Medium The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. The terms are defined to include both machine-storage media and signal media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The instructions1324can further be transmitted or received over a communications network1326using a transmission medium via the network interface device1320using any one of a number of well-known transfer protocols (e.g., HTTP). Examples of communication networks include a LAN, a WAN, the Internet, mobile telephone networks, plain old telephone service (POTS) networks, and wireless data networks (e.g., Wi-Fi, 3G, 4G LTE/LTE-A, or WiMAX networks).
The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such software. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Various components are described in the present disclosure as being configured in a particular way. A component may be configured in any suitable manner. For example, a component that is or that includes a computing device may be configured with suitable software instructions that program the computing device. A component may also be configured by virtue of its hardware arrangement or in any other suitable manner. The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) can be used in combination with others. Other examples can be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is to allow the reader to quickly ascertain the nature of the technical disclosure, for example, to comply with 37 C.F.R. § 1.72(b) in the United States of America. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features can be grouped together to streamline the disclosure. However, the claims cannot set forth every feature disclosed herein, as examples can feature a subset of said features. Further, examples can include fewer features than those disclosed in a particular example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate example. The scope of the examples disclosed herein is to be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. | 74,825 |
11859991 | DETAILED DESCRIPTION The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the words “may” and “can” are used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include,” “including,” and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. As discussed above, urban areas with high vehicular traffic volumes and/or concentrations suffer particularly high air pollution, especially during peak traffic periods, such as rush hours. In view of this growing issue as more automobiles continue to proliferate, there is an unmet need for monitoring large numbers of vehicles to maintain reasonable traffic concentrations and to ensure compliance with emission standards. The present disclosure provides for the real time accurate monitoring of large numbers of vehicles as well as overall air quality measures for an area associated with the vehicles to thereby provide immediately actionable information on improving the air quality for the area. Among other features, the present disclosure provides for tracking service life, maintenance, and emission compliance of monitored vehicles. Additionally, the present disclosure provides for more accurate determination of noncompliant vehicles that may or may not be monitored based on data collected from monitored vehicles. In one or more example implementations, information on individual vehicles from onboard sensors is linked to ambient air quality measures from weather stations to more accurately determine the actual contributions of vehicles on the road to the air pollution (e.g., airborne pollutants and concentrations) in the area. More specifically, dispersion modeling is used among vehicle locations/emissions and station locations to accurately determine mobile sources of pollution (e.g., airborne pollutants and concentrations), including monitored and unmonitored vehicles, and their effects on ambient air pollution. The present disclosure enhances vehicle mounted real-time emission sensors by augmenting the information provided by such sensors with station-based air quality measurements and with dispersion modeling algorithms for mobile emission sources. Additionally, the accuracy of vehicle location systems—such as onboard global positioning systems (GPS), radio triangulation systems, and the like—is enhanced by the above-described combination of emission monitoring systems, where feedback processing of the emission data provides updated location information for the vehicles associated with the emission data. Furthermore, in one or more example implementations, a linkage between vehicular emission sensors, GPS, and vehicle information (such as fuel, engine, and maintenance data) is provided to more accurately determine factors related to the monitored vehicles that contribute to ambient air pollution, such as fuel types, engine types, engine age, service history, operating durations, to name a few. Additionally, the present disclosure provides for a model that identifies the effects of each vehicle or fleet on ambient air quality, which aids regulators and policymakers in defining localized operating policies that best address traffic related air pollution.
Thus, advantageously, the present disclosure provides a technique for analyzing combined granular emission data with overall air quality data to customize remedial interventions that maximize the remedial effect of such interventions, which thereby reduces the frequency and any adverse impact of such interventions. For example, by timely and accurately identifying specific noncompliant vehicles, the present disclosure reduces the need for sweeping inspections on emission standard compliance and can target the specific vehicles to mitigate release of the air pollution (e.g., airborne pollutants). FIG.1is a schematic illustration of an air quality and emission data retrieval, processing, storage, and application system100according to an example implementation of the present disclosure. It should be understood by one of ordinary skill in the art that one or more of the devices, apparatuses, and systems shown inFIG.1, and as described below, can be divided into plural entities. Conversely, the features and functionality provided by any plural entities shown inFIG.1, and as described below, can be provided by a consolidated apparatus with suitable programming and attendant hardware components to provide such features and functionality. As shown inFIG.1, system100includes a data processing apparatus101that is communicatively coupled to plural emission data sources (e.g., vehicles with onboard emission data collection assemblies)200-1,200-2, . . . ,200-mand plural air quality data sources (e.g., sensing stations with ambient air quality data collection assemblies)300-1,300-2, . . . ,300-nassociated with a monitored area107via a network120and a control device150. According to an example implementation, emission data from individual vehicles and air quality data from stationary sensing stations are acquired. In one or more example implementations, emission data retrieved from vehicles200-1. . .200-mup to hundreds of thousands (e.g., m˜10-1,000,000) and air quality data retrieved from sensing stations300-1. . .300-nin the tens of thousands (e.g., n˜10-100,000) are used. As can be appreciated by one of ordinary skill in the art, the arrangement and density of air quality data sources300-1. . .300-nin a given area107(number of sensing stations300-1. . .300-nper square kilometer in area107) can be adjusted to improve the coverage and/or resolution of the generated air quality information. In example implementations, system100can be applicable to plural areas107. In an example implementation, onboard emission data collection assemblies200-1. . .200-mcan be installed to respective vehicles as a condition for operation in area107or as a vehicle maintenance tracking feature. Air quality sensing stations (with ambient air quality data collection assemblies)300-1. . .300-nare placed at respective locations in area107and can be oriented in any known arrangements for such purposes. According to an example implementation, sensing stations300-1. . .300-nare arranged at regular intervals along traffic routes, roadways, intersections, and the like, to gather air quality information therefrom. In one or more example implementations, sensing stations300-1. . .300-ncan be placed at buildings, enclosures, and the like to retrieve air quality information on a particular zone or enclosure within area107. As illustrated inFIG.1, data processing apparatus101is a computing apparatus that incorporates a communication interface105, one or more processor devices110, and a memory115.
One or more processor(s)110can include any suitable processing circuitry capable of controlling operations and functionality of data processing apparatus101, as well as facilitating communications between various components within data processing apparatus101. In some implementations, processor(s)110can include a central processing unit (“CPU”), a graphic processing unit (“GPU”), one or more microprocessors, a digital signal processor, or any other type of processor, or any combination thereof. In some implementations, the functionality of processor(s)110can be performed by one or more hardware logic components including, but not limited to, field-programmable gate arrays (“FPGA”), application specific integrated circuits (“ASICs”), application-specific standard products (“ASSPs”), system-on-chip systems (“SOCs”), and/or complex programmable logic devices (“CPLDs”). Furthermore, each of processor(s)110can include its own local memory, which can store program systems, program data, and/or one or more operating systems. Memory115can include one or more types of storage mediums such as any volatile or non-volatile memory, or any removable or non-removable memory implemented in any suitable manner to store data for data processing apparatus101. For example, information can be stored using computer-readable instructions, data structures, and/or program systems. Various types of storage/memory can include, but are not limited to, hard drives, solid state drives, flash memory, permanent memory (e.g., ROM), electronically erasable programmable read-only memory (“EEPROM”), CD ROM, digital versatile disk (“DVD”) or other optical storage medium, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, or any other storage type, or any combination thereof. Furthermore, memory115can be implemented as computer-readable storage media (“CRSM”), which can be any available physical media accessible by processor(s)110to execute one or more instructions stored within memory115. In some implementations, one or more applications can be run by processor(s)110and can be stored in memory115. Communication interface105can include any circuitry allowing or enabling one or more components of data processing apparatus101to communicate with one or more additional devices, servers, and/or systems—for example, one or more of information system140, control device150, emission data collection assemblies200-1. . .200-m, and air quality data collection assemblies300-1. . .300-n. As an illustrative example, data recorded by data collection assemblies200-1. . .200-mand300-1. . .300-ncan be transmitted over network120to data processing apparatus101using any number of communications protocols either directly or through control device150. For example, network(s)120can be accessed using Transfer Control Protocol and Internet Protocol (“TCP/IP”) (e.g., any of the protocols used in each of the TCP/IP layers), Hypertext Transfer Protocol (“HTTP”), WebRTC, SIP, and wireless application protocol (“WAP”), are some of the various types of protocols that can be used to facilitate communications between data processing apparatus101and control device150. 
Various additional communication protocols can be used to facilitate communications between data processing apparatus101and control device150, including the following non-exhaustive list: Wi-Fi (e.g., 802.11 protocol), Bluetooth, radio frequency systems (e.g., 900 MHz, 1.4 GHz, and 5.6 GHz communication systems), cellular networks, FTP, RTP, RTSP, SSH, to name a few. Communications systems for facilitating network120can include hardware (e.g., hardware for wired and/or wireless connections) and/or software. In implementations, communications systems can include one or more communications chipsets, such as a GSM chipset, CDMA chipset, LTE chipset, 4G/5G/6G, Wi-Fi chipset, Bluetooth chipset, to name a few, and/or combinations thereof. Wired connections can be adapted for use with cable, plain old telephone service (POTS) (telephone), fiber (such as Hybrid Fiber Coaxial), xDSL, to name a few, and wired connections can use coaxial cable, fiber, copper wire (such as twisted pair copper wire), and/or combinations thereof, to name a few. Wired connections can be provided through telephone ports, Ethernet ports, USB ports, and/or other data ports, such as Apple 30-pin connector ports or Apple Lightning connector ports, to name a few. Wireless connections can include cellular or cellular data connections and protocols (e.g., digital cellular, PCS, CDPD, GPRS, EDGE, CDMA2000, 1×RTT, RFC 1149, Ev-DO, HSPA, UMTS, 3G, 4G, LTE, 5G, and/or 6G to name a few), Bluetooth, Bluetooth Low Energy, Wi-Fi, radio, satellite, infrared connections, ZigBee communication protocols, to name a few. Communications interface hardware and/or software, which can be used to communicate over wired and/or wireless connections, can include Ethernet interfaces (e.g., supporting a TCP/IP stack), X.25 interfaces, T1 interfaces, and/or antennas, to name a few. Computer systems—such as data processing apparatus101, information system140, and control device150—can communicate with other computer systems or devices directly and/or indirectly, e.g., through a data network, such as the Internet, a telephone network, a mobile broadband network (such as a cellular data network), a mesh network, Wi-Fi, WAP, LAN, and/or WAN, to name a few. Information system140incorporates data storage145that embodies storage media for storing data from emission data collection assemblies200-1. . .200-m(which can include operation history, maintenance history, real time location information as well as emission information for each respective vehicle), ambient air quality data collection assemblies300-1. . .300n(including gas composition and particulate matter measure data), data processing apparatus101(including results of example data processing described in further detail below), and control device150(including operation history, control parameters, location information, etc., related to data collection assemblies200-1. . .200-mand300-1. . .300n). Example storage media for data storage145correspond to those described above with respect to memory115. In example implementations, information system140incorporates one or more database servers that support Oracle SQL, NoSQL, NewSQL, PostgreSQL, MySQL, Microsoft SQL Server, Sybase ASE, SAP HANA, DB2, and the like. Information system140incorporates a communication interface (not shown) for communications with the aforementioned entities—i.e., emission data collection assemblies200-1. . .200-m, ambient air quality data collection assemblies300-1. . .300-n,
data processing apparatus101, and control device150—example implementations of which can include those described above with respect to communication interface105. In correspondence with data processing apparatus101, control device150is a computing device with one or more processor(s)155, example implementations of which can include those described above with respect to processor(s)110. Memory165can include one or more types of storage mediums such as any volatile or non-volatile memory, or any removable or non-removable memory implemented in any suitable manner to store data for control device150. Example implementations of memory165can include those described above with respect to memory115. Communication interface170can include any circuitry allowing or enabling one or more components of control device150to communicate with one or more additional devices, servers, and/or systems. Example implementations of communication interface170can include those described above with respect to communication interface105. Additionally, communications interface170can use any communications protocol, such as any of the previously mentioned example communications protocols for communicating with and controlling emission data collection assemblies200-1. . .200-m, ambient air quality data collection assemblies300-1. . .300-n, data processing apparatus101, and information system140. In some implementations, control device150can include one or more antennas to facilitate wireless communications with a network using various wireless technologies (e.g., Wi-Fi, Bluetooth, radiofrequency, etc.). In yet another implementation, control device150can include one or more universal serial bus (“USB”) ports, one or more Ethernet or broadband ports, and/or any other type of hardwire access port so that communication interface170allows control device150to communicate with emission data collection assemblies200-1. . .200-m, ambient air quality data collection assemblies300-1. . .300-n, data processing apparatus101, information system140, or another control device (not shown)—for example, via network120. User interface160is operatively connected to processor(s)155and can include one or more input or output device(s), such as switch(es), button(s), key(s), a touch screen, a display, microphone, camera(s), sensor(s), etc. as would be understood in the art of electronic computing devices. Display of user interface160can be used to display the results of example processing described in further detail below. In some implementations, functionality of apparatuses101and150can be consolidated to a singular apparatus or system that is communicatively coupled to data collection assemblies (200) and (300) for collecting data therefrom and for processing and storing the data in information system140. In some implementations, information system140can also be consolidated with apparatus101and/or control device150. Additionally, in some implementations, separate and independent control devices (not shown) can be incorporated to communicate with and/or control emission data collection assemblies200-1. . .200-mand ambient air quality data collection assemblies300-1. . .300-n, respectively.
In other words, computing devices and/or data processing apparatuses capable of embodying the systems and/or methods described herein can include any suitable type of electronic device including, but not limited to, workstations, servers, desktop computers, mobile computers (e.g., laptops, ultrabooks), mobile phones, portable computing devices, such as smart phones, tablets, personal display devices, personal digital assistants (“PDAs”), virtual reality devices, wearable devices (e.g., watches), to name a few. As can be appreciated by one of ordinary skill in the art, the features and functions described herein of control device150, emission data collection assemblies200-1. . .200-m, and ambient air quality data collection assemblies300-1. . .300-ncan be performed interchangeably among these entities without departing from the spirit and scope of the present disclosure. FIG.2is a schematic illustration of an emission data collection assembly200′ that is representative of emission data collection assemblies200-1. . .200-mshown inFIG.1according to an example implementation of the present disclosure. As illustrated inFIG.2, emission data collection assemblies200′ can be incorporated into, without limitation, regular automobiles (such as family automobiles, taxis, corporate automobiles, limousines, and the like)230-a′, work and construction vehicles230-b′, cargo transportation vehicles (including, for example, fuel delivery vehicles, mail and parcel delivery vehicles, and the like)230-c′, government vehicles (including, for example, police vehicles, fire department vehicles, ambulances to name a few)230-d′, and public transportation vehicles (such as buses, trains, helicopters, airplanes to name a few)230-e′(the different types of vehicles can be collectively referred to herein as230′). Thus, the emission detection and mitigation management of the present disclosure is applicable, for example, to specific organizations, such as for fleet management of a delivery operation, and to public administration of an area (107), such as a city. As further illustrated inFIG.2, each emission data collection assembly200′ incorporates a controller205′, emission sensor210′, location system215′, communication interface220′, and data storage225′. Controller205′ incorporates one or more processors (not shown) adapted to control the operations and functionality of emission data collection assembly200′. Example implementations of controller205′ can include those described above with respect to processor(s)110. Emission sensor210′ is a composition sensor, such as an electronic carbon monoxide (CO) sensor, oxygen (O2) sensor, sulfur dioxide (SO2) sensor, nitrogen oxides (NOx) sensor, and the like, that is mounted to the exhaust systems (not shown) of vehicles230′ to determine the emission compositions of the vehicles230′. As can be appreciated by one of ordinary skill in the art, emission sensor210′ can also embody sensor(s) for detecting particulate matter concentrations from the exhaust systems of vehicles230′. Thus, according to an example implementation, the collected emission data is used to determine the contributions of the respective vehicles230′ to CO (in parts per million, or ppm) and/or particulate matter pollution in association with data collected by ambient air quality data collection assemblies300′.
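Concretely, one buffered record from an emission data collection assembly200′ might pair a timestamped exhaust reading from emission sensor210′ with a fix from location system215′. The sketch below is purely illustrative; the field names, units, and JSON serialization are assumptions rather than a format specified by the disclosure.

```python
# Illustrative shape of one buffered record from an emission data
# collection assembly (200'): exhaust composition readings from the
# emission sensor (210') paired with a fix from the location system (215').
# Field names and units are assumptions for illustration.

from dataclasses import dataclass, asdict
import json
import time


@dataclass
class EmissionRecord:
    vehicle_id: str
    timestamp: float       # seconds since epoch
    lat: float
    lon: float
    co_ppm: float          # carbon monoxide, parts per million
    nox_ppm: float         # nitrogen oxides
    so2_ppm: float         # sulfur dioxide
    pm_ug_m3: float        # particulate matter concentration


if __name__ == "__main__":
    record = EmissionRecord("230-a-0042", time.time(),
                            48.8566, 2.3522, 12.5, 0.8, 0.1, 35.0)
    # Serialized as it might sit in the data storage (225') buffer awaiting
    # transmission over the communication interface (220').
    print(json.dumps(asdict(record)))
```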
Based on data processing discussed below and fuel composition determinations, the collected emission data can also be used to determine contributions to ozone, sulfur dioxide, nitrogen dioxide, and/or lead concentrations according to some implementations of the present disclosure. Location system215′ includes a sensor adapted to determine the real time locations of vehicles230′. In example implementations, location system215′ is embodied by one or more of a global positioning system (GPS) and a radio triangulation location system, which can be integrated with an onboard system (not shown) of the vehicle (230′). In some implementations, location system215′ can include one or more cameras (not shown) that correspond to a navigation system (such as a self-driving system) of the vehicle (230′) for improved location determination. Communication interface220′ can include any circuitry allowing or enabling one or more components of emission data collection assembly200′ to communicate with one or more additional devices, servers, and/or systems. Example implementations of communication interface220′ can include those described above with respect to communication interface170. Accordingly, communications interface220′ can use any communications protocol, such as any of the previously mentioned example communications protocols for communicating with control device150, ambient air quality data collection assemblies300-1. . .300-n, data processing apparatus101, and information system140—for example, via network120. Data storage225′ can include one or more types of storage mediums such as any volatile or non-volatile memory, or any removable or non-removable memory implemented in any suitable manner to store data, including any collected emission data and associated location data, for emission data collection assembly200′. Example implementations of data storage225′ can include those described above with respect to memory115. In an example implementation, data storage225′ includes a buffer for storing collected real time emission and location data for transmission to control device150, data processing apparatus101, and/or information system140. In some implementations, emission data collection assembly200′ can be integrated with an onboard system (not shown) of its corresponding vehicle (230′), which can embody control device150shown inFIG.1. As an example, a user interface (such as a console panel and the like) on the vehicle (230′) (not shown) can be communicatively coupled to controller205′ for providing user input controls and/or display outputs to an operator of the vehicle (230′). In some implementations, controller205′ can be communicatively coupled to an ignition or starter mechanism, a fuel supply mechanism, or the like, of the vehicle (230′) (not shown) so that noncompliance with emission standards results in an inability to operate the vehicle (230′). As an example, a process of setting a time and/or distance limit from an initial determination of emission standard noncompliance can be stored at data storage225′ and executed by controller205′ so that the operator of the vehicle (230′) is provided with an opportunity to mitigate the emission noncompliance before the vehicle (230′) is rendered inoperable.
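The grace-limit process just described (a time and/or distance allowance after an initial noncompliance determination, after which controller205′ disables operation) could be sketched as follows. The thresholds and the hardware hook are illustrative assumptions only:

```python
# Sketch of the grace-limit process: after a noncompliance determination,
# the operator gets a time and/or distance allowance to fix the problem
# before the controller (205') disables the starter/fuel supply.
# Thresholds are illustrative assumptions.

GRACE_SECONDS = 7 * 24 * 3600.0   # e.g., one week to perform maintenance
GRACE_KM = 500.0                  # or 500 km, whichever runs out first


class ComplianceGovernor:
    def __init__(self):
        self.flagged_at = None     # time of the initial noncompliance finding
        self.km_at_flag = None

    def flag_noncompliance(self, now_s: float, odometer_km: float) -> None:
        if self.flagged_at is None:   # keep the *initial* determination
            self.flagged_at = now_s
            self.km_at_flag = odometer_km

    def clear(self) -> None:
        """Called when maintenance restores emission compliance."""
        self.flagged_at = None
        self.km_at_flag = None

    def may_operate(self, now_s: float, odometer_km: float) -> bool:
        if self.flagged_at is None:
            return True
        within_time = (now_s - self.flagged_at) < GRACE_SECONDS
        within_dist = (odometer_km - self.km_at_flag) < GRACE_KM
        return within_time and within_dist


if __name__ == "__main__":
    gov = ComplianceGovernor()
    gov.flag_noncompliance(now_s=0.0, odometer_km=12_000.0)
    print(gov.may_operate(now_s=3600.0, odometer_km=12_040.0))           # True
    print(gov.may_operate(now_s=8 * 24 * 3600.0, odometer_km=12_100.0))  # False
```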
FIG.3is a schematic illustration of an ambient air quality data collection assembly300′ that is representative of ambient air quality data collection assemblies300-1. . .300-nshown inFIG.1according to an example implementation of the present disclosure. As illustrated inFIG.3, ambient air quality data collection assemblies300′ can be incorporated into, without limitation, structures that are installed at regular intervals throughout an area (107) (such as weather stations and radio communication towers, and the like)330-a′, structures related to roadways and routes (such as traffic lights, street lamps, road signs and markers, bus stops to name a few)330-b′, and buildings and structures associated with an implementation of the present disclosure (such as a hospital, a work site, a transportation hub including a train station and/or an airport, to name a few)330-c′ (the different types of structures can be collectively referred to herein as330′). Thus, as noted above, the ambient air quality detection and associated emission detection and mitigation management of the present disclosure is applicable, for example, to specific organizations, such as for fleet management of a delivery operation, and to public administration of an area (107), such as a city. As further illustrated inFIG.3, each ambient air quality data collection assembly300′ incorporates a controller305′, air quality sensor310′, location system315′, communication interface320′, and data storage325′. In an example implementation, each assembly300′ incorporates a radar and/or camera327′ that is adapted to identify and locate specific vehicles (230′) that are noncompliant with respect to emission standards. Controller305′ incorporates one or more processors (not shown) adapted to control the operations and functionality of ambient air quality data collection assembly300′. Example implementations of controller305′ can include those described above with respect to processor(s)110. According to an example implementation, controller305′ can be communicatively coupled to (via communication interface320′), or integrated with, a control mechanism for a traffic light system (330-b′) for altering a traffic direction pattern based on a determination on the detected air quality and contributions from vehicles (230′) on the road. According to another example, controller305′ can be communicatively coupled to, or integrated with, a bus stop display (330-b′) for showing alterations to assigned routes based on a determination on the detected air quality and contributions from vehicles (230′) on the road. Air quality sensor310′ includes a particulate matter sensor and/or a composition sensor, such as an electronic carbon monoxide (CO) sensor, sulfur dioxide (SO2) sensor, nitrogen oxides (NOx) sensor, and the like, that is mounted to structures330′ to determine the particulate matter concentration and/or the ambient air composition at and around the locations of the respective structures330′. In an example implementation, air quality sensor310′ measures and provides an air quality index (AQI), which can indicate air quality of a location in association with multiple pollutants. For example, the AQI can include an indicator for a concentration, in micrograms per cubic meter (μg/m3), of PM2.5, which refers to atmospheric particulate matter (PM) that has a diameter of less than 2.5 micrometers. Other measures such as concentrations of ground-level ozone, sulfur dioxide, CO, and nitrogen dioxide (in ppm), and/or lead and particulates such as PM10 (PM having a diameter of less than 10 micrometers) (in μg/m3) and the like, can also be included.
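For reference, a pollutant-specific AQI figure of the kind air quality sensor310′ reports is conventionally computed by piecewise linear interpolation between concentration breakpoints (the U.S. EPA formula). The sketch below uses historically published 24-hour PM2.5 breakpoints; agencies revise these tables, so the numbers are illustrative rather than authoritative, and nothing here implies the disclosure is limited to this formula.

```python
# AQI sub-index for PM2.5 via the EPA piecewise linear formula:
#   I = (I_hi - I_lo) / (C_hi - C_lo) * (C - C_lo) + I_lo
# Breakpoints below are the historically published 24-hour PM2.5 table
# (ug/m3); treat them as illustrative, since the tables get revised.

PM25_BREAKPOINTS = [
    # (C_low, C_high, I_low, I_high)
    (0.0, 12.0, 0, 50),
    (12.1, 35.4, 51, 100),
    (35.5, 55.4, 101, 150),
    (55.5, 150.4, 151, 200),
    (150.5, 250.4, 201, 300),
    (250.5, 500.4, 301, 500),
]


def pm25_aqi(concentration_ug_m3: float) -> int:
    """AQI sub-index for a 24-hour PM2.5 concentration."""
    c = concentration_ug_m3
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= c <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo)
    raise ValueError("concentration out of table range")


if __name__ == "__main__":
    print(pm25_aqi(35.0))  # 99: just inside the 51-100 band
```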
As can be appreciated by one of ordinary skill in the art, the ambient air quality data collected by assemblies300′ is used to determine the contributions of particular vehicles230′ to particulate matter and/or gaseous pollution in association with data collected by emission data collection assemblies200′. Location system315′ includes a sensor adapted to determine the locations of structures330′. In example implementations, location system315′ is embodied by one or more of a global positioning system (GPS) and a radio triangulation location system. In some implementations, for example for structures330′ that are radio communication towers (330-a′), location system315′ can cooperate with location systems215′ on the vehicles230′ to determine relative positions and, thus, real time positions of the vehicles230′. In an example implementation, with structures330′ being fixed to their respective locations, location information is stored in data storage325′, control device150, data processing apparatus101, and/or information system140. Communication interface320′ can include any circuitry allowing or enabling one or more components of ambient air quality data collection assembly300′ to communicate with one or more additional devices, servers, and/or systems. Example implementations of communication interface320′ can include those described above with respect to communication interface170. Accordingly, communications interface320′ can use any communications protocol, such as any of the previously mentioned example communications protocols for communicating with control device150, emission data collection assemblies200-1. . .200-m, data processing apparatus101, and information system140—for example, via network120. Data storage325′ can include one or more types of storage mediums such as any volatile or non-volatile memory, or any removable or non-removable memory implemented in any suitable manner to store data, including any collected ambient air quality data, for ambient air quality data collection assembly300′. Example implementations of data storage325′ can include those described above with respect to memory115. In an example implementation, data storage325′ includes a buffer for storing collected real time air quality data and identified vehicle data (for example, noncompliant vehicles230′ captured by camera327′) for transmission to control device150, data processing apparatus101, and/or information system140. In some implementations, radar/camera327′ can be integrated with existing traffic control systems, such as traffic violation radar and camera systems (not shown) and the like. Accordingly, based on vehicle emission plume air dispersion modeling and collected air quality measures, real time pollution contributions of respective vehicles230′ can be determined and noncompliant vehicles can be identified. According to an example implementation, in a manner similar to a traffic violation radar and camera system, radar/camera327′ captures an image of a noncompliant vehicle230′ based upon which the identification of the vehicle230′ can be used to undertake remedial and/or punitive actions, such as a citation and/or a fine. In some implementations, ambient air quality data collection assemblies300′ can be deployed on a mobile vehicle, such as vehicles230′, to determine ambient air quality at different locations at different times in area107, which is recorded and which is identifiable to the time and location at which the air quality data is recorded.
The data can be stored over predetermined periods at control device150, data processing apparatus101, and/or information system140for processing and interpretation. FIG.4is a schematic illustration of a software structure400maintained at one or more of data processing apparatus101, information system140, control device150, emission data collection assemblies200′, and ambient air quality data collection assemblies300′ in accordance with an example implementation of the present disclosure.FIG.4further illustrates steps conducted by these modules in an example implementation of the present disclosure. As shown inFIG.4, software structure400includes a data collection module401, a data processing module405, and an output module420. Data collection module401includes instructions for collecting vehicle emission sensor and location data2000, which corresponds to data collected by emission data collection assemblies200′ described above. Data collection module401further includes instructions for collecting station ambient air sensor and location data3000, which corresponds to data collected by ambient air quality collection assemblies300′ described above. Thus, data collection module401, in an initial step s405, outputs the collected data2000and3000to data processing module405. Data processing module405includes instructions for incorporating vehicle emission plume air dispersion model410to process the collected vehicle emission sensor and location data2000and station ambient air sensor and location data3000. According to an example implementation, vehicle emission plume air dispersion model410includes one or more of, without limitation, Gaussian dispersion models, convective scaling, plume rise and dispersion models, Lagrangian dispersion models and equations, and the like. In example implementations, an AMS/EPA (American Meteorological Society/United States Environmental Protection Agency) Regulatory Model (AERMOD) atmospheric dispersion modeling system and/or a “California Puff Model” (CALPUFF) air quality dispersion modeling system can be incorporated in developing small scale dispersion modeling on individual vehicles, emission elements, and the like. Based on model410, data processing module405employs a pollution contribution determination and noncompliant vehicle identification model415to process the collected data2000and3000and determine the respective contributions of vehicles230′ to ambient pollution around structures330′ in area107. Based on these determinations, noncompliant vehicles are identified. According to an example implementation, a noncompliant vehicle can be one of the vehicles230′ incorporated with an emission data collection assembly200′ or can be a vehicle without emission data collection assembly200′. In a step s410, data processing module405processes the output from one or more of vehicle emission plume air dispersion model410and pollution contribution determination and noncompliant vehicle identification model415with collection data2000and3000, which can include additional collected data such as images captured by camera327′, updated air quality data based on changed conditions from output results of output module420, and the like, for feedback and confirmation of the processing results. Based on any feedback adjustments and confirmations of the processing results at step s410, models410and415are updated and improved. In some implementations, one or more machine learning based models can be used for models410,415and step s410. 
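Of the model families listed for vehicle emission plume air dispersion model410, the Gaussian plume is the simplest to state. The sketch below implements the textbook steady-state point-source form with ground reflection; it is a deliberate simplification of what AERMOD or CALPUFF compute, and the dispersion parameters are placeholders (in practice they depend on downwind distance and atmospheric stability).

```python
# Textbook steady-state Gaussian plume for a continuous point source with
# ground reflection:
#   C = Q / (2 pi u sy sz) * exp(-y^2 / (2 sy^2))
#       * [exp(-(z-H)^2 / (2 sz^2)) + exp(-(z+H)^2 / (2 sz^2))]
# A drastic simplification of AERMOD/CALPUFF, shown only to fix ideas.
# sigma_y / sigma_z below are placeholder values.

import math


def plume_concentration(q_g_s: float, u_m_s: float, y: float, z: float,
                        h: float, sigma_y: float, sigma_z: float) -> float:
    """Concentration (g/m^3) at crosswind offset y (m) and height z (m)."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-((z - h) ** 2) / (2 * sigma_z**2))
                + math.exp(-((z + h) ** 2) / (2 * sigma_z**2)))
    return q_g_s / (2 * math.pi * u_m_s * sigma_y * sigma_z) * lateral * vertical


if __name__ == "__main__":
    # Hypothetical CO source of 0.5 g/s at tailpipe height ~0.3 m, 2 m/s
    # wind, receptor at a roadside station inlet 2 m high, 5 m off the
    # plume centerline.
    c = plume_concentration(q_g_s=0.5, u_m_s=2.0, y=5.0, z=2.0,
                            h=0.3, sigma_y=4.0, sigma_z=2.0)
    print(f"{c:.2e} g/m^3")
```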
Next, in a step s415, data processing module405outputs confirmed processing results to output module420for intervention, punitive, or remedial actions. As discussed above, output module420can issue an instruction to a controller205′ of a noncompliant vehicle230′ to set a time and/or distance limit before rendering vehicle230′ inoperable. Correspondingly, an alert can be issued to the operator of the noncompliant vehicle230′—for example, via a display (not shown) at the vehicle230′—to perform maintenance services on the noncompliant vehicle230′. As another example, output module420can issue an instruction for generating a citation ticket for the noncompliant vehicle230′. As yet another example, output module420can issue instructions to public transportation vehicles (230-e′) to alter their assigned routes based on traffic emission conditions. Correspondingly, instructions to transportation routing displays, such as bus stops (330-b′), can be issued to show the alterations to the assigned routes. For fleet management applications, output module420can issue an instruction to a vehicle maintenance administration apparatus (not shown) to schedule maintenance services to respective vehicles230′ in a fleet based on outputs at step s415. Finally, at step s420, output module420returns feedback and confirmations to data processing module405to further improve model410and/or model415. In example implementations, the feedback at step s420can include, without limitation, citation disputes and associated evidence, operator or administrator feedback via message communications, vehicle inspection results to name a few. Confirmations at step s420can include, for example, a collected fine, a vehicle service record (of maintenance services performed following the processing results of step s415) to name a few. Example 1 FIG.5is a graphical illustration of an example implementation of the present disclosure. As shown inFIG.5, a roadway section500, which can be situated in area107shown inFIG.1, includes two ambient air quality data collection assemblies300-aand300-bmounted at respective “stations,” which can be roadside structures that are placed at regular intervals along the roadway of section500. As further illustrated inFIG.5, two vehicles with respective onboard emission data collection assemblies200-aand200-bare operating on the roadway section500. According to an example implementation, a pollutant emission plume505from a “bad actor” noncompliant vehicle5230causes assemblies300-aand300-bin its vicinity to detect elevated levels of pollution (e.g., airborne pollutants and concentrations). Based on vehicle emission plume air dispersion model410and collected emission and location data from assemblies200-aand200-b(which are in the vicinity of vehicle5230), data processing module405executing model415can determine that the source of pollutant emission plume505is from neither vehicle associated with assemblies200-aand200-b, respectively. Radar/camera327′ of one or more of assemblies300-aand300-bcan determine the presence and position of noncompliant vehicle5230. Based on the position of vehicle5230, models410and415, and data from assemblies200-a,200-b,300-a, and300-b, an image can be taken of vehicle5230to identify it as a principal source of plume505. Accordingly, the image can form a basis for issuing a citation to the operator of vehicle5230. 
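The inference in Example 1 can be phrased as a residual test: subtract the dispersion-modeled contributions of the monitored vehicles from a station's observed reading, and a residual well above background implicates another source such as vehicle5230. The numbers and threshold below are illustrative assumptions:

```python
# Sketch of the inference in Example 1: subtract the modeled contributions
# of monitored vehicles (from assemblies 200-a, 200-b) from a station's
# observed reading; a residual well above background implicates another
# source, such as the unmonitored "bad actor" vehicle 5230.

BACKGROUND_CO_PPM = 0.4
RESIDUAL_THRESHOLD_PPM = 1.0   # excess needed to trigger a capture


def unexplained_residual(observed_ppm: float,
                         modeled_contributions_ppm: list[float]) -> float:
    """Station reading minus background minus monitored-vehicle plumes."""
    return observed_ppm - BACKGROUND_CO_PPM - sum(modeled_contributions_ppm)


if __name__ == "__main__":
    # Dispersion-modeled CO contributions of the two monitored vehicles at
    # station 300-a, plus the actual observation there (all hypothetical).
    residual = unexplained_residual(observed_ppm=4.2,
                                    modeled_contributions_ppm=[0.6, 0.3])
    if residual > RESIDUAL_THRESHOLD_PPM:
        print(f"unattributed {residual:.1f} ppm CO "
              "-> trigger radar/camera (327') capture")
```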
In an example implementation, data from an onboard emission data collection assembly200′ at vehicle5230can provide further improvements to models410and415—for example, at step s410. As discussed above, vehicle5230incorporating an onboard emission data collection assembly200′ can receive an alert for display to the operator of vehicle5230and/or an instruction for rendering vehicle5230inoperable (via a control to a starter or fuel supply mechanism on vehicle5230) within a time or distance limit from the above-described identification of vehicle5230as the principal source of plume505. In other words, the present disclosure provides for identifying noncompliant vehicle5230both when vehicle5230incorporates an emission data collection assembly200′ and when it does not. Example 2 According to an example implementation, the air quality data from assemblies300′ augmented with emission information from assemblies200′ outputted by data processing apparatus101is used to adjust, alter, and enhance traffic direction operations via traffic lights, such as element300-2shown inFIG.1, or digital road signs (330-b′), which can be controlled via control device150. Based on detected pollution (e.g., airborne pollutant concentrations) on particular roadways, traffic on these roadways can be prioritized (e.g., with altered “stop” and “go” durations) or diverted (e.g., by altering directions on road signs) to reduce the risk of sustained elevated pollution (e.g., elevated airborne pollutant concentrations). As an example, a metropolitan area with high population and vehicle densities—and emissions (e.g., Paris)—can implement the methods, apparatuses, and systems of the present disclosure to quickly identify bad actors and/or divert potential sources of pollution away from city centers or high pollution risk areas. Portions of the methods described herein can be performed by software or firmware in machine readable form on a tangible (e.g., non-transitory) storage medium. For example, the software or firmware can be in the form of a computer program including computer program code adapted to cause the system to perform various actions described herein when the program is run on a computer or suitable hardware device, and where the computer program can be embodied on a computer readable medium. Examples of tangible storage media include computer storage devices having computer-readable media such as disks, thumb drives, flash memory, and the like, and do not include propagated signals. Propagated signals can be present in tangible storage media, but propagated signals per se are not examples of tangible storage media. The software can be suitable for execution on a parallel processor or a serial processor such that various actions described herein can be carried out in any suitable order, or simultaneously. It is to be further understood that like or similar numerals in the drawings represent like or similar elements through the several figures, and that not all components or steps described and illustrated with reference to the figures are required for all implementations or arrangements. The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms “contains,” “containing,” “includes,” “including,” “comprises,” and/or “comprising,” and variations thereof, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Terms of orientation are used herein merely for purposes of convention and referencing and are not to be construed as limiting. However, it is recognized these terms could be used with reference to an operator or user. Accordingly, no limitations are implied or to be inferred. In addition, the use of ordinal numbers (e.g., first, second, third) is for distinction and not counting. For example, the use of “third” does not imply there is a corresponding “first” or “second.” Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof herein, is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. While the disclosure has described several example implementations, it will be understood by those skilled in the art that various changes can be made, and equivalents can be substituted for elements thereof, without departing from the spirit and scope of the invention. In addition, many modifications will be appreciated by those skilled in the art to adapt a particular instrument, situation, or material to implementations of the disclosure without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular implementations disclosed, or to the best mode contemplated for carrying out this invention, but that the invention will include all implementations falling within the scope of the appended claims. The subject matter described above is provided by way of illustration only and should not be construed as limiting. Various modifications and changes can be made to the subject matter described herein without following the example implementations and applications illustrated and described, and without departing from the true spirit and scope of the invention encompassed by the present disclosure, which is defined by the set of recitations in the following claims and by structures and functions or steps which are equivalent to these recitations. | 43,532 |
11859992 | DESCRIPTION OF EMBODIMENTS An embodiment of the present invention will be described below with reference to the figures. A route-specific person number estimation device10for estimating a number of people on each route as an example of a route-specific traffic volume will be described below. Note, however, that the route-specific traffic volume is not limited to the number of people on each route. For example, a number of cars on each route, a number of motorbikes on each route, a number of bicycles on each route, a number of living things on each route, and so on may be used as the route-specific traffic volume. Accordingly, the route-specific person number estimation device10according to this embodiment of the present invention may be applied in a similar manner to cases in which these route-specific traffic volumes are estimated. Note that a moving object such as a person, a car, a motorbike, a bicycle, or a living thing traveling along a route may also be referred to as an “agent”. <Configuration of Route-Specific Person Number Estimation Device10> First, the configuration of the route-specific person number estimation device10according to this embodiment of the present invention will be described with reference toFIG.1.FIG.1is a view showing an example configuration of the route-specific person number estimation device10according to this embodiment of the present invention. The route-specific person number estimation device10shown inFIG.1is a computer or a computer system that generates route candidates along which the agent may pass (also referred to simply as “route candidates” hereafter) and then estimates the number of people on each route (also referred to simply as the “route-specific person number” hereafter) from observation value data and so on. A route-specific person number estimation program100is installed in the route-specific person number estimation device10shown inFIG.1. The route-specific person number estimation program100may be either a single program or a group of programs constituted by a plurality of programs or modules. The route-specific person number estimation device10shown inFIG.1generates the route candidates and estimates the number of people on each route by means of processing executed by the route-specific person number estimation program100. Definitions of Variables Here, variables used in this embodiment of the present invention are defined as follows.
Xt, i: an estimated value of the number of people on each route.
t: an index indicating a time. t is set at 0≤t≤T.
T: the final time of a time slot set as an observation target.
Ri: a route candidate (a string of nodes along which the agent may pass).
i: an index of the route candidate. i is set at 1≤i≤I.
I: the number of route candidates.
Yt, j: an observation value of the number of passers-by passing each passer-by number observation point.
St, j′: an observation value of the number of visitors visiting each visitor number observation point.
ΔS: variation in the number of visitors.
Mj: a passer-by number observation point (a string of nodes set as an observation target).
M: a list of passer-by number observation points.
M′j′: a visitor number observation point (a node set as an observation target).
M′: a list of visitor number observation points.
j: an index of the passer-by number observation point. j is set at 1≤j≤J.
j′: an index of the visitor number observation point.
j′ is set at 1≤j′≤J′.
J: the number of passer-by number observation points.
J′: the number of visitor number observation points.
A: a routing matrix.
B: a visitor matrix.
Here, a T-row, J-column matrix on which numbers of people Yt, jat respective passer-by number observation points are set as elements is represented by observation value data Y. The observation value data Y are data acquired by temporally and spatially measuring and tallying traffic volumes at a certain granularity. A method of measuring a traffic volume by counting the number of cars or people moving along a road, as disclosed by the Ministry of Land, Infrastructure, Transport, and Tourism in “Road Traffic Census 2015, Outline of Results of General Traffic Volume Survey”, for example, may be used as a method of acquiring the observation value data Y. FIG.2shows an example of the observation value data Y. As shown inFIG.2, in the observation value data Y, observation values Y1, j(j=1, 2, . . . , J) observed at respective observation points Mj(j=1, 2, . . . , J) over an observation time slot (an observation period) t=0 to t=1 are set as the elements of a (1, j) component. Similarly, in the observation value data Y, observation values Y2, j(j=1, 2, . . . , J) observed at the respective observation points Mj(j=1, 2, . . . , J) over an observation time slot t=1 to t=2 are set as the elements of a (2, j) component. Likewise thereafter, in the observation value data Y, observation values YT, j(j=1, 2, . . . , J) observed at the respective observation points Mj(j=1, 2, . . . , J) over an observation time slot t=T−1 to t=T are set as the elements of a (T, j) component. A missing value may exist in the observation values Yt, jincluded in the observation value data Y. In other words, a missing measurement may exist at a certain observation point Mjin a certain observation time slot. Either NULL or a predetermined value determined in advance (a predetermined code value indicating a missing value, a value that cannot be acquired as an observation value, or the like, for example) may be set as the missing observation value Yt, j. Note that the observation periods may have different time widths. For example, the time width of the observation period t=0 to t=1 may differ from the time width of the observation period t=1 to t=2. Further, observation may be performed over a different time width at each observation point Mj. For example, over the period from t=0 to t=T, T observations may be performed at the observation point M1while T/2 observations are performed at the observation point M2, and so on. In other words, the observation period may differ at each observation point Mj. In this case, observation points Mjhaving aligned observation time widths (i.e., observation points Mjhaving identical observation periods) may be grouped together so that the observation value data Y are expressed by a plurality of matrices. Furthermore, a T-row, J′-column matrix on which numbers of visitors St, j′at respective visitor number observation points are set as elements is represented by observation value data S. Similarly to Y, S may have missing values.
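To make the layout of the observation value data concrete, the following minimal Python sketch (not part of the embodiment) assumes NumPy with NaN-coded missing values; the observation values themselves are hypothetical.

import numpy as np

# Hypothetical example: T = 4 observation periods, J = 3 passer-by number
# observation points. Y[t-1, j-1] holds the observation value Yt, j, and
# NaN marks a missing measurement at a given observation point and period.
Y = np.array([
    [12.0,  7.0,    3.0],
    [15.0, np.nan,  4.0],   # observation at M2 missing in the slot t=1 to t=2
    [ 9.0,  6.0,    5.0],
    [11.0,  8.0, np.nan],   # observation at M3 missing in the slot t=3 to t=4
])

observed = ~np.isnan(Y)  # True where an observation value exists
print(int(observed.sum()), "of", Y.size, "entries observed")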
<Hardware Configuration of Route-Specific Person Number Estimation Device10> Next, a hardware configuration of the route-specific person number estimation device10according to this embodiment of the present invention will be described with reference toFIG.3.FIG.3is a view showing an example hardware configuration of the route-specific person number estimation device10according to this embodiment of the present invention. The route-specific person number estimation device10shown inFIG.3includes an input device11, a display device12, an external I/F13, a RAM (Random Access Memory)14, a ROM (Read Only Memory)15, a CPU (Central Processing Unit)16, a communication I/F17, and an auxiliary storage device18. These pieces of hardware are connected communicably via a bus B. The input device11is a keyboard, a mouse, a touch panel, or the like, for example, and is used by a user to input various operations. The display device12is a display or the like, for example, that displays processing results acquired by the route-specific person number estimation device10. Note that the route-specific person number estimation device10does not have to include at least one of the input device11and the display device12. The external I/F13is an interface to an external device. The external device is a recording medium13aor the like. The route-specific person number estimation device10can perform reading and writing to and from the recording medium13aand so on via the external I/F13. The route-specific person number estimation program100and so on may be recorded on the recording medium13a. The recording medium13amay be a flexible disc, a CD (Compact Disc), a DVD (Digital Versatile Disc), an SD memory card (Secure Digital memory card), a USB (Universal Serial Bus) memory card, or the like, for example. The RAM14is a volatile semiconductor memory for temporarily storing programs and data. The ROM15is a nonvolatile semiconductor memory that can store programs and data even when the power supply thereof is cut off. OS settings, network settings, and so on, for example, are stored in the ROM15. The CPU16is a computation device that executes processing by reading programs and data from the ROM15, the auxiliary storage device18, and so on to the RAM14. The communication I/F17is an interface for connecting the route-specific person number estimation device10to a communication network. The route-specific person number estimation program100may be acquired (downloaded) from a predetermined server device or the like via the communication I/F17. The auxiliary storage device18is a nonvolatile storage device such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive), for example, that stores programs and data. The programs and data stored in the auxiliary storage device18include an OS, an application program for realizing various functions on the OS, the route-specific person number estimation program100, and so on, for example. By having the hardware configuration shown inFIG.3, the route-specific person number estimation device10according to this embodiment of the present invention can realize various types of processing described below.
<Functional Configuration of Route-Specific Person Number Estimation Device10> Next, a functional configuration of the route-specific person number estimation device10according to this embodiment of the present invention will be described with reference toFIG.4.FIG.4is a view showing an example functional configuration of the route-specific person number estimation device10according to this embodiment of the present invention. The route-specific person number estimation device10shown inFIG.4includes a route candidate generation unit110, a routing matrix generation unit120, a route-specific person number estimation unit130, and a visitor matrix generation unit140. Each of these units is realized by processing that the route-specific person number estimation program100causes the CPU16to execute. The route candidate generation unit110generates a route candidate list R upon receipt of road network data G, node sets V and U, a magnification α, and a passer-by number observation point list M. The route candidate list R is a list of route candidates Ri. Each route candidate Riis a string of linked nodes within the road network data G. Note that the route candidate generation unit110does not have to receive at least one element among the node sets V and U, the magnification α, and the passer-by number observation point list M. In other words, the node sets V and U, the magnification α, and the passer-by number observation point list M are optional input data. The road network data G are constituted by a directed graph representing a target road network. The road network data G are expressed as G={N, E}, where N represents a set of nodes (intersections and the like, for example) belonging to the road network and E represents a set of links (roads and the like, for example) belonging to the road network. The node sets V and U are sets of nodes used to limit combinations of an origin (O) and a destination (D) (referred to hereafter as “OD combinations”). Each OD combination is a combination of a node serving as the origin and a node serving as the destination. Nodes denoting landmarks (for example, nodes where people may appear or disappear) may be used as the nodes included in the node sets V and U. More specifically, a set of nodes denoting stations, for example, may be used as the node set V. Further, a set of nodes denoting entrances to event venues, for example, may be used as the node set U. The magnification α is a value used to limit the allowable range of the length of the route candidates Ri. The value of the magnification α is set in advance by the user of the route-specific person number estimation device10or the like, for example. The passer-by number observation point list M is a list of passer-by number observation points Mj, and is used to exclude route candidates Rithat are not observed at any of the passer-by number observation points Mj. The routing matrix generation unit120generates a routing matrix A upon receipt of the route candidate list R and the passer-by number observation point list M. The visitor matrix generation unit140generates a visitor matrix B upon receipt of the route candidate list R and a visitor number observation point list M′. The route-specific person number estimation unit130estimates a route-specific person number X upon receipt of the routing matrix A, the visitor matrix B, the observation value data Y, and the observation value data S. The route-specific person number estimation unit130then outputs the estimated route-specific person number X. 
The route-specific person number X is a matrix expressing the volume of traffic on each route. Note that the route-specific person number estimation unit130may output the route-specific person number X to a desired output destination set in advance. For example, the route-specific person number estimation unit130may output the route-specific person number X to the display device12, output (store) the route-specific person number X to (in) the auxiliary storage device18, the recording medium13a, or the like, or output (transmit) the route-specific person number X to a server device or the like on a network via the communication I/F17. The route-specific person number estimation unit130may also output the route-specific person number X to another program (a people flow simulation program, for example). <Processing Executed by Route-Specific Person Number Estimation Device10> Next, processing executed by the route-specific person number estimation device10according to this embodiment of the present invention will be described with reference toFIG.5.FIG.5is a flowchart showing an example of the processing executed by the route-specific person number estimation device according to this embodiment of the present invention. Step S101: The route candidate generation unit110generates the route candidate list R by executing the following procedures (1) to (4) on all OD combinations. When the node sets V and U are not input, the route candidate generation unit110creates OD combinations from the nodes included in the road network data G. When the node sets V and U are input, on the other hand, the route candidate generation unit110creates OD combinations from the nodes included in the node set V and the nodes included in the node set U. Creating OD combinations from the nodes included in the node set V and the nodes included in the node set U corresponds to selecting respective edges of a complete bipartite graph of the node set V and the node set U. Note that the nodes included in the road network data G, the nodes included in the node set V, and the nodes included in the node set U can all be either an origin (O) or a destination (D). In other words, an OD combination of a node N1and a node N2included in the road network data G may be either a combination in which the node N1is the origin and the node N2is the destination or a combination in which the node N2is the origin and the node N1is the destination. Similarly with respect to the node sets V and U, an OD combination of a node N1included in the node set V and a node N2included in the node set U may be either a combination in which the node N1is the origin and the node N2is the destination or a combination in which the node N2is the origin and the node N1is the destination. (1) The route candidate generation unit110enumerates all of the routes between the node (referred to hereafter as the “node O”) indicating the origin (O) included in the OD combination and the node (referred to hereafter as the “node D”) indicating the destination (D) included in the OD combination. The enumerated routes respectively serve as the route candidates Ri. Enumerating all of the routes between the node O and the node D can be realized using Graphillion, for example. Graphillion is disclosed in “A New Approach to the Problem of Combinations using a Super-Fast Graph Enumeration Algorithm (How to Count Miracles)”, ERATO Minato Discrete Structure Manipulation System Project (author), Shinichi MINATO (editor), Morikita Publishing, 2015, for example.
(2) The route candidate generation unit110retrieves the shortest route between the node O and the node D using a shortest route search algorithm, and computes the distance of the retrieved shortest route. NetworkX, which is a Python library, or the like can be used for this purpose. (3) Next, the route candidate generation unit110compares the distance (the route candidate distance di) of each of the route candidates Riacquired in (1) with the distance (the shortest distance dmin) of the shortest route acquired in (2). The route candidate generation unit110then excludes the route candidates Riwhose route candidate distance diequals or exceeds α times the shortest distance dminfrom the route candidates Ri. (4) Next, the route candidate generation unit110excludes unobserved route candidates Rifrom the route candidates Riacquired in (3). Whether or not a route candidate Riis observed can be determined using the passer-by number observation point list M. More specifically, for example, it is assumed that the passer-by number observation points Mjincluded on the passer-by number observation point list M are set as Mj=[Mj, 1, Mj, 2, . . . , Mj, n] and the route candidate Riis set as Ri=[Ri, 1, Ri, 2, . . . , Ri, k]. Here, n is the number of nodes included in the observation points Mjand k is the number of nodes included in the route candidate Ri. When, at this time, one of the nodes Ri, 1, Ri, 2, . . . , Ri, kis the same node as one of the nodes Mj, 1, Mj, 2, . . . , Mj, n, it is determined that the route candidate Riis observed. When, on the other hand, none of the nodes Ri, 1, Ri, 2, . . . , Ri, kis the same node as any of the nodes Mj, 1, Mj, 2, . . . , Mj, n, it is determined that the route candidate Riis unobserved. As a specific example, when the first node Ri, 1of the route candidate Riis the same node as one of the nodes Mj, 1, Mj, 2, . . . , Mj, n, departure on the route candidate Riis observed. Similarly, when the last node Ri, kof the route candidate Riis the same node as one of the nodes Mj, 1, Mj, 2, . . . , Mj, n, arrival on the route candidate Riis observed. In addition, when the passer-by number observation points Mjare a substring of the route candidate Ri, passage along the route candidate Riis observed. A list of the route candidates Riacquired in (4) from all of the OD combinations serves as the route candidate list R. The route candidate generation unit110outputs the acquired route candidate list R to the routing matrix generation unit120. As noted above, at least one element among the node sets V and U, the magnification α, and the passer-by number observation point list M does not have to be input into the route candidate generation unit110. Depending on the size of the road network data G, however, the number of route candidates Riincluded on the route candidate list R may become extremely large. Therefore, the route candidates Riare preferably limited using at least one element among the node sets V and U, the magnification α, and the passer-by number observation point list M. Step S102: The routing matrix generation unit120generates a routing matrix A from the route candidate list R and the passer-by number observation point list M. When the elements of the routing matrix A are set as Aj, i, the routing matrix A is generated using Formula 1 below.
[Formula 1]

A_{j,i} = \begin{cases} 1 & \text{(when a person on a route candidate } R_i \text{ is observed at the observation points } M_j\text{)} \\ 0 & \text{(when a person on a route candidate } R_i \text{ is not observed at the observation points } M_j\text{)} \end{cases}

In other words, the routing matrix A is generated such that when a route candidate Riis observed at the passer-by number observation points Mj, the (j, i) element Aj, iof the routing matrix is set at 1, and when this is not the case, Aj, iis set at 0. Whether or not a route candidate Riis observed at the passer-by number observation points Mjis determined in the following manner, for example. The route candidate Riis set as Ri=[Ri, 1, Ri, 2, . . . , Ri, k]. Here, k is the number of nodes included in the route candidate Ri. The passer-by number observation points Mjincluded on the passer-by number observation point list M are set as Mj=[Mj, 1, Mj, 2, . . . , Mj, n]. Here, n is the number of nodes included in the passer-by number observation points Mj. Each passer-by number observation point Mjhas one property among “departure”, “arrival”, and “passage”. When the property is “departure”, the route candidate Riis observed if the first element (the first node) Ri, 1thereof is included in the passer-by number observation points Mjand not observed if not. When the property is “arrival”, the route candidate Riis observed if the last element Ri, kthereof is included in the passer-by number observation points Mjand not observed if not. When the property is “passage”, the route candidate Riis observed if at least a part thereof is included in the passer-by number observation points Mjand not observed if not. Thus, assuming that passer-by number observation points Mjhaving at least one property among “departure”, “arrival”, and “passage” are included on the passer-by number observation point list M, whether or not the route candidates Riare observed at the passer-by number observation points Mjis determined in accordance with the respective properties, whereupon the values of the (j, i) elements Aj, iof the routing matrix are set. Each element of the routing matrix A generated in this manner expresses whether or not a moving object (a person, a car, a motorbike, a bicycle, or the like, for example) traveling along each of a plurality of routes is observed at each of a plurality of passer-by number observation points. In other words, Aj, iexpresses whether or not a moving object traveling along the route candidates Rican be observed at the passer-by number observation points Mj. As a result, when estimating the numbers of moving objects traveling along the route candidates Ri, the elements of the observation value data Y that should be taken into account can be specified from the routing matrix A. Step S103: The visitor matrix generation unit140generates a visitor matrix B from the route candidate list R and the visitor number observation point list M′. When the respective elements of the visitor matrix B are set as Bj′, i, the visitor matrix B is generated using Formula 2 shown inFIG.6. More specifically, the visitor matrix B is generated such that when a moving object on the route candidate Riarrives at a visitor number observation point M′j′, the (j′, i) element Bj′, iof the visitor matrix B is set at 1, when a moving object on the route candidate Rideparts from the visitor number observation point M′j′, the (j′, i) element Bj′, iof the visitor matrix B is set at −1, and when neither is the case, Bj′, iis set at 0.
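As an illustration of steps S102 and S103, the following Python sketch builds the routing matrix A and the visitor matrix B from route candidates and observation points given as lists of node IDs. It encodes the property rules stated above for A and the first-node/last-node rules for B described in the next paragraph; the function names are illustrative and do not appear in the embodiment.

def is_sublist(sub, seq):
    # True when `sub` appears as a contiguous substring of the node list `seq`.
    n = len(sub)
    return any(seq[k:k + n] == sub for k in range(len(seq) - n + 1))

def build_routing_matrix(routes, obs_points, properties):
    # A[j][i] = 1 when a moving object on route candidate R_i is observed at
    # passer-by number observation point M_j (Formula 1), and 0 otherwise.
    A = [[0] * len(routes) for _ in obs_points]
    for j, (M_j, prop) in enumerate(zip(obs_points, properties)):
        for i, R_i in enumerate(routes):
            if prop == "departure":
                observed = R_i[0] in M_j
            elif prop == "arrival":
                observed = R_i[-1] in M_j
            else:  # "passage"
                observed = is_sublist(M_j, R_i)
            A[j][i] = 1 if observed else 0
    return A

def build_visitor_matrix(routes, visitor_points):
    # B[j'][i] = 1 when route R_i ends at visitor number observation point
    # M'_j' (arrival), -1 when it starts there (departure), 0 otherwise.
    B = [[0] * len(routes) for _ in visitor_points]
    for jp, Mp in enumerate(visitor_points):
        for i, R_i in enumerate(routes):
            if R_i[-1] == Mp:
                B[jp][i] = 1
            elif R_i[0] == Mp:
                B[jp][i] = -1
    return B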
Whether a moving object on the route candidate Riarrives at or departs from the visitor number observation point M′j′is determined as follows, for example. The route candidate Riis set as Ri=[Ri, 1, Ri, 2, . . . , Ri, k]. Here, k is the number of nodes included in the route candidate Ri. The visitor number observation point list M′ is set as M′=[M′1, M′2, . . . M′j′, . . . M′J′]. Here, J′ is the number of nodes included on the visitor number observation point list M′. M′j′is an element of the set of nodes constituting the route. As shown inFIG.7A, if the first element (the first node) Ri, 1of the route candidate Riis the visitor number observation point M′j′, it is determined that the moving object on the route candidate Rideparts from the visitor number observation point M′j′. As shown inFIG.7B, if the last element (the last node) Ri, kof the route candidate Riis the visitor number observation point M′j′, it is determined that the moving object on the route candidate Riarrives at the visitor number observation point M′j′. When neither is the case, it is determined that the moving object on the route candidate Rineither departs from nor arrives at the visitor number observation point M′j′. Each element of the visitor matrix B generated in this manner expresses whether a moving object (a person, a car, a motorbike, a bicycle, or the like, for example) traveling along each of a plurality of routes arrives at, departs from, or neither arrives at nor departs from each of a plurality of visitor number observation points. As a result, when estimating the numbers of moving objects traveling along the route candidates Ri, the estimation can be performed in consideration of variation in the number of visitors. Step S104: The route-specific person number estimation unit130estimates the route-specific person number X by solving the problem of minimizing an objective function Obj expressed by Formula 3 inFIG.8. ΔSt, j′in Formula 3 represents variation in the number of visitors visiting the visitor number observation point M′j′at a time t, and when the observation data of the number of visitors visiting the visitor number observation point M′j′at the time t are set as St, j′, ΔSt, j′=St, j′−St-1, j′. Note that since a number of people is non-negative, the objective function Obj is minimized so as to satisfy a limiting condition Xt, i≥0. The (t, i) element of X is an estimation result of the number of people passing along the route Riat the time t. Parameters λ1, λ2of the objective function Obj are determined in consideration of errors in the number of passers-by and the number of visitors and are stored in advance in the route-specific person number estimation unit130. A trust region reflective method algorithm or the like, for example, may be used as a method of determining X for minimizing Formula 3. A trust region reflective method algorithm is disclosed in Coleman, T. F. and Y. Li, “A Reflective Newton Method for Minimizing a Quadratic Function Subject to Bounds on Some of the Variables”, SIAM Journal on Optimization, Vol. 6, Number 4, pp. 1040-1058, 1996, and so on, for example. As shown inFIG.8, the first term of the objective function Obj is a term for minimizing an error between the observed number of passers-by and the estimated number of passers-by. The second term is a term for minimizing an error between observed variation in the number of visitors and estimated variation in the number of visitors.
The third term is a term for minimizing the number of appearing people (the volume of the moving object). More specifically, regarding the second term, it is evident from Bj′, iXt, ithat when Bj′, iis 1, the number of people Xt, ion the route i arrive at the corresponding node (the number of visitors increases). Further, when Bj′, iis −1, for example, this means that the number of people Xt, ion the route i depart from the corresponding node (the number of visitors decreases). In addition, when Bj′, iis 0, for example, this means that the number of people Xt, ion the route i neither depart from nor arrive at the corresponding node. In the second term, the number of people Xt, iis determined in consideration of these points so as to minimize the error between the observed variation in the number of visitors and the estimated variation in the number of visitors. Here, an example of computation of ΔSt, j′will be described with reference toFIG.9. Note that the index inFIG.9corresponds to the time t used heretofore. Further, in this example, a time interval Δt is fixed at 10 minutes. As shown inFIG.9A, observation data (visitor number data) S1, S2, S3from visitor number observation points M′1, M′2, M′3are acquired at each time. As noted above, ΔSt, j′=St, j′−St-1, j′, and therefore ΔSj′at each time is computed as shown inFIG.9B. For example, at the visitor number observation point M′1, the number of visitors at a time t=1 is 20, and the number of visitors at a time t=2 is 30, and therefore ΔS2, 1=S2, 1−S1, 1=30−20=10. Step S105: The route-specific person number estimation unit130outputs the route-specific person number X acquired in step S104. As described above, the route-specific person number estimation device10according to this embodiment of the present invention can estimate and output the route-specific person number X from the observation value data Y. Moreover, the route-specific person number estimation device10according to this embodiment of the present invention can estimate the route-specific person number X even when a value is missing from the observation values included in the observation value data Y or the lengths of the observation periods of the observation values differ. Furthermore, the route-specific person number estimation device10according to this embodiment of the present invention can estimate the route-specific person number X in consideration of the number of visitors. In other words, a route-specific person number conforming to both an observed number of passers-by and an observed number of visitors can be acquired. By inputting the route-specific person number X acquired by the route-specific person number estimation device10according to this embodiment of the present invention into a people flow simulator, for example, temporal transitions in a flow of people can be reconstructed. The route-specific person number X may also be useful in ascertaining customer flows in the field of marketing, considering crowd surge strategies in the field of crowd security, and so on.
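Since Formula 3 itself appears only inFIG.8, the following Python sketch encodes one plausible reading of the objective consistent with the three terms described above: for each time t, minimize the squared passer-by error plus λ1 times the squared visitor variation error plus λ2 times the sum of Xt, i, subject to Xt, i≥0. SciPy's Trust Region Reflective solver (method="trf" in scipy.optimize.least_squares) corresponds to the algorithm cited above; the exact weighting, the square-rooted handling of the linear third term, and the parameter defaults are assumptions.

import numpy as np
from scipy.optimize import least_squares

def estimate_route_counts(A, B, Y, dS, lam1=1.0, lam2=0.01):
    # A: J x I routing matrix, B: J' x I visitor matrix,
    # Y: T x J passer-by observations, dS: T x J' visitor variation.
    # NaN entries in Y and dS are treated as missing and skipped.
    A, B = np.asarray(A, float), np.asarray(B, float)
    T, I = Y.shape[0], A.shape[1]
    X = np.zeros((T, I))
    for t in range(T):
        y_obs = ~np.isnan(Y[t])
        s_obs = ~np.isnan(dS[t])

        def residuals(x):
            r1 = Y[t][y_obs] - A[y_obs] @ x                     # passer-by error
            r2 = np.sqrt(lam1) * (dS[t][s_obs] - B[s_obs] @ x)  # visitor variation error
            r3 = np.sqrt(lam2 * np.maximum(x, 0.0))             # squares to lam2 * sum(x)
            return np.concatenate([r1, r2, r3])

        X[t] = least_squares(residuals, x0=np.ones(I),
                             bounds=(0.0, np.inf), method="trf").x
    return X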
Summary of Embodiment According to this embodiment, as described above, there is provided an estimation device including route generating means for generating a plurality of routes from an origin to a destination of a moving object on the basis of input road network data, routing matrix generating means for generating, on the basis of the plurality of routes generated by the route generating means and a plurality of first observation points at each of which a traffic volume of the moving object is observed, a routing matrix expressing whether or not a moving object traveling along each of the plurality of routes is observed at each of the plurality of first observation points, visitor matrix generating means for generating, on the basis of the plurality of routes generated by the route generating means and a plurality of second observation points at each of which a visitor volume of the moving object is observed, a visitor matrix expressing whether or not a moving object traveling along each of the plurality of routes departs from or arrives at each of the plurality of second observation points, and route-specific traffic volume estimating means for estimating the traffic volume of the moving object on each of the plurality of routes on the basis of the routing matrix generated by the routing matrix generating means, the visitor matrix generated by the visitor matrix generating means, first observation value data indicating the traffic volume of the moving object observed at each of the plurality of first observation points, and second observation value data indicating the visitor volume of the moving object observed at each of the plurality of second observation points. When a certain second observation point is at the start of a certain route, the visitor matrix generating means determines that a moving object traveling along the route departs from the second observation point, and when a certain second observation point is at the end of a certain route, the visitor matrix generating means determines that a moving object traveling along the route arrives at the second observation point, for example. The route-specific traffic volume estimating means estimates the traffic volume so as to minimize a sum of an error between the observed traffic volume and the estimated traffic volume, an error between observed variation in the visitor volume and estimated variation in the visitor volume, and an appearance volume of the moving object, for example. The route generating means may select an origin and a destination respectively from a first set and a second set of nodes included in the road network data, predetermined first nodes being included in the first set and predetermined second nodes being included in the second set, and generate a plurality of routes from the selected origin to the selected destination. The route generating means may compare a first distance, which is acquired by multiplying the distance of the shortest route from the origin to the destination by a predetermined value, with respective second distances of the plurality of routes, and exclude routes corresponding to second distances that equal or exceed the first distance from the plurality of routes. The route generating means may exclude a route that is not observed at any of the plurality of first observation points from the plurality of routes. 
The present invention is not limited to the specific embodiment disclosed above, and various alterations and modifications may be applied thereto without departing from the scope of the claims. REFERENCE SIGNS LIST
10 Route-specific person number estimation device
100 Route-specific person number estimation program
110 Route candidate generation unit
120 Routing matrix generation unit
130 Route-specific person number estimation unit
140 Visitor matrix generation unit | 34,016 |
11859993 | DESCRIPTION OF EMBODIMENTS An embodiment of the present invention is now explained with reference to the appended drawings. First Embodiment FIG.1is a configuration diagram showing a hardware configuration of the in-vehicle apparatus according to the present invention. InFIG.1, an in-vehicle apparatus100is configured from a display device101, an operating device102, a positioning sensor103, a ROM (Read Only Memory)104, a RAM (Random Access Memory)105, an auxiliary storage device106, and a CPU (Central Processing Unit)107. The display device101is a device such as a liquid crystal display or an organic EL display which displays image information. The operating device102is a device such as buttons, switches, a keyboard or a touch panel for manually operating the in-vehicle apparatus100, and receives operations from the user. The positioning sensor103is a sensor of a GPS (Global Positioning System) or the like for positioning the current position of a vehicle (own vehicle position) based on latitude and longitude. The ROM104is a read-only storage device with control programs and the like written therein. The RAM105is a storage device for loading the programs stored in the auxiliary storage device106or temporarily storing data. The auxiliary storage device106is a storage device which stores map data (map information), audio data, guidance information, and car navigation application programs, and is configured, for example, from an HDD (Hard Disk Drive) or an SSD (Solid State Drive). The CPU107is an arithmetic/control unit which controls the respective components of the in-vehicle apparatus100and executes the application programs loaded into the RAM105. FIG.2is a configuration diagram showing the software configuration of the in-vehicle apparatus according to the present invention. InFIG.2, the in-vehicle apparatus100comprises a link deviation information holding unit201, a parking/stopping determination unit202, a parking information storage unit203, a parking/stopping disengagement determination unit204, parking information205, an arrival place determination processing unit206, a point information table207, a parking lot entry/exit link connection processing unit208, a destination estimation unit209, a departure place/destination table210, a travel history storage unit211, a route estimation unit212, a travel history213, a recommendation unit214, and recommendation information215, and is mounted on a vehicle (not shown). Here, the link deviation information holding unit201, the parking/stopping determination unit202, the parking information storage unit203, the parking/stopping disengagement determination unit204, the arrival place determination processing unit206, the parking lot entry/exit link connection processing unit208, the destination estimation unit209, the travel history storage unit211, the route estimation unit212, and the recommendation unit214are programs (control programs) to be executed by the CPU107, and are recorded in the auxiliary storage device106.
When a vehicle's deviation from the road is detected from the information of the positioning sensor103, for example, when it is detected that, upon arriving at the destination set in the car navigation, the vehicle entered a facility such as a parking lot adjacent to the destination from the road (link) of the destination and thereby deviated from the road (link), the link deviation information holding unit201holds the vehicle direction obtained from the position information (position information of the vehicle) based on the positioning of the positioning sensor103, and holds the link ID (Identification) of the link from which the vehicle deviated, obtained from the map information (map information stored in the auxiliary storage device106). The parking/stopping determination unit202uses the vehicle speed, the position information, and switch OFF (the ignition switch being turned OFF) to determine the parking or stopping of the vehicle (parking/stopping). In other words, the parking/stopping determination unit202determines whether the vehicle has parked/stopped. Here, the parking/stopping determination unit202determines that the vehicle has parked, for example, when the switch is turned OFF. Moreover, the parking/stopping determination unit202determines that the vehicle has stopped by applying a constant value (threshold) to the vehicle speed or the travel distance of the vehicle, such as when the vehicle speed is near 0 for a fixed period of time or when there is hardly any movement in the position of the vehicle (hardly any travel distance of the vehicle) from the position information. The parking information storage unit203creates parking information205using the information held by the link deviation information holding unit201when the vehicle deviated from the link, and creates parking information205using the information of the link (link ID) on which the vehicle currently exists when the vehicle did not deviate from the link. Here, the parking information storage unit203can store, as parking information, the history of the vehicle when it parked in a parking lot within a facility in the vicinity of the destination. The parking/stopping disengagement determination unit204determines that the vehicle changed from a parked/stopped state to a traveling state from the vehicle speed, the position information of the vehicle, and switch ON (the ignition switch being turned ON). Here, the parking/stopping disengagement determination unit204determines that the vehicle has disengaged from its parked state, for example, when the switch is turned ON at the time that the vehicle changed from a parked state to a traveling state. Moreover, the parking/stopping disengagement determination unit204determines that the vehicle changed from a stopped state to a traveling state when the speed rises from the stopped state to a constant value or more, or when the position of the vehicle moves by a constant value or more. The parking information205includes a link ID, a parking direction, and a link entry direction (entry direction of the vehicle into a facility such as a parking lot from the link) when the vehicle deviated from the link. Furthermore, height information of the vehicle from the positioning sensor103or the like may be used, and the height of the vehicle may also be added to the parking information205.
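As a rough illustration of the determinations made by the parking/stopping determination unit202and the parking/stopping disengagement determination unit204, the Python sketch below uses simple thresholds; every constant is an illustrative assumption, not a value taken from the embodiment.

def is_parked(ignition_on):
    # Parking: the ignition switch has been turned OFF.
    return not ignition_on

def is_stopped(speeds_kmh, positions_m, window=30, speed_eps=1.0, move_eps=2.0):
    # Stopping: the vehicle speed stays near 0 over a fixed number of samples,
    # or the position hardly moves over that window.
    if len(speeds_kmh) < window or len(positions_m) < window:
        return False
    if max(speeds_kmh[-window:]) < speed_eps:
        return True
    (x0, y0), (x1, y1) = positions_m[-window], positions_m[-1]
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 < move_eps

def has_disengaged(ignition_on, speed_kmh, moved_m, speed_min=5.0, move_min=10.0):
    # Disengagement: the switch is turned ON, the speed rises to a constant
    # value or more, or the position moves by a constant value or more.
    return ignition_on or speed_kmh >= speed_min or moved_m >= move_min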
The arrival place determination processing unit206refers to the parking information205and the point information table207and determines the parking point (parking spot). Point information as the departure place and the arrival place may also be written in the point information table207as needed. Note that the arrival place determination processing unit206may also determine whether the vehicle has arrived at the entrance of the facility including the parking lot in the vicinity of the destination of the vehicle based on the position information (position information based on the positioning of the positioning sensor103) indicating the position of the vehicle on the map. The point information table207is a table storing the point information generated from the parking information205. Information other than the parking information205, such as the coordinate information of the point, may be written in the point information table207as needed. The parking lot entry/exit link connection processing unit208connects an entry link of the vehicle to the parking lot and an exit link of the vehicle from the parking lot and records in the travel history213, as the history of the vehicle traveling in the parking lot, information indicating from which entry link the vehicle entered and from which exit link the vehicle exited. The destination estimation unit209refers to the information of the point from the point information table207, refers to the frequency information of the departure place and destination from the departure place/destination table210, and estimates the next destination. The departure place/destination table210is a table which holds the number of times that the vehicle traveled with the departure place and destination as one set. The travel history storage unit211accumulates, in the travel history213, a number of passages of the vehicle with an entry link and an exit link as one set indicating from which entry link the vehicle entered and from which exit link the vehicle exited. The route estimation unit212estimates the route to the destination estimated by the destination estimation unit209. Here, the route estimation unit212estimates the route to the destination by using the travel history213. The travel history213is a database storing the passage frequency for each combination of the entry link/exit link. The recommendation unit214refers to the recommendation information215regarding a part of the route estimated by the route estimation unit212or the route such as the exit and entrance of the parking lot, and recommends the referenced information to the driver. The recommendation information215holds information required for making a recommendation to the driver. FIG.3is a processing flowchart showing the processing of the parking information storage unit203. InFIG.3, the parking information storage unit203starts the processing when the parking/stopping determination unit202determines that the vehicle has parked or stopped. Foremost, the parking information storage unit203determines whether there is any link information at the time of deviation (this is hereinafter referred to as the “link deviation information”) when the vehicle has parked or stopped (S101).
In other words, while the link deviation information holding unit201holds the link deviation information of the vehicle when the vehicle deviated from the link, the link deviation information holding unit201is not holding the link deviation information when the vehicle parked/stopped on the link and has not deviated from the link, and therefore the existence of the link deviation information is confirmed. The parking information storage unit203proceeds to the processing of step S102upon obtaining a negative determination result in step S101(when there is no link deviation information), and proceeds to the processing of step S103upon obtaining a positive determination result in step S101(when there is link deviation information). The parking information storage unit203, in step S102, sets the current (traveling) “link ID”, “link entry direction”, and “on link”. In other words, since there is no link deviation information, the parking information storage unit203, among the information to be written in the parking information205, sets “link ID” to the “link ID” indicating the current (traveling) link, sets “link entry direction” to the “link entry direction” indicating the entry direction of the current link, sets “parking direction” to “on link”, and thereafter proceeds to the processing of step S109. Here, the information set in step S102is written as the parking information205in S109. In step S103, the parking information storage unit203acquires the link entry direction at the time of deviation (this is hereinafter referred to as the “link deviation entry direction”) and the link ID at the time of deviation (this is hereinafter referred to as the “link deviation ID”) from the link deviation information holding unit201, and sets them in the parking information205as the “link deviation ID” and the “link entry direction”. The link entry direction is set as the vehicle direction at the time of deviation or the direction of the link. Next, the parking information storage unit203calculates the relative angle of the parking direction from the link entry direction based on the information acquired in step S103(S104), and determines whether the relative angle, which is the calculation result, is equal to or greater than 0° and less than 180°, or equal to or greater than 180° and less than 360° (S105). In other words, the parking information storage unit203determines whether the deviation direction of the vehicle is the left or right of the link when viewed from the direction of the link. In step S104, before the relative angle of the parking direction is calculated from the link entry direction, directions are first defined such that east is 0°, the value increases in a counterclockwise rotation, and the range of the value is equal to or greater than 0° and smaller than 360°. Here, the link entry direction is subtracted from the vehicle direction at the time the vehicle deviated from the link and, if the subtracted value is a negative value, the value obtained by adding 360° to that value is used as the relative angle.
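A minimal Python sketch of the angle handling in steps S104 and S105 follows; Python's modulo operation stands in for the rule of adding 360° to a negative difference.

def parking_direction(vehicle_direction_deg, link_entry_direction_deg):
    # Directions use east = 0 degrees, increasing counterclockwise, in [0, 360).
    relative = (vehicle_direction_deg - link_entry_direction_deg) % 360.0
    # Step S105: [0, 180) means the vehicle deviated to the left of the link,
    # and [180, 360) means it deviated to the right.
    return "left of link" if relative < 180.0 else "right of link"

# For example, a vehicle heading 90 degrees that entered the link at 350 degrees:
# (90 - 350) % 360 = 100, so the parking direction is "left of link".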
In step S105, when the parking information storage unit203determines that the relative angle calculated in step S104is equal to or greater than 0° and less than 180°, since the deviation direction of the vehicle is left of the link, the parking information storage unit203sets the parking direction of the vehicle (the vehicle's entry direction to the parking lot) as “left of link” (S106), and thereafter proceeds to the processing of step S108. In step S105, when the parking information storage unit203determines that the relative angle calculated in step S104is equal to or greater than 180° and less than 360°, since the deviation direction of the vehicle is right of the link, the parking information storage unit203sets the parking direction of the vehicle (the vehicle's entry direction to the parking lot) as “right of link” (S107), and thereafter proceeds to the processing of step S108. In step S108, the parking information storage unit203sets, among the information acquired in step S103, the link deviation entry direction, the link ID, and the parking direction set in step S106or step S107respectively as the parking information, and thereafter proceeds to the processing of step S109. In step S109, the parking information storage unit203writes “link ID”, “link entry direction”, and “parking direction”, which are the information set in step S102or step S108, as the parking information205, and thereafter ends the processing of this routine. Note that, when height (floor number) is added to the parking information205, the height information acquired from a GPS or the like is additionally written in step S109. FIG.4is a configuration diagram showing an example of the point information table. InFIG.4, the point information table207is a table that is created based on the parking information205, includes a point ID301, a link deviation ID302, a vehicle direction303, a forward direction/reverse direction304, a parking direction305, and a floor number (height)306, and is stored in the auxiliary storage device106. Here, while the number of lines recorded in the point information table207increases each time a new place where the vehicle has parked/stopped is added, no new line is recorded for a place where the vehicle has previously parked/stopped. The point ID301corresponds to a point existing on the map and is an identifier for uniquely identifying each point where the vehicle has previously parked/stopped. The point ID301stores, for example, the information of “1” to “5” as the point IDs where the vehicle has previously parked/stopped. The link deviation ID302is an identifier which identifies the link when the vehicle deviated from the link. The link deviation ID302stores, for example, the information of “10” as the link ID at the time of deviation. The vehicle direction303and the forward direction/reverse direction304are information managed as the link entry direction, and record from which side of the link the vehicle entered. The vehicle direction303is used when making the determination from the vehicle direction, and the forward direction/reverse direction304is used when making the determination from the link entry direction of the map information. The vehicle direction303is information indicating the direction (direction that the vehicle is facing) when the vehicle deviated from the link. The vehicle direction303stores, for example, the information of “0” when the vehicle is facing an eastward direction.
The forward direction/reverse direction304is information indicating, when the vehicle deviated from the link and enters a parking lot from the link, whether the vehicle entered the parking lot in a forward direction or in a reverse direction, and is obtained from the map information. The forward direction/reverse direction304stores, for example, the information of “forward direction” when the vehicle deviated from the link and enters the parking lot from the link in a forward direction. Note that, while both the information of the vehicle direction303and the information of the forward direction/reverse direction304may be stored in the point information table207, only one piece of information is required if it is possible to identify the link entry direction and differentiate the parking direction. The parking direction305indicates, when the vehicle is to park, whether the parking direction is on the link or a direction on the left or right of the link when viewed from the link. The parking direction305stores, for example, when the vehicle deviated from the link and enters the parking lot from the link, the information of “left of link” when the parking lot is positioned on the left side when viewed from the link, and the information of “on link” when the vehicle parks on the link (when the vehicle parks on the street). The floor number (height)306is information indicating, when the vehicle parks in a facility such as a multilevel parking lot, the floor number thereof. The floor number (height)306stores, for example, the information of “1” when the vehicle parks on the first floor of a facility such as a multilevel parking lot. Note that the floor number (height)306may also directly store the information of the actual height of a facility such as a multilevel parking lot. When storing the information of the floor number in the floor number (height)306, the range of height of each floor number is decided in advance from the height information of the GPS, and the floor number is calculated from the height and then stored. When storing the information of the height in the floor number (height)306, the height information acquired from the GPS is stored. FIG.5is a processing flowchart showing the processing flow of the arrival place determination processing unit206. InFIG.5, the arrival place determination processing unit206starts the processing when the parking/stopping disengagement determination unit204determines that the vehicle has disengaged from its parked/stopped state. Foremost, the arrival place determination processing unit206reads the “link ID (link deviation ID)”, the “link entry direction”, and the “parking direction” from the parking information205(S201), and determines, based on the read information, whether there is data of the same link ID as the parking information in the table (point information table207) (S202). In other words, in step S202, whether there is a link deviation ID that is the same as the “link ID (link deviation ID)” acquired in step S201is confirmed. The arrival place determination processing unit206proceeds to the processing of S203upon obtaining a positive determination result in step S202(when there is a same link deviation ID), and proceeds to the processing of step S206upon obtaining a negative determination result in step S202(when there is no same link deviation ID).
Since it has been confirmed in step S202that there is a link ID that is the same as the “link deviation ID”, in step S203, the arrival place determination processing unit206additionally determines whether there is data in which the link entry direction and the parking direction are the same in order to confirm whether the point information table207includes a line in which the “link entry direction” and the “parking direction” acquired in step S201are the same. The arrival place determination processing unit206proceeds to the processing of step S205upon obtaining a positive determination result in step S203(when the point information table207includes a line in which the “link entry direction” and the “parking direction” are the same), and proceeds to the processing of step S204upon obtaining a negative determination result in step S203(when the point information table207does not include a line in which the “link entry direction” and the “parking direction” are the same). Next, the arrival place determination processing unit206determines whether there is data of the opposing link entry direction and the same parking direction (S204). In other words, since it was determined in step S203that the point information table207does not include information that is the same as the parking information205, in step S204, determination processing for the case where the vehicle enters the link from the opposite direction is performed. Here, the same link deviation ID is used and the link entry direction when the vehicle enters the link from the opposite direction is determined. Specifically, it is confirmed whether the point information table207includes the same point information when the link entry direction is the value obtained by adding 180° to the link entry direction of the parking information205and when, as the parking direction, “right of link” is set in cases where it is “left of link”, and “left of link” is set in cases where it is “right of link”. With regard to the link entry direction, if 180° is added and the result equals or exceeds 360°, then 360° is subtracted to obtain an appropriate value. When the point information table207stores the forward direction/reverse direction, calculation of the vehicle direction is not performed, and whether “forward direction” or “reverse direction” is stored as the link entry direction is confirmed. The arrival place determination processing unit206proceeds to the processing of step S205upon obtaining a positive determination result in step S204(when the point information table207includes the same point information), and proceeds to the processing of step S206upon obtaining a negative determination result in step S204(when the point information table207does not include the same point information). Next, the arrival place determination processing unit206sets the corresponding point ID (S205), and thereafter proceeds to the processing of step S207. In other words, when the point information table207includes data of the parking information205and the point information, the arrival place determination processing unit206sets the point ID in the point information table207.
Meanwhile, when the arrival place determination processing unit206obtains a negative determination result in step S202or step S204(when the point information table207does not include the same point information), the arrival place determination processing unit206newly sets a point ID, writes the set point ID in the point information table207together with the information of the parking information205(S206), and thereafter proceeds to the processing of step S207. Next, the arrival place determination processing unit206transfers the acquired point ID to the destination estimation unit209(S207), and thereafter ends the processing of this routine. In other words, in step S207, the arrival place determination processing unit206transfers the point ID determined to be the same in step S205, or the point ID newly assigned in step S206, to the destination estimation unit209. Note that, when including height information in the parking information205, information related to the plurality of point IDs extracted based on the “link ID”, the “link entry direction”, and the “parking direction” is transferred to the destination estimation unit209. FIG.6is a configuration diagram showing an example of the departure place/destination table210. InFIG.6, the departure place/destination table210is a table including a departure point ID401, a destination point ID402, and a number of times403, and is stored in the auxiliary storage device106. The departure point ID401is an identifier which uniquely identifies the departure point. The departure point ID401stores, for example, “1” as the information of the point ID as the departure point. The destination point ID402is an identifier which uniquely identifies the destination point. The destination point ID402stores, for example, “2” as the information of the point ID as the destination point. The point ID of the departure point ID401and the point ID of the destination point ID402store information (for example, “1” to “5”) existing in the point ID301of the point information table207. When referring to the information of each point ID, information of the corresponding point ID is acquired from the point information table207. The number of times403is information indicating the number of times that the vehicle traveled with the departure point ID as the departure place and the destination point ID as the destination. The number of times403stores, for example, the information of “30” when the vehicle traveled 30 times from the departure place to the destination. FIG.7is a processing flowchart of the destination estimation unit209. InFIG.7, the destination estimation unit209refers to the information of the departure place/destination table210based on the point ID received from the arrival place determination processing unit206, estimates the destination, and thereafter acquires the point information of the departure place and the destination from the point information table207. Specifically, the destination estimation unit209sets the point ID designated by the arrival place determination processing unit206(point ID transferred from the arrival place determination processing unit206) as the departure point ID (S301), and then acquires the most frequent destination point ID among the departure point IDs from the departure place/destination table210(S302).
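Drawing on the hypothetical sketches above, the flow of steps S203-S206 may be condensed as follows; exact equality tests on directions stand in for whatever angular tolerance an actual implementation would apply:

    def determine_point_id(table, link_deviation_id, entry_direction,
                           parking_direction, floor_number, next_point_id):
        candidates = find_same_link_rows(table, link_deviation_id)
        # Step S203: a row with the same link entry direction and parking direction.
        for row in candidates:
            if (row.entry_direction == entry_direction
                    and row.parking_direction == parking_direction):
                return row.point_id  # step S205: reuse the existing point ID
        # Step S204: the same link entered from the opposite direction.
        flipped = opposite_entry_direction(entry_direction)
        mirrored = mirror_parking_direction(parking_direction)
        for row in candidates:
            if row.entry_direction == flipped and row.parking_direction == mirrored:
                return row.point_id  # step S205
        # Step S206: no match; register a new point ID in the table.
        table.append(PointInfo(next_point_id, link_deviation_id, entry_direction,
                               parking_direction, floor_number))
        return next_point_id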
In other words, in step S302, the destination estimation unit209selects the destination point ID in which the information of the number of times403is most frequent among the departure point IDs set in step S301from the departure place/destination table210. Note that, when including height information, a plurality of departure point IDs are designated in a quantity corresponding to the number of pieces of information of different heights even when the link deviation ID, the link entry direction, and the parking direction are the same. Thus, when calculating the most frequent destination point ID, the number of times of the same destination point ID is totaled, and the destination point ID of the most frequent number of times is selected. For example, in the case shown inFIG.6, when the departure point IDs have been set as “1”, “4”, and “5”, the destination point ID 2 is calculated as “40” as a result of totaling the information of the number of times403of the departure point ID 1 and the departure point ID 4. The destination point ID 3 is calculated as “20” as a result of totaling the information of the number of times403of the departure point ID 1 and the departure point ID 5. Consequently, as the destination point ID in which the information of the number of times403is most frequent, the destination point ID 2 in which the number of times=“40” is selected. As the departure point ID, since the same link deviation ID can be acquired in step S303among a plurality of point IDs, any point ID may be selected. Since the departure point ID and the destination point ID are decided with the processing up to step S302, the destination estimation unit209subsequently refers to the corresponding point ID from the point information table207, acquires the link deviation ID of the departure point ID and the link deviation ID of the destination point ID (S303), thereafter sets the link deviation ID of the departure point ID as the “provisional departure link” and sets the link deviation ID of the destination point ID as the “arrival link” (S304), and then ends the processing of this routine. FIG.8is a configuration diagram showing an example of the travel history213. InFIG.8, the travel history213is a database which accumulates information stored by the travel history storage unit211; specifically, it accumulates, as a history, the number of times for each combination of an entry link when the vehicle entered a facility such as a parking lot and an exit link when the vehicle exited from the facility, and is stored in the auxiliary storage device106. The entry link501is an identifier which uniquely identifies the entry link when the vehicle entered a facility such as a parking lot. The entry link501stores, for example, the information of “1” as the ID for identifying the entry link when the vehicle entered the facility. The exit link502is an identifier which uniquely identifies the exit link when the vehicle exited a facility such as a parking lot. The exit link502stores, for example, the information of “2” as the ID for identifying the exit link when the vehicle exited the facility. The entry link501and the exit link502store information of the link ID stored in the map information, but depending on the format of the map information, information such as a mesh ID as information for uniquely identifying the link is sometimes stored in addition to the link ID.
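The totaling of step S302 may be sketched as below, with table rows modeled as plain dictionaries (an assumption of this illustration); with the FIG.6values and departure point IDs {1, 4, 5}, the sketch returns destination point ID 2, whose totaled number of times is 40:

    from collections import Counter

    def most_frequent_destination(dep_dest_table, departure_point_ids):
        # Total the number of times 403 per destination point ID over all
        # designated departure point IDs, then pick the most frequent one.
        totals = Counter()
        for row in dep_dest_table:
            if row["departure_point_id"] in departure_point_ids:
                totals[row["destination_point_id"]] += row["number_of_times"]
        return totals.most_common(1)[0][0] if totals else None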
Here, the mesh ID is also separately created as column information, and the created column information is managed as information to be stored in the entry link501and the exit link502. In other words, information required for uniquely identifying the link may all be stored in the entry link501and the exit link502. FIG.9is a processing flowchart showing the processing of the route estimation unit212. InFIG.9, the route estimation unit212estimates the route between the departure place (entry link) and the destination (exit link) from the travel history213by using the departure link (provisional departure link) and the arrival link estimated by the destination estimation unit209. Specifically, the route estimation unit212foremost determines whether the parking direction stored in the parking information205is “on link” (S401), proceeds to the processing of step S402upon obtaining a positive determination result in step S401(when the parking direction is “on link”), and proceeds to the processing of step S403upon obtaining a negative determination result in step S401(when the parking direction is other than “on link” and is “left of link” or “right of link”). In step S402, since the parking direction is “on link”, the route estimation unit212sets the link ID of the designated “provisional departure link” as the link ID of the “departure link”, and thereafter proceeds to the processing of step S404. Moreover, in step S403, since the parking direction is not “on link”, the route estimation unit212acquires the most frequent exit link from the travel history213based on the “provisional departure link”, sets the acquired exit link as the “departure link”, and thereafter proceeds to the processing of step S404. Here, when the provisional departure link is to be set as the entry link from the travel history213, the ID of the exit link502in which the information of the number of times503is most frequent is set as the ID of the departure link. For example, in the case shown inFIG.8, when the ID of the provisional departure link is “1”, “2” is selected as the ID of the exit link502in which the information of the number of times503is “30”. Thus, the ID of the departure link will be “2”. Next, the route estimation unit212estimates the most frequent route from the designated “departure link” to the arrival link, sets the route that was estimated as the estimated route (S404), and thereafter ends the processing of this routine. In other words, in step S404, the most frequent route from the set departure link to the arrival link is estimated, and the route that was estimated is set as the estimated route. When estimating the most frequent route, foremost, the departure link is designated as the entry link, and the ID of the exit link502in which the information of the number of times503is most frequent is acquired. Furthermore, with such exit link as the next entry link, the ID of the exit link in which the information of the number of times503is most frequent is acquired, and this process is repeated from the “departure link” to the arrival link to trace the exit link in which the information of the number of times503is most frequent in order to estimate the most frequent route. Note that, in step S404, when it is determined that the information of the departure link (entry link) does not exist in the travel history213, since it is a place that was visited for the first time, estimation of the route is not performed.
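The tracing of step S404 may be sketched as a greedy walk over the travel history213; the hop limit is an assumption added here to guard against cycles, which the prose does not address:

    def estimate_most_frequent_route(travel_history, departure_link, arrival_link,
                                     max_hops=100):
        route = [departure_link]
        current = departure_link
        for _ in range(max_hops):
            if current == arrival_link:
                return route  # the estimated route, departure link through arrival link
            candidates = [row for row in travel_history
                          if row["entry_link"] == current]
            if not candidates:
                return None  # first visit: no estimate is produced (step S404 note)
            current = max(candidates,
                          key=lambda row: row["number_of_times"])["exit_link"]
            route.append(current)
        return None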
FIG.10is a processing flowchart showing the processing of the parking lot entry/exit link connection processing unit208. InFIG.10, the parking lot entry/exit link connection processing unit208starts the processing when the parking/stopping disengagement determination unit204determines that the vehicle has disengaged from its parked/stopped state. Foremost, the parking lot entry/exit link connection processing unit208determines whether the vehicle, after disengaging from its parked/stopped state, has gotten on the link for the first time (S501). Here, the parking lot entry/exit link connection processing unit208stands by to perform processing until the vehicle gets on the link for the first time after disengaging from its parked/stopped state, and, when the vehicle got on the link for the first time after disengaging from its parked/stopped state, acquires the link deviation ID from the parking information205, and sets the acquired link deviation ID as the parking lot entry link (S502). Next, the parking lot entry/exit link connection processing unit208sets the link that the vehicle got on in step S501as the parking lot exit link (S503). Next, the parking lot entry/exit link connection processing unit208sets the entry link and the exit link in the travel history213with the parking lot entry link as the entry link and the parking lot exit link as the exit link (S504), and thereafter ends the processing of this routine. FIG.11is a configuration diagram showing an example of the recommendation information215. InFIG.11, the recommendation information215includes a departure point ID601, a recommended floor number602, and a recommended exit link603, and is stored in the auxiliary storage device106. The departure point ID601is an identifier which uniquely identifies the departure point of the vehicle. The departure point ID601stores, for example, the information of “1” as the identifier for identifying the departure point of the vehicle. In other words, the departure point ID601stores the information of the ID stored in the point ID301of the point information table207. The recommended floor number602is information recommended as the floor number on which the vehicle should park in a facility having a multilevel parking lot. The recommended floor number602stores, for example, when the first floor is recommended, the information of “1” as the floor number of the facility having a multilevel parking lot. Note that the recommended floor number602may also store information of the height of the facility having a multilevel parking lot. Moreover, when storing the information of the floor number of the facility, the range of height of each floor number is decided in advance from the height information of a GPS, and the floor number may be calculated from the height. Furthermore, when storing the information of the height of the facility, the height information that can be acquired from the GPS may be used. The recommended exit link603is information recommended as the link when the vehicle is to exit the facility having a multilevel parking lot. For example, if it is possible to know that there is traffic on the route from the exit link calculated by the route estimation unit212, the link of the recommended exit link603can be used as the departure link of a route that avoids the traffic. Note that the recommendation information215is created using a server (not shown), and downloaded to the in-vehicle apparatus100.
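The recording of steps S502-S504 may be sketched as follows; whether the travel history213is scanned or keyed is an implementation detail assumed here:

    def record_entry_exit(travel_history, link_deviation_id, first_link_after_exit):
        # Step S502: the link deviation ID becomes the parking lot entry link.
        # Step S503: the first link the vehicle gets on becomes the exit link.
        # Step S504: count the (entry link, exit link) pair in the travel history 213.
        for row in travel_history:
            if (row["entry_link"] == link_deviation_id
                    and row["exit_link"] == first_link_after_exit):
                row["number_of_times"] += 1
                return
        travel_history.append({"entry_link": link_deviation_id,
                               "exit_link": first_link_after_exit,
                               "number_of_times": 1})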
When creating the recommendation information215with a server, the travel history213, the point information table207, and the departure place/destination table210of each in-vehicle apparatus100are uploaded. Moreover, when the server is to calculate the information of the recommended floor number602, the point information table207and the departure place/destination table210uploaded from each in-vehicle apparatus100are connected. Here, as the key upon integrating the information of the respective tables, the point ID of the point information table207and the point ID of the departure place/destination table210are used. In the connected table, the information of the most frequent floor number306is set as the recommended floor number602for entries having the same link deviation ID, link entry direction, and parking direction, and the departure point ID and the recommended floor number602are used as the recommendation information215. Moreover, when calculating the information of the recommended exit link603, the information of the travel history213uploaded from each in-vehicle apparatus100is totaled in advance. In other words, the information of the number of times503in which the entry link501and the exit link502of the travel history213are the same is totaled in a quantity corresponding to the number of in-vehicle apparatuses. After calculating the information of the recommended floor number602, the information of the link deviation ID302is acquired from the point information table207in which the calculated departure point ID is the point ID, the link deviation ID302is set as the entry link501of the travel history213, and the information of the exit link502in which the information of the number of times503is most frequent is used as the information of the recommended exit link603. FIG.12is a processing flowchart showing the processing of the recommendation unit214. InFIG.12, the recommendation unit214uses the recommendation information215and recommends information to the driver. Specifically, the recommendation unit214foremost determines whether there is an estimated route (S601). In other words, in step S601, it is determined whether the link has been traveled by the vehicle for the first time. The recommendation unit214proceeds to the processing of step S604upon obtaining a negative determination result in step S601(when there is no estimated route and the link has been traveled by the vehicle for the first time), and proceeds to the processing of step S602upon obtaining a positive determination result in step S601(when an estimated route can be calculated and the link has been previously traveled by the vehicle). Next, the recommendation unit214uses traffic information such as VICS (registered trademark) information and determines whether there is traffic on the estimated route (S602), proceeds to the processing of step S607upon obtaining a negative determination result in step S602(when there is no traffic on the estimated route), and proceeds to the processing of step S603upon obtaining a positive determination result in step S602(when there is traffic on the estimated route).
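A compact Python sketch of this server-side aggregation follows; keying the result by (link deviation ID, link entry direction, parking direction) rather than by departure point ID is a simplification of this illustration, and the dictionary row formats are assumed:

    from collections import Counter, defaultdict

    def build_recommendation(point_rows, history_rows):
        # Most frequent floor number 306 per (link deviation ID, link entry
        # direction, parking direction), over the tables uploaded by all vehicles.
        floors = defaultdict(Counter)
        for p in point_rows:
            key = (p["link_deviation_id"], p["entry_direction"], p["parking_direction"])
            floors[key][p["floor_number"]] += 1
        # Totaled travel histories: most frequent exit link 502 per entry link 501.
        exits = defaultdict(Counter)
        for h in history_rows:
            exits[h["entry_link"]][h["exit_link"]] += h["number_of_times"]
        recommendation = {}
        for key, floor_counts in floors.items():
            entry_link = key[0]  # the link deviation ID serves as the entry link 501
            best_exit = (exits[entry_link].most_common(1)[0][0]
                         if exits[entry_link] else None)
            recommendation[key] = {"recommended_floor": floor_counts.most_common(1)[0][0],
                                   "recommended_exit_link": best_exit}
        return recommendation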
In step S603, the recommendation unit214determines whether the exit on the estimated route and the recommended exit are different, proceeds to the processing of step S604upon obtaining a positive determination result in step S603(when the exit on the estimated route and the recommended exit are different), sets, in step S604, the floor number (recommended floor number602) and the exit (recommended exit link603) of the recommendation information215as the recommended information, and thereafter proceeds to the processing of step S606. Note that, in step S603, the recommendation unit214compares the information of the recommended exit link603in the recommendation information215and the information of the exit link on the estimated route. Here, the exit link on the estimated route means the first link of the estimated route. Moreover, the recommendation unit214proceeds to the processing of step S605upon obtaining a negative determination result in step S603(when the exit on the estimated route and the recommended exit are the same), sets, in step S605, a candidate exit from the travel history213as the recommended exit, and thereafter proceeds to the processing of step S606. Note that, in step S605, the second-best route after the estimated route confirmed in step S601is estimated from the travel history213, the departure link is set as the entry link, and the exit link with the next most frequent number of times (exit link in which the information of the number of times503of the travel history213is next frequent) is set as the recommended exit. Next, the recommendation unit214recommends the parking floor number and the exit to the driver as the recommended information (S606), and thereafter ends the processing of this routine. Here, in step S606, the recommendation unit214informs the driver, via a speaker, of the exit and the floor number (parking floor number) set in the recommended information as the guidance information. For example, announcements to the effect of “The usual exit will lead to traffic” and “You can avoid traffic by using this exit” are output, the route is displayed on the display device101, and the driver is guided to the displayed route. Moreover, with regard to the parking floor number, an announcement to the effect of “You should park on the second floor of your destination” is output, and the driver is thereby guided to the destination. Moreover, the recommendation unit214displays, on the display device101, the floor number (parking floor number) and the exit of the recommended information as the guidance information (S607), and thereafter ends the processing of this routine. Note that, in step S607, only the floor number and the exit of the recommended information are displayed on the display device101as the guidance information, and the driver may choose whether to be notified of the information. In this embodiment, the display device101functions as a display unit for displaying the guidance information on a display screen. Here, an audio output unit which converts the guidance information into audio signals and outputs such audio signals may be built into the display device101, or mounted in the in-vehicle apparatus100. In the foregoing case, the audio output unit receives the guidance information stored in the ROM104via the CPU107, and can be configured from a digital/analog converter which converts the guidance information into analog signals and a speaker which outputs the analog signals as audio.
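The branching of FIG.12may be condensed into the following sketch; traffic_on() is a stand-in for a VICS-style congestion check and, like the other names, is hypothetical:

    def traffic_on(route):
        # Stand-in for a check against traffic information such as VICS data;
        # a real implementation would query that service here.
        return False

    def recommend(estimated_route, rec_info, travel_history, departure_link):
        # S601 negative: first visit, no estimated route; use the downloaded
        # recommendation information 215 as-is (S604 -> S606).
        if estimated_route is None:
            return ("announce", rec_info["recommended_floor"],
                    rec_info["recommended_exit_link"])
        # S602 negative: no traffic on the estimated route; display only (S607).
        if not traffic_on(estimated_route):
            return ("display", rec_info["recommended_floor"],
                    rec_info["recommended_exit_link"])
        usual_exit = estimated_route[0]  # the exit on the estimated route is its first link
        # S603 positive: the recommended exit differs from the usual exit (S604 -> S606).
        if rec_info["recommended_exit_link"] != usual_exit:
            return ("announce", rec_info["recommended_floor"],
                    rec_info["recommended_exit_link"])
        # S603 negative -> S605: fall back to the exit link with the next most
        # frequent number of times in the travel history 213.
        ranked = sorted((r for r in travel_history if r["entry_link"] == departure_link),
                        key=lambda r: r["number_of_times"], reverse=True)
        candidate = ranked[1]["exit_link"] if len(ranked) > 1 else usual_exit
        return ("announce", rec_info["recommended_floor"], candidate)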
According to this embodiment, it is possible to provide a vehicle with information for using a facility having a parking lot and, consequently, it is possible to improve the user-friendliness for the driver. Moreover, according to this embodiment, it is possible to propose an estimated route by giving consideration to the entrance to be used when entering the parking lot, or, when the road from the usual exit of the parking lot is congested, it is possible to inform the driver of the traffic when the usual exit is used, and propose that the driver use a different exit this time. Furthermore, it is possible to propose a more favorable parking space within the parking lot to the driver. Note that the present invention is not limited to the foregoing embodiment, and includes various modified examples. For example, the in-vehicle apparatus100of this embodiment may present, to the driver, necessary information within an area without any road links outside the parking lot of a facility. Moreover, when the arrival place determination processing unit206determines that the vehicle has arrived at the facility, the recommendation unit214may refer to the parking information205and output guidance information as information which guides the vehicle based on the recommendation information215and which includes at least a parking position in the parking lot. The recommendation unit214can thereby guide the vehicle (driver) to the parking position based on the guidance information when the vehicle arrives at the facility. Moreover, when it is detected that the vehicle has deviated from a road (link) connected to the facility based on the position information from the positioning sensor103on a condition that the parking/stopping determination unit202has determined that the vehicle has parked/stopped, the link deviation information holding unit201may also function as a deviation information holding unit which holds deviation information (link deviation information) including an entry direction of the vehicle into the facility and a point and parking direction at the time of deviation (at the time of link deviation). Here, the parking information storage unit203stores the deviation information held by the deviation information holding unit as information of a part of the parking information. Moreover, the destination estimation unit209can refer to the departure place/destination table210based on information (point information) of a point at which the vehicle entered the facility when the arrival place determination processing unit206determines that the vehicle has arrived at the facility, and estimate the destination point belonging to a route in which the number of times403is most frequent as the vehicle's destination within the facility. Here, the recommendation unit214can output, as information belonging to the guidance information, the destination point estimated as the vehicle's destination within the facility. The recommendation unit214can thereby guide the vehicle (driver), based on the guidance information, to the destination point estimated as the vehicle's destination within the facility when the vehicle arrives at the facility. Moreover, the route estimation unit212can refer to the travel history213when the vehicle is to exit from the parking lot of the facility, and estimate a route in which the number of times503is most frequent and which is a route from the vehicle's parking position to the exit link502.
Here, the recommendation unit214can output the exit link502as information belonging to the guidance information on a condition that the route estimation unit212has estimated a route to the exit link502. The recommendation unit214can thereby guide the vehicle (driver) to the exit link based on the guidance information when the vehicle is to exit from the parking lot of the facility. Moreover, a part of the configuration of a certain embodiment may be added to, deleted from or replaced with another configuration. Furthermore, a part or all of the respective configurations, functions and the like described above may be realized, for example, with hardware by being designed with an integrated circuit. Moreover, a part or all of the respective configurations, functions and the like described above may be realized with software by a processor interpreting and executing the programs which realize the respective functions. Information of programs, tables and files for realizing the respective functions may be recorded in a memory, a hard disk, an SSD (Solid State Drive) or any other recording device, or may otherwise be recorded on an IC (Integrated Circuit) card, an SD (Secure Digital) memory card, a DVD (Digital Versatile Disc) or any other recording medium. The disclosure of the following priority application is incorporated herein by reference. Japanese Patent Application No. 2018-193904 (filed on Oct. 12, 2018) REFERENCE SIGNS LIST 100 in-vehicle apparatus, 101 display device, 102 operating device, 103 positioning sensor, 106 auxiliary storage device, 107 CPU
11859994

DETAILED DESCRIPTION

In various implementations, localization of an autonomous vehicle includes generating both a first predicted location of a given landmark in an environment of the autonomous vehicle and a second predicted location of the given landmark. Local pose instances of a local pose of the autonomous vehicle can be generated based at least in part on comparing the predicted locations of the given landmark. In some versions of those implementations, the local pose instances are utilized at least part of the time (e.g., the majority of the time or even exclusively) as the localization that is used in control of the autonomous vehicle. The given landmark can include any object or surface in an environment of the autonomous vehicle that is relatively static and that can be reliably detected by one or more sensors of the autonomous vehicle. For example, the given landmark can include a curb, a road retroreflector, a pavement marker, a lane line, an entry point of an intersection, a lane divider, a roadway sign, a traffic light, a sign post, a building, or any other object or surface that can be reliably detected by one or more of the sensors of the autonomous vehicle. The landmark can optionally include a retroreflective surface. The first predicted location of the given landmark in the environment of the autonomous vehicle can be generated based on an instance of LIDAR data generated by one or more LIDAR sensors of the autonomous vehicle. Accordingly, the first predicted location is sometimes referred to herein as a LIDAR-based predicted location of the given landmark. In implementations where the given landmark includes the retroreflective surface, a sensing cycle of one or more of the LIDAR sensors of the autonomous vehicle can include features that are indicative of the given landmark. For example, the autonomous vehicle can identify one or more saturated regions caused by the retroreflective surface of the given landmark, and a location associated with the one or more saturated regions can be utilized as the LIDAR-based predicted location of the given landmark. The second predicted location of the given landmark in the environment of the autonomous vehicle can be generated based on a determined local pose instance of the local pose of the autonomous vehicle and based on a stored mapping of the environment that includes a stored location of the given landmark. Accordingly, the second predicted location is sometimes referred to herein as a pose-based predicted location of the given landmark. Notably, the pose-based predicted location of the given landmark is generated based on non-vision sensor data. For example, the local pose instance of the autonomous vehicle can be generated based on instances of IMU data generated by IMU(s) of the autonomous vehicle, wheel encoder data generated by wheel encoder(s) of the autonomous vehicle, any other non-vision-based sensor data, or any combination thereof. Accordingly, the resulting pose-based predicted location of the given landmark is also generated based on instances of non-vision-based sensor data. In some implementations, the stored location of the given landmark can include positional coordinates of the given landmark within the mapping of the environment of the autonomous vehicle. For example, the stored location of the given landmark can indicate that the given landmark is located at X1, Y1, and Z1 within the mapping of the environment of the autonomous vehicle.
In some additional or alternative implementations where the landmark includes the retroreflective surface, the stored location of the given landmark can include a previously stored point cloud that includes one or more saturated regions caused by the retroreflective surface of the given landmark when mapping the environment of the autonomous vehicle. Moreover, the determined local pose instance can be generated based on an instance of second sensor data generated by second sensors of the autonomous vehicle. Further, the instance of the second sensor data may temporally correspond to the instance of the LIDAR data utilized in generating the first predicted location of the given landmark. Thus, assuming the local pose instances are accurate, there should be no difference between the first predicted location of the given landmark and the second predicted location of the given landmark. Accordingly, any difference between these predicted locations can be utilized in generating the correction instance for use in generating additional local pose instances of the autonomous vehicle. In various implementations, validating localization of a vehicle (an autonomous vehicle or non-autonomous vehicle retrofitted with sufficient sensors) includes generating a pose-based predicted location of a given landmark in an environment of the vehicle based on an instance of sensor data generated by sensor(s) of the vehicle. The instance of the sensor data can include a LIDAR data instance generated by LIDAR sensor(s) of the vehicle, a wheel encoder data instance generated by wheel encoder(s) of the vehicle, an IMU data instance generated by IMU(s) of the vehicle, or any combination thereof. The pose-based predicted location of the landmark can be compared to a stored location of the landmark that is stored in a previous mapping of the environment of the vehicle. An error between the pose-based predicted location of the landmark and the stored location of the landmark can be determined. The error can indicate whether a pose instance of a pose of the vehicle, which is generated based on at least the sensor data instance utilized in generating the pose-based predicted location of the landmark, is accurate. For example, if the error fails to satisfy an error threshold, then the pose instance of the pose of the vehicle may be classified as accurate. However, if the error satisfies the error threshold, then the pose instance of the pose of the vehicle may be classified as not accurate. Further, parameter(s) of the sensor(s) of the vehicle can be automatically adjusted, or adjusted based on user input received responsive to determining that the pose instance of the pose of the vehicle is not accurate, and the adjusted parameter(s) of the sensor(s) can be utilized by the vehicle (and optionally other vehicles) in subsequent episodes of locomotion. As used herein, the term “tile” refers to a previously mapped portion of a geographical area. A plurality of tiles can be stored in memory of various systems described herein, and the plurality of tiles can be used to represent a geographical region. For example, a given geographical region, such as a city, can be divided into a plurality of tiles (e.g., each square mile of the city, each square kilometer of the city, or other dimensions), and each of the tiles can represent a portion of the geographical region.
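As a minimal sketch of this classification, assuming locations are expressed as 3D coordinates in a shared frame and using an illustrative threshold value:

    import math

    def classify_pose_instance(pose_based_location, stored_location,
                               error_threshold_m=0.5):
        # Euclidean error between the pose-based predicted landmark location
        # and the stored landmark location; the 0.5 m threshold is illustrative
        # and not taken from the disclosure.
        error = math.dist(pose_based_location, stored_location)
        return "accurate" if error < error_threshold_m else "not accurate"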
Further, the tiles can be stored in database(s) that are accessible by various systems described herein, and the tiles can be indexed in the database(s) by their respective locations within the geographical region. Moreover, the tiles can include, for example, information contained within the tiles, such as intersection information, traffic light information, landmark information, street information, other information for the geographical area represented by the tiles, or any combination thereof. The information contained within the tiles can be utilized to identify a matching tile. As used herein, the term “pose” refers to location information and orientation information of an autonomous vehicle within its surroundings, and generally with respect to a particular frame of reference. The pose can be an n-dimensional representation of the autonomous vehicle within the frame of reference, such as any 2D, 3D, 4D, 5D, 6D, or any other dimensional representation. The particular frame of reference can be, for example, based on the aforementioned tile(s), longitude and latitude coordinates, a relative coordinate system, other frame(s) of reference, or any combination thereof. Moreover, various types of poses are described herein, and different types of poses can be defined with respect to different frame(s) of reference. For example, a “global pose” of the autonomous vehicle can refer to location information and orientation information of the autonomous vehicle with respect to tile(s), and can be generated based on at least an instance of first sensor data generated by first sensor(s) of an autonomous vehicle. Further, a “local pose” of the autonomous vehicle can refer to location information and orientation information of the autonomous vehicle with respect to tile(s), but can be generated based on at least an instance of second sensor data generated by second sensor(s) of an autonomous vehicle that exclude the first sensor(s) utilized in generating the global pose. As used herein, the term “online” refers to operations that are performed during an episode of locomotion by a vehicle (an autonomous vehicle or non-autonomous vehicle retrofitted with sufficient sensors). These operations can be performed locally at the vehicle, or remotely by a computer system in communication with the vehicle. Further, these operations may influence control of the vehicle during the episode of locomotion. As used herein, the term “offline” refers to operations that do not influence control of the vehicle during the episode of locomotion. These operations can be performed locally at the vehicle, but are generally performed remotely by a computer system based on driving data generated during a past episode of locomotion. Prior to further discussion of these and other implementations, however, an example hardware and software environment within which the various techniques disclosed herein may be implemented will be discussed. Turning to the drawings, wherein like numbers denote like parts throughout the several views,FIG.1illustrates an example autonomous vehicle100within which the various techniques disclosed herein may be implemented. Vehicle100, for example, is shown driving on a road101, and vehicle100may include a powertrain102including a prime mover104powered by an energy source106and capable of providing power to a drivetrain108, as well as a control system110including a direction control112, a powertrain control114, and a brake control116.
Vehicle100may be implemented as any number of different types of vehicles, including vehicles capable of transporting people or cargo, and it will be appreciated that the aforementioned components102-116can vary widely based upon the type of vehicle within which these components are utilized. The implementations discussed hereinafter, for example, will focus on a wheeled land vehicle such as a car, van, truck, bus, etc. In such implementations, the prime mover104may include one or more electric motors or an internal combustion engine (among others), while energy source106may include a fuel system (e.g., providing gasoline, diesel, hydrogen, etc.), a battery system, solar panels or other renewable energy source, a fuel cell system, etc., and drivetrain108may include wheels or tires (or both) along with a transmission or any other mechanical drive components suitable for converting the output of prime mover104into vehicular motion, as well as one or more brakes configured to controllably stop or slow the vehicle and direction or steering components suitable for controlling the trajectory of the vehicle (e.g., a rack and pinion steering linkage enabling one or more wheels of vehicle100to pivot about a generally vertical axis to vary an angle of the rotational planes of the wheels relative to the longitudinal axis of the vehicle). In various implementations, different combinations of powertrains102and energy sources106may be used. In the case of electric/gas hybrid vehicle implementations, one or more electric motors (e.g., dedicated to individual wheels or axles) may be used as a prime mover104. In the case of a hydrogen fuel cell implementation, the prime mover104may include one or more electric motors and the energy source106may include a fuel cell system powered by hydrogen fuel. Direction control112may include one or more actuators or sensors (or both) for controlling and receiving feedback from the direction or steering components to enable the vehicle to follow a desired trajectory. Powertrain control114may be configured to control the output of powertrain102, e.g., to control the output power of prime mover104, to control a gear of a transmission in drivetrain108, etc., thereby controlling a speed or direction (or both) of the vehicle. Brake control116may be configured to control one or more brakes that slow or stop vehicle100, e.g., disk or drum brakes coupled to the wheels of the vehicle. Other vehicle types, including but not limited to off-road vehicles, all-terrain or tracked vehicles, construction equipment, etc., will necessarily utilize different powertrains, drivetrains, energy sources, direction controls, powertrain controls and brake controls, as will be appreciated by those of ordinary skill having the benefit of the instant disclosure. Moreover, in some implementations, various components may be combined, e.g., where directional control of a vehicle is primarily handled by varying an output of one or more prime movers. Therefore, the invention is not limited to the particular application of the herein-described techniques in an autonomous wheeled land vehicle. In the illustrated implementation, autonomous control over vehicle100(which may include various degrees of autonomy as well as selectively autonomous functionality) is primarily implemented in a primary vehicle control system120, which may include processor(s)122and one or more memories124, with the processor(s)122configured to execute program code instruction(s)126stored in memory124.
A primary sensor system130may include various sensors suitable for collecting information from a vehicle's surrounding environment for use in controlling the operation of the vehicle. For example, a satellite navigation (SATNAV) sensor132, e.g., compatible with any of various satellite navigation systems such as GPS, GLONASS, Galileo, Compass, etc., may be used to determine the location of the vehicle on the Earth using satellite signals. A Radio Detection and Ranging (RADAR) sensor134and a Light Detection and Ranging (LIDAR) sensor136, as well as digital camera(s)138(which may include various types of image capture devices capable of capturing still and video imagery), may be used to sense stationary and moving objects within the immediate vicinity of a vehicle. Inertial measurement unit(s) (IMU(s))140may include multiple gyroscopes and accelerometers capable of detecting linear and rotational motion of vehicle100in three directions, while wheel encoder(s)142may be used to monitor the rotation of one or more wheels of vehicle100. The outputs of sensors132-142may be provided to a set of primary control subsystems150, including a localization subsystem152, a planning subsystem154, a perception subsystem156, a control subsystem158, and a mapping subsystem160. Localization subsystem152determines a “pose” of vehicle100. In some implementations, the pose can include the location information and orientation information of vehicle100. In some of those implementations, the pose can additionally include velocity information of vehicle100, acceleration information of vehicle100, or both. More particularly, localization subsystem152generates a “global pose” of vehicle100within its surrounding environment, and with respect to a particular frame of reference. As discussed in greater detail herein, localization subsystem152can generate a global pose of vehicle100based on matching sensor data output by one or more of sensors132-142to a previously mapped portion of a geographical area (also referred to herein as a “tile”). In some additional or alternative implementations, localization subsystem152determines predicted location(s) of landmark(s) within the surrounding environment of vehicle100. Planning subsystem154plans a path of motion for vehicle100over a timeframe given a desired destination as well as the static and moving objects within the environment, while perception subsystem156detects, tracks, and identifies elements within the environment surrounding vehicle100. Control subsystem158generates suitable control signals for controlling the various controls in control system110in order to implement the planned path of the vehicle. Mapping subsystem160may be provided in the illustrated implementations to describe the elements within an environment and the relationships therebetween, and may be accessed by one or more of the localization, planning, and perception subsystems152-156to obtain various information about the environment for use in performing their respective functions. Vehicle100also includes a secondary vehicle control system170, which may include one or more processors172and one or more memories174capable of storing program code instruction(s)176for execution by processor(s)172, and be substantially similar to the primary vehicle control system120. In some implementations, secondary vehicle control system170may be used in conjunction with primary vehicle control system120in normal operation of vehicle100.
In some additional or alternative implementations, secondary vehicle control system170may be used as a redundant or backup control system for vehicle100, and may be used, among other purposes, to continue planning and navigation, to perform controlled stops in response to adverse events detected in primary vehicle control system120, or both. Adverse events can include, for example, a detected hardware failure in vehicle control systems120,170, a detected software failure in vehicle control systems120,170, a detected failure of sensor systems130,180, other adverse events, or any combination thereof. In other words, the adverse events can include failure of subsystems150,190, sensors130,180, and other failures. Secondary vehicle control system170may also include a secondary sensor system180including various sensors used by secondary vehicle control system170to sense the conditions or surroundings of vehicle100. For example, IMU(s)182may be used to generate linear and rotational motion information about the vehicle, while wheel encoder(s)184may be used to sense the velocity of each wheel. One or more of IMU(s)182and wheel encoder(s)184of secondary sensor system180may be the same as or distinct from one or more of IMU(s)140and wheel encoder(s)142of the primary sensor system130. Further, secondary vehicle control system170may also include secondary control subsystems190, including at least localization subsystem192and controlled stop subsystem194. Localization subsystem192generates a “local pose” of vehicle100relative to a previous local pose of vehicle100. As discussed in greater detail herein, localization subsystem192can generate the local pose of vehicle100by processing sensor data output by one or more of sensors182-184. Controlled stop subsystem194is used to implement a controlled stop for vehicle100upon detection of an adverse event. Other sensors and subsystems that may be utilized in secondary vehicle control system170, as well as other variations capable of being implemented in other implementations, will be discussed in greater detail below. Notably, localization subsystem152, which is responsible for generating a global pose of vehicle100(e.g., implemented by processor(s)122), and localization subsystem192, which is responsible for generating a local pose of vehicle100(e.g., implemented by processor(s)172), are depicted as being implemented by separate hardware components. As discussed in greater detail below, localization subsystem192can generate instances of local pose of vehicle100at a faster rate than localization subsystem152can generate instances of global pose of vehicle100. As a result, multiple instances of a local pose of vehicle100can be generated in the same amount of time as a single instance of global pose of vehicle100. In general, it should be understood that an innumerable number of different architectures, including various combinations of software, hardware, circuit logic, sensors, networks, etc. may be used to implement the various components illustrated inFIG.1. The processor(s)122,172may be implemented, for example, as a microprocessor and the memory124,174may represent the random access memory (RAM) devices comprising a main storage, as well as any supplemental levels of memory, e.g., cache memories, non-volatile or backup memories (e.g., programmable or flash memories), read-only memories, etc.
In addition, the memory124,174may be considered to include memory storage physically located elsewhere in vehicle100(e.g., any cache memory in processor(s)122,172), as well as any storage capacity used as a virtual memory (e.g., as stored on a mass storage device or on another computer or controller). Processor(s)122,172illustrated inFIG.1, or entirely separate processors, may be used to implement additional functionality in vehicle100outside of the purposes of autonomous control (e.g., to control entertainment systems, to operate doors, lights, convenience features, and so on). In addition, for additional storage, vehicle100may also include one or more mass storage devices, e.g., a floppy or other removable disk drive, a hard disk drive, a direct access storage device (DASD), an optical drive (e.g., a CD drive, a DVD drive, etc.), a solid state storage drive (SSD), network attached storage, a storage area network, or a tape drive, among others. Furthermore, vehicle100may include a user interface199to enable vehicle100to receive a number of inputs from and generate outputs for a user or operator (e.g., using one or more displays, touchscreens, voice interfaces, gesture interfaces, buttons and other tactile controls, or other input/output devices). Otherwise, user input may be received via another computer or electronic device (e.g., via an app on a mobile device) or via a web interface (e.g., from a remote operator). Moreover, vehicle100may include one or more network interfaces198suitable for communicating with one or more networks (e.g., a LAN, a WAN, a wired network, a wireless network, or the Internet, among others) to permit the communication of information between various components of vehicle100(e.g., between powertrain102, control system110, primary vehicle control system120, secondary vehicle control system170, other systems or components, or any combination thereof), with other vehicles, computers or electronic devices, including, for example, a central service, such as a cloud service, from which vehicle100receives environmental and other data for use in autonomous control thereof. For example, vehicle100may be in communication with a cloud-based remote vehicle service including a mapping service and a log collection service. Mapping service may be used via mapping subsystem160, for example, to maintain a global repository describing one or more geographical regions of the world, as well as to deploy portions of the global repository to one or more autonomous vehicles, to update the global repository based upon information received from one or more autonomous vehicles, and to otherwise manage the global repository. Log collection service may be used, for example, to collect and analyze observations made via sensors130,180of one or more autonomous vehicles during operation, enabling updates to be made to the global repository, as well as for other purposes. The processor(s)122,172illustrated inFIG.1, as well as various additional controllers and subsystems disclosed herein, generally operates under the control of an operating system and executes or otherwise relies upon various computer software applications, components, programs, objects, modules, data structures, etc., as will be described in greater detail below. Moreover, various applications, components, programs, objects, modules, etc.
may also execute on one or more processors in another computer coupled to vehicle100via a network, e.g., in a distributed, cloud-based, or client-server computing environment, whereby the processing required to implement the functions of a computer program may be allocated to multiple computers or services over a network. Further, in some implementations, data recorded or collected by a vehicle may be manually retrieved and uploaded to another computer or service for analysis. In general, the routines executed to implement the various implementations described herein, whether implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions, or even a subset thereof, will be referred to herein as “program code.” Program code typically comprises one or more instructions that are resident at various times in various memory and storage devices, and that, when read and executed by one or more processors, perform the steps necessary to execute steps or elements embodying the various aspects of the invention. Moreover, while the invention has been and hereinafter will be described in the context of fully functioning computers and systems, it will be appreciated that the various implementations described herein are capable of being distributed as a program product in a variety of forms, and that the invention applies equally regardless of the particular type of computer readable media used to actually carry out the distribution. Examples of computer readable media include tangible, non-transitory media such as volatile and non-volatile memory devices, floppy and other removable disks, solid state drives, hard disk drives, magnetic tape, and optical disks (e.g., CD-ROMs, DVDs, etc.), among others. In addition, various program code described hereinafter may be identified based upon the application within which it is implemented in a specific implementation. However, it should be appreciated that any particular program nomenclature that follows is used merely for convenience, and thus the invention should not be limited to use solely in any specific application identified or implied by such nomenclature. Furthermore, given the typically endless number of manners in which computer programs may be organized into routines, procedures, methods, modules, objects, and the like, as well as the various manners in which program functionality may be allocated among various software layers that are resident within a typical computer (e.g., operating systems, libraries, API's, applications, applets, etc.), it should be appreciated that the invention is not limited to the specific organization and allocation of program functionality described herein. It will be appreciated that the collection of components illustrated inFIG.1for primary vehicle control system120and secondary vehicle control system170is merely for the sake of example. Individual sensors may be omitted in some implementations, multiple sensors of the types illustrated inFIG.1may be used for redundancy or to cover different regions around a vehicle, and other types of sensors may be used. Likewise, different types or combinations of control subsystems may be used in other implementations.
Further, while subsystems152-160,192-194are illustrated as being separate from processors122,172and memory124,174, respectively, it will be appreciated that in some implementations, portions or all of the functionality of subsystems152-160,192-194may be implemented with corresponding program code instruction(s)126,176resident in one or more memories124,174and executed by processor(s)122,172and that these subsystems152-160,192-194may in various instances be implemented using the same processors and memory. Subsystems152-160,192-194in some implementations may be implemented at least in part using various dedicated circuit logic, various processors, various field-programmable gate arrays (“FPGA”), various application-specific integrated circuits (“ASIC”), various real time controllers, and the like, and as noted above, multiple subsystems may utilize common circuitry, processors, sensors and other components. Further, the various components in primary vehicle control system120and secondary vehicle control system170may be networked in various manners. Those skilled in the art will recognize that the exemplary environment illustrated inFIG.1is not intended to limit the present invention. Indeed, those skilled in the art will recognize that other alternative hardware and software environments may be used without departing from the scope of the invention. Turning now toFIG.2A, a block diagram illustrating an example implementation of using the localization subsystems referenced inFIG.1online is depicted. As shown inFIG.2A, localization subsystem152of primary vehicle control system120includes at least global pose module252, landmark module254, and online calibration module256. Further, localization subsystem192of secondary vehicle control system170includes at least local pose module292. Notably, the implementations discussed in connection withFIG.2Aare performed online. In other words, these implementations can be executed as program code by a vehicle (an autonomous vehicle (e.g., vehicle100ofFIG.1) or a non-autonomous vehicle retrofitted with sufficient sensors), by a remote system in communication with the vehicle, or by both, during an episode of locomotion of the vehicle. Data generated by the localization subsystems152,192can be transmitted between the localization subsystems152,192(or the modules included therein), can be transmitted to other subsystems described herein, or any combination thereof. The data can include, for example, global pose instance(s) of a global pose of vehicle100, local pose instance(s) of a local pose of vehicle100, correction instance(s), or any combination thereof. Further, data generated by sensors (e.g., primary sensor system130, secondary sensor system180, or both) of vehicle100can be received by the localization subsystems152,192. Global pose module252can generate global pose instances of a global pose of vehicle100. The global pose of vehicle100represents a pose of vehicle100with respect to a reference frame (e.g., tile(s)), and the global pose instances represent orientation information and location information of vehicle100at a given time instance with respect to the reference frames. Further, global pose module252can receive instances of first sensor data130A, and the global pose instances can be generated based at least in part on the instances of first sensor data130A. The first sensor data130A can include, for example, instances of LIDAR data generated by the LIDAR sensor136of primary sensor system130.
For example, global pose module252can generate the global pose instances by assembling an instance of LIDAR data generated by the LIDAR sensor136into one or more point clouds, and aligning one or more of the point clouds with one or more previously stored point clouds of the surrounding environment of vehicle100(e.g., retrieved from stored mapping(s) database160A using mapping subsystem160) using one or more geometric matching techniques, such as iterative closest point (“ICP”). In various implementations, global pose module252further generates the global pose instances based on local pose instances generated by local pose module292of localization subsystem192. For example, the local pose instances can provide information to global pose module252for identifying the one or more previously stored point clouds of the surrounding environment of vehicle100. This information can include, for example, a tile in which vehicle100is located or a neighborhood of tiles surrounding the tile in which vehicle100is located. In some additional or alternative implementations, global pose module252further generates the global pose instances based on instances of second sensor data generated by second sensor(s) of vehicle100(e.g., IMU(s)140,182, wheel encoder(s)142,184, other sensors, or any combination thereof). Further, global pose module252can transmit generated global pose instances to landmark module254, online calibration module256, or both. Landmark module254can generate a first predicted location of a landmark relative to vehicle100, and a second predicted location of the landmark relative to vehicle100. The landmark can include any object or surface in a previously mapped environment that can be reliably detected by the LIDAR sensor136, including, for example, a curb, a road retroreflector, a pavement marker, a lane line, an entry point of an intersection, a lane divider, a roadway sign, a traffic light, a sign post, a building, or any other object or surface that can be reliably detected by the LIDAR sensor136. A given instance of the first sensor data130A can include LIDAR data that includes one or more features that are indicative of the landmark. Landmark module254can identify the landmark based on the one or more features that are indicative of the landmark. In various implementations, the landmark can include a retroreflective surface, and the given instance of the first sensor data130A can include LIDAR data that includes one or more saturated regions caused by the retroreflective surface of the landmark. Detecting landmarks in instances of LIDAR data is described in greater detail below (e.g., with respect toFIG.4). In some implementations, landmark module254can generate the first predicted location of the landmark directly based on an instance of LIDAR data included in the first sensor data130A. More particularly, landmark module254can generate the first predicted location of the landmark by assembling the instance of the LIDAR data included in the instance of the first sensor data130A into one or more point clouds, identifying, in one or more of the point clouds, one or more saturated regions caused by the retroreflective surface of the landmark, and determining the first predicted location of the landmark based on the one or more saturated regions in one or more of the assembled point clouds. The one or more saturated regions can be utilized as the first predicted location of the landmark.
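To make the saturated-region idea concrete, the following sketch reduces a saturated region in an assembled point cloud to a LIDAR-based predicted landmark location; the point-cloud format and the intensity threshold are assumptions of this illustration, not taken from the disclosure:

    import numpy as np

    def lidar_based_landmark_location(points_xyzi: np.ndarray,
                                      saturation_threshold: float = 0.95):
        # points_xyzi: (N, 4) array of x, y, z and normalized return intensity.
        saturated = points_xyzi[points_xyzi[:, 3] >= saturation_threshold]
        if saturated.shape[0] == 0:
            return None  # no retroreflective landmark visible in this sensing cycle
        # Use the centroid of the saturated region as the predicted location.
        return saturated[:, :3].mean(axis=0)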
In these implementations, the first predicted location of the landmark can be considered a LIDAR-based predicted location of the landmark in the environment of vehicle100. Landmark module254can transmit the first predicted location of the landmark to online calibration module256. In some additional or alternative versions of those implementations, landmark module254can generate the first predicted location of the landmark based on the global pose instances generated by global pose module252. As noted above, global pose module252generates the global pose instances by assembling instances of LIDAR data included in the first sensor data130A into one or more point clouds, and aligning the one or more point clouds with one or more previously stored point clouds. In some versions of those implementations, landmark module254can analyze the one or more point clouds assembled in generating the global pose instances to identify one or more saturated regions caused by the retroreflective surface of the landmark. The one or more saturated regions can be utilized as the first predicted location of the landmark. Again, in these implementations, the first predicted location of the landmark can be considered a LIDAR-based predicted location of the landmark in the environment of vehicle100. Landmark module254can transmit the first predicted location of the landmark to online calibration module256. In some implementations, landmark module254can generate the second predicted location of the landmark directly based on a determined local pose instance of a local pose of vehicle100, and based on a stored location of the landmark (e.g., retrieved from stored mapping(s) database160A using mapping subsystem160). The determined local pose instance can be generated based on an instance of the second sensor data180A that temporally corresponds to the instance of the LIDAR data utilized to generate the first predicted location of the landmark. The instances of the second sensor data180A can include sensor data generated by IMU(s)140,182, wheel encoder(s)142,184of vehicle100, or both. Generating the local pose instances is described in greater detail below (e.g., with respect to local pose module292). Notably, the instances of the second sensor data180A do not include the LIDAR data utilized in generating the first predicted location of the landmark. In some implementations, landmark module254can generate the second predicted location of the landmark by accessing the mapping of the environment of vehicle100, identifying a previously stored point cloud from the mapping that includes a stored saturated region caused by the retroreflective surface of the landmark when the environment was previously mapped, and determining the second predicted location of the landmark based on the determined local pose instance and based on the stored saturated region in the previously stored point cloud. The identified location of the stored saturated region can be utilized as the second predicted location of the landmark. In some additional or alternative implementations, landmark module254can generate the second predicted location of the landmark by accessing the mapping of the environment of vehicle100, identifying the stored location of the landmark, and determining the second predicted location of the landmark based on the determined local pose instance and the stored mapping.
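The pose-based (second) predicted location described above amounts to expressing a stored map location of the landmark in the vehicle frame given a pose instance. A minimal sketch follows, assuming a planar (x, y, yaw) pose; the function name and tuple layout are illustrative assumptions, not names from the systems described herein.

```python
# Illustrative sketch only: predict where a stored landmark should appear
# relative to the vehicle, given a local pose instance in the tile frame.
import math

def pose_based_predicted_location(local_pose, stored_landmark_xy):
    """local_pose: (x, y, yaw) of the vehicle in the tile frame.
    stored_landmark_xy: stored landmark location in the same tile frame.
    Returns the landmark location in the vehicle frame."""
    x, y, yaw = local_pose
    dx = stored_landmark_xy[0] - x
    dy = stored_landmark_xy[1] - y
    # Rotate the world-frame offset into the vehicle frame.
    cos_y, sin_y = math.cos(-yaw), math.sin(-yaw)
    return (cos_y * dx - sin_y * dy, sin_y * dx + cos_y * dy)

# Vehicle at (10, 5) heading 90 degrees; a landmark stored at (10, 15)
# should appear 10 m directly ahead: prints (10.0, ~0.0).
print(pose_based_predicted_location((10.0, 5.0, math.pi / 2), (10.0, 15.0)))
```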
Notably, in these implementations, landmark module254can generate the second predicted location of the landmark without utilization of any vision sensor data (i.e., without RADAR sensor134, LIDAR sensor136, camera(s)138, other vision sensor data, or any combination thereof). Put another way, the second predicted location of the landmark can be considered a pose-based predicted location of the landmark in the environment of vehicle100. Moreover, since the first predicted location and the second predicted location of the landmark are generated using temporally corresponding sensor data (e.g., sensor data generated at the same time or within a threshold amount of time, such as 50 milliseconds, 100 milliseconds, or any other threshold amount of time), the first predicted location and the second predicted location of the landmark should be the same location, assuming the determined local pose of vehicle100is accurate. Landmark module254can transmit the second predicted location of the landmark to online calibration module256. In various implementations, the predicted locations of the landmark can be defined in n-dimensional space, relative to vehicle100, or any other space or representation that allows locations or values to be compared. Further, the predicted locations of the landmark can include orientation information of the landmark. For example, the predicted locations can be defined in n-dimensional space within a given tile, where the n-dimensional space is 2-dimensional space, 2.5-dimensional space, 3-dimensional space, or 4-dimensional space, and can optionally include an orientation component. In this example, the first predicted location can be located at X1, Y1, and Z1 within a given tile of vehicle100, and the second predicted location can be located at X2, Y2, and Z2 within the given tile of vehicle100. As another example, the predicted locations can be defined relative to vehicle100. In this example, the first predicted location can be located at X1, Y1, and Z1 from a given point of vehicle100, and the second predicted location can be located at X2, Y2, and Z2 from the given point of vehicle100. Online calibration module256can generate a correction instance(s)256A based on comparing the first predicted location (e.g., LIDAR-based predicted location) of the landmark and the second predicted location (e.g., pose-based predicted location) of the landmark. More particularly, online calibration module256can compare the predicted locations to determine an error in the determined local pose instance of vehicle100based on a difference between the first predicted location and the second predicted location from the comparing. For example, assume the first predicted location of the landmark is located at X1, Y1, and Z1 in n-dimensional space having a first orientation, and further assume the second predicted location of the landmark is located at X2, Y2, and Z2 in n-dimensional space having a second orientation. In this example, online calibration module256can compare X1 and X2, Y1 and Y2, Z1 and Z2, and the first orientation and the second orientation, and the difference can be a positional and orientation difference determined based on these comparisons in the n-dimensional space. Moreover, the correction instance(s)256A generated by online calibration module256can be an offset generated based on the difference (i.e., error) between the predicted locations generated by landmark module254.
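As a rough illustration of the comparison performed by online calibration module256, the sketch below computes a per-dimension difference between temporally corresponding predicted locations; the (x, y, z, yaw) layout and the angle-wrapping choice are assumptions made for the example.

```python
# Illustrative sketch only: per-dimension difference (i.e., error) between
# a LIDAR-based and a pose-based predicted location of the same landmark,
# from which an offset could be built.
import math

def location_difference(lidar_pred, pose_pred):
    """Each prediction is (x, y, z, yaw); returns the per-dimension error."""
    dx = lidar_pred[0] - pose_pred[0]
    dy = lidar_pred[1] - pose_pred[1]
    dz = lidar_pred[2] - pose_pred[2]
    # Wrap the orientation difference into [-pi, pi).
    dyaw = (lidar_pred[3] - pose_pred[3] + math.pi) % (2 * math.pi) - math.pi
    return (dx, dy, dz, dyaw)

# With an accurate local pose the difference is ~zero, because both
# predictions derive from temporally corresponding sensor data.
offset = location_difference((1.2, 0.4, 0.0, 0.05), (1.0, 0.5, 0.0, 0.02))
```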
Online calibration module256can transmit the correction instance(s)256A to local pose module292. In some additional or alternative implementations, online calibration module256can generate the correction instance(s)256A based at least in part on the global pose instance transmitted to online calibration module256from global pose module252. In some versions of those implementations, the correction instance256A can include drift rate(s) across multiple local pose instances. The drift rate(s) can indicate a first magnitude of drift, in one or more dimensions (e.g., X-dimension, Y-dimension, Z-dimension, roll-dimension, pitch-dimension, yaw-dimension, other dimensions, or any combination thereof), over a period of time, or a second magnitude of drift, in one or more of the dimensions, over a distance. Put another way, the drift rate(s) can include, for example, a temporal drift rate, a distance drift rate, or both. The temporal drift rate can represent a magnitude of drift, in one or more dimensions, over the period of time in generating the multiple local pose instances. Further, the distance drift rate can represent a magnitude of drift, in one or more dimensions, over a distance travelled in generating the multiple local pose instances. In some versions of those implementations, the correction instance(s)256A can include a linear combination of the temporal drift rate and the distance drift rate. In some further versions of those implementations, the correction instance(s)256A can be generated as a function of the difference in the predicted locations generated by landmark module254and the drift rate(s) determined by online calibration module256based at least in part on the global pose instances. In some versions of those implementations, the correction instance(s)256A can be further generated based on instances of the second sensor data180A. As shown inFIG.2A, online calibration module256can receive instances of the second sensor data180A. The second sensor data180A can include, for example, IMU data generated by one or more of IMU(s)140,182, wheel encoder data generated by wheel encoder(s)142,184, or both. The instances of the second sensor data180A can include instances of the IMU data, the wheel encoder data, or both. For example, the instances of the IMU data and the wheel encoder data, of an instance of the second sensor data180A, can be the most recently generated instances of the IMU data and the wheel encoder data. Further, online calibration module256can transmit the generated correction instance(s)256A to local pose module292. In some additional or alternative implementations, the correction instance(s)256A can include, or be limited to, one of: the offset generated based on the difference between the predicted locations generated by landmark module254, or the drift rate(s) generated based on the global pose instances generated by global pose module252or the instances of the second sensor data180A. Local pose module292can generate local pose instances of a local pose of vehicle100. Like the global pose, the local pose of vehicle100also represents a pose of vehicle100with respect to a frame of reference (e.g., tile(s)), which can be the same frame of reference as that of the global pose of vehicle100. The local pose instances represent orientation information and location information with respect to a given tile at a given time instance.
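One possible reading of the drift rate(s) described above is sketched below: a temporal rate (error per second), a distance rate (error per meter), and a weighted linear combination of the two. The weights, field layout, and function names are assumptions; the disclosure does not specify a particular functional form.

```python
# Illustrative sketch only: temporal and distance drift rates derived from
# accumulated per-dimension error across multiple local pose instances,
# plus a linear combination of the two. Assumes nonzero elapsed time and
# distance travelled.
def drift_rates(accumulated_error, elapsed_seconds, distance_travelled):
    temporal_rate = [e / elapsed_seconds for e in accumulated_error]
    distance_rate = [e / distance_travelled for e in accumulated_error]
    return temporal_rate, distance_rate

def combined_correction(temporal_rate, distance_rate, dt, ds, w_t=0.5, w_d=0.5):
    # Weighted linear combination of the temporal and distance drift terms,
    # scaled by the time (dt) and distance (ds) since the last correction.
    return [w_t * tr * dt + w_d * dr * ds
            for tr, dr in zip(temporal_rate, distance_rate)]
```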
However, in contrast with the global pose, the local pose is not generated based on any vision data (e.g., LIDAR data or other vision data). Rather, as shown inFIG.2A, local pose module292can receive instances of the second sensor data180A described above (e.g., IMU data generated by IMU(s)140,182, wheel encoder data generated by wheel encoder(s)142,184, or both), and the local pose instances can be generated based at least in part on instances of the second sensor data180A. Generating local pose instances without utilization of any vision data can enable the local pose instances to be generated more frequently (e.g., at a frequency that is greater than that of vision data generation) and using less computational resources. Further, generating local pose instances without utilization of any vision data can enable the local pose instances to be generated even when the vision sensor(s) generating the vision data are malfunctioning. In some implementations, the local pose instances can be further generated based on the correction instance(s)256A transmitted to local pose module292from online calibration module256. By generating the local pose instances based on the correction instance(s)256A, which is generated based on the differences (i.e., error) in the predicted locations or the global pose instances as described above, errors in generating the local pose instances can be quickly and efficiently corrected. Thus, the local pose instances more accurately reflect an actual pose of vehicle100, and the local pose instances can be utilized by various other subsystem(s) described herein to control operation of vehicle100(e.g., planning subsystem154, control subsystem158, controlled stop subsystem194, or other subsystems). As described herein, in various implementations the correction instance(s)256A are generated based on temporally corresponding sensor data. Local pose module292can generate local pose instances, utilizing the correction instance(s)256A, more efficiently than if global pose instances were instead utilized in generating the local pose instances. Yet further, the correction instance(s)256A can be applicable to and utilized in generating multiple local pose instances, whereas global pose instances are only applicable to generating a single temporally corresponding local pose instance as described above. Moreover, local pose module292can transmit the generated local pose instances to other module(s), subsystem(s) described herein (e.g., with respect toFIGS.1-3), or both. Notably, and as depicted inFIG.2A, the local pose instances can be generated by local pose module292at a first frequency f1and the correction instance(s)256A can be generated by online calibration module256at a second frequency f2, where the first frequency f1is higher than the second frequency f2. Put another way, the local pose instances are generated at a faster rate than the correction instance(s)256A. In this manner, a plurality of local pose instances can be generated based on the same correction instance(s)256A, and prior to receiving, at the local pose module292, additional correction instance(s)256A that is generated based on further vision data. When the additional correction instance(s)256A is received at local pose module292, a plurality of additional local pose instances can then be generated based on the additional correction instance(s)256A.
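The f1/f2 relationship described above can be sketched as follows: the local pose module reuses the most recent correction instance across many local pose instances until a new correction instance arrives. The dead-reckoning update and class layout are toy assumptions standing in for the IMU/wheel-encoder integration.

```python
# Illustrative sketch only: local pose instances generated at a high
# frequency f1, each reusing the latest correction instance that arrives
# at a lower frequency f2. A real system would fold the correction into
# its filter state rather than add it per step.
class LocalPoseModule:
    def __init__(self):
        self.pose = [0.0, 0.0, 0.0]        # x, y, yaw
        self.correction = [0.0, 0.0, 0.0]  # latest correction instance

    def on_correction_instance(self, correction):
        self.correction = correction       # arrives at frequency f2

    def generate_local_pose(self, odometry_delta):
        # Runs at frequency f1 > f2; the same correction instance is
        # applied to multiple successive local pose instances.
        for i in range(3):
            self.pose[i] += odometry_delta[i] + self.correction[i]
        return tuple(self.pose)
```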
Thus, local pose module292can track relative movement of vehicle100, and errors in tracking the relative movement of vehicle100can be mitigated by periodically adjusting calculations at local pose module292via the correction instance(s)256A that is generated based on differences (i.e., error) in predicted locations of a given landmark as described above with respect to landmark module254, actual locations of vehicle100as indicated by the global pose instances as described above with respect to global pose module252, or both. In various implementations, the correction instance(s)256A can be generated most of the time (or even exclusively) based on the global pose instances, and the difference (i.e., error) in the predicted locations of the landmark can be determined periodically. In this manner, the difference in the predicted locations of the landmark can be utilized to periodically check the accuracy of the generated pose instances. Further, the difference can be utilized to generate an offset to be included in the correction instance(s)256A. In some additional or alternative implementations, the correction instance(s)256A can be generated most of the time based on the difference in the predicted locations for the sake of redundancy in verifying the generated pose instances of the vehicle. Turning now toFIG.2B, a block diagram illustrating an example implementation of validating the localization subsystems referenced inFIG.1offline is depicted. As shown inFIG.2B, localization validation subsystem220includes at least pose module252,292, landmark module254, and validation module225. Further, localization modification subsystem230can include at least modification module235. Notably, the implementations discussed in connection withFIG.2Bare performed offline, as opposed to online like the implementations discussed above in connection withFIG.2A. In other words, these implementations can be executed as program code by a remote system subsequent to an episode of locomotion of the vehicle. For the sake of simplicity, an instance of the global pose module252and an instance of the local pose module292ofFIG.2Aare depicted inFIG.2Bas a single module (e.g., as indicated by pose module252,292) that is capable of generating both global pose instance(s) of a global pose of the vehicle and local pose instance(s) of a local pose of the vehicle as described above with respect toFIG.2A. Data generated by the localization validation subsystem220and the localization modification subsystem230can be transmitted between one another, other subsystems described herein, or any combination thereof. In some implementations, pose module252,292can process an instance of sensor data included in driving data generated during a past episode of locomotion of a vehicle. The driving data can be stored in driving data database298, and can include at least instances of sensor data generated during the past episode of locomotion of the vehicle. The instances of the sensor data can include, for example, various instances of the first sensor data130A and the second sensor data180A that are generated during the past episode of locomotion of the vehicle. In some versions of those implementations, the pose module252,292can generate global pose instances of a global pose of the vehicle based on a given instance of the first sensor data130A from the past episode (e.g., as described above with respect to global pose module252inFIG.2A).
In some additional or alternative versions of those implementations, the pose module252,292can generate local pose instances of a local pose of the vehicle based on a given instance of the second sensor data180A from the past episode (e.g., as described above with respect to local pose module292inFIG.2A). In some implementations, landmark module254can generate a predicted location of a landmark in an environment of the vehicle captured in the sensor data that was generated during the past episode of locomotion. Further, landmark module254can identify a stored location of the landmark in the environment of the vehicle from the past episode of locomotion. In some versions of those implementations, landmark module254can generate the predicted location of the landmark directly based on an instance of LIDAR data included in a given instance of the first sensor data130A from the driving data corresponding to the past episode of locomotion (e.g., as described above with respect to landmark module254inFIG.2A). In other versions of those implementations, landmark module254can generate the predicted location of the landmark based on the global pose instances generated by global pose module252(e.g., as described above with respect to landmark module254inFIG.2A). In some additional or alternative versions of those implementations, landmark module254can generate the predicted location of the landmark based on a local pose instance of a local pose of vehicle100, and based on a stored location of the landmark retrieved from stored mapping(s) database160A (e.g., as described above with respect to landmark module254inFIG.2A). In these implementations, the predicted location of the landmark can be considered a pose-based predicted location of the landmark. In various implementations, validation module225can compare the pose-based predicted location of the landmark to the stored location of the landmark to determine a localization error225A in generating the pose-based predicted location of the landmark (e.g., similar to the online calibration module256ofFIG.2A). For example, assume the pose-based predicted location of the landmark is located at X1, Y1, and Z1 in n-dimensional space having a first orientation, and further assume the stored location of the landmark is located at X2, Y2, and Z2 in n-dimensional space having a second orientation. In this example, validation module225can compare X1 and X2, Y1 and Y2, Z1 and Z2, and the first orientation and the second orientation, and the difference can be a positional and orientation difference determined based on these comparisons in the n-dimensional space. In some versions of those implementations, validation module225can compare the localization error225A to an error threshold. If the localization error225A fails to satisfy the error threshold, then validation module225may classify the pose-based predicted location of the landmark as accurate, discard the localization error225A, and analyze additional instances of the sensor data generated during the past episode of locomotion to continue validating localization of the vehicle. Moreover, in various implementations, classifying the pose-based predicted location as accurate may indicate that the driving data can be utilized in training additional ML model(s) (e.g., one or more of a planning ML model, a perception ML model, or other ML model(s) utilized by an autonomous vehicle).
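The threshold test performed by validation module225might look like the following sketch, where the scalar error metric (Euclidean distance) and the threshold value are assumptions for illustration.

```python
# Illustrative sketch only: classify a pose-based predicted location as
# accurate or not accurate against a stored location and an error threshold.
import math

def validate_localization(pose_based_pred, stored_location, error_threshold=0.5):
    localization_error = math.dist(pose_based_pred, stored_location)
    if localization_error < error_threshold:
        # Error fails to satisfy the threshold: the prediction is accurate;
        # the driving data remains usable, e.g., for training ML model(s).
        return "accurate", None
    # Error satisfies the threshold: flag it for the modification module.
    return "not_accurate", localization_error
```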
However, if the localization error225A satisfies the error threshold, then validation module225may classify the pose-based predicted location of the landmark as not accurate, and transmit the localization error225A to modification module235of localization modification subsystem230. Moreover, in various implementations, classifying the pose-based predicted location as not accurate may indicate that the driving data should not be utilized in training additional ML model(s) (e.g., one or more of a planning ML model, a perception ML model, or other ML model(s) utilized by an autonomous vehicle). In some implementations, modification module235can process the localization error225A to automatically adjust corresponding parameters of one or more sensors that generated instances of sensor data utilized in generating the pose-based predicted location of the landmark that was classified as not accurate based on the localization error225A. The modification module235can access corresponding sensor parameter(s) (e.g., stored in sensor parameter(s) database) of one or more of the sensors that generated instances of sensor data utilized in generating the pose-based predicted location of the landmark. The parameter(s) of the LIDAR sensor(s) can include, for example, a point density of LIDAR points, a scan pattern of the LIDAR sensor(s), a field-of-view of the LIDAR sensor(s), a duration of a sensing cycle of the LIDAR sensor(s), one or more biases of the LIDAR sensor(s), other LIDAR parameters, or any combination thereof. The parameter(s) of the wheel encoder(s) can include, for example, an encoding type, a number of pulses per inch (or other distance), a number of pulses per shaft revolution, one or more biases of the wheel encoder(s), other wheel encoder parameters, or any combination thereof. The parameter(s) of the IMU(s) can include, for example, gyroscopic parameters of the IMU(s), accelerometer parameters of the IMU(s), a sampling frequency of the IMU(s), one or more biases of the IMU(s), other IMU parameters, or any combination thereof. Further, the adjusted parameter(s) of the sensor(s) of the vehicle can be utilized in subsequent episodes of locomotion. In implementations where the pose-based predicted location of the landmark is generated based on an instance of LIDAR data, modification module235may only adjust one or more of the corresponding parameter(s) of the LIDAR sensor(s). For example, if the localization error225A satisfies a first error threshold, then a point density of the LIDAR sensor may be increased by a first amount. Further, if the localization error225A satisfies a second error threshold, then the point density of the LIDAR sensor may be increased by a second amount that is different from the first amount. As another example, if the localization error225A satisfies the error threshold for a threshold quantity of instances for the past episode of locomotion (e.g., for analyzing 7 of 10 distinct instances), then the scan pattern of the LIDAR sensor may be adjusted from a parallel scan pattern to a sinusoidal scan pattern. In implementations where the pose-based predicted location of the landmark is generated based on an instance of wheel encoder data and IMU data, modification module235may only adjust one or more of the corresponding parameter(s) of the wheel encoder(s), the IMU(s), or both. For example, if the localization error225A satisfies a first error threshold, then one or more biases of the wheel encoder(s), the IMU(s), or both may be adjusted.
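A tiered adjustment of LIDAR parameters of the kind described above could be sketched as follows; all threshold values, increments, and the parameter-dictionary layout are assumptions, not values from the disclosure.

```python
# Illustrative sketch only: tiered, error-driven adjustment of LIDAR
# parameters, loosely following the examples above (point density raised
# by error magnitude; scan pattern switched on persistent errors).
def adjust_lidar_parameters(lidar_params, localization_error,
                            first_threshold=0.5, second_threshold=1.0,
                            error_count=0, count_threshold=7):
    if localization_error >= second_threshold:
        lidar_params["point_density"] *= 1.5   # larger increase
    elif localization_error >= first_threshold:
        lidar_params["point_density"] *= 1.2   # smaller increase
    if error_count >= count_threshold:
        # Errors on many distinct instances of the past episode:
        # switch from a parallel to a sinusoidal scan pattern.
        lidar_params["scan_pattern"] = "sinusoidal"
    return lidar_params
```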
In some additional or alternative implementations, user input module295may receive user input that additionally or alternatively adjusts the corresponding parameters of one or more of the sensors that generated instances of sensor data utilized in generating the pose-based predicted location of the landmark that was classified as not accurate based on the localization error225A. The user input may be received responsive to the determination that localization error225A satisfied the error threshold. For example, a human operator may receive a notification generated by the validation module225that indicates the localization error225A for the pose-based predicted location of the landmark satisfies the error threshold. In this example, the human operator may provide user input that is detected via the user input module295to adjust the parameter(s) of the sensor(s). In subsequent episodes of locomotion, the vehicle (and optionally other vehicles) can utilize the adjusted corresponding parameter(s) of the sensor(s). By adjusting the corresponding parameter(s) of the sensor(s) offline in this manner, subsequent pose instance(s) generated based on instances of sensor data generated by the sensor(s) that utilize the adjusted corresponding parameter(s) may be more accurate. Turning now toFIG.3, a process flow illustrating an example implementation of the localization subsystems referenced inFIG.2Ais depicted. The process flow ofFIG.3can be implemented by primary vehicle control system120and secondary vehicle control system170. In particular, modules on the left side of dashed line300can be implemented by secondary vehicle control system170(e.g., via localization subsystem192), and modules on the right side of the dashed line300can be implemented by primary vehicle control system120(e.g., via localization subsystem152). AlthoughFIG.3is described herein as being implemented by both the primary vehicle control system120and the secondary vehicle control system170, it should be understood that the modules can be implemented entirely, or in part, by the primary vehicle control system120, the secondary vehicle control system170, a remote computing system in communication with vehicle100over one or more networks, or any combination thereof. Local pose module292can receive instances of IMU data182A generated by one or more IMUs of vehicle100(e.g., IMU(s)182of secondary sensor system180, IMU(s)140of primary sensor system130, or any combination thereof). Further, local pose module292can also receive instances of wheel encoder data184A generated by one or more wheel encoders of vehicle100(e.g., wheel encoder(s)184of secondary sensor system180, wheel encoder(s)142of primary sensor system130, or both). The combination of the IMU data182A, the wheel encoder data184A, and any other non-vision data is sometimes referred to herein as “second sensor data” (e.g., second sensor data180A ofFIG.2A). Notably, the IMU data182A and the wheel encoder data184A can be generated at different frequencies. Local pose module292can include propagated filter(s) that incorporate the most recent sensor data in instances of the second sensor data (i.e., anytime machinery). Further, local pose module292can receive a correction instance(s)256A generated by online calibration module256as described above with respect toFIG.2A.
Moreover, local pose module292can process, using a state estimation model that is filter-based (e.g., Kalman filter, extended Kalman filter, dual Kalman filter, or other filter-based techniques) or observer-based (e.g., recursive least squares or other observer-based techniques), the instance of the second sensor data (including IMU data182A and wheel encoder data184A) and optionally the correction instance(s)256A to generate output. The output can include, for example, a local pose instance292A of a local pose of vehicle100, estimated velocities of vehicle100, estimated accelerations of vehicle100, or any combination thereof. Local pose module292can then transmit the local pose instance292A to other module(s) (e.g., global pose module252, landmark module254, online calibration module256, or any combination thereof) or subsystem(s) (e.g., planning subsystem154, control subsystem158, controlled stop subsystem194, or any combination thereof) over one or more networks via network interfaces198. It should also be noted that a frequency at which local pose instances are generated can be based on the frequency at which instances of the second sensor data are generated. In some implementations, landmark module254can process an instance of LIDAR data136A generated by the LIDAR sensor136of vehicle100to generate a LIDAR-based predicted location254A of a landmark in a surrounding environment of vehicle100. The instance of the LIDAR data136A processed by landmark module254can include one or more features that are indicative of the landmark. Put another way, the LIDAR-based predicted location254A of the landmark can be generated directly based on the instance of the LIDAR data136A. For example, the landmark may include a retroreflective surface, and the one or more features included in the instance of the LIDAR data136A that are indicative of the landmark may be one or more saturated regions caused by the retroreflective surface of the landmark. Landmark module254can identify a location that corresponds to the one or more saturated regions in one or more assembled point clouds from the LIDAR data136A, and can utilize the identified location as the LIDAR-based predicted location254A of the landmark. Identifying the location that corresponds to the one or more saturated regions is described in greater detail below (e.g., with respect toFIG.4). The LIDAR-based predicted location254A can be transmitted to online calibration module256. In some additional or alternative implementations, landmark module254can process a global pose instance to generate the LIDAR-based predicted location254A of the landmark in the surrounding environment of vehicle100. As described above with respect toFIG.2A, global pose module252can also assemble one or more point clouds from the LIDAR data136A, and landmark module254can generate the LIDAR-based predicted location254A from the global pose instance252A generated based on the instance of the LIDAR data136A in a similar manner as described above with respect to generating the LIDAR-based predicted location254A directly based on the instance of the LIDAR data136A. The LIDAR-based predicted location254A can be transmitted to online calibration module256. Moreover, landmark module254can process the local pose instance292A generated by local pose module292and a stored mapping of an environment of vehicle100(e.g., in stored mapping(s) database160A) to generate a pose-based predicted location254B of the landmark in the surrounding environment of vehicle100.
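As a minimal stand-in for the filter-based state estimation described above for local pose module292, the following one-dimensional Kalman filter illustrates the predict/update structure; the noise values and scalar state are simplifying assumptions (a real local pose filter would track a multidimensional pose).

```python
# Illustrative sketch only: a 1D Kalman filter with the predict/update
# cycle that filter-based state estimation builds on. Process noise q and
# measurement noise r are assumed values.
class SimpleKalman:
    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.1):
        self.x, self.p = x0, p0   # state estimate and its covariance
        self.q, self.r = q, r     # process and measurement noise

    def predict(self, control_delta):
        # Propagate with IMU/wheel-encoder-derived motion.
        self.x += control_delta
        self.p += self.q

    def update(self, measurement):
        # Fold in a measurement (e.g., a correction-adjusted observation).
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (measurement - self.x)
        self.p *= (1.0 - k)
```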
The local pose instance292A can provide landmark module254with information that indicates a tile in which vehicle100is located. Landmark module254can identify the stored mapping of the environment of vehicle100based on the information that indicates the tile in which vehicle100is located. Moreover, landmark module254can utilize the local pose instance292A to determine the orientation information and location information of vehicle100within a given tile (e.g., the environment of vehicle100), and can identify a stored location of the landmark from the stored mapping based on the local pose instance292A. Further, landmark module254can identify a location of the landmark based on the stored location of the landmark with respect to the local pose instance292A of vehicle100, and the identified location can be utilized as the pose-based predicted location254B of the landmark. The stored location of the landmark can be a saturated region from one or more previously stored point clouds of the tile in which vehicle100is located as indicated by the local pose instance292A. The pose-based predicted location254B can be transmitted to online calibration module256. Global pose module252can process the instance of the LIDAR data136A generated by the LIDAR sensor136of vehicle100, the local pose instance292A generated by local pose module292, or both, to generate a global pose instance252A. The LIDAR data136A generated by the LIDAR sensor136can be generated at a slower rate than the IMU data182A and the wheel encoder data184A. The global pose instance252A can identify a matching tile in which vehicle100is located, and orientation information and location information of vehicle100within the matching tile. In some implementations, global pose module252generates the global pose instance252A by aligning a point cloud generated based on the LIDAR data136A with one or more previously stored point clouds of a given tile (e.g., stored in stored mapping(s) database160A). In some versions of those implementations, global pose module252can align the point cloud and one or more of the previously stored point clouds using various geometric matching techniques (e.g., iterative closest point (“ICP”) or other geometry matching algorithms). The one or more previously stored point clouds can be stored in association with a given tile, and can be accessed over one or more networks (e.g., using mapping subsystem160). In some further versions of those implementations, the one or more previously stored point clouds can be identified based on a most recently generated local pose instance (e.g., local pose instance292A), based on the second sensor data (e.g., IMU data182A and wheel encoder data184A), or based on both. The one or more previously stored point clouds can be stored in association with the given tile associated with the most recently generated local pose instance (e.g., local pose instance292A), a location of vehicle100determined based on the second sensor data (e.g., IMU data182A and wheel encoder data184A), or both. The global pose instance252A can be transmitted to online calibration module256. In some implementations, online calibration module256can process the LIDAR-based predicted location254A and the pose-based predicted location254B to generate the correction instance(s)256A. Online calibration module256can compare the LIDAR-based predicted location254A and the pose-based predicted location254B to generate the correction instance.
For example, assume the LIDAR-based predicted location254A is located at X1, Y1, and Z1 within a given tile, and further assume the pose-based predicted location254B is located at X2, Y2, and Z2 within the given tile. In this example, the correction instance(s)256A can be generated based on comparing these coordinates. Comparing the LIDAR-based predicted location254A and the pose-based predicted location254B to generate the correction instance(s)256A is described in greater detail herein (e.g., with respect toFIG.2A). In some additional or alternative implementations, online calibration module256can additionally or alternatively process historical predicted locations of the landmark, including the LIDAR-based predicted location254A and the pose-based predicted location254B to generate the correction instance256A. The historical predicted locations (both LIDAR-based and pose-based) may be limited to those that are generated within a threshold duration of time with respect to a current time (e.g., within the last 100 seconds, 200 seconds, or other durations of time) and may be limited to temporally corresponding predicted locations of the same landmark, such that online calibration module256only considers a sliding window of the historical predicted locations for a given landmark. For example, online calibration module256can generate the correction instance(s) further as a function of comparing a historical pose-based predicted location of the landmark and a historical LIDAR-based predicted location of the landmark. In this example, the historical pose-based predicted location of the landmark can be previously generated based on a previous local pose instance that defined a previous location of vehicle100and the stored location of the landmark, and the historical LIDAR-based predicted location of the landmark can be previously generated based on a previous instance of the LIDAR data that includes one or more of the features that are indicative of the landmark. This enables online calibration module256to generate drift rate(s) based on comparing the temporally corresponding historical predicted locations of the landmark. Online calibration module256can transmit the correction instance(s)256A to local pose module292over one or more networks via network interfaces198, and multiple additional local pose instances can be generated using the correction instance(s)256A. Thus, local pose instances generated by local pose module292can be generated based on the correction instance256A as well as additional instances of the IMU data182A and additional instances of the wheel encoder data184A. In some additional or alternative implementations, online calibration module256can additionally or alternatively process an instance of the IMU data182A, an instance of the wheel encoder data184A, and the global pose instance252A to generate the correction instance(s)256A. In some implementations, online calibration module256can process, using a state estimation model that is filter-based (e.g., Kalman filter, extended Kalman filter, dual Kalman filter, or other filter-based techniques) or observer-based (e.g., recursive least squares or other observer-based techniques), the instance of the IMU data182A, the instance of the wheel encoder data184A, and the global pose instance252A to generate output.
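The sliding window over historical predicted locations described above can be sketched as follows, with the window length, the scalar error metric, and the class layout assumed for illustration.

```python
# Illustrative sketch only: a time-bounded window of temporally
# corresponding historical predicted locations of one landmark, from
# which a drift rate (error growth per second) can be estimated.
from collections import deque
import math

class HistoryWindow:
    def __init__(self, max_age_seconds=100.0):
        self.max_age = max_age_seconds
        self.entries = deque()  # (timestamp, lidar_pred, pose_pred)

    def add(self, timestamp, lidar_pred, pose_pred):
        self.entries.append((timestamp, lidar_pred, pose_pred))
        while self.entries and timestamp - self.entries[0][0] > self.max_age:
            self.entries.popleft()  # drop predictions outside the window

    def drift_rate(self):
        if len(self.entries) < 2:
            return 0.0
        t0, first_l, first_p = self.entries[0]
        t1, last_l, last_p = self.entries[-1]
        if t1 <= t0:
            return 0.0
        e0 = math.dist(first_l, first_p)
        e1 = math.dist(last_l, last_p)
        return (e1 - e0) / (t1 - t0)  # error growth per second
```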
In some versions of those implementations, the output can include, for example, estimates of wheel radii of vehicle100, sensor biases of individual sensors of vehicle100(e.g., sensor(s) included in primary sensor system130or secondary sensor system180), or both. The correction instance(s)256A can then be generated based on the estimates of the wheel radii of vehicle100, the sensor biases of individual sensors of vehicle100, or both. In other versions of those implementations, the output generated across the state estimation model can be the correction instance(s)256A, such that the state estimation model acts like a black box. Turning now toFIG.4, an example mapped environment that includes landmarks and that is being navigated by vehicle100is depicted. Vehicle100includes the LIDAR sensor136affixed to a top side of vehicle100. Although vehicle100is depicted as including only a single LIDAR sensor affixed to the top side of vehicle100, it should be understood that this is for the sake of example and is not meant to be limiting. For example, the LIDAR sensor136can be affixed to other locations on vehicle100, such as a hood of vehicle100, a side of vehicle100, a rear of vehicle100, or any other location on vehicle100. Moreover, vehicle100can include multiple LIDAR sensors affixed to one or more of the aforementioned locations on vehicle100. The LIDAR sensor136can, during a sensing cycle of the LIDAR sensor136, generate an instance of LIDAR data. The instance of the LIDAR data generated during a given sensing cycle of the LIDAR sensor136can include a plurality of detected data points in the environment of vehicle100. For example, the LIDAR sensor136can scan a certain area during a particular sensing cycle to detect an object or an environment in the area. For instance, an instance of the LIDAR data generated during a given sensing cycle of the LIDAR sensor136can include a first LIDAR data point L1, a second LIDAR data point L2, a third LIDAR data point L3, a fourth LIDAR data point L4, a fifth LIDAR data point L5, a sixth LIDAR data point L6, and optionally additional LIDAR data points. As described with respect toFIG.2A, these LIDAR data points can be assembled into one or more point clouds. In some implementations, LIDAR sensor136can include a phase coherent LIDAR component. In some versions of those implementations, the instances of the first sensor data130A can include LIDAR data from a sensing cycle of LIDAR sensor136. The LIDAR data from the sensing cycle of LIDAR sensor136can include, for example, a transmitted encoded waveform that is sequentially directed to, and sequentially reflects off of, each of a plurality of points in an environment of vehicle100—and reflected portions of the encoded waveform are each detected, in a corresponding sensing event of the sensing cycle, by the at least one receiver of the phase coherent LIDAR component as data points. During a sensing cycle, the waveform is directed to a plurality of points in an area of the environment of vehicle100, and corresponding reflections detected, without the waveform being redirected to those points in the sensing cycle. Accordingly, the range and velocity for a point that is indicated by the LIDAR data of a sensing cycle can be instantaneous in that it is based on a single sensing event without reference to a prior or subsequent sensing event.
In some versions of those implementations, multiple (e.g., all) sensing cycles can have the same duration, the same field-of-view, or the same pattern of waveform distribution (through directing of the waveform during the sensing cycle). For example, multiple sensing cycles that include a sweep can have the same duration, the same field-of-view, and the same pattern of waveform distribution. However, in many other implementations the duration, field-of-view, or waveform distribution pattern can vary amongst one or more sensing cycles. For example, a first sensing cycle can be of a first duration, have a first field-of-view, and a first waveform distribution pattern; and a second sensing cycle can be of a second duration that is shorter than the first, have a second field-of-view that is a subset of the first field-of-view, and have a second waveform distribution pattern that is denser than the first. As described with respect toFIGS.2A,2B, and3, landmarks, and locations thereof, can be stored in a mapping of the environment of vehicle100, such as the environment depicted inFIG.4. The landmarks can include any object or surface in a previously mapped environment that can be reliably detected by the LIDAR sensor136, including, for example, a curb, a road retroreflector, a pavement marker, a lane line, an entry point of an intersection, a lane divider, a roadway sign, a traffic light, a sign post, a building, or any other object or surface that can be reliably detected by the LIDAR sensor136. Further, the landmarks can be stored in association with one or more features, and the one or more features can include, for example, saturated region(s) caused by a retroreflective surface that are indicative of a corresponding landmark when the environment of vehicle100was previously mapped and stored (e.g., by vehicle100, by another autonomous vehicle, or by other means). Accordingly, when vehicle100subsequently navigates through the environment corresponding to the stored mapping depicted inFIG.4, landmarks can be identified from instances of LIDAR data generated by the LIDAR sensor136based on the one or more features that are indicative of the landmark. The landmarks depicted inFIG.4include road retroreflectors402A-402F affixed to a surface of a road402and positioned between a first lane line401A and a second lane line401B, an entry point of an intersection403, and a stop sign404. Notably, the landmarks402A-402F,403, and404depicted inFIG.4include retroreflective surfaces as indicated by the hatched markings on the landmarks402A-402F,403, and404. Further, for the sensing cycle of the LIDAR sensor136depicted inFIG.4, the fourth LIDAR data point L4 detects a first road retroreflector402A of the road retroreflectors402A-402F, the fifth LIDAR point L5 detects the entry point of the intersection403, and the sixth LIDAR data point L6 detects the stop sign404. Notably, the first LIDAR data point L1, the second LIDAR data point L2, and the third LIDAR data point L3 detect another vehicle410(autonomous or otherwise) travelling in an opposite direction of vehicle100. However, the another vehicle410is not utilized as a landmark since the another vehicle410cannot be reliably detected by the LIDAR sensor136in the environment of vehicle100depicted inFIG.4(i.e., the another vehicle410is not always present in the environment of vehicle100).
Nonetheless, the first LIDAR data point L1, the second LIDAR data point L2, and the third LIDAR data point L3 can be utilized in assembling the one or more point clouds of the environment of vehicle100based on the LIDAR data generated by the LIDAR sensor136. Since the landmarks402A-402F,403, and404depicted inFIG.4include the retroreflective surfaces, saturated regions are detected at the fourth LIDAR data point L4, the fifth LIDAR point L5, and the sixth LIDAR data point L6, respectively. These saturated regions are indicative of a corresponding one of the landmarks402A-402F,403, and404. More particularly, a first saturated region can be identified at the first road retroreflector402A, a second saturated region can be identified at the entry point of the intersection403, and a third saturated region can be identified at the stop sign404. Further, locations of the environment corresponding to these saturated regions can be utilized as LIDAR-based predicted locations of the landmarks402A-402F,403, and404. The LIDAR-based predicted locations of one or more of the landmarks402A-402F,403, and404can be compared to pose-based predicted locations of a corresponding one of the landmarks402A-402F,403, and404. The pose-based predicted locations of the landmarks402A-402F,403, and404can be generated based on a local pose instance of vehicle100that temporally corresponds to the instance of LIDAR data generated by the LIDAR sensor136depicted inFIG.4, and based on a stored mapping of the landmarks402A-402F,403, and404as described in greater detail above (e.g., with respect toFIGS.2A,2B, and3). For example, a first pose-based predicted location of the first road retroreflector402A can be generated based on the local pose instance and the stored mapping of the first road retroreflector402A. Further, the first pose-based predicted location of the first road retroreflector402A can be compared to the LIDAR-based predicted location of the first road retroreflector402A. This can optionally be repeated for the other landmarks403and404depicted inFIG.4. By comparing the LIDAR-based predicted location and pose-based predicted location of the landmarks402A-402F,403, and404, a difference therebetween can be determined, and an error in the predicted locations can be determined based on the difference. In some implementations, the difference can be utilized to generate correction instances for generating additional local pose instances as described in greater detail above (e.g., with respect toFIGS.2A,2B, and3). For example, an offset can be determined based on the difference, and the correction instances utilized in generating additional local pose instances can include the offset. In additional or alternative implementations, the difference can be compared to an error threshold in localization of vehicle100. In some versions of those implementations, if the difference is greater than the error threshold, then vehicle100can perform a controlled stop using the local pose instances that are optionally generated using a correction instance generated based on the error. In some further versions of those implementations, if the difference is less than the error threshold, then vehicle100can continue normal operation of vehicle100. AlthoughFIG.4is depicted as including multiple landmarks, it should be understood that this is not meant to be limiting and that the techniques described herein can be utilized with a single landmark.
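Identifying saturated regions by intensity, and the threshold-based choice between normal operation and a controlled stop, might be sketched as follows; the normalized intensity scale, saturation level, and threshold value are assumptions.

```python
# Illustrative sketch only: pick out LIDAR returns whose intensity is at
# the saturation level (candidate retroreflective landmarks), and choose
# an action from the prediction difference.
import numpy as np

def saturated_regions(points, intensities, saturation_level=0.98):
    """points: (N, 3) array; intensities: (N,) array normalized to [0, 1]."""
    mask = intensities >= saturation_level
    return points[mask]           # candidate LIDAR-based landmark returns

def localization_action(difference_norm, error_threshold=1.0):
    if difference_norm > error_threshold:
        return "controlled_stop"  # using correction-adjusted local poses
    return "continue_normal_operation"
```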
Moreover, althoughFIG.4is described herein with respect to particular landmarks that have retroreflective surfaces, it should be understood that this is also not meant to be limiting and that the techniques described herein can be utilized with any object or surface that can be reliably detected using the LIDAR sensor136. Turning now toFIG.5, an example method500for online localization of an autonomous vehicle is illustrated. The method500may be performed by an autonomous vehicle analyzing sensor data generated by sensor(s) of the autonomous vehicle (e.g., vehicle100ofFIG.1or vehicle400ofFIG.4), by another vehicle (autonomous or otherwise), by another computer system that is separate from the autonomous vehicle, or various combinations thereof. For the sake of simplicity, operations of the method500are described herein as being performed by a system (e.g., processor(s)122of primary vehicle control system120, processor(s)172of secondary vehicle control system170, or any combination thereof). It will be appreciated that the operations of the method500may be varied, and that various operations may be performed in parallel or iteratively in some implementations, so the method500illustrated inFIG.5is merely provided for illustrative purposes. At block552, the system receives an instance of first sensor data generated by first sensors of an autonomous vehicle (“AV”). The first sensors can include, for example, at least a LIDAR sensor of the AV, and the first sensor data can include LIDAR data generated by the LIDAR sensor of the AV. At block554, the system generates a first predicted location of a landmark in an environment of the AV based on the instance of the first sensor data. Put another way, the first predicted location of the landmark can be a LIDAR-based predicted location of the landmark. At block556, the system determines a second predicted location of the landmark in the environment of the AV based on a pose instance of a pose of the AV, and a stored location, of the landmark, in a stored mapping of the environment. Put another way, the second predicted location of the landmark can be a pose-based predicted location of the landmark that is based on a local pose instance of a local pose of the AV or a global pose instance of a global pose of the AV. Generating the first predicted location of the landmark (e.g., LIDAR-based predicted location) and the second predicted location of the landmark (e.g., pose-based predicted location) is described in greater detail herein (e.g., with respect toFIGS.2A,3, and4). At block558, the system compares the first predicted location of the landmark to the second predicted location of the landmark. The system can determine a difference between the first predicted location of the landmark and the second predicted location of the landmark based on the comparing. At block560, the system generates a correction instance based on comparing the first predicted location of the landmark to the second predicted location of the landmark. The generated correction instance can be, for example, an offset based on the difference between the first predicted location of the landmark and the second predicted location of the landmark. Comparing the first predicted location of the landmark to the second predicted location of the landmark, and generating the correction instance is described in greater detail herein (e.g., with respect toFIGS.2A-4). At block562, the system transmits, by a primary control system of the AV, the correction instance to a secondary control system of the AV.
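Blocks552-562can be pictured end to end with the toy sketch below; every helper behavior here (the planar pose math, the plain-offset correction, and the transmit callback standing in for the inter-system network interface) is an assumption for illustration, not the disclosed implementation.

```python
# Illustrative sketch only: the primary-control-system side of blocks
# 552-562 in one pass, with a 2D simplification.
import math

def primary_localization_step(lidar_pred, local_pose, stored_landmark, transmit):
    # Block 556: pose-based predicted location of the landmark.
    x, y, yaw = local_pose
    dx, dy = stored_landmark[0] - x, stored_landmark[1] - y
    pose_pred = (math.cos(-yaw) * dx - math.sin(-yaw) * dy,
                 math.sin(-yaw) * dx + math.cos(-yaw) * dy)
    # Block 558: compare the LIDAR-based and pose-based predictions.
    difference = (lidar_pred[0] - pose_pred[0], lidar_pred[1] - pose_pred[1])
    # Block 560: generate a correction instance (here, a plain offset).
    correction_instance = difference
    # Block 562: transmit to the secondary vehicle control system.
    transmit(correction_instance)

primary_localization_step((9.8, 0.1), (10.0, 5.0, math.pi / 2),
                          (10.0, 15.0), transmit=print)
```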
As shown inFIG.5, the operations of blocks552-562are performed by a primary vehicle control system of the AV, and the operations of blocks564-566are performed by a secondary vehicle control system of the AV. Although certain operations ofFIG.5are depicted as being performed by the primary vehicle control system of the AV and the secondary vehicle control system of the AV, it should be understood that this is for the sake of example and is not meant to be limiting. For example, the operations ofFIG.5can be performed by any one of, or any combination of, the primary vehicle control system of the AV, the secondary vehicle control system of the AV, or a remote computing system. At block564, the system receives, by the secondary control system of the AV, the correction instance. At block566, the system generates an additional pose instance of the AV based on the correction instance and an instance of second sensor data generated by second sensor(s) of the AV. The second sensor data can include, for example, IMU data generated by IMU(s) of the AV, wheel encoder data generated by wheel encoders of the AV, other non-vision-based sensor data generated by the AV, or any combination thereof. In some implementations, the instance of the second sensor data utilized at block566is temporally distinct from the second sensor data utilized to generate the pose instance utilized at block556. Further, the instance of the second sensor data utilized at block556may temporally correspond to the instance of the first sensor data received at block552. The additional pose instance can then be transmitted back to the primary vehicle control system, and can be utilized in generating a further second predicted location of the landmark. At block568, the system causes the AV to be controlled based on the additional pose instance. For example, the additional pose instance can be transmitted to a planning or control subsystem of the AV. Turning now toFIG.6, another example method600for online localization of an autonomous vehicle is illustrated. The method600may be performed by an autonomous vehicle analyzing sensor data generated by sensor(s) of the autonomous vehicle (e.g., vehicle100ofFIG.1or vehicle400ofFIG.4), by another vehicle (autonomous or otherwise), by another computer system that is separate from the autonomous vehicle, or various combinations thereof. For the sake of simplicity, operations of the method600are described herein as being performed by a system (e.g., processor(s)122of primary vehicle control system120, processor(s)172of secondary vehicle control system170, or any combination thereof). It will be appreciated that the operations of the method600may be varied, and that various operations may be performed in parallel or iteratively in some implementations, so the method600illustrated inFIG.6is merely provided for illustrative purposes. At block652, the system generates a pose instance of an autonomous vehicle (“AV”) that defines a location of the AV within a mapping of an environment. The pose instance of the AV can define orientation information and location information of the AV within a given tile. In some implementations, the pose instance of the AV can be generated based on IMU data generated by IMU(s) of the AV, wheel encoder data generated by wheel encoder(s) of the AV, other non-vision-based sensor data generated by the AV, or any combination thereof.
In additional or alternative implementations, the pose instance of the AV can be generated based on LIDAR data generated by a LIDAR sensor of the AV, other vision-based sensor data generated by the AV (e.g., image data generated by vision components of the AV, RADAR data generated by a RADAR sensor of the AV), or any combination thereof. Generating the pose instance in an online manner is described in greater detail herein (e.g., with respect toFIGS.2A and3). At block654, the system determines a stored location of a landmark within the mapping of the environment. The stored location of the landmark within the mapping of the environment can include a stored point cloud that includes features of the landmark (e.g., one or more saturated regions), stored coordinates of the landmark within the mapping, other representations of the landmark, or any combination thereof. At block656, the system generates a pose-based predicted location of the landmark relative to the AV based on the pose instance of the AV and the stored location of the landmark. In some implementations, the pose-based predicted location can be generated based on a local pose instance of a local pose of the AV, whereas in other implementations, the pose-based predicted location can be generated based on a global pose instance of a global pose of the AV. Generating the pose-based predicted location in an online manner is described in greater detail herein (e.g., with respect toFIGS.2A and3). At block658, the system identifies, in an instance of LIDAR data generated by a LIDAR sensor of the AV and that temporally corresponds to the local pose instance, feature(s) that are indicative of the landmark. The feature(s) that are indicative of the landmark can include, for example, one or more saturated regions that correspond to the landmark. Identifying the feature(s) that are indicative of the landmark is described in greater detail herein (e.g., with respect toFIG.4). At block660, the system generates a LIDAR-based predicted location of the landmark relative to the AV based on the instance of the LIDAR data. Generating the LIDAR-based predicted location of the landmark relative to the AV based on the instance of the LIDAR data in an online manner is described in greater detail herein (e.g., with respect toFIGS.2A,3, and4). At block662, the system compares the pose-based predicted location and the LIDAR-based predicted location to determine a difference in the predicted locations. The system can compare the pose-based predicted location and the LIDAR-based predicted location to determine a difference between the pose-based predicted location and the LIDAR-based predicted location. At block664, the system generates a correction instance based on comparing the pose-based predicted location and the LIDAR-based predicted location. The system can determine an offset based on the difference between the pose-based predicted location and the LIDAR-based predicted location, and can generate the correction instance based on the determined offset. Generating the correction instance based on comparing the pose-based predicted location and the LIDAR-based predicted location in an online manner is described in greater detail herein (e.g., with respect toFIGS.2A and3). At block666, the system uses the correction instance in generating additional local pose instance(s). Thus, the additional local pose instance(s) are generated based on the determined offset.
Notably, multiple additional pose instances can be generated based on the generated correction instance until a further correction instance is generated. The correction instance and the further correction instance can be combined, and further local pose instances can be generated based on the combined correction instance. At block668, the system determines whether the difference in the predicted locations of the landmark satisfies a threshold. If, at an iteration of block668, the system determines that the difference in the predicted locations of the landmark satisfies the threshold, then the system may proceed to block670. At block670, the system causes the AV to perform a controlled stop based on the additional pose instance(s). Thus, the system can cause the AV to perform the controlled stop in response to determining that an error in localization of the AV exceeds the threshold. If, at an iteration of block668, the system determines that the difference in the predicted locations of the landmark does not satisfy the threshold, then the system may proceed to block672. At block672, the system causes the AV to be controlled based on the additional pose instance(s). Further, the system may return to block656to generate additional pose-based predicted location(s) of the landmark relative to the AV. Thus, the system can cause the AV to continue normal operation in response to determining that there is no error in the localization of the AV or that any determined error in the localization fails to satisfy the threshold. Although the operations of the method600ofFIG.6are depicted as occurring in a particular order, it should be understood that this is for the sake of example and is not meant to be limiting. For instance, the system may determine that the difference in the predicted locations of the landmark satisfies a threshold prior to generating any additional pose instances, and can perform the controlled stop based on a most recently generated local pose instance. As a result, the system can conserve computational resources in response to detecting an adverse event at the AV. Turning now toFIG.7, an example method700for offline validation of localization of a vehicle is illustrated. The method700may be performed by one or more computer systems that are separate from the autonomous vehicle. For the sake of simplicity, operations of the method700are described herein as being performed by a system (e.g., processor(s) and memory). It will be appreciated that the operations of the method700may be varied, and that various operations may be performed in parallel or iteratively in some implementations, so the method700illustrated inFIG.7is merely provided for illustrative purposes. At block752, the system obtains driving data from a past episode of locomotion of a vehicle. The driving data can be generated by the vehicle during the past episode of locomotion of the vehicle. Further, the driving data can include sensor data generated by sensors of the vehicle during the past episode of locomotion. In some implementations, the driving data can be manual driving data that is captured while a human is driving a vehicle (e.g., an AV or non-AV retrofitted with sufficient sensors (e.g., primary sensor130ofFIG.1)) in a real world and in a conventional mode, where the conventional mode represents the vehicle under active physical control of a human operating the vehicle.
In other implementations, the driving data can be autonomous driving data that is captured while an AV is driving in a real world and in an autonomous mode, where the autonomous mode represents the AV being autonomously controlled. In yet other implementations, the driving data can be simulated driving data captured while a virtual human is driving a virtual vehicle in a simulated world. At block754, the system identifies, from the driving data, an instance of sensor data generated by sensor(s) of the vehicle. In some implementations, the sensor(s) of the vehicle include LIDAR sensor(s), and the instance of the sensor data generated by the LIDAR sensor(s) includes an instance of LIDAR data generated by the LIDAR sensor(s). In some versions of those implementations, a global pose instance of a global pose of the vehicle can be generated based on the instance of the LIDAR data. In some additional or alternative implementations, the sensor(s) of the vehicle include IMU sensor(s), wheel encoder(s), other non-vision-based sensors, or any combination thereof, and the instance of the sensor data generated by these sensor(s) includes various combinations of instances of IMU data, wheel encoder data, or other non-vision data. In some versions of those implementations, a local pose instance of a local pose of the vehicle can be generated based on the various combinations of the instances of IMU data, wheel encoder data, or other non-vision data. At block756, the system generates, based on the instance of the sensor data, a pose-based predicted location of a landmark in an environment of the vehicle. In implementations where the instance of the sensor data includes LIDAR data, the pose-based predicted location of the landmark can be a global pose-based predicted location of the landmark in the sense that the global pose-based predicted location of the landmark can be generated based on the instance of the LIDAR data that can also be utilized in generating a global pose instance of a global pose of the vehicle. This is also referred to herein as a LIDAR-based predicted location of the landmark. The instance of the LIDAR data can capture features of the landmark. For example, assume the landmark has a retroreflective surface. In this example, the retroreflective surface can cause the instance of the LIDAR data to include one or more saturated regions that correspond to the retroreflective surface of the landmark. The system can utilize location(s) corresponding to the one or more saturated regions of the landmark as the pose-based predicted location of the landmark. For instance, the one or more saturated regions can optionally be identified as part of matching the instance of the LIDAR data (or a point cloud corresponding thereto) to a previously stored mapping of the environment of the vehicle during the past episode of locomotion to generate the global pose instance. In implementations where the instance of the sensor data includes IMU data, wheel encoder data, or other non-vision-based data, the pose-based predicted location of the landmark can be a local pose-based predicted location of the landmark that is generated based on the instance of the IMU data and the wheel encoder data and a previously stored mapping of the environment of the vehicle that includes the landmark. At block758, the system identifies, from a stored mapping of the environment of the vehicle, a stored location of the landmark in the environment of the vehicle.
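As a brief illustrative aside on the saturated-region features described above, the following sketch shows one possible way to estimate a landmark location from near-saturated LIDAR returns caused by a retroreflective surface. The saturation intensity level, the point and intensity representations, and the centroid heuristic are all assumptions for illustration, not details taken from the disclosure.

```python
import numpy as np

SATURATION_INTENSITY = 250  # assumed sensor-specific saturation level

def saturated_region_centroid(points_xyz, intensities):
    """Estimate a landmark location from saturated returns: keep points whose
    intensity is at or above the saturation level and take their centroid."""
    mask = np.asarray(intensities) >= SATURATION_INTENSITY
    if not mask.any():
        return None  # no saturated region -> no landmark-based prediction this cycle
    return np.asarray(points_xyz)[mask].mean(axis=0)

points = np.array([[10.0, 2.0, 0.5], [10.1, 2.1, 0.6], [30.0, -5.0, 0.2]])
intensities = np.array([255, 252, 40])
print(saturated_region_centroid(points, intensities))  # approximately [10.05, 2.05, 0.55]
```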
The stored location of the landmark can also be utilized in generating the pose-based predicted location of the landmark (e.g., as indicated above with respect to the local pose-based instance at block756). The pose-based predicted location of the landmark and the stored location of the landmark can be defined by a coordinate system of a particular frame of reference of the vehicle. For example, the pose-based predicted location of the landmark and the stored location of the landmark can be defined by coordinates with respect to a tile in which the vehicle is located or relative to the vehicle. At block760, the system compares the pose-based predicted location of the landmark to the stored location of the landmark, and at block762, the system determines, based on the comparing, an error between the pose-based predicted location of the landmark and the stored location of the landmark. For example, in implementations where the pose-based predicted location is based on the instance of the LIDAR data, the system can compare a point cloud corresponding to the pose-based predicted location of the landmark to an additional point cloud corresponding to the stored location of the landmark. The error can be determined based on a difference between the point clouds. As another example, in implementations where the pose-based predicted location is based on the instance of the IMU data and the wheel encoder data, the system can compare coordinates corresponding to the pose-based predicted location to coordinates corresponding to the stored location of the landmark. At block764, the system determines whether the error determined at block762satisfies an error threshold. If, at an iteration of block764, the system determines that the error does not satisfy the error threshold, then the system may return to block754. The error failing to satisfy the error threshold may indicate that a pose instance generated during the past episode and based on the sensor data instance is accurate. Accordingly, at a subsequent iteration of block754, the system can identify an additional instance of the sensor data generated by the sensor(s) of the vehicle during the past episode of locomotion of the vehicle (or from a different past episode of locomotion of the vehicle or another vehicle). If, at an iteration of block764, the system determines that the error satisfies the error threshold, then the system may proceed to block766. At block766, the system adjusts parameter(s) of the sensor(s) of the vehicle. The parameter(s) of the LIDAR sensor(s) can include, for example, a point density of LIDAR points, a scan pattern of the LIDAR sensor(s), a field-of-view of the LIDAR sensor(s), a duration of a sensing cycle of the LIDAR sensor(s), one or more biases of the LIDAR sensor(s), other LIDAR parameters, or any combination thereof. The parameter(s) of the wheel encoder(s) can include, for example, an encoding type, a number of pulses per inch (or other distance), a number of pulses per shaft revolution, one or more biases of the wheel encoder(s), other wheel encoder parameters, or any combination thereof. The parameter(s) of the IMU(s) can include, for example, gyroscopic parameters of the IMU(s), accelerometer parameters of the IMU(s), a sampling frequency of the IMU(s), one or more biases of the IMU(s), other IMU parameters, or any combination thereof. Further, the adjusted parameter(s) of the sensor(s) of the vehicle can be utilized in subsequent episodes of locomotion.
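As a concrete but hypothetical illustration of the point cloud comparison at blocks760-762, one simple stand-in error metric is the mean nearest-neighbor distance from the predicted landmark cloud to the stored landmark cloud, as sketched below. The metric choice, the coordinates, and the threshold value are assumptions for illustration only.

```python
import numpy as np

def point_cloud_error(predicted_cloud, stored_cloud):
    """Mean nearest-neighbor distance from the predicted landmark cloud to the
    stored landmark cloud; one possible stand-in for the block762error."""
    predicted = np.asarray(predicted_cloud)
    stored = np.asarray(stored_cloud)
    # Pairwise distances form an (N, M) matrix; take the closest stored point
    # for each predicted point, then average.
    dists = np.linalg.norm(predicted[:, None, :] - stored[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

predicted = [[10.05, 2.05, 0.55], [10.2, 2.0, 0.5]]
stored = [[10.0, 2.0, 0.5], [10.1, 2.1, 0.6]]
error = point_cloud_error(predicted, stored)
print(error, error >= 0.25)  # compared against an assumed 0.25 m error threshold
```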
In some implementations, block766may include optional sub-block766A. If included, at optional sub-block766A, the system may automatically adjust the parameter(s) of the sensor(s) of the vehicle based on the error. In some additional or alternative implementations, block766may include optional sub-block766B. If included, at optional sub-block766B, the system may adjust the parameter(s) of the sensor(s) of the vehicle based on user input. Adjusting the parameter(s) of the sensor(s) of the vehicle is described in detail herein (e.g., with respect toFIG.2B). These adjusted parameter(s) of the sensor(s) can be utilized by the vehicle (and optionally other vehicles) in subsequent episodes of locomotion. Notably, the method500ofFIG.5and the method600ofFIG.6are described herein with respect to online localization of an AV, whereas the method700ofFIG.7is described herein with respect to offline validation of localization of a vehicle (AV or otherwise). In other words, the method500ofFIG.5, the method600ofFIG.6, or both, may be performed by an AV during a given episode of locomotion, and the method700ofFIG.7can be performed by a computing system based on driving data generated during a past episode of locomotion. Other variations will be apparent to those of ordinary skill. Therefore, the invention lies in the claims hereinafter appended. | 104,549 |
11859995 | DESCRIPTION OF EXAMPLE EMBODIMENTS OF THE DISCLOSURE Overview The systems, methods, and devices of this disclosure each have several innovative aspects, no single one of which is solely responsible for all of the desirable attributes disclosed herein. Details of one or more implementations of the subject matter described in this specification are set forth in the description below and the accompanying drawings. As described herein, a PLP system may include a user application, or “app,” that enables a rideshare service user to preview the surroundings of a vehicle, such as an AV, as it approaches and/or arrives at a designated pickup location using the AV's onboard cameras, Light Detection and Ranging (LIDAR) system, Radio Detection and Ranging (RADAR) system, and/or other onboard sensor modalities. The user app may also enable the rideshare service user to preview a route from a current location of the user to the designated pickup location, again using the vehicle's onboard cameras, LIDAR system, RADAR system, and/or other onboard sensor modalities, as well as current and historical camera and sensor data from other AVs in a fleet. Using the preview functionality, the user is able to determine whether he or she feels comfortable proceeding to the vehicle at the designated pickup location or to initiate selection of an alternative pickup location. In accordance with features of embodiments described herein, real-time three-dimensional (3D) camera and sensor image data may be streamed from the vehicle to the user app on a user device, such as a mobile phone or tablet, and presented as a preview, which may include videos and/or still images. The preview presented on the user app may be manipulated by the user both spatially and temporally as desired to enable the user to virtually explore, in real-time, the surroundings of the pickup location. In certain embodiments, a UI overlay highlighting people and other objects of interest identified using 3D camera, RADAR, and LIDAR image data may be provided to assist the user in identifying people and objects in and around the pickup location. In certain embodiments, the PLP system includes an opt-in monitoring and notification feature that continuously monitors the vehicle's surroundings and notifies the user (via the user app) when the PLP system determines it is safe for the user to proceed toward the vehicle. In other embodiments, the PLP system includes a safer pickup location identification feature that automatically searches for and identifies locations meeting certain safety criteria, which may include default criteria or criteria identified by the user as lending to a feeling of safety. The PLP system may also include features that enable the user to extend the pickup time to provide the user additional time to assess the safety of the pickup location using the preview functionality and that enable the user to change the pickup location. In certain embodiments, the PLP system may process images collected by a fleet of AVs to identify recent and/or relevant video and still images of the pickup location and a route from the user's current location to the pickup location. Additionally, in certain embodiments, a UI of the user app of the PLP system may combine 3D live video stream and 3D images to enable users to transition seamlessly between viewing 360-degree video and 360-degree images as desired.
Embodiments of the present disclosure provide a designated pickup location preview method including obtaining an image of a portion of an environment of a vehicle dispatched to a designated pickup location in response to a service request from a user, wherein the obtaining is performed using at least one onboard sensor of the vehicle, and displaying the image of the environment portion on a UI of a user device substantially in real-time. Embodiments further include a pickup location preview method including obtaining an image of an environment of an AV dispatched to a designated pickup location in response to a service request from a user, wherein the obtaining is performed using at least one onboard sensor of the vehicle, determining that the designated pickup location is unsafe and that an alternative pickup location is safe based at least in part on the image, and notifying the user of the alternative pickup location. Embodiments further include a location preview system including a vehicle comprising at least one onboard sensor for generating a live image of an environment of the vehicle when the vehicle is dispatched to a designated pickup location in response to a service request by a user, and a preview control module for providing the generated live image to a device of the user, the generated live image being displayed on a UI of the user device, wherein the user can manipulate a view of the live image generated by the at least one onboard sensor using the UI. As will be appreciated by one skilled in the art, aspects of the present disclosure, in particular aspects of a PLP system for rideshare services described herein, may be embodied in various manners (e.g., as a method, a system, an AV, a computer program product, or a computer-readable storage medium). Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Functions described in this disclosure may be implemented as an algorithm executed by one or more hardware processing units, e.g., one or more microprocessors of one or more computers. In various embodiments, different steps and portions of the steps of each of the methods described herein may be performed by different processing units. Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable medium(s), preferably non-transitory, having computer-readable program code embodied, e.g., stored, thereon. In various embodiments, such a computer program may, for example, be downloaded (updated) to the existing devices and systems (e.g., to the existing perception system devices and/or their controllers, etc.) or be stored upon manufacturing of these devices and systems. The following detailed description presents various descriptions of certain specific embodiments. However, the innovations described herein can be embodied in a multitude of different ways, for example, as defined and covered by the claims and/or select examples. In the following description, reference is made to the drawings, in which like reference numerals can indicate identical or functionally similar elements. It will be understood that elements illustrated in the drawings are not necessarily drawn to scale.
Moreover, it will be understood that certain embodiments can include more elements than illustrated in a drawing and/or a subset of the elements illustrated in a drawing. Further, some embodiments can incorporate any suitable combination of features from two or more drawings. The following disclosure describes various illustrative embodiments and examples for implementing the features and functionality of the present disclosure. While particular components, arrangements, and/or features are described below in connection with various example embodiments, these are merely examples used to simplify the present disclosure and are not intended to be limiting. It will of course be appreciated that in the development of any actual embodiment, numerous implementation-specific decisions must be made to achieve the developer's specific goals, including compliance with system, business, and/or legal constraints, which may vary from one implementation to another. Moreover, it will be appreciated that, while such a development effort might be complex and time-consuming, it would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure. In the specification, reference may be made to the spatial relationships between various components and to the spatial orientation of various aspects of components as depicted in the attached drawings. However, as will be recognized by those skilled in the art after a complete reading of the present disclosure, the devices, components, members, apparatuses, etc. described herein may be positioned in any desired orientation. Thus, the use of terms such as “above”, “below”, “upper”, “lower”, “top”, “bottom”, or other similar terms to describe a spatial relationship between various components or to describe the spatial orientation of aspects of such components, should be understood to describe a relative relationship between the components or a spatial orientation of aspects of such components, respectively, as the components described herein may be oriented in any desired direction. When used to describe a range of dimensions or other characteristics (e.g., time, pressure, temperature, length, width, etc.) of elements, operations, and/or conditions, the phrase “between X and Y” represents a range that includes X and Y. The terms “substantially,” “close,” “approximately,” “near,” and “about,” generally refer to being within +/−20% of a target value (e.g., within +/−5 or 10% of a target value) based on the context of a particular value as described herein or as known in the art. As described herein, one aspect of the present technology is the gathering and use of data available from various sources to improve quality and experience. The present disclosure contemplates that in some instances, this gathered data may include personal information. The present disclosure contemplates that the entities involved with such personal information respect and value privacy policies and practices. Other features and advantages of the disclosure will be apparent from the following description and the claims. Example Environment for AV Rideshare Services FIG.1is a block diagram illustrating an environment100including an AV110that can be used to provide rideshare services, which may include delivery services as well as human passenger transportation services, to a user according to some embodiments of the present disclosure. In particular, the environment100may comprise a PLP system, as will be described in greater detail below.
The environment100includes an AV110, a fleet management system120, and a user device130. The AV110may include a sensor suite140and an onboard computer150. The fleet management system120may manage a fleet of AVs that are similar to AV110; one or more of the other AVs in the fleet may also include a sensor suite and onboard computer. The fleet management system120may receive service requests for the AVs110from user devices130. For example, a user135may make a request for rideshare service using an application, or “app,” executing on the user device130. The user device130may transmit the request directly to the fleet management system120. In the case of a delivery service, the user device130may also transmit the request to a separate service (e.g., a service provided by a grocery store or restaurant) that coordinates with the fleet management system120to deliver orders to users. The fleet management system120dispatches the AV110to carry out the service requests. When the AV110arrives at a pickup location (i.e., the location at which the user is to meet the AV to begin the rideshare service or to retrieve his or her delivery order), the user may be notified by the app to meet the AV. The AV110is preferably a fully autonomous automobile, but may additionally or alternatively be any semi-autonomous or fully autonomous vehicle; e.g., a boat, an unmanned aerial vehicle, a self-driving car, etc. Additionally, or alternatively, the AV110may be a vehicle that switches between a semi-autonomous state and a fully autonomous state and thus, the AV may have attributes of both a semi-autonomous vehicle and a fully autonomous vehicle depending on the state of the vehicle. The AV110may include a throttle interface that controls an engine throttle, motor speed (e.g., rotational speed of electric motor), or any other movement-enabling mechanism; a brake interface that controls brakes of the AV (or any other movement-retarding mechanism); and a steering interface that controls steering of the AV (e.g., by changing the angle of wheels of the AV). The AV110may additionally or alternatively include interfaces for control of any other vehicle functions, e.g., windshield wipers, headlights, turn indicators, air conditioning, etc. The AV110includes a sensor suite140, which may include a computer vision (“CV”) system, localization sensors, and driving sensors. For example, the sensor suite140may include photodetectors, cameras, RADAR, LIDAR, Sound Navigation and Ranging (SONAR), Global Positioning System (GPS), wheel speed sensors, inertial measurement units (IMUs), accelerometers, microphones, strain gauges, pressure monitors, barometers, thermometers, altimeters, etc. The sensors may be located in various positions in and around the AV110. For example, the sensor suite140may include multiple cameras mounted at different positions on the AV110, including within the main cabin for passengers and/or deliveries. A high definition (HD) video display145may be provided on an exterior of the AV110for displaying HD video images, for purposes that will be described hereinbelow. An onboard computer150may be connected to the sensor suite140and the HD video display145and functions to control the AV110and to process sensed data from the sensor suite140and/or other sensors in order to determine the state of the AV110. Based upon the vehicle state and programmed instructions, the onboard computer150modifies or controls behavior of the AV110.
In addition, the onboard computer150controls various aspects of the functionality of the HD video display145, including display of video thereon. The onboard computer150is preferably a general-purpose computer adapted for I/O communication with vehicle control systems and sensor suite140but may additionally or alternatively be any suitable computing device. The onboard computer150is preferably connected to the Internet via a wireless connection (e.g., via a cellular data connection). Additionally or alternatively, the onboard computer150may be coupled to any number of wireless or wired communication systems. Aspects of the onboard computer150are described in greater detail with reference toFIG.3. The fleet management system120manages the fleet of AVs, including AV110. The fleet management system120may manage one or more services that provide or use the AVs, e.g., a service for providing rides to users with the AVs, or a service that delivers items, such as prepared foods, groceries, or packages, using the AVs. The fleet management system120may select an AV from the fleet of AVs to perform a particular service or other task and instruct the selected AV to autonomously drive to a particular location (e.g., a designated pickup location) to pick up a user and/or drop off an order to a user. The fleet management system120may select a route for the AV110to follow. The fleet management system120may also manage fleet maintenance tasks, such as charging, servicing, and cleaning of the AV. As shown inFIG.1, the AV110communicates with the fleet management system120. The AV110and fleet management system120may connect over a public network, such as the Internet. The fleet management system120is described in greater detail with reference toFIG.4. Example UI for PLP System FIGS.2A-2Fillustrate various aspects of an example UI400of a user app for a PLP system, such as the PLP system of environment100(FIG.1), according to embodiments described herein. As shown inFIGS.2A-2F, the UI400may be displayed on a touch-enabled display device of a mobile device402, which in the illustrated embodiment includes a mobile phone. It will be recognized that the UI400may be used by a user to interact with the user app to initiate a rideshare request. As previously noted, the rideshare request may be a request for transportation of a passenger or a request for delivery of an item, such as a grocery or restaurant order. The rideshare request includes a designated pickup location, which as defined herein includes a location at which the passenger is to be picked up by an AV dispatched by the fleet management system or a location from which the user is to pick up the item being delivered from the AV dispatched by the fleet management system. As shown inFIG.2A, when the AV is within a certain distance of the designated pickup location (e.g., 0.25 miles), a PREVIEW button404is displayed on the UI400. It will be recognized that the distance from the designated pickup location that triggers display of the PREVIEW button404may be a default distance between the AV and the designated pickup location. Alternatively, the distance from the designated pickup location that triggers display of the PREVIEW button404may be configured as a preference in a user profile of the user in connection with the rideshare service in general and the PLP system in particular.
Still further, instead of being triggered by a distance from the designated pickup location, display of the PREVIEW button404may be triggered by an estimated arrival time falling below a default or user-configured threshold amount of time (e.g., 5 minutes to arrival). Referring toFIGS.2A and2B, selection of the PREVIEW button404by the user results in initiation of a preview functionality of the PLP system, which includes presentation of one or more videos and/or still images of the surroundings of the AV, represented inFIG.2Bby images410,412, on the UI400. In particular embodiments, the videos and/or images410,412, include an interactive, live (i.e., substantially real-time) 3D video and/or images of the surroundings of the vehicle. In certain embodiments, the images may be accompanied by live audio of the surroundings of the vehicle captured by one or more microphones included in the sensor suite. The particular view shown in the video and/or images may be changed and the surroundings navigated by the user by moving the user device402in 3D space or by using touchscreen functions, such as “swiping” or using arrow buttons, for example, or other functions. The preview displayed using the UI400may provide the user with a real-time video stream and/or still images of the vehicle's surroundings comprising the designated pickup location (and/or the route to the designated pickup location if the vehicle has not yet arrived). In certain embodiments, CV models, paired with RADAR and LIDAR data, may be used to identify and highlight moving objects and people in proximity to the vehicle using, for example, an overlay on the displayed images. Using the preview functionality, the user may determine whether he or she feels safe proceeding to the designated pickup location. Referring now toFIG.2C, when the AV arrives at the designated pickup location, a prompt420may be displayed on the UI400to query the user whether the designated pickup location is acceptable, for example, based on the user's assessment of the surroundings using the preview video(s) and/or image(s) (FIG.2B). The user may indicate his or her approval or disapproval of the pickup location by respectively selecting a YES button422or a NO button424. Referring now toFIG.2D, if the user indicates with his or her selection of the NO button424that the pickup location is not acceptable, the user may be presented with a number of alternatives corresponding to enhanced features, or options, from among which to select. Such enhanced features may include one or more of an Extend Pickup Time feature430, a Change Pickup Location feature432, a Monitor and Notify feature434, a Find Safer Pickup Location feature436, and a Phone a Friend feature438. Each of these features will be described in further detail below. The Extend Pickup Time feature430enables the user to extend the amount of time the vehicle will wait before departing the designated pickup location. This feature effectively allows the user to continuously monitor the surroundings at the pickup location via the preview functionality without time pressure and to elect to proceed to the pickup location when the user feels safe in doing so. Upon expiry of the first extension of time, the user may be prompted to select additional extensions of time (up to a limited or unlimited number of extensions) until he or she feels comfortable proceeding to the vehicle or until the requested rideshare service is ultimately canceled.
The Change Pickup Location432feature enables the user to designate an entirely new (i.e., safer) pickup location, such as one located on a more well-lit side of a building or in an area known to have more pedestrian traffic. The new pickup location may be selected using the preview functionality to observe and assess areas close to the currently designated pickup location to select what appears to be a safer pickup location. Other data may be provided by this feature, including annotated (or semantically labeled) map data and/or data from a fleet management system, for example, to enable the user to select a new pickup location. The Monitor and Notify feature434enables the user to request the PLP system to continue monitoring the surroundings at the designated pickup location and to notify the user via the UI400when the surroundings appear safer. This feature leverages input from various onboard-vehicle sensors to continuously monitor the vehicle's surroundings and identify when it is safe for the user to proceed to the vehicle, at which time the user will be provided with a notification via the UI400. The Find a Safer Pickup Location feature436enables the user to request the vehicle to search for a safer pickup location. When this option is selected, the vehicle may begin to drive around the area proximate the designated pickup location (e.g., around the block) searching for a safer pickup location (e.g., a location that is more well-lit and has higher pedestrian traffic). In one embodiment, when the vehicle arrives at a location determined to be safe, the vehicle stops and the system notifies the user of the updated pickup location, as well as a route to the location. The user may also be provided with a preview of the new location and may be queried as to whether the new pickup location is acceptable, as shown inFIGS.2A and2B, for example. Alternatively, instead of relying on the vehicle to identify a safer pickup location, the user may be provided with a continuous live video and/or still images from the vehicle's sensors showing the vehicle's surroundings as it traverses the area and may proactively notify the vehicle when it arrives at a location that the user deems safe. The Phone a Friend feature438enables the user to initiate a video conference with a friend via the UI400, which video conference is concurrently displayed on an external HD video display (e.g., HD video display145shown inFIG.1) of the vehicle. This feature can function to ward off potential bad actors from the area, while simultaneously offering reassurance to the user by providing a virtual witness in the form of a trusted third party in the area. In certain embodiments, the user may toggle among enhanced features430-438as desired until the user boards the vehicle. Additionally, the user may initiate the preview functionality at any time prior to boarding the vehicle. In certain embodiments, the safety of a location may be assessed by the vehicle/PLP system (e.g., in connection with the Change Pickup Location, Monitor and Notify, and Find a Safer Pickup Location features) using a combination of live and historical video, images, and data and with reference to one or more of a variety of safety criteria, including but not limited to crime statistics, lighting, pedestrian traffic, automobile traffic, etc., which criteria may be quantified, combined, and/or weighted in a variety of manners to develop a safety score, for example, which may be compared to safety scores of other locations.
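As a minimal sketch of how such a safety score might be computed, the following fragment combines normalized safety criteria under assumed weights and compares two candidate locations. The criterion names, weights, normalization to a 0-1 scale, and the simple weighted average are all illustrative assumptions; the disclosure leaves the quantification and weighting open.

```python
def safety_score(criteria, weights):
    """Weighted combination of quantified safety criteria (each normalized to 0-1)."""
    total_weight = sum(weights[name] for name in criteria)
    return sum(criteria[name] * weights[name] for name in criteria) / total_weight

weights = {"lighting": 0.3, "pedestrian_traffic": 0.3, "low_crime": 0.4}
designated = {"lighting": 0.2, "pedestrian_traffic": 0.3, "low_crime": 0.5}
candidate = {"lighting": 0.9, "pedestrian_traffic": 0.7, "low_crime": 0.6}

# Compare candidate pickup locations; a higher score is assessed as safer.
if safety_score(candidate, weights) > safety_score(designated, weights):
    print("Notify the user of the safer alternative pickup location")
```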
One or more safety criteria, as well as one or more factors related to the safety criteria (e.g., relative weight, priority), may be default values. Additionally and/or alternatively, one or more of the safety criteria, as well as one or more factors related to the safety criteria, may be explicitly selected or set by a user, e.g., as user preferences included in a user profile associated with the user. After the user selects one of the enhanced features430-438, the selected one of the enhanced features is initiated and a preview of the pickup location may again be provided to the user on the user app. As represented inFIG.2E, in some embodiments, once the user approves a pickup location, if the location is more than a specified distance (e.g., a block) from a current location of the user, additional preview options may be provided using the UI400to ensure the safety of the user en route to the pickup location. For example, sensor data recently acquired by the vehicle on the way to the pickup location may be used to provide additional information regarding the route to the pickup location from the user's current location. For example, recent CV/RADAR/LIDAR data of the route between the user's current location and the pickup location may be presented to the user on the UI400, e.g., as represented by an image440. In addition, helpful semantic labels, such as “well-lit,” “low-crime,” and “high pedestrian traffic,” may be presented on a map showing the route between the user's current location and the pickup location. Still further, 3D images of the route may be provided to and manipulated by the user using the app, similar to the preview of the pickup location surroundings. It will be understood, however, that the route data may be several seconds to minutes old. In certain embodiments, the user may move spatially and temporally through the data, as the vehicle captures a continuous feed, enabling the user to swipe to move forward and/or backward through streets and may even access data from different times of the day to better understand typical conditions of the route (and pickup location). Route information (including video, images, and other data) from the vehicle itself may be augmented using live or recently acquired route information (including video, images, and other data) from other vehicles in the fleet. Moreover, if even more additional route information is needed or desired, the vehicle can circle the area and capture the additional information while the user reviews the situation via the user app. In certain embodiments, the user may be provided with generalized information regarding the pickup location based on live sensor data from the AV. Referring now toFIG.2F, a 2D map450of the pickup location may be presented on the UI (not shown inFIG.2F). A UI overlay is provided on the map450to indicate the location of the AV452as well as locations of various objects of potential interest to the user, such as pedestrians454, other vehicles456, street lights, such as a street light458, and visual obstructions, such as a dumpster460, relative to the AV452. The objects and their locations relative to the AV may be identified using, for example, camera, LIDAR, and/or RADAR data from the sensor suite of the AV. One purpose of the overlay is to provide the user with information to make their own assessment as to the safety of the pickup location based on other objects in the area.
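As a non-authoritative sketch of how such an overlay might be assembled from perception output, the following fragment filters detected objects by distance from the AV and records each object's type and position relative to the AV for placement on the 2D map. The detection dictionary format, the radius value, and the function name are assumptions for illustration only.

```python
import math

def overlay_annotations(detections, av_xy, radius_m=50.0):
    """Build UI overlay entries for detected objects near the AV: each entry
    keeps the object type and its position relative to the AV."""
    entries = []
    for det in detections:
        dx = det["xy"][0] - av_xy[0]
        dy = det["xy"][1] - av_xy[1]
        if math.hypot(dx, dy) <= radius_m:
            entries.append({"type": det["type"], "relative_xy": (dx, dy)})
    return entries

detections = [
    {"type": "pedestrian", "xy": (3.0, 4.0)},
    {"type": "parked vehicle", "xy": (12.0, 1.0)},
    {"type": "street light", "xy": (6.0, 6.0)},
    {"type": "pedestrian", "xy": (80.0, 90.0)},  # beyond the radius; excluded
]
for entry in overlay_annotations(detections, av_xy=(0.0, 0.0)):
    print(entry)
```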
In addition and/or alternatively to the overlay shown inFIG.2F, text information denoting the type and number of objects within a certain distance of the pickup location (e.g., “4 pedestrians, 2 parked vehicles, 1 street light,” etc.) may be provided to facilitate a safety assessment by the user. It should be noted that the icons in the FIGS. representing certain objects of interest may represent one or more detected objects of that type (e.g., each person icon may represent n people, etc.), which also helps to generalize the detailed information received by the AV. Example Onboard Computer FIG.3is a block diagram illustrating an onboard computer150for enabling features according to some embodiments of the present disclosure. The onboard computer150may include memory505, a map database510, a sensor interface520, a perception module530, a planning module540, and a PLP system controller550. In alternative configurations, fewer, different and/or additional components may be included in the onboard computer150. For example, components and modules for controlling movements of the AV110and other vehicle functions, and components and modules for communicating with other systems, such as the fleet management system120, are not shown inFIG.3. Further, functionality attributed to one component of the onboard computer150may be accomplished by a different component included in the onboard computer150or a different system from those illustrated. The map database510stores a detailed map that includes a current environment of the AV110. The map database510includes data describing roadways (e.g., locations of roadways, connections between roadways, roadway names, speed limits, traffic flow regulations, toll information, etc.) and data describing buildings (e.g., locations of buildings, building geometry, building types). The map database510may further include data describing other features, such as bike lanes, sidewalks, crosswalks, traffic lights, parking lots, etc. The sensor interface520interfaces with the sensors in the sensor suite140. The sensor interface520may request data from the sensor suite140, e.g., by requesting that a sensor capture data in a particular direction or at a particular time. The sensor interface520is configured to receive data captured by sensors of the sensor suite140. The sensor interface520may have subcomponents for interfacing with individual sensors or groups of sensors of the sensor suite140, such as a thermal sensor interface, a camera interface, a lidar interface, a radar interface, a microphone interface, etc. The perception module530identifies objects in the environment of the AV110. The sensor suite140produces a data set that is processed by the perception module530to detect other cars, pedestrians, trees, bicycles, and objects traveling on or near a road on which the AV110is traveling or stopped, and indications surrounding the AV110(such as construction signs, traffic cones, traffic lights, stop indicators, and other street signs). For example, the data set from the sensor suite140may include images obtained by cameras, point clouds obtained by LIDAR sensors, and data collected by RADAR sensors. The perception module530may include one or more classifiers trained using machine learning to identify particular objects. For example, a multi-class classifier may be used to classify each object in the environment of the AV110as one of a set of potential objects, e.g., a vehicle, a pedestrian, or a cyclist. 
As another example, a human classifier recognizes humans in the environment of the AV110, a vehicle classifier recognizes vehicles in the environment of the AV110, etc. The planning module540plans maneuvers for the AV110based on map data retrieved from the map database510, data received from the perception module530, and navigation information, e.g., a route instructed by the fleet management system120. In some embodiments, the planning module540receives map data from the map database510describing known, relatively fixed features and objects in the environment of the AV110. For example, the map data includes data describing roads as well as buildings, bus stations, trees, fences, sidewalks, etc. The planning module540receives data from the perception module530describing at least some of the features described by the map data in the environment of the AV110. The planning module540determines a pathway for the AV110to follow. The pathway includes locations for the AV110to maneuver to, and timing and/or speed of the AV110in maneuvering to the locations. The PLP system controller550interacts with the map database510, sensor interface520, and perception module530to control and provide various aspects of the PLP system functionality, including but not limited to providing preview functionality and other features as described above with reference toFIGS.2A-2Fand as described below with reference toFIG.5. Example Fleet Management System FIG.4is a block diagram illustrating the fleet management system120according to some embodiments of the present disclosure. The fleet management system120includes a UI server610, a map database620, a user database630, a vehicle manager640, and a PLP system manager650. In alternative configurations, different, additional, or fewer components may be included in the fleet management system120. Further, functionality attributed to one component of the fleet management system120may be accomplished by a different component included in the fleet management system120or a different system than those illustrated. The UI server610is configured to communicate with client devices that provide a user interface to users. For example, the UI server610may be a web server that provides a browser-based application to client devices, or the UI server610may be a user app server that interfaces with a user app installed on client devices, such as the user device130. The UI enables the user to access a service of the fleet management system120, e.g., to request a ride from an AV110, or to request a delivery from an AV110. For example, the UI server610receives a request for a ride that includes an origin location (e.g., the user's current location) and a destination location, or a request for a delivery that includes a pickup location (e.g., a local restaurant) and a destination location (e.g., the user's home address). In accordance with features of embodiments described herein, UI server610may communicate information to a user regarding various aspects of the PLP system functionality, including but not limited to providing preview functionality and other features as described above with reference toFIGS.2A-2Fand as described below with reference toFIG.5. The map database620stores a detailed map describing roads and other areas (e.g., parking lots, AV service facilities) traversed by the fleet of AVs110. 
The map database620includes data describing roadways (e.g., locations of roadways, connections between roadways, roadway names, speed limits, traffic flow regulations, toll information, etc.), data describing buildings (e.g., locations of buildings, building geometry, building types), and data describing other objects (e.g., location, geometry, object type), and data describing other features, such as bike lanes, sidewalks, crosswalks, traffic lights, parking lots, etc. At least a portion of the data stored in the map database620is provided to the AVs110as a map database510, described above. The user database630stores data describing users of the fleet of AVs110. Users may create accounts with the fleet management system120, which stores user information associated with the user accounts, or user profiles, in the user database630. The user information may include identifying information (name, user name), password, payment information, home address, contact information (e.g., email and telephone number), and information for verifying the user (e.g., photograph, driver's license number). Users may provide some or all of the user information, including user preferences regarding certain aspects of services provided by the rideshare system, to the fleet management system120. In some embodiments, the fleet management system120may infer some user information from usage data or obtain user information from other sources, such as public databases or licensed data sources. The fleet management system120may learn one or more home addresses for a user based on various data sources and user interactions. The user may provide a home address when setting up his account, e.g., the user may input a home address, or the user may provide an address in conjunction with credit card information. In some cases, the user may have more than one home, or the user may not provide a home address, or the user-provided home address may not be correct (e.g., if the user moves and the home address is out of date, or if the user's address associated with the credit card information is not the user's home address). In such cases, the fleet management system120may obtain a home address from one or more alternate sources. In one example, the fleet management system120obtains an address associated with an official record related to a user, such as a record from a state licensing agency (e.g., an address on the user's driver's license), an address from the postal service, an address associated with a phone record, or other publicly available or licensed records. In another example, the fleet management system120infers a home address based on the user's use of a service provided by the fleet management system120. For example, the fleet management system120identifies an address associated with at least a threshold number of previous rides provided to a user (e.g., at least 10 rides, at least 50% of rides, or a plurality of rides), or at least a threshold number of previous deliveries (e.g., at least five deliveries, at least 60% of deliveries) as a home address or candidate home address. The fleet management system120may look up a candidate home address in the map database620to determine if the candidate home address is associated with a residential building type, e.g., a single-family home, a condominium, or an apartment. The fleet management system120stores the identified home address in the user database630. 
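As a purely illustrative sketch of the home address inference just described, the following fragment applies the ride-count thresholds from the description (e.g., at least 10 rides or at least 50% of rides) and then checks the candidate against residential building types from the map database. The function name, data structures, and the exact combination of thresholds are assumptions for illustration.

```python
from collections import Counter

MIN_RIDES = 10   # absolute ride-count threshold from the description
MIN_SHARE = 0.5  # at least 50% of rides

def infer_home_address(ride_destinations, building_types):
    """Flag the most frequent destination as a candidate home address if it
    meets a ride-count threshold and maps to a residential building type."""
    counts = Counter(ride_destinations)
    address, hits = counts.most_common(1)[0]
    if hits >= MIN_RIDES or hits / len(ride_destinations) >= MIN_SHARE:
        if building_types.get(address) in {"single-family home", "condominium", "apartment"}:
            return address
    return None

rides = ["123 Oak St"] * 7 + ["456 Pine Ave"] * 3
print(infer_home_address(rides, {"123 Oak St": "apartment"}))  # '123 Oak St' (70% of rides)
```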
The fleet management system120may obtain or identify multiple addresses for a user and associate each address with the user in the user database630. In some embodiments, the fleet management system120identifies a current home address from multiple candidate home addresses, e.g., the most recent address, or an address that the user rides to or from most frequently and flags the identified current home address in the user database630. The vehicle manager640directs the movements of the AVs110in the fleet. The vehicle manager640receives service requests from users from the UI server610, and the vehicle manager640assigns service requests to individual AVs110. For example, in response to a user request for transportation from an origin location to a destination location, the vehicle manager640selects an AV and instructs the AV to drive to the origin location (e.g., a passenger or delivery pickup location), and then instructs the AV to drive to the destination location (e.g., the passenger or delivery destination location). In addition, the vehicle manager640may instruct AVs110to drive to other locations while not servicing a user, e.g., to improve geographic distribution of the fleet, to anticipate demand at particular locations, to drive to a charging station for charging, etc. The vehicle manager640also instructs AVs110to return to AV facilities for recharging, maintenance, or storage. The PLP system manager650manages various aspects of PLP system services performed by an AV as described herein, including but not limited to providing data and information for supporting preview functionality and other features as described above with reference toFIGS.2A-2Fand as described below with reference toFIG.5. Example Methods for Pickup Location Preview System Implementation and Operation FIG.5is a flowchart illustrating an example process for implementing and operating a PLP system for an AV rideshare service according to some embodiments of the present disclosure. One or more of the steps illustrated inFIG.5may be executed by one or more of the elements shown inFIGS.3and4. In step700, in response to a request from a user (e.g., using an app on a user device), a vehicle is dispatched (e.g., by fleet management system120) to a designated pickup location. The designated pickup location may be a location explicitly specified by the user (e.g., using the app) or may be a location identified to be proximate to the location specified by the user. Additionally and/or alternatively, the pickup location may be automatically designated based on a current location of the user. In step702, a PREVIEW button (or link) may be displayed to the user using the user app (e.g., as shown inFIG.2A). In particular embodiments, the PREVIEW button is automatically displayed on the user app when the vehicle approaches the designated pickup location. In step704, after the user selects the PREVIEW button, a preview of the vehicle's surroundings is presented to the user using the user app (e.g., as shown inFIG.2B). In particular embodiments, the preview includes an interactive, live 3D video and/or 3D images of the surroundings of the vehicle, which may be navigated by the user by moving the user device in 3D space or by using touchscreen functions, such as “swiping” or using arrow buttons, for example, or other functions. As described above, using the preview functionality, the user may determine whether he or she feels safe proceeding to the designated pickup location. 
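As a rough, assumed-for-illustration sketch of steps 700-704 in miniature, the following fragment dispatches a vehicle in response to a service request, surfaces the PREVIEW button once the vehicle is within the example 0.25-mile trigger distance, and then hands off to the preview stream. The class names, the placeholder vehicle-selection policy, and the return values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    user_id: str
    pickup_location: tuple  # (latitude, longitude), assumed representation

def dispatch_and_preview(request, fleet, distance_to_pickup_mi):
    """Steps 700-704 in miniature: dispatch a vehicle, show the PREVIEW button
    near the pickup location, then start the preview."""
    vehicle = fleet[0]  # placeholder selection policy (step 700)
    if distance_to_pickup_mi <= 0.25:  # example trigger distance (step 702)
        print(f"PREVIEW button shown for {request.user_id}")
        return f"live preview stream from {vehicle}"  # step 704
    return None

fleet = ["AV-110"]
req = ServiceRequest("user-135", (37.77, -122.42))
print(dispatch_and_preview(req, fleet, distance_to_pickup_mi=0.2))
```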
In certain embodiments, live audio may be provided along with the live 3D video and/or 3D images. In a particular embodiment, the live (or substantially real-time) video, images, and/or audio may be communicated from the AV sensors to the fleet management system, which may communicate the video and/or audio to the user device (e.g., via a cellular communications network). In step706, the user is queried whether he or she feels safe proceeding to the designated pickup location. In an example embodiment, the user may be prompted to select “YES” or “NO” to indicate his or her response using the user app (e.g., as shown inFIG.2C). If in step706, the user indicates that he or she does not feel safe proceeding to the designated pickup location (e.g., by selecting NO), execution proceeds to step708, in which the user may select one or more enhanced features (e.g., as shown inFIG.2D) to increase the user's perceived safety and/or comfort with the pickup location. As previously noted, in certain embodiments, available enhanced features may include one or more of an Extend Pickup Time feature, a Change Pickup Location feature, a Monitor and Notify feature, a Find Safer Pickup Location feature, and a Phone a Friend feature. Once the user selects one of the enhanced features in step710, the selected one of the enhanced features is initiated and a preview of the pickup location may once again be provided to the user on the user app (step704). In certain embodiments, the user could set a preference in their user profile to automatically enable one or more of the enhanced features by default for services during particular hours of the day or under select circumstances. For example, the Find a Safer Pickup Location feature and/or Monitor and Notify feature could be enabled for any rides between the hours of 10 PM and 6 AM. Once an acceptable location is determined (step706), in step712, if the pickup location is more than a specified distance (e.g., a block) from the user, additional preview options may be provided on the app to ensure the safety of the user en route to the pickup location. Once the user determines that a pickup location and a route to the location are acceptably safe, the vehicle parks at the pickup location and awaits arrival of the user, who can continue to monitor the designated pickup location surroundings using the preview and other functionality of the user app and may revise his or her responses and feature selections indicative of his or her perceived safety at any time. In various embodiments, location information (including 2D and 3D video and images and other data) from the vehicle itself may be augmented using live or recently acquired location information (including 2D and 3D video and images and other data) from other vehicles in the fleet. Moreover, if even more additional location information is needed or desired, the vehicle can circle the area and capture the additional information while the user reviews the situation via the user app. The availability of data from other vehicles increases the availability of recent, non-stale, data to provide a more accurate preview to the user. Although the operations of the example method shown inFIG.5are illustrated as occurring once each and in a particular order, it will be recognized that the operations may be performed in any suitable order and repeated as desired. Additionally, one or more operations may be performed in parallel. 
Furthermore, the operations illustrated inFIG.5may be combined or may include more or fewer details than described. It will be recognized that, although embodiments are described herein primarily with reference to passenger transportation services, they may also be advantageously applied to delivery services provided by AVs. Additionally, in addition to being applied in connection with pickup of a passenger and/or delivery of an item to a user, embodiments described herein may also be advantageously applied to drop off of a passenger and/or pickup of an item for delivery. Select Examples Example 1 provides a method including obtaining an image of a portion of an environment of a vehicle dispatched to a designated location in response to a service request from a user, in which the obtaining is performed using at least one onboard sensor of the vehicle and displaying the image of the environment portion on a UI of a user device substantially in real-time. Example 2 provides the method of example 1, further including, in response to input from the user using the UI, obtaining an image of a different portion of the environment of the vehicle and displaying the image of the different environment portion on the UI substantially in real-time. Example 3 provides the method of any of examples 1-2, in which the at least one onboard sensor includes at least one of a CV system, a camera, a LIDAR sensor, and a RADAR sensor. Example 4 provides the method of any of examples 1-3, in which the image includes at least one of a three-dimensional (3D) video image and a 3D still image. Example 5 provides the method of any of examples 1-4, in which the environment of the vehicle includes the designated location. Example 6 provides the method of any of examples 1-5, in which displaying is performed after the vehicle is less than a predetermined distance from the designated location. Example 7 provides the method of any of examples 1-6, in which the displaying is performed after an estimated arrival time of the vehicle at the designated location is within a predetermined amount of time. Example 8 provides the method of any of examples 1-7, further including assessing a safety of the designated location based at least in part on data including the image. Example 9 provides the method of any of examples 1-8, further including notifying the user of the assessed safety of the designated location using the UI. Example 10 provides the method of example 8, further including selecting a safer location than the designated location based at least in part on the data including the image and notifying the user of the selected safer location using the UI. Example 11 provides the method of example 8, further including determining, based on the assessing, that the designated location is unsafe and causing the vehicle to traverse an area proximate the designated location to locate a safer alternative location using at least one onboard sensor of the vehicle. Example 12 provides the method of any of examples 1-11, further including obtaining an image of a route between a current location of the user and the designated location, and displaying the route on the UI, in which the route image is obtained using at least one of the at least one onboard sensor of the vehicle and at least one onboard sensor of another vehicle. Example 13 provides the method of example 12, further including displaying a map of the route on the UI, the map including at least one semantic label indicative of a safety condition of the route.
Example 14 provides the method of example 13, in which the safety condition includes at least one of lighting conditions, pedestrian traffic levels, crime statistics, and vehicle traffic. Example 15 provides the method of any of examples 1-14, in which the vehicle includes an AV. Example 16 provides the method of any of examples 1-15, further including prompting the user to initiate a video call with a third party and presenting the video call on a video display located on an external surface of the vehicle. Example 17 provides the method of any of examples 1-16, in which the image displayed on the UI includes an overlay highlighting at least one object shown in the image. Example 18 provides a method including obtaining an image of an environment of an AV dispatched to a designated location in response to a service request from a user, in which the obtaining is performed using at least one onboard sensor of the AV; determining that the designated location is unsafe and that an alternative location is safe based at least in part on the image; and notifying the user of the alternative location. Example 19 provides the method of example 18, in which the notifying is displayed on a UI of a mobile device. Example 20 provides the method of any of examples 18-19, further including determining that the designated location is safe based at least in part on the image and notifying the user that the designated location has been determined to be safe. Example 21 provides the method of example 20, in which the notifying the user that the designated location has been determined to be safe is displayed on a UI of a mobile device. Example 22 provides the method of any of examples 18-21, further including obtaining an image of a route between a current location of the user and the designated location and determining that the route is safe based at least in part on the route image and notifying the user that the route has been determined to be safe. Example 23 provides the method of example 22, in which the route image is obtained using the at least one onboard sensor of the AV. Example 24 provides the method of example 22, in which the AV includes one of a fleet of AVs and the route image is obtained using at least one onboard sensor of another AV of the fleet of AVs. Example 25 provides a location preview system, including a vehicle including at least one onboard sensor for generating a live image of an environment of the vehicle when the vehicle is dispatched to a designated location in response to a service request by a user, and a preview control module for providing the generated live image to a device of the user, the generated live image being displayed on a UI of the user device, in which the user can manipulate a view of the live image generated by the at least one onboard sensor using the UI. Example 26 provides the location preview system of example 25, in which the at least one onboard sensor includes at least one of a CV system, a camera, a LIDAR sensor, and a RADAR sensor. Example 27 provides the location preview system of any of examples 25-26, in which the generated live image includes at least one of a three-dimensional (3D) video image and a 3D still image. Example 28 provides the location preview system of any of examples 25-27, in which the vehicle includes an AV. Example 29 provides the location preview system of any of examples 25-28, in which the vehicle further includes a video display on an external surface thereof.
Example 30 provides the location preview system of any of examples 25-29, in which the preview control module displays a video conference call between the user and a third party on the video display. Example 31 provides the method of any of examples 1-17, in which the displayed image includes a two-dimensional (2D) map of the designated location, the method further including providing an overlay on the 2D map, the overlay identifying a location of the vehicle on the 2D map and a location and identity of at least one object at the designated location detected by the at least one onboard sensor of the vehicle. OTHER IMPLEMENTATION NOTES, VARIATIONS, AND APPLICATIONS It is to be understood that not necessarily all objects or advantages may be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that certain embodiments may be configured to operate in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein. In one example embodiment, any number of electrical circuits of the figures may be implemented on a board of an associated electronic device. The board can be a general circuit board that can hold various components of the internal electronic system of the electronic device and, further, provide connectors for other peripherals. More specifically, the board can provide the electrical connections by which the other components of the system can communicate electrically. Any suitable processors (inclusive of digital signal processors, microprocessors, supporting chipsets, etc.), computer-readable non-transitory memory elements, etc. can be suitably coupled to the board based on particular configuration needs, processing demands, computer designs, etc. Other components such as external storage, additional sensors, controllers for audio/video display, and peripheral devices may be attached to the board as plug-in cards, via cables, or integrated into the board itself. In various embodiments, the functionalities described herein may be implemented in emulation form as software or firmware running within one or more configurable (e.g., programmable) elements arranged in a structure that supports these functions. The software or firmware providing the emulation may be provided on a non-transitory computer-readable storage medium comprising instructions to allow a processor to carry out those functionalities. It is also imperative to note that all of the specifications, dimensions, and relationships outlined herein (e.g., the number of processors, logic operations, etc.) have been offered only for purposes of example and teaching. Such information may be varied considerably without departing from the spirit of the present disclosure, or the scope of the appended claims. The specifications apply only to one non-limiting example and, accordingly, they should be construed as such. In the foregoing description, example embodiments have been described with reference to particular arrangements of components. Various modifications and changes may be made to such embodiments without departing from the scope of the appended claims. The description and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.
Note that with the numerous examples provided herein, interaction may be described in terms of two, three, four, or more components; however, this has been done for purposes of clarity and example only. It should be appreciated that the system can be consolidated in any suitable manner. Along similar design alternatives, any of the illustrated components, modules, and elements of the FIGS. may be combined in various possible configurations, all of which are clearly within the broad scope of this specification. Various operations may be described as multiple discrete actions or operations in turn in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order from the described embodiment. Various additional operations may be performed, and/or described operations may be omitted in additional embodiments. Note that in this specification, references to various features (e.g., elements, structures, modules, components, steps, operations, characteristics, etc.) included in “one embodiment”, “example embodiment”, “an embodiment”, “another embodiment”, “some embodiments”, “various embodiments”, “other embodiments”, “alternative embodiment”, and the like are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. Note that all optional features of the systems and methods described above may also be implemented with respect to the methods or systems described herein and specifics in the examples may be used anywhere in one or more embodiments. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph (f) of 35 U.S.C. Section 112 as it exists on the date of the filing hereof unless the words “means for” or “step for” are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims. | 58,277 |
11859996 | The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way. DETAILED DESCRIPTION The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features. Hereinafter, some forms of the present disclosure will be described in detail with reference to the exemplary drawings. In adding the reference numerals to the components of each drawing, it should be noted that the identical or equivalent component is designated by the identical numeral even when they are displayed on other drawings. Further, in describing the exemplary forms of the present disclosure, a detailed description of well-known features or functions will be ruled out in order not to unnecessarily obscure the gist of the present disclosure. In addition, in the following description of components according to an exemplary form of the present disclosure, the terms ‘first’, ‘second’, ‘A’, ‘B’, ‘(a)’, and ‘(b)’ may be used. These terms are merely intended to distinguish one component from another component, and the terms do not limit the nature, sequence or order of the constituent components. In addition, unless otherwise defined, all terms used herein, including technical or scientific terms, have the same meanings as those generally understood by those skilled in the art to which the present disclosure pertains. Such terms as those defined in a generally used dictionary are to be interpreted as having meanings equal to the contextual meanings in the relevant field of art, and are not to be interpreted as having ideal or excessively formal meanings unless clearly defined as having such in the present application. Hereinafter, forms of the present disclosure will be described with reference toFIGS.1to7. FIG.1is a block diagram illustrating a configuration of a vehicle system including an apparatus for suggesting stopping by facilities, according to an exemplary form of the present disclosure. Referring toFIG.1, the vehicle system may include an apparatus (hereinafter, referred to as a “via-facilities suggesting apparatus”)100for suggesting stopping by facilities and a sensor device200. According to one form of the present disclosure, the via-facilities suggesting apparatus100may be implemented inside the vehicle. In this case, the via-facilities suggesting apparatus100may be implemented integrally with internal control units of the vehicle. Alternatively, the via-facilities suggesting apparatus100may be implemented separately from the internal control units of the vehicle and may be connected with the internal control units of the vehicle through an additional connection unit. The via-facilities suggesting apparatus100may suggest, as a stop, facilities, which are suitable for a user tendency, of facilities positioned on a driving path of an autonomous vehicle. In other words, the via-facilities suggesting apparatus100may provide, to a user, a list of facilities suitable for the user tendency, and may add the facilities selected by the user as a stop positioned on the driving path and guide the user to the stop. The via-facilities suggesting apparatus100may add, as a stop, facilities, which are selected by the user, of the facilities positioned on the driving path of the autonomous vehicle.
When the selected stop includes a plurality of places, the via-facilities suggesting apparatus100may list the plurality of places by priorities which are preset based on the driving path, may add a place, which is selected by the user from the plurality of places, as the stop, and may provide guidance along the driving path. The via-facilities suggesting apparatus100may detect, in advance, facilities necessary for the user situation during the driving of the autonomous vehicle and may add the facilities as the stop. In addition, the via-facilities suggesting apparatus100may determine whether a user intakes food or beverage inside the autonomous vehicle, and may suggest adding a public restroom or a rest area as the stop after a specific time has elapsed when the user intakes food or beverage. Referring toFIG.1, the via-facilities suggesting apparatus100may include a communication device110, a storage120, an interface130, and a processor140. The communication device110is a hardware device implemented with various electronic circuits to transmit or receive a signal through a wireless or wired connection, and may communicate with an in-vehicle sensing device200or an in-vehicle device300based on an in-vehicle communication technology, and may perform V2I communication with an external server of the vehicle, an infrastructure, other vehicles, or a user terminal400through an in-vehicle network communication technology, wireless Internet access, or a short-range communication technology. In this case, the user terminal400may include all mobile communication terminals having a display, and the mobile communication terminal includes a smartphone, a personal digital assistant (PDA), a portable multimedia player (PMP), a digital camera, a portable game console, an MP3 player, a smart key, and a tablet PC. The in-vehicle device300may include audio, video, navigation (AVN), telematics, and navigation. In this case, the vehicle network communication technology may include a controller area network (CAN) communication technology, a local interconnect network (LIN) communication technology, a FlexRay communication technology, and in-vehicle communication may be performed through the above communication technology. In addition, the wireless Internet technology may include a wireless LAN (WLAN), a wireless broadband (Wibro), Wi-Fi, and World Interoperability for Microwave Access (Wimax). In addition, the short-range communication technology may include Bluetooth, ZigBee, ultra wideband (UWB), radio frequency identification (RFID), or infrared data association (IrDA). For example, the communication device110may transmit/receive traffic information, road information, and information on facilities, by making wireless communication with other vehicles, traffic centers, and facilities. In this case, the traffic information may include traffic jam information, accident information, or the like, and the road information may include road construction information, road detour information, or the like. The information on the facilities may include information on whether facilities are closed and information on users of the facilities. The storage120may store the sensing result of the sensing device200and data and/or algorithms necessary for the processor140to operate. For example, the storage120may store information received through the communication device110.
In addition, the storage120may store a first reference distance, a second reference distance, and a third reference distance in advance for the notification of arrival at a stop. The storage120may include at least one storage medium of a memory in a flash memory type, a hard disk type, a micro type, the type of a card (e.g., a Secure Digital (SD) card or an eXtreme Digital card), a Random Access Memory (RAM), a Static RAM (SRAM), a Read Only Memory (ROM), a Programmable ROM (PROM), an Electrically Erasable and Programmable ROM (EEPROM), a magnetic RAM (MRAM), a magnetic disk-type memory, or an optical disk-type memory. The interface130may include an input device to receive a control command from a user and an output device to output the operation state, the operation result, the notification, or the guide of the via-facilities suggesting apparatus100. In this case, the input device may include a key button, and may include a mouse, a joystick, a jog shuttle, a stylus pen, or the like. In addition, the input device may include a soft key implemented on a display. The output device may include a display and may include a voice output device such as a speaker. When a touch sensor, such as a touch film, a touch sheet, or a touch pad, is included in the display, the display may operate as a touch screen, and the input device and the output device may be implemented in an integral form. According to the present disclosure, the output device may output a notification of arriving at a stop, an additional guide to the stop, a stop list, the list of facilities, information on selected facilities, an inquiry about a guide to the facilities, and a screen for suggesting stopping by the facilities. In addition, the input device may receive, from a user, an input related to the inquiry about the guide to the facilities or a suggestion of stopping by facilities. In this case, the display may include at least one of a liquid crystal display (LCD), a thin film transistor-liquid crystal display (TFT LCD), an organic light-emitting diode (OLED), a flexible display, a field emission display (FED), or a three dimensional display (3D display). The processor140may be electrically connected with the communication device110, the storage120, the interface130, and the like, may electrically control each component, and may be an electric circuit that executes software commands. Thus, the processor140may perform various data processing and calculation, to be described below, and may process signals transmitted/received between components of the via-facilities suggesting apparatus100. The processor140may be, for example, an electronic control unit (ECU), a micro controller unit (MCU), or another lower-level controller mounted in the vehicle. The processor140may provide, to a user, a list of facilities, which are suitable for the user tendency, of facilities positioned on a driving path of the autonomous vehicle, and may add the facilities, which are selected by the user, as a stop positioned on the driving path and guide the user to the stop. The processor140may output a notification to a user terminal or an in-vehicle device, when the vehicle enters within a first reference distance (e.g., 1 km) before arriving at the stop. The processor140may determine whether the stop is a place providing drive-through (DT) service.
The processor140may provide a guide to arrival at the stop, when the stop is not a place providing the DT service and when the autonomous vehicle enters within a second reference distance, which is shorter than the first reference distance, before arriving at the stop. The processor140may provide the guide to arrival at the stop, when the stop is a place providing the DT service and when the autonomous vehicle enters within the second reference distance, which is shorter than the first reference distance, before arriving at the stop, and may control a window of the autonomous vehicle to be open. The processor140may provide a notification that the autonomous vehicle has departed from the stop, when the autonomous vehicle is beyond a third reference distance (e.g., 100 m), which is shorter than the second reference distance, from the stop, and may control the window of the autonomous vehicle to be closed. The processor140may provide, to the in-vehicle device or the user terminal, one of a voice guide, a text guide, or a vibration notification, when providing the notification of departure from the stop, and may output the voice guide, the text guide, and the vibration notification by preset priorities. The processor140may provide the list of the facilities, based on facilities-as-stop setting information, which is preset by the user, in which the facilities-as-stop setting information is information on setting facilities as a stop. The processor140may receive the facilities-as-stop setting while distinguishing between a personal autonomous vehicle and a shared autonomous vehicle. The processor140may determine whether an autonomous vehicle is the personal autonomous vehicle or the shared autonomous vehicle, by determining whether a plurality of driving paths are provided. The processor140may provide the list of facilities positioned on the driving path, based on the facilities-as-stop setting information, which is determined by a majority decision, of the facilities-as-stop setting information of a plurality of users who use the shared autonomous vehicle. The processor140may receive selections for stops, which are contained in the list of facilities, from the plurality of users, and may determine whether stops having the same brand name are present in the selected stops. The processor140may add the selected stop to the driving path and may provide the path guide, when stops having the same brand name are absent. When stops having the same brand name are present, the processor140may list the selected stops by preset priorities and may perform the path guide. The processor140may receive, in advance, priority settings for at least one of a distance to the stop, a past use history of the user, a price, or a congestion degree. When a user, who wants to add a stop, requests adding the stop in the shared autonomous vehicle, and when the remaining users in the shared autonomous vehicle approve of adding the stop, the processor140may add the stop, may perform a path guide, and may compensate the remaining users. When a user, who wants to add a stop, requests adding the stop in the shared autonomous vehicle, and when the stop is added by a majority decision, the processor140may compensate users, who do not approve of adding the stop, of the remaining users in the shared autonomous vehicle. The processor140may add, as a stop, facilities, which are selected by the user, of the facilities positioned on the driving path of the autonomous vehicle.
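The distance-threshold behavior just described can be summarized in a minimal sketch; the handler name and the `vehicle` and `notify` interfaces are hypothetical placeholders, and the distances reuse the example values given in the description (1 km, 300 m, 100 m):

```python
FIRST_REF_M = 1000   # e.g., 1 km: advance approach notification
SECOND_REF_M = 300   # e.g., 300 m: arrival guide (window opens for a DT stop)
THIRD_REF_M = 100    # e.g., 100 m past the stop: departure notice (window closes)

def on_position_update(dist_to_stop_m, dist_past_stop_m, stop_is_dt, vehicle, notify):
    """Hypothetical handler mirroring the notification and window control above."""
    if dist_past_stop_m is None:                 # still approaching the stop
        if dist_to_stop_m <= FIRST_REF_M:
            notify("Approaching the stop")       # advance notice (user may be resting)
        if dist_to_stop_m <= SECOND_REF_M:
            notify("Arriving at the stop")
            if stop_is_dt:
                vehicle.open_window()            # receive food/beverage through the window
    elif dist_past_stop_m > THIRD_REF_M:
        notify("The vehicle has departed from the stop; the window is closing")
        if stop_is_dt:
            vehicle.close_window()
```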
When the selected stop includes a plurality of places, the processor140may list the plurality of places positioned on the driving path by preset priorities, may add a place, which is selected by the user from the plurality of places, as the stop, and may perform the path guide. The processor140may detect, in advance, facilities necessary for the user situation, based on image data of a camera inside the autonomous vehicle, during the driving of the autonomous vehicle and may suggest adding the detected facilities as a stop. The processor140may determine whether a user intakes food or beverage in the autonomous vehicle and may suggest adding a public restroom or a rest area as the stop after a specific time has elapsed when the user intakes food or beverage. The sensing device200may sense whether a user (occupant) intakes food or beverage in the vehicle. To this end, the sensing device200may include an in-vehicle camera or a motion sensor. Hereinafter, a method for suggesting stopping by facilities will be described in detail with reference toFIG.2according to another exemplary form of the present disclosure.FIG.2is a flowchart illustrating a method for suggesting stopping by facilities in a personal autonomous vehicle, according to an exemplary form of the present disclosure. Hereinafter, it is assumed that the via-facilities suggesting apparatus100ofFIG.1performs the process ofFIG.2. In addition, in the description made with reference toFIG.2, it may be understood that operations described as being performed by an apparatus are controlled by the processor140of the via-facilities suggesting apparatus100. Referring toFIG.2, the via-facilities suggesting apparatus100determines whether a present path of the vehicle is a multiple-path (S101). In this case, the via-facilities suggesting apparatus100may determine a host autonomous vehicle as being a shared autonomous vehicle when the present path is the multiple-path. When the present path is a single path, the via-facilities suggesting apparatus100may determine the host autonomous vehicle as a personal autonomous vehicle. In this case, one or more occupants are present in the personal autonomous vehicle. In S101, when the present path is the multiple-path, S300ofFIG.6is performed thereafter. When the present path is not the multiple-path, that is, when the host autonomous vehicle is the personal autonomous vehicle, the via-facilities suggesting apparatus100determines whether the via-facilities suggesting setting contained in occupant settings is in an On state.FIG.3is a view illustrating a screen for via-facilities setting on a user terminal in a personal autonomous vehicle, according to an exemplary form of the present disclosure. When an application installed in a user terminal as illustrated in reference numeral301ofFIG.3is executed, a personal autonomous driving setting menu is displayed as illustrated in reference numeral302, and a stop suggesting menu is displayed as illustrated in reference numeral303when the personal autonomous driving setting menu is selected. Thereafter, when the stop suggesting menu is selected, a personal mode and a shared mode are separately displayed as illustrated in reference numeral304.
When the personal mode is selected, an ON state may be set in the facilities-as-stop suggesting mode, as illustrated in reference numeral305. AlthoughFIG.3illustrates that the via-facilities suggesting setting is performed using a user terminal, the via-facilities suggesting setting is also enabled through the in-vehicle device, in the case of the personal autonomous vehicle. Meanwhile, when the via-facilities suggesting setting is in an OFF state, a path guide to a destination is provided without suggesting a stop, and S200ofFIG.5is performed thereafter. When the via-facilities suggesting setting is in an ON state, the via-facilities suggesting apparatus100provides the list of facilities, which are positioned on the path, in a pop-up form, and outputs a voice guide (S103). In this case, the via-facilities suggesting apparatus100may provide the list of facilities through the in-vehicle device300or the user terminal400. In this case, the list of facilities may be formed by prioritizing facilities based on distances, fit degrees, prices, congestion degrees, and visit counts (past use histories). The via-facilities suggesting apparatus100may form the list of facilities by priorities preset by the user. In particular, the via-facilities suggesting apparatus100may display places frequently visited by the user with the highest priority, based on a database, in the case of the personal autonomous vehicle. For example, when the via-facilities suggesting apparatus100suggests a café, and when there is a history that the user frequently visited Starbucks in the past, the via-facilities suggesting apparatus100may suggest Starbucks with the highest priority. In addition, the list of facilities may be provided in a manner set by the user, that is, to the in-vehicle device or the user terminal. In addition, the via-facilities suggesting apparatus100may provide a notification through the user terminal400after providing a voice guide in the autonomous vehicle, when the voice guide is set to have the highest priority. As described above, the via-facilities suggesting apparatus100may perform the via-facilities suggesting setting by distinguishing between the personal autonomous vehicle and the shared autonomous vehicle. For example, although the user wants to stop by a DT café when using the vehicle alone or with a friend, the user may not want to stop by a DT café when getting into the shared autonomous vehicle with a stranger. As described above, the via-facilities suggesting apparatus100may provide different via-facilities suggesting settings for the personal autonomous vehicle and the shared autonomous vehicle, and may regard a setting value as being for the personal mode when the user gets into the shared autonomous vehicle alone. In addition, the via-facilities suggesting apparatus100may limit the suggested facilities to facilities, such as a DT store or a convenience store, which allow occupants to receive food in the vehicle or to finish urgent business within a short time, because many unspecified people, who do not know each other, commonly use the shared autonomous vehicle, in the case of the shared autonomous vehicle having multiple paths. Thereafter, the via-facilities suggesting apparatus100may determine whether a user (an occupant) selects facilities from the provided list of facilities (S104).
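The prioritization just described (distances, fit degrees, prices, congestion degrees, and visit counts, with frequently visited places ranked first) might be sketched as a weighted score; the field names, weights, and sample records below are hypothetical:

```python
def rank_facilities(facilities, weights, visit_history):
    """Order candidate facilities by a weighted score over the criteria
    named above; higher score means higher priority in the list."""
    def score(f):
        visits = visit_history.get(f["brand"], 0)   # frequent past visits rank first
        return (
            weights["visits"] * visits
            + weights["fit"] * f["fit"]
            - weights["distance"] * f["distance_km"]
            - weights["price"] * f["price_level"]
            - weights["congestion"] * f["congestion"]
        )
    return sorted(facilities, key=score, reverse=True)

cafes = [
    {"brand": "Starbucks", "fit": 0.9, "distance_km": 1.2, "price_level": 3, "congestion": 0.4},
    {"brand": "LocalCafe", "fit": 0.7, "distance_km": 0.5, "price_level": 2, "congestion": 0.2},
]
weights = {"visits": 1.0, "fit": 2.0, "distance": 0.5, "price": 0.3, "congestion": 1.0}
# A rich visit history for Starbucks pushes it to the top of the list.
print(rank_facilities(cafes, weights, visit_history={"Starbucks": 12}))
```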
When the facilities are selected from the list of facilities by the user, the selected facilities are added as a stop and a path guide may be performed (S105). Meanwhile, when the facilities are not selected from the list of facilities by the user, the path guide to the destination is provided without suggesting the stop, and S200ofFIG.5is performed thereafter. The via-facilities suggesting apparatus100notifies that the vehicle approaches the stop, through the application of the user terminal400, when the vehicle enters within the first reference distance (e.g., 1 km), which is preset, before arriving at the stop (S106). In other words, the via-facilities suggesting apparatus100may allow the user to recognize, in advance, that the vehicle is about to arrive at the stop, by providing, in advance, a notification to the user before arriving at the stop, when the user sleeps or takes a rest in the autonomous vehicle. Thereafter, the via-facilities suggesting apparatus100determines whether the stop is a place allowing DT (S107). When the stop is not the place allowing DT, the via-facilities suggesting apparatus100provides a guide to the arrival at the stop when the vehicle enters within the second reference distance (e.g., 300 m) before arriving at the stop (S108). Meanwhile, when the stop is the place allowing DT, the via-facilities suggesting apparatus100provides information on arrival and performs the opening of the vehicle window when the vehicle enters within the second reference distance (e.g., 300 m) before arriving at the stop (S109), such that the user may receive food or beverage through the open window. Thereafter, the via-facilities suggesting apparatus100informs of the deviation from the stop and controls the vehicle window to be closed, when the vehicle moves out of the preset third reference distance (e.g., 100 m) from the stop (S110). For example, the via-facilities suggesting apparatus100may control the vehicle window to be closed two seconds after the voice guide of “the window is closed.” In this case, whether the via-facilities suggesting apparatus100provides the voice guide through the in-vehicle device or provides an alarm through the user terminal may be set by the user, depending on whether the vehicle is the shared autonomous vehicle or the personal autonomous vehicle. The voice guide through the in-vehicle device and the alarm through the user terminal may be provided by priorities set therefor. However, the via-facilities suggesting apparatus100may set the voice guide by default in the shared autonomous vehicle; when all occupants set the alarm provided through the user terminal to have the highest priority, the notification may be provided by the alarm through the user terminal. Hereinafter, a method for suggesting stopping by facilities in the personal autonomous vehicle will be described in detail with reference toFIG.4according to another form of the present disclosure.FIG.4is a flowchart illustrating a method for suggesting stopping by facilities in the personal autonomous vehicle, according to another form of the present disclosure. Hereinafter, it is assumed that the via-facilities suggesting apparatus100ofFIG.1performs the process ofFIG.4. In addition, in the description made with reference toFIG.4, it may be understood that operations described as being performed by an apparatus are controlled by the processor140of the via-facilities suggesting apparatus100. Referring toFIG.4, the via-facilities suggesting apparatus100determines whether a present path of the vehicle is a multiple-path (S121).
In this case, the via-facilities suggesting apparatus100may determine a host autonomous vehicle as being a shared autonomous vehicle when the present path is the multiple-path. When the present path is a single path, the via-facilities suggesting apparatus100may determine the host autonomous vehicle as a personal autonomous vehicle. In this case, one or more occupants are present in the personal autonomous vehicle. In S121, when the present path is the multiple-path, S300ofFIG.6is performed thereafter. When the present path is not the multiple-path, that is, when the host autonomous vehicle is the personal autonomous vehicle, the via-facilities suggesting apparatus100may directly receive an additional stop from the user (S122). In this case, the via-facilities suggesting apparatus100determines whether the input stop is a specific single place, or a franchise having a plurality of places (S123). When the input stop is the specific single place, the via-facilities suggesting apparatus100adds the stop and performs the path guide (S124). Meanwhile, when the input stop corresponds to a plurality of places, the via-facilities suggesting apparatus100recommends the list of facilities in order of the optimal places positioned on the path, adds a place, which is selected by the user from the recommended list of facilities, as a stop, and performs the path guide (S125). In this case, the via-facilities suggesting apparatus100may form the list of the facilities by preset priorities based on distances, prices, congestion degrees, or the past use history and may perform the path guide. Thereafter, S126to S130are the same as S106to S110ofFIG.2, so the details thereof will be omitted. Hereinafter, a method for suggesting, in advance, stopping by facilities depending on a user situation will be described in detail with reference toFIG.5.FIG.5is a flowchart illustrating a method for suggesting stopping by facilities depending on the situation of a user, according to an exemplary form of the present disclosure. Hereinafter, it is assumed that the via-facilities suggesting apparatus100ofFIG.1performs the process ofFIG.5. In addition, in the description made with reference toFIG.5, it may be understood that operations described as being performed by an apparatus are controlled by the processor140of the via-facilities suggesting apparatus100. Referring toFIG.5, the via-facilities suggesting apparatus100determines whether a user intakes food or beverage (S201). The via-facilities suggesting apparatus100may determine whether the user intakes the food or the beverage through the sensing device200, especially the camera in the autonomous vehicle. When it is determined that the user intakes the food or the beverage, the via-facilities suggesting apparatus100may suggest stopping by a restroom after a specific time (e.g., 30 minutes) has elapsed. In this case, the via-facilities suggesting apparatus100may output, to the in-vehicle device300or the user terminal400, a wording or a voice for suggesting stopping by the restroom (rest area) in a pop-up form. Accordingly, the via-facilities suggesting apparatus100determines whether stopping by the restroom (rest area) is selected by the user (S203). In this case, when a plurality of users are present, stopping by the restroom is selected through a majority decision. When stopping by the restroom (rest area) is selected by a single user or by at least two users, the via-facilities suggesting apparatus100performs the path guide by adding the restroom as the stop (S204).
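A minimal sketch of this intake-then-suggest flow follows; the `camera`, `suggest`, `majority_vote`, and `add_stop` callables are hypothetical interfaces standing in for the sensing device and the apparatus, not the disclosed implementation:

```python
import time

INTAKE_TO_SUGGESTION_S = 30 * 60  # e.g., suggest a restroom 30 minutes after eating

def monitor_intake(camera, suggest, majority_vote, add_stop, poll_s=1.0):
    """Hypothetical loop for S201-S204: detect food/beverage intake, wait a
    specific time, then suggest a restroom (rest area) and add it on approval."""
    intake_at = None
    while True:
        if intake_at is None and camera.detects_intake():            # S201
            intake_at = time.monotonic()
        if intake_at is not None and time.monotonic() - intake_at >= INTAKE_TO_SUGGESTION_S:
            suggest("Stop by a restroom (rest area)?")                # pop-up or voice
            if majority_vote("restroom"):                             # S203
                add_stop("nearest rest area")                         # S204: add and guide
            intake_at = None                                          # reset for next intake
        time.sleep(poll_s)
```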
Hereinafter, a method for suggesting stopping by facilities in the shared autonomous vehicle will be described in detail with reference toFIG.6according to another form of the present disclosure.FIG.6is a flowchart illustrating a method for suggesting stopping by facilities in the shared autonomous vehicle, according to another form of the present disclosure. Hereinafter, it is assumed that the via-facilities suggesting apparatus100ofFIG.1performs the process ofFIG.6. In addition, in the description made with reference toFIG.6, it may be understood that operations described as being performed by an apparatus are controlled by the processor140of the via-facilities suggesting apparatus100. Referring toFIG.6, the via-facilities suggesting apparatus100determines whether the via-facilities suggesting setting is in an On state, by determining the setting information of users, in the shared autonomous vehicle used by at least two users (S301). In this case, when a plurality of users are present, the On/Off state of the via-facilities suggesting setting may be determined by a majority decision. When the via-facilities suggesting setting is determined as being in the Off state by the majority decision, the via-facilities suggesting apparatus100provides a path guide to a destination without suggesting a stop, and performs S200ofFIG.5. When the via-facilities suggesting setting is determined to be in the On state by the majority decision, the via-facilities suggesting apparatus100outputs, in a pop-up form to the user terminal, an inquiry about whether to provide the list of facilities that the vehicle is able to stop by, receives, from the users, responses to the inquiry, and determines whether to provide the list of facilities, by the majority decision (S302). When a majority of the users do not select the guide to the list of the facilities, the via-facilities suggesting apparatus100provides the path guide to the destination without suggesting the stop and performs S200ofFIG.5thereafter. When a majority of the users select the guide to the list of the facilities, the via-facilities suggesting apparatus100provides the list of the facilities in a pop-up form (S303). Accordingly, the via-facilities suggesting apparatus100receives a selection for at least one stop from the list of facilities from the users and determines whether stops having the same brand name are present in the selected stops (S304). The via-facilities suggesting apparatus100may add the selected stop to a path and may guide to the path, when the stops having the same brand name are absent in the selected stops (S305). When stops having the same brand name are present among the selected stops, the via-facilities suggesting apparatus100provides a guide in a pop-up form such that a stop is selected again from a list of the facilities having the same brand name (S306). In this case, when the stops having the same brand name are present in the selected stops, the via-facilities suggesting apparatus100re-organizes the list of the facilities by priorities of distances, prices, and congestion degrees, and performs the path guide. When a plurality of stops are present, the via-facilities suggesting apparatus100adds the plurality of stops, sets a final path, and asks the users for consent to the stops in a pop-up form on the user terminals.
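The repeated majority decisions in this flow (S301, S302, and approval of an added stop) reduce to a simple vote count; the following is a minimal sketch with hypothetical data:

```python
def majority_approves(votes):
    """True if strictly more than half of the occupants voted yes."""
    yes = sum(1 for v in votes if v)
    return yes > len(votes) / 2

# S301: the via-facilities suggesting setting is treated as On by majority decision.
occupant_settings = [True, True, False]            # per-occupant On/Off setting
setting_on = majority_approves(occupant_settings)  # True

# S302: whether to provide the list of facilities is likewise decided by majority.
responses = [True, False, False]
show_list = majority_approves(responses)           # False: guide to destination only
```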
In addition, when a user, who wants to add a stop, requests adding the stop in the shared autonomous vehicle, and when more than half of the remaining users approve of the addition of the stop, the via-facilities suggesting apparatus100may add the stop to the path and guide to the path. In addition, when the addition of the stop is approved, the via-facilities suggesting apparatus100may allow the user, who adds the stop, to distribute his or her rewards to the other users who approved of the addition of the stop, to compensate the other users for the inconvenience. As described above, in the case of the shared autonomous vehicle, stopping by is determined by a majority decision of the users using the vehicle, and users, who do not want to stop by but approve of the addition of the stop, may receive double rewards usable like cash, or may be compensated with a discount rate. As described above, a stop on the driving path is suggested by detecting the tendency of the user, or whether the user intakes food or beverage is detected by using the camera installed in the vehicle such that the stop necessary for the user is suggested in advance, thereby improving user convenience. FIG.7illustrates a computing system, according to an exemplary form of the present disclosure. Referring toFIG.7, a computing system1000may include at least one processor1100, a memory1300, a user interface input device1400, a user interface output device1500, a storage1600, and a network interface1700, which are connected with each other via a bus1200. The processor1100may be a central processing unit (CPU) or a semiconductor device for processing instructions stored in the memory1300and/or the storage1600. Each of the memory1300and the storage1600may include various types of volatile or non-volatile storage media. For example, the memory1300may include a read only memory (ROM) and a random access memory (RAM). Thus, the operations of the methods or algorithms described in connection with the forms disclosed in the present disclosure may be directly implemented with a hardware module, a software module, or the combinations thereof, executed by the processor1100. The software module may reside on a storage medium (i.e., the memory1300and/or the storage1600), such as a RAM, a flash memory, a ROM, an erasable and programmable ROM (EPROM), an electrically EPROM (EEPROM), a register, a hard disc, a removable disc, or a compact disc-ROM (CD-ROM). The exemplary storage medium may be coupled to the processor1100. The processor1100may read out information from the storage medium and may write information in the storage medium. Alternatively, the storage medium may be integrated with the processor1100. The processor and storage medium may reside in an application specific integrated circuit (ASIC). The ASIC may reside in a user terminal. Alternatively, the processor and storage medium may reside as separate components of the user terminal. Hereinabove, although the present disclosure has been described with reference to exemplary forms and the accompanying drawings, the present disclosure is not limited thereto, but may be variously modified and altered by those skilled in the art to which the present disclosure pertains without departing from the spirit and scope of the present disclosure claimed in the following claims. Therefore, forms of the present disclosure are not intended to limit the technical spirit of the present disclosure, but provided only for the illustrative purpose.
The scope of protection of the present disclosure should be construed by the attached claims, and all equivalents thereof should be construed as being included within the scope of the present disclosure. As described above, according to the present disclosure, stopping by facilities suitable for a user tendency, or by facilities necessary based on a user situation, may be suggested in advance during personal autonomous driving or during the use of a shared vehicle in the middle of controlling autonomous driving, thereby increasing user convenience and improving the quality of a product. Besides, a variety of effects directly or indirectly understood through the disclosure may be provided. | 35,391 |
11859997 | DETAILED DESCRIPTION According to an aspect of the disclosure, an operating method of an electronic device includes: obtaining image data of a first resolution and image data of a second resolution for each of a plurality of nodes generated while the electronic device moves; obtaining location information with respect to each of the generated nodes, by using the image data of the second resolution; generating and storing map data by matching the obtained location information with the image data of the first resolution for each node; and estimating a current location of the electronic device by using the generated map data and the image data of the first resolution. According to another aspect of the disclosure, an electronic device includes: at least one sensing portion configured to obtain image data of a first resolution and image data of a second resolution for each of a plurality of nodes generated while the electronic device moves; a memory storing one or more instructions; and a processor configured to execute the one or more instructions stored in the memory to: obtain location information with respect to each of the generated nodes by using the image data of the second resolution, generate map data by matching the obtained location information with the image data of the first resolution for each node, store the generated map data in the memory, and estimate a current location of the electronic device by using the generated map data and the image data of the first resolution. According to another aspect of the disclosure, a computer-readable recording medium has recorded thereon a program for executing the operating method of the electronic device on a computer. Hereinafter, embodiments of the disclosure will be described in detail with reference to the accompanying drawings so that one of ordinary skill in the art could easily execute the disclosure. However, the disclosure may have different forms and should not be construed as being limited to the embodiments described herein. Also, in the drawings, parts not related to descriptions are omitted for the clear description of the disclosure, and throughout the specification, like reference numerals are used for like elements. Throughout the specification, when a part is referred to as being “connected” to other parts, the part may be “directly connected” to the other parts or may be “electrically connected” to the other parts with other devices therebetween. When a part “includes” a certain element, unless it is specifically mentioned otherwise, the part may further include other components rather than excluding them. Also, the terms, such as “unit” or “module,” used in the specification, should be understood as a unit that processes at least one function or operation and that may be embodied in a hardware manner, a software manner, or a combination of the hardware manner and the software manner. Hereinafter, the disclosure will be described in detail with reference to the accompanying drawings. In this specification, a vehicle1may include an electronic device100(hereinafter, the electronic device100) for assisting in or controlling driving of the vehicle1.
FIG.1is a view for describing an example of an operation of an electronic device according to an embodiment. Referring toFIG.1, the electronic device100included in the vehicle1may generate map data by recognizing a surrounding environment through a sensing portion110, while the vehicle1drives on the road. According to an embodiment, image data of different resolutions may be obtained for a plurality of nodes generated while the vehicle1moves. The plurality of nodes may be non-continually generated while the vehicle1moves. According to an embodiment, the image data obtained for each node may include a 3D point cloud, an image, etc. Also, the image data according to an embodiment may include a distribution chart indicating information sensed with respect to a two-dimensional or a three-dimensional space. However, the image data according to an embodiment is not limited to the examples described above and may include various types of data indicating information collected about surrounding environmental conditions at a certain location. The node according to an embodiment may correspond to a location of the vehicle1including the electronic device100when the image data is obtained. According to an embodiment, the electronic device100may generate a plurality of nodes according to a time interval or a distance interval. However, it is not limited thereto, and the electronic device100may non-continually generate a plurality of nodes. For example, when a location of the electronic device100at a temporal point t is node A, a location of the electronic device100at a temporal point t+1 may correspond to node B adjacent to the node A. A path through which the vehicle1including the electronic device100drives may be a set of continual nodes. According to an embodiment, when the vehicle1including the electronic device100moves, images of different resolutions including the surrounding environment of the vehicle1may be captured at the nodes. The electronic device100may generate the map data by using the image data of different resolutions. According to an embodiment, the electronic device100may obtain pose information of the vehicle1by using image data of a high resolution and may obtain location information for a current node by using the pose information of the vehicle. For example, the location information for the current node may be obtained by calculating a distance and a direction of movement of the vehicle from a location of a previous node, based on the pose information of the vehicle. The electronic device100may generate the map data by matching the location information of the current node with image data of a low resolution. When generating the map data, it may be difficult to distinctly compare features of two images by using only the image data of the low resolution, and thus the electronic device100may have difficulty accurately obtaining location information corresponding to the image data of the low resolution. However, according to an embodiment, the electronic device100may obtain location information of the electronic device100, the location information having high accuracy, by using image data of a high resolution, and may generate the map data including image data of a low resolution by using the location information. According to an embodiment, the electronic device100may estimate a current location of the electronic device100by using the generated map data and the image data of the first resolution (for example, the image data of the low resolution).
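A minimal sketch of this matching-based estimation follows; the map-entry structure and the use of cosine similarity are illustrative assumptions, not the disclosed algorithm:

```python
import numpy as np

def make_map_entry(location_xy, low_res_image):
    """One map-data record: location info (derived from the second-resolution
    data) matched with the first-resolution image of the same node."""
    return {"location": np.asarray(location_xy, dtype=float),
            "image": np.asarray(low_res_image, dtype=float)}

def estimate_location(query_image, map_entries):
    """Pick the stored node whose first-resolution image best matches the
    image observed at the current location (cosine similarity here)."""
    q = np.asarray(query_image, dtype=float).ravel()
    def similarity(entry):
        m = entry["image"].ravel()
        return float(q @ m) / (np.linalg.norm(q) * np.linalg.norm(m) + 1e-9)
    return max(map_entries, key=similarity)["location"]

# Example: two stored nodes; the query resembles the second node's image.
entries = [make_map_entry((0, 0), [[1, 0], [0, 1]]),
           make_map_entry((5, 2), [[0, 1], [1, 0]])]
print(estimate_location([[0, 1], [1, 0]], entries))  # -> [5. 2.]
```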
For example, the electronic device100may estimate the current location of the electronic device100by obtaining, from among the image data of the first resolution of the map data, image data which is most closely matched to the image data of the first resolution, which is obtained with respect to the current location. According to an embodiment, the electronic device100may determine a range for the current location of the electronic device100and estimate the current location by using map data corresponding to at least one node included in the determined range. The range for the current location of the electronic device100may include an area which may be estimated as the current location of the electronic device100. The electronic device100may use the map data corresponding to the range for the current location, in order to minimize the amount of calculations taken to compare image data of the map data with image data obtained at the current location. According to an embodiment, the range for the current location of the electronic device100may be determined based on at least one of information about a previous location of the electronic device100and global positioning system (GPS) information about the current location of the electronic device100. For example, the electronic device100may determine the range which may include the current location based on the movement of the electronic device100, based on the previous location. Also, the electronic device100may determine the range which may include the current location, based on the GPS information and an error bound of the GPS information. The disclosure is not limited to the examples described above. The range for the current location of the electronic device100may be determined based on information collected using various methods with respect to the current location of the electronic device100. The pose information of the vehicle1according to an embodiment may include 6-degree-of-freedom information. The 6-degree-of-freedom information may include information about a direction in which a vehicle moves and rotation of the vehicle. For example, the 6-degree-of-freedom information may include at least one of x, y, z, roll, yaw, and pitch. The x, y, z values may include information about a direction (e.g., a vector value) in which the vehicle moves. The roll value may be an angle of rotation in a counter-clockwise direction based on an x-axis, the yaw value may be an angle of rotation in a counter-clockwise direction based on a y-axis, and the pitch value may be an angle of rotation in a counter-clockwise direction based on a z-axis. The yaw value may indicate a movement direction of the vehicle1and the pitch value may indicate whether the vehicle1moves on a slope or over a bump. The pose information of the vehicle1may be obtained based on the number of rotations of a wheel of the vehicle1and a direction of the rotation, which are measured through an odometry sensor230. However, the pose information measured through the odometry sensor230may have low accuracy, due to a slipping phenomenon generated between the wheel and the ground surface. Thus, the electronic device100according to an embodiment may obtain the pose information of the vehicle1, the pose information having high accuracy, by using image data of a second resolution and may obtain location information of the vehicle1based on the pose information.
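For illustration, the 6-degree-of-freedom pose described above can be represented as a simple record; this sketch follows the axis convention stated in the text (roll about x, yaw about y, pitch about z) and is otherwise a hypothetical structure:

```python
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """6-degree-of-freedom pose of the vehicle at one node."""
    x: float = 0.0      # translation components (movement direction as a vector)
    y: float = 0.0
    z: float = 0.0
    roll: float = 0.0   # counter-clockwise rotation about the x-axis (radians)
    yaw: float = 0.0    # counter-clockwise rotation about the y-axis (heading)
    pitch: float = 0.0  # counter-clockwise rotation about the z-axis (slope/bump)

# A nonzero pitch difference between consecutive nodes hints that the vehicle
# is on a slope or passing over a bump; yaw tracks the movement direction.
node_a = Pose6DoF()
node_b = Pose6DoF(x=1.2, z=0.05, yaw=0.02, pitch=0.1)
delta_pitch = node_b.pitch - node_a.pitch  # 0.1 rad: likely a slope or bump
```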
According to an embodiment, the electronic device100may obtain the pose information of the vehicle from the plurality of nodes, by using the image data of the second resolution. For example, the electronic device100may obtain difference values between the pose information of adjacent nodes, by using the image data of the second resolution, and based on the difference values of the pose information, may obtain pose information for each node, the pose information being optimized to have the least error. Also, the electronic device100may obtain location information for a current node of the vehicle1, based on the pose information of the current node and a previous node and location information of the previous node. The pose information of the vehicle may include a direction in which the vehicle moves and a direction of rotation. For example, the electronic device100may obtain the location information for the current node by obtaining information about a direction and a distance of the movement from the previous node to the current node, based on the pose information of at least one of the current node and the previous node. Also, the electronic device100may generate the map data by matching image data of a first resolution (for example, a low resolution) with respect to the current node to the obtained location information of the current node of the vehicle1. The electronic device100according to an embodiment may obtain the image data from each node, through the sensing portion110including a radar sensor226, a lidar sensor227, an image sensor228, etc. The image data of the first resolution described above may be generated by a sensor using radio waves, for example, the radar sensor226. Also, the image data of the second resolution described above may be generated by a sensor using a laser beam or light, for example, the lidar sensor227, the image sensor228, etc. For example, the electronic device100may obtain the image data of the first resolution (for example, the low resolution) by using the radar sensor226and obtain the image data of the second resolution (for example, the high resolution) by using at least one of the lidar sensor227and the image sensor228. Thus, according to an embodiment, once the map data including the image data of the low resolution has been generated, a current location of a moving object may be estimated without expensive equipment (for example, a lidar sensor) for capturing an image of a high resolution; accurate location estimation is possible by using only a less expensive device (for example, a radar sensor) for capturing image data of a low resolution. Also, when the image data of the second resolution (for example, the high resolution) described above includes image data at which a difference of the pose information between adjacent nodes is to be identified, the image data of the second resolution may be used to obtain the pose information, according to an embodiment. Thus, the electronic device100may obtain the location information according to an embodiment by using the image data of the second resolution obtained by using, for example, a single-channel lidar sensor having one beam. The electronic device100according to an embodiment may perform the operation according to an embodiment without including an expensive lidar sensor having a plurality of beams.
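As a minimal planar sketch of deriving the current node's location from the previous node's location and the pose difference (the 2D simplification and function names are assumptions for illustration):

```python
import math

def advance_location(prev_xy, prev_yaw, forward_m, delta_yaw):
    """Planar update: the current node's location follows from the previous
    node's location plus the movement direction and distance implied by the
    pose difference between the two nodes."""
    yaw = prev_yaw + delta_yaw
    x = prev_xy[0] + forward_m * math.cos(yaw)
    y = prev_xy[1] + forward_m * math.sin(yaw)
    return (x, y), yaw

# Chain node locations along edges of (distance moved, heading change).
loc, yaw = (0.0, 0.0), 0.0
for forward_m, delta_yaw in [(1.0, 0.0), (1.0, math.pi / 2), (1.0, 0.0)]:
    loc, yaw = advance_location(loc, yaw, forward_m, delta_yaw)
print(loc)  # approximately (1.0, 2.0) after one left turn
```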
Also, when the image data of the first resolution is an image generated by using radio waves, speed detection using a Doppler effect is possible with respect to an object in an image. A dynamic object having a speed is desirably excluded from image data, in generating the map data. That is, the electronic device100according to an embodiment may identify a dynamic object from objects in an image, based on speed information, and generate or modify and refine the map data by using the image data from which the dynamic object is excluded. For example, the electronic device100may obtain speed information with respect to the image data of the first resolution. The speed information may include, for example, a speed value corresponding to each unit area of the image data. The electronic device100may identify the dynamic object included in the image data of the first resolution, based on the speed information with respect to the image data of the first resolution. The electronic device100may remove the dynamic object identified in the image data of the first resolution. Also, the electronic device100may remove the identified dynamic object from the image data of the second resolution corresponding to the image data of the first resolution. Also, the electronic device100may generate the map data based on the image data of the first resolution and the image data of the second resolution, from which the dynamic object is removed. Thus, the map data according to an embodiment may include the image data of the first resolution including a static object. Also, when the electronic device100modifies and refines the map data, the electronic device100may modify and refine the map data based on the image data of the first resolution and the image data of the second resolution, from which the dynamic object is removed. To easily modify and refine the map data, the electronic device100may use only an area of the image data of the first resolution obtained to modify and refine the map data, the area including the identified static object, rather than a total area thereof. When an autonomous vehicle drives on a road, the autonomous vehicle may generate, modify, and refine map data about a surrounding environment by using various pieces of sensor information and estimate a current location of the vehicle on the map data. Here, as the vehicle contains more precise map data, a more accurate location of the vehicle may be estimated on the map data. Also,FIG.1illustrates that the electronic device100is included in the vehicle1. However, it is not limited thereto. According to an embodiment, a movable device or robot (not shown) may include the electronic device100. Also, the electronic device100according to an embodiment may generate the map data by using the image data of the first resolution including a distribution chart based on information sensed at a certain location. For example, the electronic device100may obtain an indoor temperature or dust distribution chart, or an indoor wireless signal strength distribution chart, as the image data of the first resolution, from each node, and may obtain location information based on the image data of the second resolution. The electronic device100may match the location information with the image data of the first resolution obtained from each node, to generate the map data including the indoor temperature or dust distribution chart, or the indoor wireless signal strength distribution chart.
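A minimal sketch of the dynamic-object removal described above, assuming a hypothetical per-cell speed map aligned with the radar image:

```python
import numpy as np

def remove_dynamic_objects(radar_image, speed_map, speed_threshold=0.5):
    """Mask out cells whose Doppler-measured speed exceeds a threshold,
    keeping only static structure for the map. `speed_map` is assumed to
    hold one speed value per unit area, aligned with `radar_image`."""
    static_mask = np.abs(speed_map) < speed_threshold
    cleaned = np.where(static_mask, radar_image, 0.0)  # zero out dynamic cells
    # The same mask can be projected onto the corresponding high-resolution
    # image so the dynamic object is removed from both resolutions.
    return cleaned, static_mask
```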
FIG.2shows an example path graph including a plurality of nodes according to an embodiment. Referring toFIG.2, the electronic device100may generate a path graph as a set of at least two nodes and edges between the at least two nodes. The graph may be generated by indicating the plurality of nodes as dots and connecting the adjacent nodes via edges. For example, the path graph p20may include an edge e21connecting a node21and a node22. Each of the nodes21and22may include pose information of the electronic device100according to an embodiment and the edge e21may include a difference value between pose information of adjacent nodes. The electronic device100according to an embodiment may obtain at least one of a difference value and a covariance between pose information of the adjacent nodes, as a value of the edge e21between the two nodes, based on the image data of the second resolution corresponding to each of the nodes21and22. The covariance may indicate a degree to which values of the pose information of the two nodes are changed in a correlated manner. According to an embodiment, based on at least one of the difference value and the covariance, the pose information of the node22may be obtained from the pose information of the node21. For example, the electronic device100may obtain the pose information with respect to at least one node connected to an edge, based on a value of the edge. For example, the electronic device100may obtain the pose information of the at least one node such that the pose information has the least error, based on at least one edge value. For example, the electronic device100may obtain at least one of the difference value and the covariance of the pose information, by comparing image data of the node21with image data of the node22. The pose information of the node21may include a pre-obtained value or a pre-defined value based on a certain condition, according to pose information of a previous node adjacent to the node21. Thus, according to an embodiment, the electronic device100may obtain the pose information of the node22, based on the pose information of the node21and the value of the edge e21. Also, the electronic device100may obtain location information of each node, by using the pose information of each node. For example, based on information about a moving distance and a moving direction of the electronic device100, the information being included in the pose information, the location information of the current node may be obtained from the location information of the previous node of the electronic device100. FIG.3shows an example of loop-closing according to an embodiment. According to an embodiment, the electronic device100may correct the pose information of each node such that a sum of error values of edges included in a path graph is minimized. According to an embodiment, the electronic device100may use simultaneous localization and mapping (SLAM) technologies, in which a moving vehicle or robot measures its location while simultaneously writing a map of a surrounding environment. The electronic device100may perform loop-closing based on a relative location of two adjacent nodes, by using graph-based SLAM technologies. The electronic device100may generate a loop-closure edge connecting two nodes, by using a relative distance, a relative angle, etc., between two nodes, to derive a corrected resultant value. Referring to a path graph path30ofFIG.3, the electronic device100may move in a counterclockwise direction from a node31to a node32.
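The node and edge bookkeeping of the path graph described above might be organized as in the following sketch; the field layout is an assumption for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    pose: tuple        # e.g. (x, y, yaw); up to 6 degrees of freedom in general

@dataclass
class Edge:
    src: int           # id of the first node (e.g. node 21)
    dst: int           # id of the second node (e.g. node 22)
    delta: tuple       # difference value between the two nodes' pose information
    covariance: list   # degree to which the pose errors vary together

@dataclass
class PathGraph:
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    edges: list = field(default_factory=list)

    def add_edge(self, src, dst, delta, covariance):
        self.edges.append(Edge(src, dst, delta, covariance))
```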
When the node31and the node32are located at the same location, the electronic device100may obtain optimized pose information having the least error, based on a value of at least one edge including an edge e31included in the path graph path30, according to the loop-closing correction method. For example, with the node31and the node32having the same location information as a pre-requisite condition, the optimized pose information of each node may be obtained. However, it is not limited to the example described above. The electronic device100may obtain the pose information of each node, by using various methods of optimizing the pose information, in addition to the loop-closing correction method. According to an embodiment, the electronic device100may obtain the value of the at least one edge included in the path graph path30, by using the image data of the second resolution for each node of the path graph path30. Also, the electronic device100may obtain the pose information of at least one node included in the path graph path30, the node being configured to have the least error, based on the value of the at least one edge. The electronic device100may obtain location information of each node, based on the pose information of each node. FIG.4is a view for describing an example of generating map data according to an embodiment. Referring toFIG.4, for example, the electronic device100may generate map data40including image data d41corresponding to a node41and image data d42corresponding to a node42. The image data d41and d42may be the image data of the first resolution (low resolution) described above. According to an embodiment, the electronic device100may generate the map data by storing image data of a first resolution and location information, corresponding to each node. According to an embodiment, the map data may be realized in the form of a 3D point cloud map, a 2D grid map, a 3D voxel map, etc., based on the image data of the first resolution and the location information, but is not limited thereto. Also, according to an embodiment, the map data may be realized in the various forms (for example, a feature map, a semantic map, a dense map, a texture map, etc.) according to types of data included in a map when the map is generated. For example, the electronic device100may generate the map data in the form of the 3D point cloud map, by using image data in a 3D point cloud form corresponding to each node, based on location information of each node of a corrected path graph. The image data may be, for example, the image data of the first resolution. Also, the electronic device100may generate the map data generated by converting the image data in the 3D point cloud form corresponding to each node into a 3D voxel form. Also, the electronic device100may generate the map data in a 2D grid form by using only a point cloud corresponding to each node or a ground surface of a road extracted from image data in an image form. FIG.5is a block diagram of an electronic device according to an embodiment. According to an embodiment, the electronic device100may include the sensing portion110, a processor120, and a memory130.FIG.5illustrates only components of the electronic device100, the components being related to the present embodiment. Thus, it will be understood by one of ordinary skill in the art that other general-purpose components than the components illustrated inFIG.5may further be included. 
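Returning to the map forms discussed above forFIG.4, one simple way to realize the 2D grid form from point-cloud image data is sketched below; the cell size and grid shape are illustrative:

```python
import numpy as np

def point_cloud_to_grid(points, cell_size=0.1, grid_shape=(512, 512)):
    """Rasterize an (N, 3) point cloud into a 2D occupancy grid by dropping
    the height axis -- one simple realization of the 2D grid form."""
    grid = np.zeros(grid_shape, dtype=np.uint8)
    origin = np.array(grid_shape) // 2            # put the node at the grid center
    cells = (points[:, :2] / cell_size).astype(int) + origin
    inside = ((cells >= 0) & (cells < np.array(grid_shape))).all(axis=1)
    grid[cells[inside, 0], cells[inside, 1]] = 1  # mark occupied cells
    return grid
```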
According to an embodiment, the sensing portion110may obtain a peripheral image including objects located around the vehicle1(FIG.1) driving on a road. Also, the sensing portion110according to an embodiment may obtain the peripheral image described above as image data of different resolutions. The sensing portion110may include a plurality of sensors configured to obtain the peripheral image. For example, the sensing portion110may include a distance sensor, such as a lidar sensor and a radar sensor, and an image sensor, such as a camera. According to an embodiment, the lidar sensor of the sensing portion110may generate the image data of the second resolution (for example, the high resolution) described above, and the radar sensor may generate the image data of the first resolution (for example, the low resolution). Also, the sensing portion110may include one or more actuators configured to correct locations and/or alignments of the plurality of sensors, and thus may sense an object located at each of a front direction, a rear direction, and side directions of the vehicle1. Also, the sensing portion110may sense a shape of a peripheral object and a shape of a road by using the image sensor. According to an embodiment, the processor120may include at least one processor. Also, the processor120may execute one or more instructions stored in the memory130. According to an embodiment, the processor120may generate map data by using the image data of different resolutions. For example, the processor120may obtain location information of a plurality of nodes by using image data of a second resolution (for example, a high resolution). Also, the processor120may generate the map data by matching the location information of each node with image data of a first resolution (for example, a low resolution) of each node. Also, the processor120may obtain the location information of each node by using image data of a second resolution (for example, a high resolution). For example, the processor120may obtain pose information of the electronic device100by using the image data of the second resolution (for example, the high resolution) and obtain the location information of each node by using the pose information of the electronic device100. Also, the processor120may obtain at least one of a difference value and a covariance between pose information of a first node and a second node, and based on the obtained at least one, may obtain the location information of the second node from the location information of the first node. Also, the processor120may obtain the at least one of the difference value and the covariance between the pose information described above by comparing the image data of the second resolution with respect to the first node and the second node. The pose information described above may include 6-degree-of-freedom information of the electronic device100. Also, the processor120may determine a range of a current location based on information about the current location obtained in various methods and may estimate the current location based on map data corresponding to at least one node included in the determined range. For example, the range of the current location may be determined based on at least one of information about a previous location of the electronic device and GPS information about the current location of the electronic device. 
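The range determination described above can be sketched as follows; the (center, radius) representation and all parameter names are assumptions:

```python
def determine_range(prev_location=None, max_travel=None, gps_fix=None, gps_error=None):
    """Bound the area that can contain the current location, either from the
    previous location plus the maximum distance traveled since then, or from
    a GPS fix plus its error bound. Returns a (center, radius) circle."""
    circles = []
    if prev_location is not None and max_travel is not None:
        circles.append((prev_location, max_travel))
    if gps_fix is not None and gps_error is not None:
        circles.append((gps_fix, gps_error))
    # Use the tightest available bound; only map nodes inside this circle
    # need their stored first-resolution images compared.
    return min(circles, key=lambda c: c[1]) if circles else None
```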
Also, the map data corresponding to the at least one node included in the range of the current location may include at least one piece of image data of a first resolution corresponding to the at least one node. Also, the processor120may identify a dynamic object in the image data of the first resolution, based on speed information with respect to the image data of the first resolution. The processor120may remove the identified dynamic object from at least one of the image data of the first resolution and the image data of the second resolution. Thus, the processor120may generate the map data by using the image data from which the dynamic object is removed. The memory130according to an embodiment may store one or more instructions performed by the processor120. For example, the memory130may store various data and programs for driving and controlling the electronic device100under control of the processor120. Also, the memory130may store signals or data that is input/output based on operations of the sensing portion110and the processor120. The memory130may store the map data generated by the processor120under control of the processor120. FIG.6is a block diagram of an electronic device according to an embodiment. The electronic device100may include the sensing portion110, the processor120, the memory130, an outputter140, an inputter150, and a communicator160. The electronic device100, the sensing portion110, the processor120, and the memory130illustrated inFIG.6may correspond to the electronic device100, the sensing portion110, the processor120, and the memory130ofFIG.5, respectively. The sensing portion110may include a plurality of sensors configured to sense information about a surrounding environment in which the vehicle (FIG.1) is located and may include one or more actuators configured to correct locations and/or alignments of the sensors. For example, the sensing portion110may include a GPS224, an inertial measurement unit (IMU)225, a radar sensor226, a lidar sensor227, an image sensor228, and an odometery sensor230. Also, the sensing portion110may include at least one of a temperature/humidity sensor232, an infrared sensor233, an atmospheric sensor235, a proximity sensor236, and an RGB illuminance sensor237, but is not limited thereto. A function of each sensor may be intuitively inferred by one of ordinary skill in the art based on a name of the sensor, and thus, its detailed description is omitted. Also, the sensing portion110may include a motion sensing portion238configured to sense a motion of the vehicle1(FIG.1). The motion sensing portion238may include a magnetic sensor229, an acceleration sensor231, and a gyroscope sensor234. The GPS224may include a sensor configured to estimate a geographical location of the vehicle1(FIG.1). That is, the GPS224may include a transceiver configured to estimate a location of the vehicle1(FIG.1) on the earth. According to an embodiment, a range of a current location of the vehicle1may be determined based on GPS information with respect to the current location of the vehicle1. The current location of the vehicle1may be estimated based on the map data obtained based on the determined range. The IMU225may be a combination of sensors configured to sense changes of a location and an alignment of the vehicle1(FIG.1) based on inertia acceleration. For example, the combination of sensors may include accelerometers and gyroscopes. 
The radar sensor226may include a sensor configured to sense objects in an environment in which the vehicle1(FIG.1) is located, by using wireless signals. Also, the radar sensor226may be configured to sense a speed and/or a direction of objects. The lidar sensor227may include a sensor configured to sense objects in an environment in which the vehicle1(FIG.1) is located, by using a laser beam. In more detail, the lidar sensor227may include a laser light source and/or a laser scanner configured to emit a laser beam, and a sensor configured to sense reflection of the laser beam. The lidar sensor227may be configured to operate in a coherent (for example, using heterodyne sensing) or an incoherent sensing mode. The image sensor228may include a still camera or a video camera configured to record an environment outside the vehicle1(FIG.1). For example, the image sensor228may include a plurality of cameras and the plurality of cameras may be arranged at various locations inside and outside of the vehicle1(FIG.1). The odometery sensor230may estimate the location of the vehicle1(FIG.1) and measure a moving distance. For example, the odometery sensor230may measure a value of a location change of the vehicle1(FIG.1) by using the number of rotations of a wheel of the vehicle1(FIG.1). Also, the location of the electronic device100may be measured by using the methods of trilateration, triangulation, etc., using sensors and communication devices, such as 3G, LTE, a global navigation satellite system (GNSS), a global system for mobile communication (GSM), LORAN-C, NELS, WLAN, Bluetooth, etc. Also, when the electronic device100is in an indoor environment, a location of the electronic device100may be estimated by using sensors, such as indoor-GPS, Bluetooth, WLAN, VLC, active badge, GSM, RFID, visual tags, WIPS, WLAN, ultraviolet rays, magnetic sensors, etc. The method of measuring the location of the electronic device100according to an embodiment is not limited to the examples described above. Other methods, in which location data of the electronic device100may be obtained, may also be used. The memory130may include a magnetic disk drive, an optical disk drive, and a flash memory. Alternatively, the memory130may include a portable USB data storage. The memory130may store system software configured to execute examples related to the disclosure. The system software configured to execute the examples related to the disclosure may be stored in a portable storage medium. The communicator160may include at least one antenna for wirelessly communicating with other devices. For example, the communicator160may be used to wirelessly communicate with cellular networks or other wireless protocols and systems through Wi-Fi or Bluetooth. The communicator160controlled by the processor120may transmit and receive wireless signals. For example, the processor120may execute a program included in the storage140for the communicator160to transmit and receive wireless signals to and from the cellular network. The inputter150refers to a device for inputting data for controlling the vehicle1(FIG.1). For example, the inputter150may include a key pad, a dome switch, a touch pad (a touch capacitance method, a pressure-resistive layer method, an infrared sensing method, a surface ultrasonic conductive method, an integral tension measuring method, a piezo effect method, etc.), a jog wheel, a jog switch, etc., but is not limited thereto. 
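Returning to the odometery sensor230described above, a minimal sketch of a differential wheel-odometry update from wheel rotation counts follows; the vehicle constants are illustrative:

```python
import math

def odometry_update(x, y, yaw, left_rotations, right_rotations,
                    wheel_circumference=2.0, track_width=1.6):
    """Differential wheel-odometry update from rotation counts of the left
    and right wheels. Slip between wheel and ground is what limits the
    accuracy of this estimate, as noted above."""
    d_left = left_rotations * wheel_circumference
    d_right = right_rotations * wheel_circumference
    d = (d_left + d_right) / 2.0              # distance traveled by the center
    dyaw = (d_right - d_left) / track_width   # heading change
    x += d * math.cos(yaw + dyaw / 2.0)
    y += d * math.sin(yaw + dyaw / 2.0)
    return x, y, yaw + dyaw
```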
Also, the inputter150may include a microphone, which may be configured to receive audio (for example, a voice command) from a passenger of the vehicle1(FIG.1). The outputter140may output an audio signal or a video signal, and an output device280may include a display281and a sound outputter282. The display281may include at least one of a liquid crystal display, a thin-film transistor-liquid crystal display, an organic light-emitting diode, a flexible display, a 3D display, and an electrophoretic display. According to a realized form of the outputter140, the outputter140may include at least two displays281. The sound outputter282may output audio data received from the communicator160or stored in the storage140. Also, the sound outputter282may include a speaker, a buzzer, etc. The inputter150and the outputter140may include a network interface and may be realized as a touch screen. The processor120may execute programs stored in the memory130to generally control the sensing portion110, the communicator160, the inputter150, the storage140, and the outputter140. FIG.7is a block diagram of a vehicle according to an embodiment. According to an embodiment, the vehicle1may include the electronic device100and a driving device200.FIG.7illustrates only components of the vehicle1, the components being related to the present embodiment. Thus, it will be understood by one of ordinary skill in the art that other general-purpose components than the components illustrated inFIG.7may further be included. The electronic device100may include the sensing portion110, the processor120, and the memory130. The sensing portion110, the processor120, and the memory130are described in detail inFIGS.5and6, and thus, their descriptions are omitted. The driving device200may include a brake unit221, a steering unit222, and a throttle223. The steering unit222may be a combination of mechanisms configured to adjust a direction of the vehicle1. The throttle223may be a combination of mechanisms configured to control a speed of the vehicle1by controlling an operating speed of an engine/motor211. Also, the throttle223may adjust an amount of a fuel-air mixture gas introduced into the engine/motor211by adjusting an opening amount of the throttle and may control power and a driving force by adjusting the opening amount of the throttle. The brake unit221may be a combination of mechanisms configured to decelerate the vehicle1. For example, the brake unit221may use friction to reduce a speed of a wheel/tire214. FIG.8is a flowchart of an operating method of an electronic device according to an embodiment. Referring toFIG.8, in operation810, the electronic device100may obtain image data of a first resolution and image data of a second resolution for each of a plurality of nodes generated while the electronic device100moves. The electronic device100according to an embodiment may obtain image data of different resolutions at each of the nodes, by using different sensors. In operation820, the electronic device100may obtain location information with respect to each node based on the image data of the second resolution. The electronic device100according to an embodiment may compare the image data of the second resolution with respect to a first node and a second node, to obtain at least one of a difference value and a covariance between pose information of the first node and the second node as a value of an edge between the two nodes.
According to an embodiment, the first node and the second node may be a previous node and a current node of the electronic device100, respectively. The covariance may indicate a degree to which values of the pose information of the two nodes are changed in a correlated manner. According to an embodiment, based on at least one of the difference value and the covariance, the pose information of the second node may be obtained from the pose information of the first node. Also, the electronic device100may obtain the pose information of the second node based on the value of the edge and the pose information of the first node. Alternatively, the electronic device100may obtain the pose information of at least one optimized node, according to a loop-closing correction method, based on the value of the edge. The electronic device100may obtain location information of the current node based on the pose information of each node. The electronic device100according to an embodiment may obtain location information having high accuracy by using image data of a high resolution, through which feature values of the image data may be distinctly compared. In operation830, the electronic device100may match and store the location information for each node obtained in operation820and the image data of the first resolution. The electronic device100may generate the map data by matching and storing the location information for each node and the image data of the first resolution. In operation840, the electronic device100may estimate the current location of the electronic device100by using the map data generated in operation830and the image data of the first resolution. The image data of the first resolution may be image data obtained at the current location of the electronic device100. According to an embodiment, the electronic device100may estimate the current location of the electronic device100by comparing the image data of the first resolution obtained at the current location with the image data of the first resolution included in the map data. For example, the electronic device100may determine, from the image data of the first resolution included in the map data, image data of the first resolution most closely matched to the image data of the first resolution obtained at the current location. The electronic device100may estimate the location information corresponding to the determined image data of the first resolution as the current location of the electronic device100. The device according to the embodiments described herein may include a processor, a memory for storing program data to be executed, a permanent storage such as a disk drive, a communication port for handling communications with external devices, and user interface devices, such as touch panels, keys, buttons, etc. Any methods implemented as software modules or algorithms may be stored as program instructions or computer-readable codes executable by a processor on a computer-readable recording medium. Here, the computer-readable recording media may include magnetic storage media (for example, read-only memory (ROM), random-access memory (RAM), floppy disks, hard disks, etc.) and optical reading media (for example, CD-ROMs, digital versatile disc (DVD), etc.). The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion. The media can be read by the computer, stored in the memory, and executed by the processor.
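Tying operations810to840together, a hedged end-to-end sketch follows; estimate_location refers to the matching sketch given earlier, and estimate_locations_from_high_res is a hypothetical stand-in for the pose-graph steps of operation820:

```python
def build_map_and_localize(nodes, current_low_res_scan):
    """End-to-end sketch of operations 810-840. Each entry of `nodes` is
    assumed to carry the low- and high-resolution images captured there."""
    # Operation 810: image data of both resolutions gathered per node (given).
    # Operation 820: per-node locations from the high-resolution data
    # (hypothetical helper standing in for the pose-graph steps above).
    locations = estimate_locations_from_high_res([n["high_res"] for n in nodes])
    # Operation 830: map data = low-resolution image matched to each location.
    map_data = [(loc, n["low_res"]) for loc, n in zip(locations, nodes)]
    # Operation 840: estimate the current location against the stored images.
    return estimate_location(current_low_res_scan, map_data)
```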
The present embodiment may be described in terms of functional block components and various processing steps. Such functional blocks may be realized by any number of hardware and/or software components configured to perform the specified functions. For example, the embodiment may employ various integrated circuit components, e.g., memory elements, processing elements, logic elements, look-up tables, and the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. Similarly, where the elements of the present embodiment are implemented using software programming or software elements, the embodiment may be implemented with any programming or scripting language such as C, C++, Java, assembler, or the like, with the various algorithms being implemented with any combination of data structures, objects, processes, routines or other programming elements. Functional aspects may be implemented in algorithms that execute on one or more processors. Furthermore, the present embodiment could employ any number of conventional techniques for electronics configuration, signal processing and/or control, data processing and the like. The words “mechanism,” “element,” and “component” are used broadly and are not limited to mechanical or physical embodiments. The meaning of these words can include software routines in conjunction with processors, etc. | 43,139
11859998 | DESCRIPTION OF EMBODIMENTS Exemplary embodiments of the present application will be illustrated in combination with the accompanying drawings in the following, which include various details of the embodiments of the present application to facilitate understanding, and they should be considered as merely exemplary. Therefore, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Also, for clarity and conciseness, description of well-known functions and structures are omitted in the following description. Vehicle to X (V2X) communication is a key technical direction of the Internet of Vehicles. The V2X communication safely and efficiently realizes information exchanges of various elements in vehicles and transportation systems. At the same time, with the rapid development of cities and the increase in road complexity, users' requirements for electronic maps are becoming stronger and stronger. Map data is the basis of the electronic maps, mainly including road information, etc. The road information includes, for example, road construction, road closures, and occurrence of traffic accidents. Since the road information is not static, it is necessary to update the map data according to the road information. Common map data updating methods include map data updating methods based on V2X, 5th generation (5G) or edge technology, which comprehensively, accurately and quickly recognize road information and update map data. Specifically, after multiple roadside units (RSUs) recognize road information, or after a control center recognizes the road information and sends it to the RSUs, the RSUs broadcast the road information to an On board Unit (OBU). After receiving the road information, the OBU uploads the road information to a server. The server utilizes this road information to update the map data. The road information is acquired by utilizing the V2X technology and reported to the server, and the map data is updated by the server according to the road information. The above map data updating methods do not consider a situation of malicious destruction of an On board device such as an OBU. When the On board device is maliciously damaged, the damaged OBU may report invalid road information through means such as blocking, forgery, and tampering. If the server fails to detect the invalid road information in time, then errors will occur in the map data updating. A problematic electronic map is generated based on incorrect map data. If a user utilizes such a problematic electronic map to navigate, it is very likely that a navigation route will be wrong, which will increase the travel cost of the user. Therefore, detection of invalid road information has become a key issue for the map data updating. The embodiments of the present application provide a map data updating method, an apparatus, a device, and a readable storage medium, which, by recognizing valid road information and updating the map data utilizing the valid road information, achieve the purpose of accurately updating the map data. First, terms involved in the embodiments of the present application are explained. V2X: refers to wireless communication technology for vehicles; the technology can safely and efficiently realize information exchanges of various elements in vehicles and transportation systems.
V represents vehicles, and X represents all objects that can perform information interaction with the vehicles, mainly including vehicles, persons, and traffic roadside infrastructure such as RSUs, networks, and the like. RSU: refers to roadside devices installed beside a road; the RSU is also referred to as a roadside unit. The RSU collects information on road, traffic and weather conditions; the information is processed by the RSU itself, or the RSU transmits the information to a control center to be processed by the control center. The processed information is broadcast to an OBU connected to the RSU, so as to realize an all-round connection among roads and vehicles, roads and persons, and roads and cloud platforms. Among them, the cloud platforms are also referred to as cloud servers, servers, V2X platforms, etc. OBU: is an On board device implementing V2X wireless communication. The OBU interacts with RSUs, V2X platforms and other OBUs, and the like, by utilizing the V2X communication technology, which helps drivers obtain a current driving environment, so as to instruct the drivers to drive stably and safely under various complex situations. Sequence: is also referred to as a V2X sequence, which is generated by a server according to road information uploaded by an electronic device such as an OBU. In a process of generating a sequence, the server categorizes and sorts multiple pieces of road information according to at least one of the types and occurrence locations of the road information and a time point when the OBU receives the road information, thereby obtaining the sequence. Common road information includes road construction, road closures or occurrence of traffic accidents. Next, a network architecture applied to the embodiments of the present application is illustrated in detail. FIG.1Ais a schematic diagram of a network architecture of a map data updating method according to an embodiment of the present application. Referring toFIG.1A, the network architecture includes: a server1, electronic devices2, a roadside unit3, and a camera4. Map data is stored in the server1, and the electronic devices2are, for example, OBUs, mobile phones, notebooks, tablet computers, and the like, andFIG.1Atakes the OBUs as an example. The camera4, for example, is a device installed on the roadside to take pictures of vehicles, pedestrians, and the like, on a road. Assuming that a vehicle collision accident occurs on the road, then collision information of an OBU of a vehicle involved in the collision will be reported to an RSU, as shown by {circle around (1)} in the figure. Or, the camera4connected to the roadside unit3photographs the road, and sends collision information to the roadside unit3, as shown by {circle around (2)} in the figure. The roadside unit3receives the collision information, recognizes the collision information to obtain road information, where the road information indicates that a vehicle on the road has been involved in a collision and the road is blocked. After recognizing the road information, the roadside unit3broadcasts the road information to enable the electronic devices2connected to it to receive the road information, as shown by {circle around (3)} in the figure. The electronic devices2send the received road information to the server1, as shown by {circle around (4)} in the figure.
The server1generates a sequence according to the received road information, and inputs road information contained in the sequence into a pre-trained neural network model, thereby recognizing whether the road information indicated by the sequence is valid. If the road information is valid, the map data is updated by utilizing the valid road information. FIG.1Bis a schematic diagram of another network architecture of a map data updating method according to an embodiment of the present application. Compared with the architecture shown inFIG.1A, this network architecture further includes a control center5, and the control center5is configured to control the RSU3. After the RSU receives the collision information sent by the electronic devices2or the collision information sent by the camera4, the RSU sends the collision information to the control center5. As shown by {circle around (5)} in the figure, after receiving the collision information, the control center recognizes the collision information to obtain road information, which indicates that a vehicle on the road has been involved in a collision and the road is blocked. After that, the control center5sends the road information to the roadside unit3, and the roadside unit3broadcasts the road information. Hereinafter, the map data updating method described in the embodiments of the present application is illustrated in detail based on the above-mentioned term explanations and the network architectures shown inFIG.1AandFIG.1B. Exemplary, reference is made toFIG.2. FIG.2is a flowchart of a map data updating method according to an embodiment of the present application. The execution subject of this embodiment is an electronic device, which is, for example, the server inFIG.1AandFIG.1B. This embodiment includes: 101: receive road information reported by an electronic device, where the road information is road information broadcast to the electronic device by a roadside unit. Referring toFIG.1AandFIG.1B, the electronic device is, for example, the electronic device that receives broadcast information from the RSU. The electronic device sends the received road information to the server, as shown by {circle around (4)} inFIG.1AandFIG.1B. 102: determine at least one sequence according to the road information, where road information belonging to the same sequence in the at least one sequence has the same type and occurrence location. After receiving the road information, the server obtains at least one sequence according to a type and an occurrence location of each road information. For example, the road information received by the server includes road information a to road information e, where a type of road information a is collision, a location is location A, and a time point is T1; a type of road information b is collision, a location is location A, and a time point is T2; a type of road information c is collision, a location is location A, and a time point is T3; a type of road information d is blockage, a location is location B, and a time point is T4; a type of road information e is blockage, a location is location B, and a time point is T5. Then, the server generates two sequences based on this road information, namely sequence 1: [road information a, road information b, road information c], and sequence 2: [road information d, road information e].
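A minimal sketch of this grouping and sorting (operation102); each item is assumed to be a dict with 'type', 'location', and 'received_at' keys:

```python
from collections import defaultdict

def build_sequences(road_infos):
    """Group road information by (type, location) and sort each group by
    the time the electronic device received it."""
    groups = defaultdict(list)
    for info in road_infos:
        groups[(info["type"], info["location"])].append(info)
    return [sorted(group, key=lambda i: i["received_at"])
            for group in groups.values()]
```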
103: input road information contained in each sequence in the at least one sequence into a pre-trained neural network model to obtain a recognition result of a corresponding sequence, where the recognition result is used to indicate whether the road information belonging to the corresponding sequence is valid, and when the road information belonging to the corresponding sequence is valid, the road information belonging to the corresponding sequence is real road information. Exemplary, a trained neural network model is pre-deployed on the server. The server sequentially inputs the road information contained in each sequence into the neural network model, and the neural network model learns the road information contained in this sequence to obtain an output result, which is used to indicate whether the road information belonging to this sequence is valid. For example, the server inputs sequence 1: [road information a, road information b, road information c] into the neural network model, the neural network model extracts a feature vector for each road information in sequence 1, and learns these feature vectors to obtain an output result. When the output result is 0, it means that the road information a, the road information b and the road information c are invalid. When the output result is 1, it means that the road information a, the road information b and the road information c are valid. 104: update map data by utilizing the road information belonging to the corresponding sequence if the road information belonging to the corresponding sequence is valid. Exemplary, after determining valid road information, the server updates the valid road information into the map data. When a user uses an electronic map, the server sends updated map data to an electronic device of the user, so that the electronic device of the user displays a map based on the updated map data, and performs navigation for the user. In the map data updating method provided by the embodiment of the present application, after receiving the road information reported by the electronic device, the server obtains multiple sequences according to the road information, where each road information belonging to the same sequence has the same type and location. After that, the server inputs each road information contained in the sequence into the pre-trained neural network model, so that the neural network model outputs the recognition result according to the sequence. If the recognition result indicates that the road information belonging to the sequence is valid, then the server updates the map data by utilizing the valid road information. With this solution, the server inputs each road information contained in the sequence into the neural network model, recognizes valid road information by combining context of each road information in the sequence and the neural network technology, and updates the map data, thereby achieving the purpose of accurately updating the map data. The embodiments of the present application are roughly divided into three stages: a pre-training model stage, a stage that utilizes the model to perform an online prediction, and a map data updating stage. In the following, these stages will be illustrated in detail, respectively. First, the pre-training model stage. 
In the above-mentioned embodiment, before the road information contained in each sequence in the at least one sequence is input into the pre-trained neural network model to obtain the recognition result of the corresponding sequence, the neural network model is further trained. In a process of training the neural network model, the server first acquires a sample set, and samples in the sample set include positive samples and negative samples, where the positive samples are real road information, and the negative samples are false road information. Then, the server divides the samples in the sample set to obtain at least one sample sequence. Samples belonging to the same sample sequence in the at least one sample sequence have the same type and occurrence location. Finally, the server trains an initial model according to the at least one sample sequence to obtain the neural network model. Exemplary, the positive samples and the negative samples are labeled from multiple samples in advance through manners such as manual labeling. After that, these samples are divided. In a division process, the server divides samples of the same type and location into a group. Next, for samples belonging to the same group, the server sorts these samples according to a receiving time of each sample, thereby obtaining a sequence. For example, road information divided into a group includes the following: the type of the road information a is collision, the location is location A, and the time point is T1; the type of the road information b is collision, the location is location A, and the time point is T2; the type of the road information c is collision, the location is location A, and the time point is T3. The three pieces of road information have the same type and location, but the time points when the electronic device receives the road information are different. Among them, T1, T2, and T3are time points when the same electronic device or different electronic devices receive the road information, and a sequential order is T2, T1and T3. Therefore, the sequence is [road information b, road information a, road information c]. Finally, the server trains the initial model according to the at least one sample sequence, and continuously optimizes parameters and the like of the initial model until the initial model reaches an optimal state, then the model with the optimal state is used as the trained neural network model. With this solution, after acquiring the samples reported by the electronic device, the server sorts the samples according to the type of each sample, the occurrence location of each sample, and the time point when the electronic device receives each sample, thereby achieving the purpose of acquiring the sample sequence. FIG.3is a schematic diagram of model training in a map data updating method according to an embodiment of the present application. Referring toFIG.3, the initial model contains five layers, namely an Embedding layer, a Bi-directional Long Short-Term Memory Recurrent Neural Network (BiLSTM) layer, a Concatenate layer, a Fully connected (FC) layer and a loss function layer, a loss function is, for example, softmax. Referring toFIG.3, the server sorts the samples to obtain at least one sequence, and the at least one sequence forms a sequence set S. After that, any sample sequence Siin the sequence set S is considered; hereinafter, it is referred to as an ithsample sequence. The ithsample sequence is obtained according to any sample sequence in the at least one sample sequence.
For example, the ithsample sequence is any sample sequence in the at least one sample sequence; for another example, the ithsample sequence is a subsequence of any sample sequence in the at least one sample sequence. It is assumed that the ithsample sequence Sicontains NSisamples, where NSi≥1 and NSiis an integer. The server extracts multiple consecutive samples to form a subsequence. Assuming that the samples contained in the subsequence are sample s0, sample s1, sample s2, and sample s3, the subsequence is expressed as: {s0, s1, s2, s3}. With this solution, when the number of samples is relatively small, the number of sequences is expanded by means of extracting subsequences, which improves the accuracy of model training. After obtaining the sequence set S, the server inputs road information contained in the ithsample sequence into the Embedding layer of the initial model, so that the Embedding layer extracts a feature vector of each sample in the ithsample sequence, and inputs the extracted feature vector into the BiLSTM layer. For example, if the ithsample sequence Siis {s0, s1, s2, s3}, then the Embedding layer extracts respective feature vectors of sample s0, sample s1, sample s2, and sample s3. The extracted feature vectors are input into the BiLSTM layer. In the BiLSTM layer, the server learns the feature vector of each sample in the ithsample sequence by utilizing the long short-term memory recurrent neural network layer of the initial model to obtain multiple context vectors, where each context vector of the multiple context vectors is used to indicate relationships among samples in the ithsample sequence. Exemplary, BiLSTM consists of forward LSTM and backward LSTM, and is usually used to model context information. After the feature vectors of the samples of the ithsample sequence Siare input into the BiLSTM layer, the BiLSTM layer obtains the multiple context vectors by utilizing context of each sample of the ithsample sequence Si. Each context vector of these context vectors carries relationships among samples. The server trains the Concatenate layer, the Fully Connected layer and the loss function layer of the initial model according to the multiple context vectors to obtain the neural network model. Exemplary, a loss function corresponding to the loss function layer is, for example, softmax. The server continuously adjusts parameters of the Concatenate layer, the Fully Connected layer and the loss function layer of the initial model according to the multiple context vectors, so that the parameters of the Concatenate layer, the Fully Connected layer and the loss function layer of the initial model are optimal, and the optimal initial model is used as the neural network model. With this solution, the server takes the samples in the ithsample sequence Siin the sequence set S as input, and continuously trains and optimizes the initial model, thereby achieving the purpose of obtaining the neural network model. When training the Concatenate layer, the Fully Connected layer and the loss function layer of the initial model according to the multiple context vectors to obtain the neural network model, the server first concatenates the multiple context vectors in the Concatenate layer of the initial model to obtain a concatenating vector; then, learns the Fully Connected layer and the loss function layer of the initial model by utilizing the concatenating vector to obtain the neural network model.
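The five layers ofFIG.3might be sketched in PyTorch as follows; every dimension is an assumption, with seq_len=4 echoing the {s0, s1, s2, s3} subsequence above, and this is an illustrative reading of the architecture rather than the patented model itself:

```python
import torch
import torch.nn as nn

class RoadInfoClassifier(nn.Module):
    """Sketch of the five layers in FIG. 3 (Embedding, BiLSTM, Concatenate,
    FC, softmax). Input: per-sample feature vectors of shape
    (batch, seq_len, feat_dim)."""
    def __init__(self, feat_dim=100, hidden=64, seq_len=4):
        super().__init__()
        self.embed = nn.Linear(feat_dim, hidden)   # stands in for the Embedding layer
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True,
                              bidirectional=True)  # BiLSTM: forward + backward LSTM
        self.fc = nn.Linear(seq_len * 2 * hidden, 2)

    def forward(self, x):
        ctx, _ = self.bilstm(self.embed(x))        # one context vector per sample
        flat = ctx.reshape(ctx.size(0), -1)        # Concatenate layer
        return self.fc(flat)                       # logits: valid / invalid

# Training would pair this with nn.CrossEntropyLoss, which combines the
# softmax layer and the loss computation.
```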
Exemplary, reference is made toFIG.3again, the BiLSTM layer outputs the multiple context vectors, and these vectors are input into the Concatenate layer. The Concatenate layer concatenates all the context vectors output by the BiLSTM layer to obtain a concatenating vector. The concatenating vector is input into the FC layer. After the concatenating vector is processed by the FC layer and the softmax layer, the parameters of each layer of the initial model are adjusted. With this solution, the purpose of adjusting the parameters of the Concatenate layer, the FC layer and the softmax layer of the initial model is realized. Second, the stage that utilizes the model to perform the online prediction. In a process of predicting whether the road information is valid after the neural network model is trained, the server receives the road information reported by the electronic device, such as the OBU, and sorts the received road information according to the types and the locations of the road information, so as to acquire multiple sequences. After that, the road information contained in the sequences is input into the pre-trained neural network model to judge whether the road information belonging to the sequences is valid. In the judgment process, the Embedding layer of the neural network model extracts a feature of each road information in the sequences to obtain the feature vector of each road information, and these feature vectors are input into the BiLSTM layer. After that, the BiLSTM layer learns the feature vectors and obtains multiple context vectors containing context information. The Concatenate layer concatenates these context vectors to obtain the concatenating vector. Finally, after the concatenating vector is processed by the FC layer and the Softmax layer, a recognition result can be obtained. The recognition result is used to indicate whether the road information of a certain type and a certain location is valid. Finally, the map data updating stage. In this stage, after obtaining the valid road information, the server extracts information such as a Global Positioning System (GPS) location and a type of the road information, and updates the map data according to the extracted information. The aforementioned stage that utilizes the model to perform the online prediction and the map data updating stage can be shown inFIG.4as follows.FIG.4is a process schematic diagram of a map data updating method according to an embodiment of the present application. Referring toFIG.4, the server is provided with a sequence extracting module, a valid road information recognizing module and a map data updating module, where the sequence extracting module is configured to extract sequences, for example, sort road information according to a type, an occurrence location and an OBU receiving time of each road information, so as to obtain the sequences. The valid road information recognizing module: is configured to mine valid road information. During a mining process, relevant features such as an OBU feature, an RSU feature, and a road information feature is extracted for each road information in each sequence, and a feature vector is generated. Then, the feature vector is input into a model to determine whether the road information is valid. The map data updating module: is configured to update map data. For example, information such as a GPS location and a type is extracted from valid road information, and the map data is updated. 
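The online prediction and map data updating stages (operations103and104) can be sketched together as follows; predict and apply_to_map are hypothetical stand-ins for the trained model's inference call and the map write step:

```python
def update_map(model, sequences, map_data):
    """Operations 103-104: run each sequence through the trained model and
    fold only the road information of valid sequences into the map data."""
    for seq in sequences:
        if model.predict(seq) == 1:  # 1: valid, 0: invalid
            for info in seq:
                # apply_to_map is a hypothetical stand-in that writes the
                # GPS location and type of the road information into the map.
                map_data = apply_to_map(map_data, info)
    return map_data
```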
In the following, how the server determines the feature vector of each sample in the ithsample sequence in the foregoing embodiment will be described in detail. For each sample of the ithsample sequence Siin the sequence set S, the server extracts at least one of an electronic device feature, a roadside unit RSU feature, and a road information feature corresponding to each sample, the electronic device feature is used to characterize an electronic device that reports the each sample, the RSU feature is used to characterize an RSU that broadcasts the each sample to the electronic device, and the road information feature is used to characterize the each sample. Then, the server generates the feature vector of the each sample according to at least one of the electronic device feature, the RSU feature, and the road information feature of the sample. Exemplary, for each sample, the server extracts the electronic device feature (for example, an OBU feature), the RSU feature, the road information feature, and the like, thereby generating a feature vector of this sample. In the following, the electronic device feature, the RSU feature and the road information feature are respectively described in detail. First, the electronic device feature. The electronic device feature is used to characterize the electronic device that reports the each sample, and includes an electronic device identification oid, the number of times that the electronic device reports the each sample No, and the number of times that the electronic device reports a valid sample Nov. A: the electronic device identification oid. For each electronic device, the server randomly generates a vector Rokwith a dimension of k, this vector Rokobeys a normal distribution N(0,1), and k is 32, for example. This vector Rokis used to represent the identification oid of the electronic device. B: The number of times that the electronic device reports the each sample Noand the number of times that the electronic device reports the valid sample Nov. The server counts offline the number of times that the electronic device historically reports the road information, and this number of times is the number of times that the electronic device reports the each sample No. The server also counts offline the number of times that the electronic device historically reports valid road information, and this number of times is the number of times that the electronic device reports the valid sample Nov. After counting the number of times that the electronic device reports the each sample Noand the number of times that the electronic device reports the valid sample Nov, the server de-duplicates the each sample reported by the electronic device to determine the number of times that the electronic device reports a non-repetitive sample; and de-duplicates the valid sample reported by the electronic device to determine the number of times that the electronic device reports a non-repetitive valid sample. Exemplary, the electronic device may receive road information from different RSUs; however, the road information broadcast by the different RSUs may be the same. Therefore, the same road information needs to be de-duplicated, and only one of the multiple duplicate pieces of road information is retained, and the rest are deleted.
At the same time, the number of times that the electronic device reports the each sample Nois adjusted according to the number of deleted samples, and the number of times that the electronic device reports the valid sample Novis adjusted according to the number of deleted valid samples. After de-duplication, the server utilizes Z-score to standardize the number of times that the electronic device reports the each sample No, so that the number of times that the electronic device reports the each sample Noobeys a normal distribution N(0,1). The server utilizes Z-score to standardize the number of times that the electronic device reports the valid sample Nov, so that the number of times that the electronic device reports the valid sample Novobeys a normal distribution N(0,1). With this solution, the samples and the valid samples reported by the electronic device are de-duplicated to ensure the uniqueness of the samples, and then the accuracy of the model is improved. Second, the RSU feature. The RSU feature is used to characterize the RSU that broadcasts the each sample to the electronic device, and includes at least one of an RSU identification rid, the total number of times that the RSU broadcasts the each sample Nr, and the number of times that the RSU broadcasts the valid sample Nrv. C: the identification rid of the RSU. For each RSU, the server randomly generates a vector Rrkwith a dimension of k, this vector Rrkobeys a normal distribution N(0,1), and k is 32, for example. This vector Rrkis used to represent the identification rid of the RSU. D: the total number of times that the RSU broadcasts the each sample Nrand the number of times that the RSU broadcasts the valid sample Nrv. The server counts offline the number of times that the RSU transmits road information to the electronic device such as the OBU, and this number of times is the total number of times that the RSU broadcasts the each sample Nr. The server also counts offline the number of times that the RSU sends the valid road information to the OBU, and this number of times is the number of times that the RSU broadcasts the valid sample Nrv. After determining the total number of times that the RSU broadcasts the each sample Nrand the number of times that the RSU broadcasts the valid sample Nrv, the server further removes the number of times that the RSU repeatedly broadcasts the each sample from the total number of times that the RSU broadcasts the each sample, and removes the number of times that the RSU repeatedly broadcasts the valid sample from the number of times that the RSU broadcasts the valid sample. Exemplary, since the RSU may broadcast the same road information multiple times, it is necessary to subtract the number of times of repeatedly broadcasting the same sample from the total number of times Nr, and remove the number of times of repeatedly broadcasting the same valid sample from the number of times Nrvthat the RSU broadcasts the valid sample. After de-duplication, the server utilizes Z-score to standardize the total number of times that the RSU broadcasts the each sample Nr, so that the total number of times that the RSU broadcasts the each sample Nrobeys a normal distribution N(0,1). The server utilizes Z-score to standardize the number of times that the RSU broadcasts the valid sample Nrv, so that the number of times that the RSU broadcasts the valid sample Nrvobeys a normal distribution N(0,1).
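A minimal sketch of the de-duplicated counting and Z-score standardization described above, with hypothetical report fields:

```python
import numpy as np

def dedup_count(reports):
    """Count non-repetitive reports by keying on their content
    (hypothetical 'type', 'location' and 'start' fields)."""
    return len({(r["type"], r["location"], r["start"]) for r in reports})

def zscore(counts):
    """Z-score standardization so the counts roughly obey N(0, 1)."""
    v = np.asarray(counts, dtype=float)
    return (v - v.mean()) / (v.std() + 1e-9)
```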
With this solution, the total number of times Nr that the RSU broadcasts samples and the number of times Nrv that the RSU broadcasts valid samples are de-duplicated to ensure the uniqueness of the samples, and the accuracy of the model is thereby improved.

Finally, the road information feature. In the embodiment of the present application, the road information feature is used to characterize the road information, and includes at least one of the following features: a sample type Ti, a sample location Is, a sample start time ts, a sample end time te, and a time tr when the electronic device receives the sample, where the sample location Is is used to characterize the geographic location where the sample occurs.

E: the sample type Ti. For each piece of road information, the server randomly generates a vector Rtk with a dimension of k, where this vector Rtk obeys a normal distribution N(0,1) and k is, for example, 32. This vector Rtk is used to represent the sample type Ti.

F: the sample start time ts. The sample start time ts is used to characterize the time point when the road information occurs. In order to ensure the continuity of time, the sample start time ts is represented by two features after a sine and cosine transformation. That is, the sample start time ts is expressed as: cos(2π·ts/(24×60×60)) and sin(2π·ts/(24×60×60)).

G: the sample end time te. The sample end time te is used to characterize the time point when the road information ends. In order to ensure the continuity of time, the sample end time te is treated consistently with the sample start time ts, and is also represented by two features after a sine and cosine transformation. That is, the sample end time te is expressed as: cos(2π·te/(24×60×60)) and sin(2π·te/(24×60×60)).

H: the time tr when the electronic device receives the sample. The time tr is used to characterize the time point when the electronic device such as the OBU receives the road information broadcast by the RSU. In order to ensure the continuity of time, the time tr is treated consistently with the sample end time te and the sample start time ts, and is also represented by two features after a sine and cosine transformation. That is, the time tr is expressed as: cos(2π·tr/(24×60×60)) and sin(2π·tr/(24×60×60)).

I: the sample location Is. The sample location Is is used to characterize the geographic location where the sample occurs. In order to improve the generalization ability of the location feature, a national map is divided into a set of square grids L with a side length of 100 meters, and monotonically increasing integers are used to identify the grids from top to bottom and from left to right. After that, the grid identifiers are standardized utilizing the Z-score so that they obey a normal distribution N(0,1). When determining the grid location of the road information, that is, the sample location Is, the server determines the grid where the road information is located according to the GPS information of the location where the information occurs, and then acquires the value corresponding to that grid.

The foregoing describes specific implementations of the map data updating method mentioned in the embodiments of the present application. The following are apparatus embodiments of the present application, which can be used to implement the method embodiments of the present application.
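Before moving on to the apparatus embodiments, here is a short sketch of the two road-information encodings described above: the sine/cosine transformation that keeps time-of-day features continuous across midnight, and the row-major grid identifier. The mapping from a position to a grid row/column is an illustrative assumption (a real system would first apply a proper map projection to the GPS coordinates):

```python
import math

SECONDS_PER_DAY = 24 * 60 * 60

def encode_time(t_seconds: float) -> tuple[float, float]:
    # Two features per timestamp, continuous at the day boundary.
    angle = 2 * math.pi * t_seconds / SECONDS_PER_DAY
    return math.cos(angle), math.sin(angle)

def grid_id(row: int, col: int, cols_per_row: int) -> int:
    # Monotonically increasing integers, top-to-bottom and left-to-right.
    return row * cols_per_row + col

print(encode_time(0), encode_time(SECONDS_PER_DAY))  # identical: midnight wraps
print(grid_id(row=2, col=5, cols_per_row=1000))      # 2005
```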
For details not disclosed in the apparatus embodiments of the present application, reference is made to the method embodiments of the present application.

FIG. 5 is a schematic structural diagram of a map data updating apparatus according to an embodiment of the present application. The apparatus can be integrated in a server or implemented by a server. As shown in FIG. 5, in this embodiment, the map data updating apparatus 100 may include: a receiving module 11, a determining module 12, a recognizing module 13, and an updating module 14. The receiving module 11 is configured to receive road information reported by an electronic device, where the road information is road information broadcast to the electronic device by a roadside unit; the determining module 12 is configured to determine at least one sequence according to the road information, where road information belonging to the same sequence in the at least one sequence has the same type and occurrence location; the recognizing module 13 is configured to input the road information contained in each sequence in the at least one sequence into a pre-trained neural network model to obtain a recognition result of the corresponding sequence, where the recognition result is used to indicate whether the road information belonging to the corresponding sequence is valid, and when the road information belonging to the corresponding sequence is valid, the road information belonging to the corresponding sequence is real road information; and the updating module 14 is configured to update map data by utilizing the road information belonging to the corresponding sequence if the road information belonging to the corresponding sequence is valid.

FIG. 6 is a schematic structural diagram of another map data updating apparatus according to an embodiment of the present application. As shown in FIG. 6, the map data updating apparatus 100 provided in this embodiment further includes, on the basis of the above-mentioned FIG. 5: a training module 15, configured to acquire a sample set before the recognizing module 13 inputs the road information contained in each sequence in the at least one sequence into the pre-trained neural network model to obtain the recognition result of the corresponding sequence, where samples in the sample set include positive samples and negative samples, the positive samples being real road information and the negative samples being false road information; divide the samples in the sample set to obtain at least one sample sequence, where samples belonging to the same sample sequence in the at least one sample sequence have the same type and occurrence location; and train an initial model according to the at least one sample sequence to obtain the neural network model.
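The four modules of FIG. 5 form a simple pipeline: receive, group into sequences, classify, update. A minimal sketch of that control flow follows; the `model.predict` and `map_data.apply` calls are hypothetical placeholders, and only the structure follows the text:

```python
from collections import defaultdict

def update_map_data(reports, model, map_data):
    # Determining module: road information with the same type and
    # occurrence location belongs to the same sequence.
    sequences = defaultdict(list)
    for info in reports:                      # the receiving module's output
        sequences[(info["type"], info["location"])].append(info)

    for seq in sequences.values():
        # Recognizing module: the model judges whether the sequence's
        # road information is valid (i.e., real).
        if model.predict(seq):                # placeholder classifier call
            map_data.apply(seq)               # updating module (placeholder)
    return map_data
```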
In a feasible design, when training the initial model according to the at least one sample sequence to obtain the neural network model, the training module 15 is configured to determine, for an i-th sample sequence, a feature vector of each sample in the i-th sample sequence in an embedding layer of the initial model, where the i-th sample sequence is obtained according to any sample sequence of the at least one sample sequence; learn, by utilizing a long short-term memory (LSTM) recurrent neural network layer of the initial model, the feature vector of each sample in the i-th sample sequence to obtain multiple context vectors, where each context vector of the multiple context vectors is used to indicate relationships among samples in the i-th sample sequence; and train a Concatenate layer, a Fully Connected layer and a loss function layer of the initial model according to the multiple context vectors to obtain the neural network model.

In a feasible design, when training the Concatenate layer, the Fully Connected layer and the loss function layer of the initial model according to the multiple context vectors to obtain the neural network model, the training module 15 is configured to concatenate the multiple context vectors in the Concatenate layer of the initial model to obtain a concatenated vector, and learn, by utilizing the concatenated vector, the Fully Connected layer and the loss function layer of the initial model to obtain the neural network model.

In a feasible design, the i-th sample sequence is any sample sequence in the at least one sample sequence, or the i-th sample sequence is a subsequence of any sample sequence in the at least one sample sequence.

In a feasible design, when determining, for the i-th sample sequence, the feature vector of each sample in the i-th sample sequence, the training module is configured to extract at least one of an electronic device feature, a roadside unit (RSU) feature, and a road information feature corresponding to each sample in the i-th sample sequence, and generate, for each sample in the i-th sample sequence, the feature vector of the sample according to at least one of the electronic device feature, the RSU feature, and the road information feature corresponding to the sample, where the electronic device feature is used to characterize the electronic device that reports the sample, the RSU feature is used to characterize the RSU that broadcasts the sample to the electronic device, and the road information feature is used to characterize the sample.

In a feasible design, the electronic device feature includes an identification of the electronic device, the number of times that the electronic device reports samples, or the number of times that the electronic device reports valid samples, and the training module is further configured to de-duplicate the samples reported by the electronic device to determine the number of times that the electronic device reports a non-repetitive sample, and de-duplicate the valid samples reported by the electronic device to determine the number of times that the electronic device reports a non-repetitive valid sample.
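The layer stack described in the first design above (feature vectors into an embedding, an LSTM producing per-sample context vectors, concatenation, a fully connected layer, and a loss) can be sketched in PyTorch as follows. This is a minimal illustrative model under assumed dimensions, not the patent's reference implementation:

```python
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, feat_dim=107, hidden=64, max_len=16):
        super().__init__()
        # LSTM layer: learns relationships among samples in the sequence,
        # producing one context vector per sample.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        # Fully Connected layer consumes the concatenated context vectors.
        self.fc = nn.Linear(hidden * max_len, 1)

    def forward(self, x):                     # x: (batch, max_len, feat_dim)
        ctx, _ = self.lstm(x)                 # context vectors per sample
        concat = ctx.reshape(x.size(0), -1)   # Concatenate layer
        return self.fc(concat).squeeze(-1)    # logit: valid vs. invalid

model = SequenceClassifier()
x = torch.randn(4, 16, 107)                   # 4 padded sample sequences
loss = nn.BCEWithLogitsLoss()(model(x), torch.ones(4))  # loss function layer
loss.backward()
```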
In a feasible design, the RSU feature includes an identification of the RSU, the total number of times that the RSU broadcasts samples, and the number of times that the RSU broadcasts valid samples, and the training module is further configured to remove the number of times that the RSU repeatedly broadcasts the same sample from the total number of times that the RSU broadcasts samples, and remove the number of times that the RSU repeatedly broadcasts the same valid sample from the number of times that the RSU broadcasts valid samples.

In a feasible design, the road information feature includes at least one of the following features: a sample type, a sample location, a sample start time, a sample end time, and a time when the electronic device receives the sample, where the sample location is used to characterize the geographic location where the sample occurs.

The map data updating apparatus provided in the embodiments of the present application can be used in the methods executed by the server in the above embodiments; its implementation principles and technical effects are similar and will not be repeated herein.

According to the embodiments of the present application, the present application further provides an electronic device and a readable storage medium.

FIG. 7 is a block diagram of an electronic device for implementing the map data updating method according to an embodiment of the present application. The electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular phones, smart phones, wearable devices and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementations of the present application described and/or claimed herein.

As shown in FIG. 7, the electronic device includes: one or more processors 21, a memory 22, and interfaces for connecting various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses, and can be mounted on a common motherboard or otherwise installed as required. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a Graphical User Interface (GUI) on an external input/output apparatus, such as a display device coupled to an interface. In other implementations, a plurality of processors and/or a plurality of buses may be used together with a plurality of memories, if desired. Similarly, a plurality of electronic devices can be connected, each device providing part of the necessary operations (for example, as a server array, a group of blade servers, or a multi-processor system). One processor 21 is taken as an example in FIG. 7.

The memory 22 is a non-transitory computer-readable storage medium provided by the present application. The memory stores instructions executable by at least one processor, so that the at least one processor executes the map data updating method provided by the present application.
The non-transitory computer-readable storage medium of the present application stores computer instructions, and the computer instructions are used to cause a computer to execute the map data updating method provided by the present application. As a non-transitory computer-readable storage medium, the memory 22 may be configured to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the map data updating method in the embodiments of the present application (for example, the receiving module 11, the determining module 12, the recognizing module 13, and the updating module 14 shown in FIG. 5, and the training module 15 shown in FIG. 6). The processor 21 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions and modules stored in the memory 22; that is, the map data updating method in the above-mentioned method embodiments is realized.

The memory 22 may include a storage program area and a storage data area, where the storage program area may store an operating system and at least one application program required for functions, and the storage data area may store data created according to the use of the electronic device implementing the map data updating method, and the like. In addition, the memory 22 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk memory device, flash memory device, or other non-transitory solid-state memory device. In some embodiments, the memory 22 may include memories remotely disposed with respect to the processor 21, and these remote memories may be connected through a network to the electronic device implementing the map data updating method. Examples of the above-mentioned network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.

The electronic device for implementing the map data updating method may further include: an input apparatus 23 and an output apparatus 24. The processor 21, the memory 22, the input apparatus 23, and the output apparatus 24 may be connected through a bus or by other means. In FIG. 7, a connection through a bus is taken as an example.

The input apparatus 23 may receive input numeric or character information, and generate key signal inputs related to user settings and function control of the electronic device implementing the map data updating method; examples include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a trackball, and a joystick. The output apparatus 24 may include a display device, an auxiliary lighting apparatus (e.g., an LED), a haptic feedback apparatus (e.g., a vibration motor), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.

Various implementations of the systems and technologies described herein may be implemented in a digital electronic circuit system, an integrated circuit system, an application-specific integrated circuit (ASIC), computer hardware, firmware, software, and/or combinations thereof.
These various implementations may include: being implemented in one or more computer programs, where the one or more computer programs are executable and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a dedicated or general-purpose programmable processor that may receive data and instructions from a storage system, at least one input apparatus, and at least one output apparatus, and transmit data and instructions to the storage system, the at least one input apparatus, and the at least one output apparatus.

These computer programs (also known as programs, software, software applications, or code) include machine instructions of a programmable processor, and can be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, device, and/or apparatus (e.g., a magnetic disk, an optical disk, a memory, a programmable logic device (PLD)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.

To provide interaction with the user, the systems and technologies described herein can be implemented on a computer having: a display apparatus (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing apparatus (e.g., a mouse or a trackball) through which the user can provide input to the computer. Other kinds of apparatuses may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (for example, visual feedback, audible feedback, or haptic feedback), and input from the user may be received in any form, including acoustic input, voice input or haptic input.

The systems and technologies described herein can be implemented in a computing system including back-end components (e.g., as a data server), or a computing system including middleware components (e.g., an application server), or a computing system including front-end components (e.g., a user computer with a graphical user interface or a web browser through which users can interact with implementations of the systems and technologies described herein), or a computing system that includes any combination of such back-end components, middleware components, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of the communication network include a local area network (LAN), a wide area network (WAN), and the Internet.

The computing system may include a client and a server. The client and the server are generally remote from each other and typically interact through a communication network. The relationship between the client and the server is generated by computer programs running on the corresponding computers and having a client-server relationship with each other.
An embodiment of the present application further provides a map data updating method, which receives road information reported by an electronic device, determines at least one sequence according to the road information, where road information belonging to the same sequence in the at least one sequence has the same type and occurrence location, and updates map data according to the at least one sequence. For the specific implementation principle of this embodiment, reference may be made to the record of the foregoing embodiments, which will not be repeated herein. According to the technical solutions of the embodiments of the present application, a server inputs each piece of road information contained in a sequence into the neural network model, recognizes valid road information by combining the context of each piece of road information in the sequence with neural network technology, and updates map data, thereby achieving the purpose of accurately updating the map data.

It should be understood that the various forms of processes shown above can be used to reorder, add, or delete steps. For example, the various steps recorded in the present application can be executed in parallel, sequentially, or in different orders. As long as the desired results of the technical solutions disclosed in the present application can be achieved, there is no limitation herein.

The above-mentioned specific implementations do not constitute a limitation on the protection scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. | 52,160
11859999 | Other features, characteristics, advantages, and benefits of the present disclosure may become more apparent from the following detailed description in connection with the accompanying drawings.

DETAILED DESCRIPTION OF THE EMBODIMENTS

For the following detailed description of preferred embodiments, reference will be made to the accompanying drawings which form a part of the present disclosure. The accompanying drawings show specific embodiments in which the present disclosure can be realized through examples. The exemplary embodiments are not intended to be exhaustive of all embodiments according to the present disclosure. It can be understood that other embodiments may be used, and structural or logical modifications may be made, without departing from the scope of the present disclosure. Therefore, the following detailed description is not intended to be limiting, and the scope of the present disclosure is defined by the appended claims.

In order to solve the technical problem in the existing technology that a laser detector must be used to sense the height difference for calibrating a laser level, the inventors of the present disclosure conceived a method to determine the height difference between the lasers formed on the target before and after rotating the laser level by a certain angle. Thus, on one hand, there is a relatively great possibility of reducing the cost of the system. On the other hand, since the accuracy of image recognition technology is relatively high, the calibration accuracy may be improved. The target may include a conventional laser detector or a simple target such as a whiteboard.

Based on the above inventive concept, the inventors of the present disclosure designed a device for calibrating a laser level. The device may include an image recognition device. The image recognition device may be configured to obtain an image of a laser projected on a target and, based on image recognition technology, determine the positions on the target of the lasers emitted by the laser level before and after the laser level rotates by a first angle. Then, a corresponding processor may be configured to determine whether the laser level needs to be calibrated based on the deviation determined through image recognition, the distance data determined according to position data of a first position and a second position, and the first angle. The device for calibrating the laser level disclosed according to the present disclosure is further described below in connection with the accompanying drawings.

FIG. 1 is a schematic structural diagram of a device 100 configured to calibrate a laser level according to an embodiment of the disclosure. As shown in FIG. 1, the device 100 configured to calibrate the laser level of the present disclosure includes a base platform 110. The base platform 110 may be configured to support a laser level 120 that is to be mounted on the base platform 110 at a first position (left side in FIG. 1). The device 100 further includes a target 130. The target 130 may be arranged at a second position (right side in FIG. 1) of the base platform 110 and configured to receive the laser emitted by the laser level 120. The device 100 further includes an image recognition device 140.
The image recognition device 140 may be configured to obtain an image of the laser projected on the target 130 and determine, based on image recognition technology, the positions on the target 130 of the lasers emitted by the laser level 120 before and after the rotation by the first angle, so as to determine a distance deviation and further determine whether the laser level needs to be calibrated based on the deviation data, the distance data determined based on the position data of the first position and the second position, and the first angle. The image recognition device 140 may include a smartphone, a tablet computer, and/or a camera. As such, existing apparatuses owned by the user of the device 100 for calibrating the laser level 120 may be reused to further reduce the apparatus cost of calibrating the laser level 120. The device 100 for calibrating the laser level 120 of the present disclosure does not require a detector with a laser height difference recognition function and can perform the calibration on the laser level using only the image recognition device. Thus, the device may have a simple structure and high calibration accuracy.

In some embodiments, the target 130 in FIG. 1 may have various forms. For example, the target 130 may have no pattern or may have various patterns. For example, the target 130 may include any one of a horizontal scale, a black color block, a black and white grid scale, a cross scale, and/or a high and low scale. The technical solution without any pattern is first described below. When the target 130 has no pattern, the image recognition device 140 may be implemented by, for example, a cellphone. Assume that the target 130 includes a black color block. The image recognition performed by the image recognition device 140 may include obtaining the position information of the black rectangle of the target on the screen using algorithms such as threshold and findContours in the OpenCV framework. The coordinate of the upper left corner point of the black rectangle may be denoted as (x, y), and the height may be denoted as H. Then, the height H may be sent to the lower machine. Then, in a dark environment, a frame with a light spot may be binarized into a grayscale image using the findContours algorithm in the OpenCV framework. The position of the light spot of the laser level on the screen may be obtained and, based on the coordinate of the center point of the light spot, denoted as (m, n). Thus, the position on the target 130 of the laser emitted by the laser level may be identified.

In addition, another possibility exists, namely that the target 130 does not include the black color block. In this case, the distance between the image recognition device 140 and the target 130 may be required to be constant. The height of the black color block on the screen in the above solution may then be a constant value H. Since the constant value H is known, the height H may likewise be sent to the lower machine. Then, in the dark environment, the frame with the light spot may be binarized into a grayscale image through the findContours algorithm in the OpenCV framework to obtain the position of the light spot of the laser level on the screen, denoted as (m, n) based on the center coordinate of the light spot. Thus, the position on the target 130 of the laser emitted by the laser level may also be recognized. Of course, different patterns may also be provided on the target 130. Various possible pattern forms are described below.
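Before turning to the pattern forms, here is a concrete illustration of the detection steps just described: thresholding a frame, extracting contours, and reading off the bounding box of the dark target block and the centroid of the laser spot. It is a hedged sketch rather than the disclosure's implementation; the threshold values and the assumption that the largest dark contour is the target block are illustrative choices:

```python
import cv2
import numpy as np

def find_target_and_spot(frame_bgr: np.ndarray):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Dark target block: inverse threshold so the black rectangle is white.
    _, dark = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    block = max(contours, key=cv2.contourArea)   # assume largest = target
    x, y, w, H = cv2.boundingRect(block)         # corner (x, y), height H

    # Bright laser spot (dark environment): plain threshold.
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    spots, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)
    moments = cv2.moments(max(spots, key=cv2.contourArea))
    m = moments["m10"] / moments["m00"]          # spot center (m, n)
    n = moments["m01"] / moments["m00"]
    return (x, y, H), (m, n)
```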
FIG. 2 is a schematic diagram showing a target 230 used in the device configured to calibrate the laser level according to the embodiment of the disclosure shown in FIG. 1. As shown in FIG. 2, lines with associated position marks are arranged on the target 230. Thus, when the laser emitted by the laser level is close to or coincides with a line, the height of the laser emitted by the laser level may be recorded as the value of the mark associated with that line. Then, the laser level may be rotated by a certain angle, and the value of the mark associated with another line may be recorded in the same manner after the rotation. For example, before the rotation, the height of the laser may be associated with the line marked "+3." After the rotation by the certain angle, the height of the laser may be associated with the line marked "−3." Thus, the height difference before and after the rotation is +3−(−3), that is, 6. Then, based on the height difference, the rotation angle, and the distance between the laser level and the target, whether the laser level needs to be calibrated may be determined, and a specific calibration parameter value may be calculated when the laser level needs to be calibrated.

FIG. 3 is a schematic diagram showing a target 330 used in the device configured to calibrate the laser level according to the embodiment of the disclosure shown in FIG. 1. As can be seen in FIG. 3, the target 330 includes black and white grids with associated position marks. Thus, when the laser emitted by the laser level is close to or coincides with a black or white grid, the height of the laser emitted by the laser level may be denoted by the value of the mark associated with that grid. Then, the laser level may be rotated by a certain angle, and the value of the mark associated with another grid may be determined in the same manner after the rotation. For example, the distance from the laser to a reference line 0 can be 10% of the distance between the reference line 0 and the black grid +50 before the rotation, which can be marked as "+5." Similarly, after the rotation by the certain angle, the distance between the laser and the reference line 0 can be 20% of the distance between the reference line 0 and the white grid −50, which can be marked as "−10." Thus, the height difference before and after the rotation is +5−(−10), that is, 15. Then, based on the height difference, the rotation angle, and the distance between the laser level and the target, whether the laser level needs to be calibrated may be determined, and a specific calibration parameter value may be calculated when the laser level needs to be calibrated.

FIG. 4 is a schematic diagram showing a target 430 used in the device configured to calibrate the laser level according to the embodiment of the disclosure shown in FIG. 1. As shown in FIG. 4, the target 430 includes a cross scale. Thus, when the laser emitted by the laser level is close to or coincides with a scale mark, the height of the laser emitted by the laser level may be denoted by that scale mark. Then, the laser level may be rotated by a certain angle, and another scale mark may be determined in the same manner after the rotation. For example, before the rotation, the height may be associated with a first scale mark, and after the rotation, the height may be associated with a second scale mark.
Then, the height difference before and after the rotation by the certain angle may be the difference between the first scale mark and the second scale mark. Then, whether the laser level needs to be calibrated may be determined based on the height difference, the rotation angle, and the distance between the laser level and the target. When calibration is required, a specific calibration parameter value may be calculated.

FIG. 5 is a schematic diagram showing a target 530, at zero degrees, used in the device configured to calibrate the laser level according to the embodiment of the disclosure shown in FIG. 1. FIG. 6 is a schematic diagram showing the target 530 at 180 degrees. FIG. 5 and FIG. 6 differ from FIG. 2 in that a horizontal scale with a large interval is included, and each horizontal scale is divided into three steps. Thus, the scale may be further refined, and a higher precision may be realized. In some embodiments, as shown in FIG. 5 and FIG. 6, lines with associated position marks are arranged on the target 530. Thus, when the laser emitted by the laser level is close to or coincides with a line, the height of the laser emitted by the laser level may be denoted by the value of the mark associated with that line. Then, the laser level may be rotated by a certain angle, e.g., 180 degrees, and the value of the mark associated with another line may be determined in the same manner after the rotation. For example, before the rotation, the height may be associated with the line marked "+4," and after the rotation, the height may be associated with the line marked "+1." Thus, the height difference before and after the rotation is +4−(+1), that is, 3. Then, based on the height difference, the rotation angle, and the distance between the laser level and the target, whether the laser level needs to be calibrated may be determined. When calibration is required, a specific calibration parameter value may be calculated.

In addition, the inventors of the present disclosure considered the technical problem that the device for calibrating the laser level is limited by space, and conceived of optically enlarging the physical distance between the laser level and the target using an optical device. Thus, the physical distance may be enlarged in a limited space to improve the calibration accuracy. In some embodiments, as shown in FIGS. 7 and 8, FIG. 7 is a schematic diagram showing a device 700 configured to calibrate a laser level from one angle according to an embodiment of the disclosure, and FIG. 8 is a schematic diagram showing the device 700 from another angle. As shown in FIG. 7 and FIG. 8, the present disclosure provides the device 700 for calibrating the laser level, including a base platform 710. The base platform 710 may be configured to support the laser level 720 that is to be mounted on the base platform 710 at a first position. In addition, the device 700 also includes a target 730. The target 730 may be arranged at a second position of the base platform 710 and configured to receive a laser. In addition, the device 700 further includes an image recognition device 740.
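Returning to the height-difference examples above: for the common 180-degree rotation, the rotation reverses the sign of the tilt's contribution to the spot height, so an observed height difference Δh over a level-to-target distance d corresponds to a tilt angle of roughly atan(Δh/(2·d)). A hedged sketch of this computation (the tolerance value is an illustrative assumption, not a figure from the disclosure):

```python
import math

def tilt_error_deg(h_before: float, h_after: float, distance: float) -> float:
    # A 180-degree rotation flips the tilt's contribution to the spot
    # height, so half of the observed difference is due to the tilt.
    delta_h = h_before - h_after
    return math.degrees(math.atan2(delta_h / 2.0, distance))

def needs_calibration(h_before, h_after, distance, tol_deg=0.01):
    return abs(tilt_error_deg(h_before, h_after, distance)) > tol_deg

# Marks "+4" and "+1" (in mm) read at 5 m, as in the example above:
print(tilt_error_deg(4e-3, 1e-3, 5.0))     # ~0.017 degrees
print(needs_calibration(4e-3, 1e-3, 5.0))  # True under the assumed tolerance
```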
The image recognition device 740 may be configured to obtain the image of the laser projected on the target 730 and determine, based on image recognition technology, the positions of the lasers emitted by the laser level 720 before and after the rotation by the first angle, so as to determine the deviation distance. Thus, based on the deviation data, the distance data determined according to the position data of the first position and the second position, and the first angle, the device 700 may determine whether the laser level 720 needs to be calibrated. In the embodiments shown in FIG. 7 and FIG. 8, the image recognition device 740 may be arranged at a third position on the side of the target 730 facing the laser level 720. Those skilled in the art should appreciate that when the image recognition device 740 is arranged at the third position facing the laser level 720, the image recognition device 740 may easily recognize the position of the laser on the target 730, improving the recognition accuracy.

The base platform 710 may further include a rotation device (not shown in the drawings, e.g., right below the laser level 720). The rotation device may be configured to rotate the laser level 720 by the first angle based on a control instruction received from the image recognition device 740. Thus, the rotation angle of the laser level 720 may be controlled more accurately. In an embodiment of the present disclosure, the first angle may be one of 180 degrees, 90 degrees, or 270 degrees. Those skilled in the art should appreciate that these three angles are merely exemplary, not limiting. With these angles, the calibration parameter may be easily calculated, but the calibration may also be realized with other angles. Thus, technical solutions with other angles are also included in the technical solutions claimed by the independent claims of the present disclosure.

In an embodiment of the present disclosure, when the image recognition device determines that the laser level needs to be calibrated, the image recognition device may determine a calibration signal based on the distance data, the deviation distance, and the first angle, and send the calibration signal to the laser level. In addition, the device 700 also includes an optical path extension device 750. The optical path extension device 750 may be arranged at a fourth position between the laser level 720 and the target 730 and configured to receive the laser emitted by the laser level 720 and project it onto the target 730 after the laser is adjusted by the optical path extension device 750. More preferably, in an embodiment of the present disclosure, the optical path extension device 750 may include an objective lens. The objective lens may be configured to receive the laser emitted by the laser level. The optical path extension device 750 may further include an objective focusing lens. The objective focusing lens may be configured to perform focusing processing on the laser received by the objective lens. The optical path extension device 750 may further include an eyepiece. The eyepiece may be configured to project the laser focused by the objective focusing lens onto the target. As such, the laser can be processed by the objective focusing lens after being received by the objective lens and then projected by the eyepiece onto the target. Thus, the physical distance between the laser level and the target may be optically enlarged after the laser is processed by the objective focusing lens.
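The effect of such optical enlargement can be quantified: with an optical extension of M times (for example, the 32-times extension mentioned below), a given tilt produces roughly M times the spot displacement that the bare physical distance would yield, so the smallest detectable tilt shrinks accordingly. A minimal sketch under that simplifying assumption (the pixel-resolution figure is illustrative):

```python
import math

def min_detectable_tilt_deg(pixel_size_m: float, physical_dist_m: float,
                            extension: float) -> float:
    # The optical path extension multiplies the effective lever arm, so one
    # pixel of spot movement corresponds to a smaller angular tilt.
    effective_dist = physical_dist_m * extension
    return math.degrees(math.atan2(pixel_size_m, effective_dist))

print(min_detectable_tilt_deg(1e-4, 0.5, 1))    # bare 0.5 m bench
print(min_detectable_tilt_deg(1e-4, 0.5, 32))   # same bench, 32x extension
```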
In addition, further preferably, in an embodiment of the present disclosure, the optical path extension device 750 may further include a crosshair reticle. The crosshair reticle may be arranged between the objective focusing lens and the eyepiece and configured to assist in aligning the laser. In an embodiment of the present disclosure, the objective lens may include a set of objective lenses. In an embodiment of the present disclosure, the optical path extension device may have a first multiple of optical extension. Preferably, in an embodiment of the present disclosure, the first multiple may be 32 times or 26 times. An appropriate multiple may be selected according to actual needs. Preferably, in an embodiment of the present disclosure, the optical path extension device may be configured as a level, with the optical path center of the level aligned with the laser emitted by the laser level.

In an embodiment of the present disclosure, a wired connection or a wireless connection may exist between the image recognition device 740 and the laser level 720. The wired connection or the wireless connection may be configured to transmit the calibration signal from the image recognition device 740 to the laser level 720. Optionally, in an embodiment of the present disclosure, the wireless connection may include at least one of an infrared connection, a Bluetooth connection, or a WiFi connection.

In summary, the device for calibrating the laser level of the present disclosure does not need a laser detector to detect and identify changes in the laser position. A suitable object may be selected as the target according to the application scenario. Thus, the laser level may be calibrated using only the image recognition device, and the device has a wide application range, a simple structure, and high calibration accuracy.

Although various exemplary embodiments of the present disclosure have been described, various changes and modifications apparent to those skilled in the art may be performed on the device, and one or some of the advantages of the present disclosure may be realized, without departing from the spirit and scope of the present disclosure. For those skilled in the art, other components performing the same functions may be substituted as appropriate. The features explained herein with reference to a particular figure may be combined with features of other figures, even in those cases where this is not explicitly mentioned. Furthermore, the methods of the present disclosure may be implemented either entirely in software using appropriate processor instructions or in hybrid implementations that utilize a combination of hardware logic and software logic to achieve the same results. Such modifications to the solution according to the present disclosure are intended to be covered by the appended claims. | 19,926
11860000 | DETAILED DESCRIPTION

It will be appreciated that for simplicity and clarity of illustration, where appropriate, reference numerals have been repeated among the different figures to indicate corresponding or analogous elements. Additionally, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein can be practiced without these specific details. In other instances, methods, procedures and components have not been described in detail so as not to obscure the relevant features being described. The drawings are not necessarily to scale, and the proportions of certain parts may be exaggerated to better illustrate details and features. The description is not to be considered as limiting the scope of the embodiments described herein.

Several definitions that apply throughout this disclosure will now be presented. The term "coupled" is defined as connected, whether directly or indirectly through intervening components, and is not necessarily limited to physical connections. The connection can be such that the objects are permanently connected or releasably connected. The term "substantially" is defined to be essentially conforming to the particular dimension, shape, or other word that "substantially" modifies, such that the component need not be exact. For example, "substantially cylindrical" means that the object resembles a cylinder but can have one or more deviations from a true cylinder. The term "comprising" means "including, but not necessarily limited to"; it specifically indicates open-ended inclusion or membership in the so-described combination, group, series, and the like.

FIG. 1 shows an embodiment of a level gauge 100 that can simultaneously determine an inclination direction and an inclination angle of a gauging surface 200. The level gauge 100 includes a housing 10, a vertical member 20, and a perpendicular member 30. The housing 10 includes a bottom wall 11 and a side wall 12. The outer surface of the side wall 12 is provided with a first scale A′, and the bottom wall 11 and the side wall 12 enclose a receiving cavity 10a. The axis of the vertical member 20 is maintained in a vertical orientation in a natural state, and the vertical member 20 is arranged inside the receiving cavity 10a. The center of gravity of the perpendicular member 30 is arranged on top of the vertical member 20, and the plane of the perpendicular member 30 is perpendicular to the axis of the vertical member 20. When the bottom wall 11 is placed on the gauging surface 200, the axis of the vertical member 20 is maintained in the vertical orientation, thereby driving the perpendicular member 30 to remain in a horizontal plane. The maximum value on the first scale A′ corresponding to the perpendicular member 30 is the inclination angle of the gauging surface 200, and the straight line connecting the center of gravity of the perpendicular member 30 and the maximum value on the first scale A′ indicates the inclination direction of the gauging surface 200. In one embodiment, the bottom wall 11 is circular, the side wall 12 is cylindrical, and the cross-sectional areas of the bottom wall 11 and the side wall 12 are the same, so that the housing 10 and the receiving cavity 10a are cylindrical. The side wall 12 is made of transparent material to facilitate observing, on the first scale A′, the inclination degree corresponding to the perpendicular member 30.
In one embodiment, the vertical member 20 is in the shape of a droplet with a spherical bottom and a pointed top. The center of gravity of the vertical member 20 is located on the axis of the vertical member 20, adjacent to the spherical bottom. The spherical bottom can reduce friction between the vertical member 20 and the bottom wall 11. When the axis of the vertical member 20 is offset, the center of gravity of the vertical member 20 will naturally drive the axis of the vertical member 20 to return to the vertical orientation. The axis of the vertical member 20 passes through the center of the bottom wall 11. In one embodiment, the perpendicular member 30 is in the shape of a disk, and the center of gravity of the perpendicular member 30 is at the center of the disk. The plane of the perpendicular member 30 is perpendicular to the axis of the vertical member 20, and the center of gravity of the perpendicular member 30 lies on the axis of the vertical member 20, which does not affect the ability of the vertical member 20 to naturally maintain the vertical orientation. Because the perpendicular member 30 is fixedly arranged on the top of the vertical member 20, the axis of the vertical member 20 drives the plane orientation of the perpendicular member 30.

Referring to FIG. 3, in one embodiment, the first scale A′ is arranged along the axial direction of the side wall 12, and a plurality of first scales A′ are arranged side by side around the side wall 12, so that the inclination degree can be observed from different directions. When the bottom wall 11 is horizontal on the gauging surface 200, the value on the first scale A′ corresponding to the perpendicular member 30 is 0.

Referring to FIG. 1 and FIG. 2, the level gauge 100 further includes a rolling member 40 and an attracting device 50. The rolling member 40 is rotationally arranged on the upper surface of the perpendicular member 30. The attracting device 50 can attract the rolling member 40 to the center of gravity of the perpendicular member 30. In one embodiment, the level gauge 100 may further include a perpendicular holding device 60 for maintaining the axis of the vertical member 20 perpendicular to the bottom wall 11. Referring to FIG. 4, when the bottom wall 11 is placed on the gauging surface 200 and the value on the first scale A′ corresponding to the perpendicular member 30 is 0, the gauging surface 200 is substantially horizontal. In order to confirm that the gauging surface 200 is truly horizontal, the perpendicular holding device 60 is turned on to make the perpendicular member 30 parallel to the bottom wall 11, and the attracting device 50 is turned off to release the rolling member 40. If the rolling member 40 remains at the center of gravity of the perpendicular member 30, it is confirmed that the gauging surface 200 is horizontal. If the rolling member 40 rolls away from the center of gravity of the perpendicular member 30, this indicates that the gauging surface 200 is inclined, and the rolling direction of the rolling member 40 is the inclination direction of the gauging surface 200.

In one embodiment, the rolling member 40 is a smooth ball. The attracting device 50 includes a coil 50a, which is wound around the vertical member 20 along the axial direction of the vertical member 20. The rolling member 40 is magnetic, and when the coil 50a is energized, the vertical member 20 generates a magnetic force to attract the rolling member 40 to the center of gravity of the perpendicular member 30.
The vertical member 20 may also be magnetic, as long as the magnetism of the vertical member 20 is less than the magnetism of the rolling member 40. The vertical member 20 is weakly magnetic, and the rolling member 40 is strongly magnetic, to prevent the coil 50a from attracting the rolling member 40 when the coil 50a is not energized. In one embodiment, the attracting device 50 further includes a battery 51 and a switch 52. The battery 51 is coupled to the coil 50a, and the switch 52 is used to switch the coil 50a on and off. The battery 51 and the switch 52 are installed on the housing 10. In one embodiment, the inner cross-sectional shape of the housing 10 along the axis of the housing 10 is the same as the cross-sectional shape of the perpendicular member 30 and has the same cross-sectional area as the perpendicular member 30, to prevent the rolling member 40 from falling through a gap between the housing 10 and the perpendicular member 30.

Referring to FIG. 2, the upper surface of the perpendicular member 30 is provided with a second scale B′ arranged around the center of gravity of the perpendicular member 30. The second scale B′ facilitates observing the rolling direction of the rolling member 40 to determine the inclination direction of the gauging surface 200. In one embodiment, the second scale B′ measures 360 degrees around the edge of the perpendicular member 30. In one embodiment, the perpendicular holding device is located in the receiving cavity 10a between the bottom wall 11 and the perpendicular member 30, and the perpendicular holding device includes a first clamping member, a second clamping member, and a driving device. When the driving device is turned on, the first clamping member and the second clamping member can be driven to clamp opposite sides of the vertical member 20, respectively, to maintain the axis of the vertical member 20 perpendicular to the bottom wall 11.

Referring to FIG. 3 and FIG. 4, to level the gauging surface 200, the bottom wall 11 is first placed on the gauging surface 200, the perpendicular holding device is turned off, and the attracting device 50 is turned on. The vertical member 20 drives the perpendicular member 30 to maintain the horizontal orientation. At this time, the maximum value is observed on the first scale A′. This maximum value is the inclination angle of the gauging surface 200, and the second scale B′ indicates the inclination direction of the gauging surface 200. Then, the gauging surface 200 can be adjusted to a horizontal orientation according to the maximum value and the inclination direction. After adjustment, in order to confirm whether the gauging surface 200 is horizontal, the perpendicular holding device is turned on, and the attracting device 50 is turned off. If the rolling member 40 remains at the center of gravity of the perpendicular member 30, the gauging surface 200 has been leveled successfully. If the rolling member 40 rolls, the gauging surface 200 is still inclined, and the rolling direction of the rolling member 40 is the inclination direction of the gauging surface 200. The gauging surface 200 can be continuously fine-tuned until the rolling member 40 remains at the center of gravity of the perpendicular member 30. It is understandable that, in other embodiments, the perpendicular holding device may maintain the axis of the vertical member 20 perpendicular to the bottom wall 11 in other ways, such as by magnetic fixing or other mechanical mechanisms.
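The two readings just described determine the tilt completely: because the perpendicular member stays horizontal while the housing tilts with the surface, the rise of the disk's highest point against the housing's first scale gives the inclination angle, and the bearing of that point on the circular second scale gives the inclination direction. A hedged geometric sketch, assuming the housing radius r is known (the names and values are illustrative):

```python
import math

def inclination(max_scale_rise_m: float, housing_radius_m: float,
                bearing_deg_on_second_scale: float):
    # The disk's rim rises by h at radius r when the housing tilts by
    # angle = atan(h / r); the bearing of the maximum is the tilt direction.
    angle = math.degrees(math.atan2(max_scale_rise_m, housing_radius_m))
    return angle, bearing_deg_on_second_scale % 360.0

print(inclination(0.002, 0.05, 240.0))  # roughly (2.29 degrees, 240.0)
```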
After the housing 10 of the level gauge 100 is placed on the gauging surface 200, the vertical member 20 remains upright and drives the perpendicular member 30 to remain horizontal. The inclination angle of the gauging surface 200 is obtained by observing the maximum value of the perpendicular member 30 on the first scale A′, and the inclination direction of the gauging surface 200 is obtained by observing, on the second scale B′, the straight line between the maximum value and the center of gravity of the perpendicular member 30, which achieves the purpose of identifying the inclination angle and the inclination direction of the gauging surface 200 simultaneously.

The embodiments shown and described above are only examples. Even though numerous characteristics and advantages of the present technology have been set forth in the foregoing description, together with details of the structure and function of the present disclosure, the disclosure is illustrative only, and changes may be made in the details, including in matters of shape, size and arrangement of the parts, within the principles of the present disclosure, up to and including the full extent established by the broad general meaning of the terms used in the claims. | 11,596
11860001 | DETAILED DESCRIPTION

FIG. 1 shows an exemplary embodiment of a multi-parameter sensor 1 according to the present disclosure with a sensor set 2, which has, for example, four sensors 20, 30, 60, 70. Of course, the sensor set 2 can also have more or fewer than four sensors. In any case, however, the sensor set 2 has at least two sensors. The multi-parameter sensor 1 comprises a sensor housing 10 with at least two connection points 11, 12. A first connection point 11 is for receiving a first sensor 20, and a second connection point 12 is for receiving a second sensor 30. FIG. 2 shows an embodiment with four connection points for four sensors. Of course, the multi-parameter sensor 1 has as many connection points as sensors. The first connection point 11 has a first geometric mistake-proofing feature 13, and the second connection point 12 has a second geometric mistake-proofing feature 14 different from the first geometric mistake-proofing feature 13. The first geometric mistake-proofing feature 13 allows only the first sensor 20 to be inserted into the first connection point 11, and the second geometric mistake-proofing feature 14 allows only the second sensor 30 to be inserted into the second connection point 12. If further connection points are present in the sensor housing 10, each connection point has an individual geometric mistake-proofing feature.

The first geometric mistake-proofing feature 13 is, for example, a first thread G1 with a first pitch S1. The second geometric mistake-proofing feature 14 is, for example, a second thread G2 with a second pitch S2 different from the first pitch S1. For example, the first pitch S1 is 4, and the second pitch S2 is 6. Thus, when screwed into the first connection point 11 with the first geometric mistake-proofing feature 13, the first sensor 20 advances by 4 mm along the sensor axis, i.e., a first axis A1, during one complete revolution of the sensor, and the second sensor 30, when screwed into the second connection point 12 with the second geometric mistake-proofing feature 14, advances by 6 mm along the sensor axis, i.e., the first axis A1′, during one complete revolution of the sensor. The first thread G1 may have a number of turns different from the second thread G2. The first thread G1 may have, for example, two thread turns, and the second thread G2 may have, for example, three thread turns.

Alternatively or in addition to the described embodiment with different pitches of the different threads, the threads may have different thread profiles. According to such an embodiment, the first thread G1 has a first thread profile GP1, and the second thread G2 has a second thread profile GP2. The first thread profile GP1 is, for example, a metric ISO trapezoidal thread, and the second thread profile GP2 is, for example, a steel-armored pipe thread, a pipe thread, or a buttress thread. Alternatively or in addition to the described embodiments of the geometric mistake-proofing feature, the first geometric mistake-proofing feature 13 of the first connection point 11 has a first keyhole SL1 with a first hole shape, and the second geometric mistake-proofing feature 14 of the second connection point 12 has a second keyhole SL2 with a second hole shape different from the first hole shape (see FIG. 5). The first hole shape has, for example, a hexagonal shape, and the second hole shape is, for example, round or oval, or has a polygonal shape or a multi-toothed profile. According to this embodiment, the first connection point 11 extends along a first axis A1.
The first keyhole SL1 is arranged such that it forms the entrance to the first connection point 11. A first cavity 15 is formed axially behind the first keyhole SL1 in the insertion direction. The first thread G1 is arranged axially behind the first cavity 15. The second keyhole SL2, a second cavity 16, and the second thread G2 of the second connection point 12 are arranged as in the first connection point 11.

FIGS. 2 and 3 show the sensor set 2 for the sensor housing 10 of the multi-parameter sensor 1. The sensor set 2 comprises at least the first sensor 20 and the second sensor 30. Of course, the sensor set 2 can also have more than two sensors, for example, four sensors (see FIG. 2). The first sensor 20 has a first sensor shaft 21 for fastening the first sensor 20 in the first connection point 11 of the sensor housing 10. The second sensor 30 has a second sensor shaft 31 for fastening the second sensor 30 in the second connection point 12 of the sensor housing 10. The first sensor shaft 21 has a third geometric mistake-proofing feature 22 for the first connection point 11. The second sensor shaft 31 has a fourth geometric mistake-proofing feature 32 for the second connection point 12. Owing to the third geometric mistake-proofing feature 22, the first sensor 20 can be inserted only into the first connection point 11. Owing to the fourth geometric mistake-proofing feature 32, the second sensor 30 can be inserted only into the second connection point 12. The first geometric mistake-proofing feature 13 is complementary to the third geometric mistake-proofing feature 22. The second geometric mistake-proofing feature 14 is complementary to the fourth geometric mistake-proofing feature 32. The third geometric mistake-proofing feature 22 is, for example, a third thread G3 with the first pitch S1, and the fourth geometric mistake-proofing feature 32 is, for example, a fourth thread G4 with the second pitch S2. Thanks to the different pitches, a sensor can be inserted only at its specific connection point.

Alternatively or in addition to the described embodiment with different pitches of the different threads, the threads may have different thread profiles. According to this embodiment, the third thread G3 has the first thread profile GP1, and the fourth thread G4 has the second thread profile GP2. As shown in FIG. 3, the third thread G3 and the fourth thread G4 each have at least one thread start GA, which has a flank F. FIG. 4 shows an enlarged cutout, marked in FIG. 3, of the third thread G3. The third thread G3 extends around the first axis A1. The flank F extends along a second axis A2. Preferably, the first axis A1 and the second axis A2 are arranged at an angle W of between 0° and 75° to one another. The flank ensures that insertion with a wrong thread pairing is avoided. A user will therefore immediately notice, when the sensor is placed at a connection point, whether the sensor is at the appropriate point or not.

Alternatively or in addition to the described embodiments of the geometric mistake-proofing feature, the third geometric mistake-proofing feature 22 of the first sensor 20 has a first key bit SB1 with a first bit shape, and the fourth geometric mistake-proofing feature 32 of the second sensor 30 has a second key bit SB2 with a second bit shape different from the first bit shape (see FIG. 5). The first bit shape has, for example, a hexagonal shape, and the second bit shape is, for example, round or oval, or has a polygonal shape or a multi-toothed profile.
According to this embodiment, the key bit SB1, SB2 of each sensor 20, 30 is arranged axially behind the thread G3, G4 in the insertion direction. The first key bit SB1 is complementary to the first keyhole SL1, and the second key bit SB2 is complementary to the second keyhole SL2. In this embodiment, it is possible for the first thread G1, the second thread G2, the third thread G3 and the fourth thread G4 to be identical, since the mistake-proofing effect is achieved by the keyholes and key bits. In this embodiment, a connection system other than a threaded connection, such as a bayonet connection (not shown), is also possible for fastening the sensors in the sensor housing 10.

FIG. 6 shows an alternative embodiment of the sensor set 2′. The sensor set 2′ comprises a first sleeve 40, a second sleeve 50, at least one first sensor 20, and at least one second sensor 30′. The sensors 20, 30′ differ from the sensors 20, 30 used in the other embodiments in that the sensor shafts 21, 31′ of the sensors 20, 30′ each have the third thread G3. If the sensor set 2′ has more than two sensors, each sensor has the same thread on the sensor shaft. If the multi-parameter sensor 1 has further sensors, an individual sleeve 40, 50, 50′, 50″ is assigned to each further sensor. The first sleeve 40 has a first internal thread 41 and a first external thread 42. The first internal thread 41 is different from the first external thread 42. The first internal thread 41 is complementary to the third thread G3 of the first sensor shaft 21 of the first sensor 20. The second sleeve 50 has a second internal thread 51 and a second external thread 52. The second internal thread 51 is identical to the first internal thread 41, and the second external thread 52 is different from the first external thread 42. The first external thread 42 is complementary to the first thread G1 of the first connection point 11. The second external thread 52 is complementary to the second thread G2 of the second connection point 12. The same applies to the further sleeves 50′, 50″. Thus, the first connection point 11 is suitable for receiving the first sensor 20 with the first sleeve 40, and the second connection point 12 is suitable for receiving the second sensor 30 with the second sleeve 50. The same applies when further connection points and further sensors are present in the multi-parameter sensor 1. The sleeves 40, 50, 50′, 50″ are suitable for being permanently connected to the respective sensor 20, 30, 60, 70, for example by means of adhesive bonding or welding. Thanks to the sleeves, it is possible to form a standard thread on the sensor shaft of the sensors, so that the sensors are also suitable for use in receiving devices other than the connection points of the multi-parameter sensor 1.

The first external thread 42 of the first sleeve 40 has a first pitch S1 and/or a first thread profile GP1, and the second external thread 52 of the second sleeve 50 has a second pitch S2 different from the first pitch S1 and/or a second thread profile GP2 different from the first thread profile GP1. The first thread profile GP1 is, for example, a metric ISO trapezoidal thread, and the second thread profile GP2 is, for example, a steel-armored pipe thread, a pipe thread, or a buttress thread. | 10,105